Nov 22 02:49:09 localhost kernel: Linux version 5.14.0-639.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025
Nov 22 02:49:09 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 22 02:49:09 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 02:49:09 localhost kernel: BIOS-provided physical RAM map:
Nov 22 02:49:09 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 22 02:49:09 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 22 02:49:09 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 22 02:49:09 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 22 02:49:09 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 22 02:49:09 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 22 02:49:09 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 22 02:49:09 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 22 02:49:09 localhost kernel: NX (Execute Disable) protection: active
Nov 22 02:49:09 localhost kernel: APIC: Static calls initialized
Nov 22 02:49:09 localhost kernel: SMBIOS 2.8 present.
Nov 22 02:49:09 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 22 02:49:09 localhost kernel: Hypervisor detected: KVM
Nov 22 02:49:09 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 22 02:49:09 localhost kernel: kvm-clock: using sched offset of 4384803058 cycles
Nov 22 02:49:09 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 22 02:49:09 localhost kernel: tsc: Detected 2799.998 MHz processor
Nov 22 02:49:09 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 22 02:49:09 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 22 02:49:09 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 22 02:49:09 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 22 02:49:09 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 22 02:49:09 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 22 02:49:09 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 22 02:49:09 localhost kernel: Using GB pages for direct mapping
Nov 22 02:49:09 localhost kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 22 02:49:09 localhost kernel: ACPI: Early table checksum verification disabled
Nov 22 02:49:09 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 22 02:49:09 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 02:49:09 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 02:49:09 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 02:49:09 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 22 02:49:09 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 02:49:09 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 22 02:49:09 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 22 02:49:09 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 22 02:49:09 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 22 02:49:09 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 22 02:49:09 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 22 02:49:09 localhost kernel: No NUMA configuration found
Nov 22 02:49:09 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 22 02:49:09 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 22 02:49:09 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 22 02:49:09 localhost kernel: Zone ranges:
Nov 22 02:49:09 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 22 02:49:09 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 22 02:49:09 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 22 02:49:09 localhost kernel:   Device   empty
Nov 22 02:49:09 localhost kernel: Movable zone start for each node
Nov 22 02:49:09 localhost kernel: Early memory node ranges
Nov 22 02:49:09 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 22 02:49:09 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 22 02:49:09 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 22 02:49:09 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 22 02:49:09 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 22 02:49:09 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 22 02:49:09 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 22 02:49:09 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Nov 22 02:49:09 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 22 02:49:09 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 22 02:49:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 22 02:49:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 22 02:49:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 22 02:49:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 22 02:49:09 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 22 02:49:09 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 22 02:49:09 localhost kernel: TSC deadline timer available
Nov 22 02:49:09 localhost kernel: CPU topo: Max. logical packages:   8
Nov 22 02:49:09 localhost kernel: CPU topo: Max. logical dies:       8
Nov 22 02:49:09 localhost kernel: CPU topo: Max. dies per package:   1
Nov 22 02:49:09 localhost kernel: CPU topo: Max. threads per core:   1
Nov 22 02:49:09 localhost kernel: CPU topo: Num. cores per package:     1
Nov 22 02:49:09 localhost kernel: CPU topo: Num. threads per package:   1
Nov 22 02:49:09 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 22 02:49:09 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 22 02:49:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 22 02:49:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 22 02:49:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 22 02:49:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 22 02:49:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 22 02:49:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 22 02:49:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 22 02:49:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 22 02:49:09 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 22 02:49:09 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 22 02:49:09 localhost kernel: Booting paravirtualized kernel on KVM
Nov 22 02:49:09 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 22 02:49:09 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 22 02:49:09 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 22 02:49:09 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Nov 22 02:49:09 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Nov 22 02:49:09 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 22 02:49:09 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 02:49:09 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64", will be passed to user space.
Nov 22 02:49:09 localhost kernel: random: crng init done
Nov 22 02:49:09 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 22 02:49:09 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 22 02:49:09 localhost kernel: Fallback order for Node 0: 0 
Nov 22 02:49:09 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 22 02:49:09 localhost kernel: Policy zone: Normal
Nov 22 02:49:09 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 22 02:49:09 localhost kernel: software IO TLB: area num 8.
Nov 22 02:49:09 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 22 02:49:09 localhost kernel: ftrace: allocating 49298 entries in 193 pages
Nov 22 02:49:09 localhost kernel: ftrace: allocated 193 pages with 3 groups
Nov 22 02:49:09 localhost kernel: Dynamic Preempt: voluntary
Nov 22 02:49:09 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 22 02:49:09 localhost kernel: rcu:         RCU event tracing is enabled.
Nov 22 02:49:09 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 22 02:49:09 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Nov 22 02:49:09 localhost kernel:         Rude variant of Tasks RCU enabled.
Nov 22 02:49:09 localhost kernel:         Tracing variant of Tasks RCU enabled.
Nov 22 02:49:09 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 22 02:49:09 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 22 02:49:09 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 02:49:09 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 02:49:09 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 22 02:49:09 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 22 02:49:09 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 22 02:49:09 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 22 02:49:09 localhost kernel: Console: colour VGA+ 80x25
Nov 22 02:49:09 localhost kernel: printk: console [ttyS0] enabled
Nov 22 02:49:09 localhost kernel: ACPI: Core revision 20230331
Nov 22 02:49:09 localhost kernel: APIC: Switch to symmetric I/O mode setup
Nov 22 02:49:09 localhost kernel: x2apic enabled
Nov 22 02:49:09 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Nov 22 02:49:09 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 22 02:49:09 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 22 02:49:09 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 22 02:49:09 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 22 02:49:09 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 22 02:49:09 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 22 02:49:09 localhost kernel: Spectre V2 : Mitigation: Retpolines
Nov 22 02:49:09 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 22 02:49:09 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 22 02:49:09 localhost kernel: RETBleed: Mitigation: untrained return thunk
Nov 22 02:49:09 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 22 02:49:09 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 22 02:49:09 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 22 02:49:09 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 22 02:49:09 localhost kernel: x86/bugs: return thunk changed
Nov 22 02:49:09 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 22 02:49:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 22 02:49:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 22 02:49:09 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 22 02:49:09 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 22 02:49:09 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 22 02:49:09 localhost kernel: Freeing SMP alternatives memory: 40K
Nov 22 02:49:09 localhost kernel: pid_max: default: 32768 minimum: 301
Nov 22 02:49:09 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 22 02:49:09 localhost kernel: landlock: Up and running.
Nov 22 02:49:09 localhost kernel: Yama: becoming mindful.
Nov 22 02:49:09 localhost kernel: SELinux:  Initializing.
Nov 22 02:49:09 localhost kernel: LSM support for eBPF active
Nov 22 02:49:09 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 22 02:49:09 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 22 02:49:09 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 22 02:49:09 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 22 02:49:09 localhost kernel: ... version:                0
Nov 22 02:49:09 localhost kernel: ... bit width:              48
Nov 22 02:49:09 localhost kernel: ... generic registers:      6
Nov 22 02:49:09 localhost kernel: ... value mask:             0000ffffffffffff
Nov 22 02:49:09 localhost kernel: ... max period:             00007fffffffffff
Nov 22 02:49:09 localhost kernel: ... fixed-purpose events:   0
Nov 22 02:49:09 localhost kernel: ... event mask:             000000000000003f
Nov 22 02:49:09 localhost kernel: signal: max sigframe size: 1776
Nov 22 02:49:09 localhost kernel: rcu: Hierarchical SRCU implementation.
Nov 22 02:49:09 localhost kernel: rcu:         Max phase no-delay instances is 400.
Nov 22 02:49:09 localhost kernel: smp: Bringing up secondary CPUs ...
Nov 22 02:49:09 localhost kernel: smpboot: x86: Booting SMP configuration:
Nov 22 02:49:09 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 22 02:49:09 localhost kernel: smp: Brought up 1 node, 8 CPUs
Nov 22 02:49:09 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 22 02:49:09 localhost kernel: node 0 deferred pages initialised in 8ms
Nov 22 02:49:09 localhost kernel: Memory: 7765864K/8388068K available (16384K kernel code, 5786K rwdata, 13900K rodata, 4188K init, 7176K bss, 616280K reserved, 0K cma-reserved)
Nov 22 02:49:09 localhost kernel: devtmpfs: initialized
Nov 22 02:49:09 localhost kernel: x86/mm: Memory block size: 128MB
Nov 22 02:49:09 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 22 02:49:09 localhost kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 22 02:49:09 localhost kernel: pinctrl core: initialized pinctrl subsystem
Nov 22 02:49:09 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 22 02:49:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 22 02:49:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 22 02:49:09 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 22 02:49:09 localhost kernel: audit: initializing netlink subsys (disabled)
Nov 22 02:49:09 localhost kernel: audit: type=2000 audit(1763779747.698:1): state=initialized audit_enabled=0 res=1
Nov 22 02:49:09 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 22 02:49:09 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 22 02:49:09 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 22 02:49:09 localhost kernel: cpuidle: using governor menu
Nov 22 02:49:09 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 22 02:49:09 localhost kernel: PCI: Using configuration type 1 for base access
Nov 22 02:49:09 localhost kernel: PCI: Using configuration type 1 for extended access
Nov 22 02:49:09 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 22 02:49:09 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 22 02:49:09 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 22 02:49:09 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 22 02:49:09 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 22 02:49:09 localhost kernel: Demotion targets for Node 0: null
Nov 22 02:49:09 localhost kernel: cryptd: max_cpu_qlen set to 1000
Nov 22 02:49:09 localhost kernel: ACPI: Added _OSI(Module Device)
Nov 22 02:49:09 localhost kernel: ACPI: Added _OSI(Processor Device)
Nov 22 02:49:09 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 22 02:49:09 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 22 02:49:09 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 22 02:49:09 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 22 02:49:09 localhost kernel: ACPI: Interpreter enabled
Nov 22 02:49:09 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 22 02:49:09 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Nov 22 02:49:09 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 22 02:49:09 localhost kernel: PCI: Using E820 reservations for host bridge windows
Nov 22 02:49:09 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 22 02:49:09 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 22 02:49:09 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [3] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [4] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [5] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [6] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [7] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [8] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [9] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [10] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [11] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [12] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [13] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [14] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [15] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [16] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [17] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [18] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [19] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [20] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [21] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [22] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [23] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [24] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [25] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [26] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [27] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [28] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [29] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [30] registered
Nov 22 02:49:09 localhost kernel: acpiphp: Slot [31] registered
Nov 22 02:49:09 localhost kernel: PCI host bridge to bus 0000:00
Nov 22 02:49:09 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 22 02:49:09 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 22 02:49:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 22 02:49:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 22 02:49:09 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 22 02:49:09 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 22 02:49:09 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 22 02:49:09 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 22 02:49:09 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 22 02:49:09 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 22 02:49:09 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 22 02:49:09 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 22 02:49:09 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 22 02:49:09 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 22 02:49:09 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 22 02:49:09 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 22 02:49:09 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 22 02:49:09 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 22 02:49:09 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 22 02:49:09 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 22 02:49:09 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 22 02:49:09 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 22 02:49:09 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 22 02:49:09 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 22 02:49:09 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 22 02:49:09 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 22 02:49:09 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 22 02:49:09 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 22 02:49:09 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 22 02:49:09 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 22 02:49:09 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 22 02:49:09 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 22 02:49:09 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 22 02:49:09 localhost kernel: iommu: Default domain type: Translated
Nov 22 02:49:09 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 22 02:49:09 localhost kernel: SCSI subsystem initialized
Nov 22 02:49:09 localhost kernel: ACPI: bus type USB registered
Nov 22 02:49:09 localhost kernel: usbcore: registered new interface driver usbfs
Nov 22 02:49:09 localhost kernel: usbcore: registered new interface driver hub
Nov 22 02:49:09 localhost kernel: usbcore: registered new device driver usb
Nov 22 02:49:09 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 22 02:49:09 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 22 02:49:09 localhost kernel: PTP clock support registered
Nov 22 02:49:09 localhost kernel: EDAC MC: Ver: 3.0.0
Nov 22 02:49:09 localhost kernel: NetLabel: Initializing
Nov 22 02:49:09 localhost kernel: NetLabel:  domain hash size = 128
Nov 22 02:49:09 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 22 02:49:09 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Nov 22 02:49:09 localhost kernel: PCI: Using ACPI for IRQ routing
Nov 22 02:49:09 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 22 02:49:09 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 22 02:49:09 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Nov 22 02:49:09 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 22 02:49:09 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 22 02:49:09 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 22 02:49:09 localhost kernel: vgaarb: loaded
Nov 22 02:49:09 localhost kernel: clocksource: Switched to clocksource kvm-clock
Nov 22 02:49:09 localhost kernel: VFS: Disk quotas dquot_6.6.0
Nov 22 02:49:09 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 22 02:49:09 localhost kernel: pnp: PnP ACPI init
Nov 22 02:49:09 localhost kernel: pnp 00:03: [dma 2]
Nov 22 02:49:09 localhost kernel: pnp: PnP ACPI: found 5 devices
Nov 22 02:49:09 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 22 02:49:09 localhost kernel: NET: Registered PF_INET protocol family
Nov 22 02:49:09 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 22 02:49:09 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 22 02:49:09 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 22 02:49:09 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 22 02:49:09 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 22 02:49:09 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 22 02:49:09 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 22 02:49:09 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 22 02:49:09 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 22 02:49:09 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 22 02:49:09 localhost kernel: NET: Registered PF_XDP protocol family
Nov 22 02:49:09 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 22 02:49:09 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 22 02:49:09 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 22 02:49:09 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 22 02:49:09 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 22 02:49:09 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 22 02:49:09 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 22 02:49:09 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 90966 usecs
Nov 22 02:49:09 localhost kernel: PCI: CLS 0 bytes, default 64
Nov 22 02:49:09 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 22 02:49:09 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 22 02:49:09 localhost kernel: ACPI: bus type thunderbolt registered
Nov 22 02:49:09 localhost kernel: Trying to unpack rootfs image as initramfs...
Nov 22 02:49:09 localhost kernel: Initialise system trusted keyrings
Nov 22 02:49:09 localhost kernel: Key type blacklist registered
Nov 22 02:49:09 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 22 02:49:09 localhost kernel: zbud: loaded
Nov 22 02:49:09 localhost kernel: integrity: Platform Keyring initialized
Nov 22 02:49:09 localhost kernel: integrity: Machine keyring initialized
Nov 22 02:49:09 localhost kernel: Freeing initrd memory: 85868K
Nov 22 02:49:09 localhost kernel: NET: Registered PF_ALG protocol family
Nov 22 02:49:09 localhost kernel: xor: automatically using best checksumming function   avx       
Nov 22 02:49:09 localhost kernel: Key type asymmetric registered
Nov 22 02:49:09 localhost kernel: Asymmetric key parser 'x509' registered
Nov 22 02:49:09 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 22 02:49:09 localhost kernel: io scheduler mq-deadline registered
Nov 22 02:49:09 localhost kernel: io scheduler kyber registered
Nov 22 02:49:09 localhost kernel: io scheduler bfq registered
Nov 22 02:49:09 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 22 02:49:09 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 22 02:49:09 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 22 02:49:09 localhost kernel: ACPI: button: Power Button [PWRF]
Nov 22 02:49:09 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 22 02:49:09 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 22 02:49:09 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 22 02:49:09 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 22 02:49:09 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 22 02:49:09 localhost kernel: Non-volatile memory driver v1.3
Nov 22 02:49:09 localhost kernel: rdac: device handler registered
Nov 22 02:49:09 localhost kernel: hp_sw: device handler registered
Nov 22 02:49:09 localhost kernel: emc: device handler registered
Nov 22 02:49:09 localhost kernel: alua: device handler registered
Nov 22 02:49:09 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 22 02:49:09 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 22 02:49:09 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 22 02:49:09 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 22 02:49:09 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 22 02:49:09 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 22 02:49:09 localhost kernel: usb usb1: Product: UHCI Host Controller
Nov 22 02:49:09 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-639.el9.x86_64 uhci_hcd
Nov 22 02:49:09 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 22 02:49:09 localhost kernel: hub 1-0:1.0: USB hub found
Nov 22 02:49:09 localhost kernel: hub 1-0:1.0: 2 ports detected
Nov 22 02:49:09 localhost kernel: usbcore: registered new interface driver usbserial_generic
Nov 22 02:49:09 localhost kernel: usbserial: USB Serial support registered for generic
Nov 22 02:49:09 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 22 02:49:09 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 22 02:49:09 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 22 02:49:09 localhost kernel: mousedev: PS/2 mouse device common for all mice
Nov 22 02:49:09 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 22 02:49:09 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 22 02:49:09 localhost kernel: rtc_cmos 00:04: registered as rtc0
Nov 22 02:49:09 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-11-22T02:49:08 UTC (1763779748)
Nov 22 02:49:09 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 22 02:49:09 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 22 02:49:09 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 22 02:49:09 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 22 02:49:09 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 22 02:49:09 localhost kernel: usbcore: registered new interface driver usbhid
Nov 22 02:49:09 localhost kernel: usbhid: USB HID core driver
Nov 22 02:49:09 localhost kernel: drop_monitor: Initializing network drop monitor service
Nov 22 02:49:09 localhost kernel: Initializing XFRM netlink socket
Nov 22 02:49:09 localhost kernel: NET: Registered PF_INET6 protocol family
Nov 22 02:49:09 localhost kernel: Segment Routing with IPv6
Nov 22 02:49:09 localhost kernel: NET: Registered PF_PACKET protocol family
Nov 22 02:49:09 localhost kernel: mpls_gso: MPLS GSO support
Nov 22 02:49:09 localhost kernel: IPI shorthand broadcast: enabled
Nov 22 02:49:09 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Nov 22 02:49:09 localhost kernel: AES CTR mode by8 optimization enabled
Nov 22 02:49:09 localhost kernel: sched_clock: Marking stable (1209009941, 148697456)->(1465180177, -107472780)
Nov 22 02:49:09 localhost kernel: registered taskstats version 1
Nov 22 02:49:09 localhost kernel: Loading compiled-in X.509 certificates
Nov 22 02:49:09 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 22 02:49:09 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 22 02:49:09 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 22 02:49:09 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 22 02:49:09 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 22 02:49:09 localhost kernel: Demotion targets for Node 0: null
Nov 22 02:49:09 localhost kernel: page_owner is disabled
Nov 22 02:49:09 localhost kernel: Key type .fscrypt registered
Nov 22 02:49:09 localhost kernel: Key type fscrypt-provisioning registered
Nov 22 02:49:09 localhost kernel: Key type big_key registered
Nov 22 02:49:09 localhost kernel: Key type encrypted registered
Nov 22 02:49:09 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 22 02:49:09 localhost kernel: Loading compiled-in module X.509 certificates
Nov 22 02:49:09 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: f7751431c703da8a75244ce96aad68601cf1c188'
Nov 22 02:49:09 localhost kernel: ima: Allocated hash algorithm: sha256
Nov 22 02:49:09 localhost kernel: ima: No architecture policies found
Nov 22 02:49:09 localhost kernel: evm: Initialising EVM extended attributes:
Nov 22 02:49:09 localhost kernel: evm: security.selinux
Nov 22 02:49:09 localhost kernel: evm: security.SMACK64 (disabled)
Nov 22 02:49:09 localhost kernel: evm: security.SMACK64EXEC (disabled)
Nov 22 02:49:09 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 22 02:49:09 localhost kernel: evm: security.SMACK64MMAP (disabled)
Nov 22 02:49:09 localhost kernel: evm: security.apparmor (disabled)
Nov 22 02:49:09 localhost kernel: evm: security.ima
Nov 22 02:49:09 localhost kernel: evm: security.capability
Nov 22 02:49:09 localhost kernel: evm: HMAC attrs: 0x1
Nov 22 02:49:09 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 22 02:49:09 localhost kernel: Running certificate verification RSA selftest
Nov 22 02:49:09 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 22 02:49:09 localhost kernel: Running certificate verification ECDSA selftest
Nov 22 02:49:09 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 22 02:49:09 localhost kernel: clk: Disabling unused clocks
Nov 22 02:49:09 localhost kernel: Freeing unused decrypted memory: 2028K
Nov 22 02:49:09 localhost kernel: Freeing unused kernel image (initmem) memory: 4188K
Nov 22 02:49:09 localhost kernel: Write protecting the kernel read-only data: 30720k
Nov 22 02:49:09 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 22 02:49:09 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 22 02:49:09 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 22 02:49:09 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Nov 22 02:49:09 localhost kernel: usb 1-1: Manufacturer: QEMU
Nov 22 02:49:09 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 22 02:49:09 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 22 02:49:09 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 22 02:49:09 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 22 02:49:09 localhost kernel: Run /init as init process
Nov 22 02:49:09 localhost kernel:   with arguments:
Nov 22 02:49:09 localhost kernel:     /init
Nov 22 02:49:09 localhost kernel:   with environment:
Nov 22 02:49:09 localhost kernel:     HOME=/
Nov 22 02:49:09 localhost kernel:     TERM=linux
Nov 22 02:49:09 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64
Nov 22 02:49:09 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 22 02:49:09 localhost systemd[1]: Detected virtualization kvm.
Nov 22 02:49:09 localhost systemd[1]: Detected architecture x86-64.
Nov 22 02:49:09 localhost systemd[1]: Running in initrd.
Nov 22 02:49:09 localhost systemd[1]: No hostname configured, using default hostname.
Nov 22 02:49:09 localhost systemd[1]: Hostname set to <localhost>.
Nov 22 02:49:09 localhost systemd[1]: Initializing machine ID from VM UUID.
Nov 22 02:49:09 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Nov 22 02:49:09 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 22 02:49:09 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 22 02:49:09 localhost systemd[1]: Reached target Initrd /usr File System.
Nov 22 02:49:09 localhost systemd[1]: Reached target Local File Systems.
Nov 22 02:49:09 localhost systemd[1]: Reached target Path Units.
Nov 22 02:49:09 localhost systemd[1]: Reached target Slice Units.
Nov 22 02:49:09 localhost systemd[1]: Reached target Swaps.
Nov 22 02:49:09 localhost systemd[1]: Reached target Timer Units.
Nov 22 02:49:09 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 22 02:49:09 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Nov 22 02:49:09 localhost systemd[1]: Listening on Journal Socket.
Nov 22 02:49:09 localhost systemd[1]: Listening on udev Control Socket.
Nov 22 02:49:09 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 22 02:49:09 localhost systemd[1]: Reached target Socket Units.
Nov 22 02:49:09 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 22 02:49:09 localhost systemd[1]: Starting Journal Service...
Nov 22 02:49:09 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 22 02:49:09 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 22 02:49:09 localhost systemd[1]: Starting Create System Users...
Nov 22 02:49:09 localhost systemd[1]: Starting Setup Virtual Console...
Nov 22 02:49:09 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 22 02:49:09 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 22 02:49:09 localhost systemd-journald[305]: Journal started
Nov 22 02:49:09 localhost systemd-journald[305]: Runtime Journal (/run/log/journal/cc28b99bcca84899a39d03c6a80b1444) is 8.0M, max 153.6M, 145.6M free.
Nov 22 02:49:09 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Nov 22 02:49:09 localhost systemd[1]: Started Journal Service.
Nov 22 02:49:09 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Nov 22 02:49:09 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 22 02:49:09 localhost systemd[1]: Finished Create System Users.
Nov 22 02:49:09 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 22 02:49:09 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 22 02:49:09 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 22 02:49:09 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 22 02:49:09 localhost systemd[1]: Finished Setup Virtual Console.
Nov 22 02:49:09 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 22 02:49:09 localhost systemd[1]: Starting dracut cmdline hook...
Nov 22 02:49:09 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Nov 22 02:49:09 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-639.el9.x86_64 root=UUID=47e3724e-7a1b-439a-9543-b98c9a290709 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 22 02:49:09 localhost systemd[1]: Finished dracut cmdline hook.
Nov 22 02:49:09 localhost systemd[1]: Starting dracut pre-udev hook...
Nov 22 02:49:09 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 22 02:49:09 localhost kernel: device-mapper: uevent: version 1.0.3
Nov 22 02:49:09 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 22 02:49:09 localhost kernel: RPC: Registered named UNIX socket transport module.
Nov 22 02:49:09 localhost kernel: RPC: Registered udp transport module.
Nov 22 02:49:09 localhost kernel: RPC: Registered tcp transport module.
Nov 22 02:49:09 localhost kernel: RPC: Registered tcp-with-tls transport module.
Nov 22 02:49:09 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 22 02:49:09 localhost rpc.statd[442]: Version 2.5.4 starting
Nov 22 02:49:09 localhost rpc.statd[442]: Initializing NSM state
Nov 22 02:49:09 localhost rpc.idmapd[447]: Setting log level to 0
Nov 22 02:49:10 localhost systemd[1]: Finished dracut pre-udev hook.
Nov 22 02:49:10 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 22 02:49:10 localhost systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Nov 22 02:49:10 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 22 02:49:10 localhost systemd[1]: Starting dracut pre-trigger hook...
Nov 22 02:49:10 localhost systemd[1]: Finished dracut pre-trigger hook.
Nov 22 02:49:10 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 22 02:49:10 localhost systemd[1]: Created slice Slice /system/modprobe.
Nov 22 02:49:10 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 22 02:49:10 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 22 02:49:10 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 22 02:49:10 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 22 02:49:10 localhost systemd[1]: Mounting Kernel Configuration File System...
Nov 22 02:49:10 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 22 02:49:10 localhost systemd[1]: Reached target Network.
Nov 22 02:49:10 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 22 02:49:10 localhost systemd[1]: Starting dracut initqueue hook...
Nov 22 02:49:10 localhost systemd[1]: Mounted Kernel Configuration File System.
Nov 22 02:49:10 localhost systemd[1]: Reached target System Initialization.
Nov 22 02:49:10 localhost systemd[1]: Reached target Basic System.
Nov 22 02:49:10 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 22 02:49:10 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 22 02:49:10 localhost kernel:  vda: vda1
Nov 22 02:49:10 localhost kernel: libata version 3.00 loaded.
Nov 22 02:49:10 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Nov 22 02:49:10 localhost kernel: scsi host0: ata_piix
Nov 22 02:49:10 localhost kernel: scsi host1: ata_piix
Nov 22 02:49:10 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 22 02:49:10 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 22 02:49:10 localhost systemd[1]: Found device /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 22 02:49:10 localhost systemd[1]: Reached target Initrd Root Device.
Nov 22 02:49:10 localhost kernel: ata1: found unknown device (class 0)
Nov 22 02:49:10 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 22 02:49:10 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 22 02:49:10 localhost systemd-udevd[484]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 02:49:10 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 22 02:49:10 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 22 02:49:10 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 22 02:49:10 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Nov 22 02:49:10 localhost systemd[1]: Finished dracut initqueue hook.
Nov 22 02:49:10 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Nov 22 02:49:10 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Nov 22 02:49:10 localhost systemd[1]: Reached target Remote File Systems.
Nov 22 02:49:10 localhost systemd[1]: Starting dracut pre-mount hook...
Nov 22 02:49:10 localhost systemd[1]: Finished dracut pre-mount hook.
Nov 22 02:49:10 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709...
Nov 22 02:49:10 localhost systemd-fsck[552]: /usr/sbin/fsck.xfs: XFS file system.
Nov 22 02:49:10 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709.
Nov 22 02:49:10 localhost systemd[1]: Mounting /sysroot...
Nov 22 02:49:11 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 22 02:49:11 localhost kernel: XFS (vda1): Mounting V5 Filesystem 47e3724e-7a1b-439a-9543-b98c9a290709
Nov 22 02:49:11 localhost kernel: XFS (vda1): Ending clean mount
Nov 22 02:49:11 localhost systemd[1]: Mounted /sysroot.
Nov 22 02:49:11 localhost systemd[1]: Reached target Initrd Root File System.
Nov 22 02:49:11 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 22 02:49:11 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 22 02:49:11 localhost systemd[1]: Reached target Initrd File Systems.
Nov 22 02:49:11 localhost systemd[1]: Reached target Initrd Default Target.
Nov 22 02:49:11 localhost systemd[1]: Starting dracut mount hook...
Nov 22 02:49:11 localhost systemd[1]: Finished dracut mount hook.
Nov 22 02:49:11 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 22 02:49:11 localhost rpc.idmapd[447]: exiting on signal 15
Nov 22 02:49:11 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 22 02:49:11 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 22 02:49:11 localhost systemd[1]: Stopped target Network.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Timer Units.
Nov 22 02:49:11 localhost systemd[1]: dbus.socket: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 22 02:49:11 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Initrd Default Target.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Basic System.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Initrd Root Device.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Initrd /usr File System.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Path Units.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Remote File Systems.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Slice Units.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Socket Units.
Nov 22 02:49:11 localhost systemd[1]: Stopped target System Initialization.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Local File Systems.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Swaps.
Nov 22 02:49:11 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped dracut mount hook.
Nov 22 02:49:11 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped dracut pre-mount hook.
Nov 22 02:49:11 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Nov 22 02:49:11 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 22 02:49:11 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped dracut initqueue hook.
Nov 22 02:49:11 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped Apply Kernel Variables.
Nov 22 02:49:11 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Nov 22 02:49:11 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped Coldplug All udev Devices.
Nov 22 02:49:11 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped dracut pre-trigger hook.
Nov 22 02:49:11 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 22 02:49:11 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped Setup Virtual Console.
Nov 22 02:49:11 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 22 02:49:11 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Closed udev Control Socket.
Nov 22 02:49:11 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Closed udev Kernel Socket.
Nov 22 02:49:11 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped dracut pre-udev hook.
Nov 22 02:49:11 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped dracut cmdline hook.
Nov 22 02:49:11 localhost systemd[1]: Starting Cleanup udev Database...
Nov 22 02:49:11 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 22 02:49:11 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Nov 22 02:49:11 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Stopped Create System Users.
Nov 22 02:49:11 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 22 02:49:11 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 22 02:49:11 localhost systemd[1]: Finished Cleanup udev Database.
Nov 22 02:49:11 localhost systemd[1]: Reached target Switch Root.
Nov 22 02:49:11 localhost systemd[1]: Starting Switch Root...
Nov 22 02:49:11 localhost systemd[1]: Switching root.
Nov 22 02:49:11 localhost systemd-journald[305]: Journal stopped
Nov 22 02:49:12 localhost systemd-journald[305]: Received SIGTERM from PID 1 (systemd).
Nov 22 02:49:12 localhost kernel: audit: type=1404 audit(1763779751.776:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 22 02:49:12 localhost kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 02:49:12 localhost kernel: SELinux:  policy capability open_perms=1
Nov 22 02:49:12 localhost kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 02:49:12 localhost kernel: SELinux:  policy capability always_check_network=0
Nov 22 02:49:12 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 02:49:12 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 02:49:12 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 02:49:12 localhost kernel: audit: type=1403 audit(1763779751.958:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 22 02:49:12 localhost systemd[1]: Successfully loaded SELinux policy in 191.009ms.
Nov 22 02:49:12 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.068ms.
Nov 22 02:49:12 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 22 02:49:12 localhost systemd[1]: Detected virtualization kvm.
Nov 22 02:49:12 localhost systemd[1]: Detected architecture x86-64.
Nov 22 02:49:12 localhost systemd-rc-local-generator[636]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 02:49:12 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 22 02:49:12 localhost systemd[1]: Stopped Switch Root.
Nov 22 02:49:12 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 22 02:49:12 localhost systemd[1]: Created slice Slice /system/getty.
Nov 22 02:49:12 localhost systemd[1]: Created slice Slice /system/serial-getty.
Nov 22 02:49:12 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Nov 22 02:49:12 localhost systemd[1]: Created slice User and Session Slice.
Nov 22 02:49:12 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Nov 22 02:49:12 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Nov 22 02:49:12 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 22 02:49:12 localhost systemd[1]: Reached target Local Encrypted Volumes.
Nov 22 02:49:12 localhost systemd[1]: Stopped target Switch Root.
Nov 22 02:49:12 localhost systemd[1]: Stopped target Initrd File Systems.
Nov 22 02:49:12 localhost systemd[1]: Stopped target Initrd Root File System.
Nov 22 02:49:12 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Nov 22 02:49:12 localhost systemd[1]: Reached target Path Units.
Nov 22 02:49:12 localhost systemd[1]: Reached target rpc_pipefs.target.
Nov 22 02:49:12 localhost systemd[1]: Reached target Slice Units.
Nov 22 02:49:12 localhost systemd[1]: Reached target Swaps.
Nov 22 02:49:12 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Nov 22 02:49:12 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Nov 22 02:49:12 localhost systemd[1]: Reached target RPC Port Mapper.
Nov 22 02:49:12 localhost systemd[1]: Listening on Process Core Dump Socket.
Nov 22 02:49:12 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Nov 22 02:49:12 localhost systemd[1]: Listening on udev Control Socket.
Nov 22 02:49:12 localhost systemd[1]: Listening on udev Kernel Socket.
Nov 22 02:49:12 localhost systemd[1]: Mounting Huge Pages File System...
Nov 22 02:49:12 localhost systemd[1]: Mounting POSIX Message Queue File System...
Nov 22 02:49:12 localhost systemd[1]: Mounting Kernel Debug File System...
Nov 22 02:49:12 localhost systemd[1]: Mounting Kernel Trace File System...
Nov 22 02:49:12 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 22 02:49:12 localhost systemd[1]: Starting Create List of Static Device Nodes...
Nov 22 02:49:12 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 22 02:49:12 localhost systemd[1]: Starting Load Kernel Module drm...
Nov 22 02:49:12 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Nov 22 02:49:12 localhost systemd[1]: Starting Load Kernel Module fuse...
Nov 22 02:49:12 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 22 02:49:12 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 22 02:49:12 localhost systemd[1]: Stopped File System Check on Root Device.
Nov 22 02:49:12 localhost systemd[1]: Stopped Journal Service.
Nov 22 02:49:12 localhost systemd[1]: Starting Journal Service...
Nov 22 02:49:12 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 22 02:49:12 localhost systemd[1]: Starting Generate network units from Kernel command line...
Nov 22 02:49:12 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 22 02:49:12 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Nov 22 02:49:12 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 22 02:49:12 localhost systemd[1]: Starting Apply Kernel Variables...
Nov 22 02:49:12 localhost kernel: fuse: init (API version 7.37)
Nov 22 02:49:12 localhost systemd-journald[677]: Journal started
Nov 22 02:49:12 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 22 02:49:12 localhost systemd[1]: Queued start job for default target Multi-User System.
Nov 22 02:49:12 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 22 02:49:12 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 22 02:49:12 localhost systemd[1]: Starting Coldplug All udev Devices...
Nov 22 02:49:12 localhost systemd[1]: Started Journal Service.
Nov 22 02:49:12 localhost systemd[1]: Mounted Huge Pages File System.
Nov 22 02:49:12 localhost systemd[1]: Mounted POSIX Message Queue File System.
Nov 22 02:49:12 localhost systemd[1]: Mounted Kernel Debug File System.
Nov 22 02:49:12 localhost systemd[1]: Mounted Kernel Trace File System.
Nov 22 02:49:12 localhost systemd[1]: Finished Create List of Static Device Nodes.
Nov 22 02:49:12 localhost kernel: ACPI: bus type drm_connector registered
Nov 22 02:49:12 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 22 02:49:12 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 22 02:49:12 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 22 02:49:12 localhost systemd[1]: Finished Load Kernel Module drm.
Nov 22 02:49:12 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 22 02:49:12 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 22 02:49:12 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 22 02:49:12 localhost systemd[1]: Finished Load Kernel Module fuse.
Nov 22 02:49:12 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 22 02:49:12 localhost systemd[1]: Finished Generate network units from Kernel command line.
Nov 22 02:49:12 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 22 02:49:12 localhost systemd[1]: Finished Apply Kernel Variables.
Nov 22 02:49:12 localhost systemd[1]: Mounting FUSE Control File System...
Nov 22 02:49:12 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 22 02:49:12 localhost systemd[1]: Starting Rebuild Hardware Database...
Nov 22 02:49:12 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 22 02:49:12 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 22 02:49:12 localhost systemd[1]: Starting Load/Save OS Random Seed...
Nov 22 02:49:12 localhost systemd[1]: Starting Create System Users...
Nov 22 02:49:12 localhost systemd-journald[677]: Runtime Journal (/run/log/journal/fee38d0f94bf6f4b17ec77ba536bd6ab) is 8.0M, max 153.6M, 145.6M free.
Nov 22 02:49:12 localhost systemd-journald[677]: Received client request to flush runtime journal.
Nov 22 02:49:12 localhost systemd[1]: Mounted FUSE Control File System.
Nov 22 02:49:12 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 22 02:49:12 localhost systemd[1]: Finished Load/Save OS Random Seed.
Nov 22 02:49:12 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 22 02:49:12 localhost systemd[1]: Finished Coldplug All udev Devices.
Nov 22 02:49:12 localhost systemd[1]: Finished Create System Users.
Nov 22 02:49:12 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 22 02:49:12 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 22 02:49:12 localhost systemd[1]: Reached target Preparation for Local File Systems.
Nov 22 02:49:12 localhost systemd[1]: Reached target Local File Systems.
Nov 22 02:49:12 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 22 02:49:12 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 22 02:49:12 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 22 02:49:12 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 22 02:49:12 localhost systemd[1]: Starting Automatic Boot Loader Update...
Nov 22 02:49:12 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 22 02:49:12 localhost systemd[1]: Starting Create Volatile Files and Directories...
Nov 22 02:49:12 localhost bootctl[698]: Couldn't find EFI system partition, skipping.
Nov 22 02:49:12 localhost systemd[1]: Finished Automatic Boot Loader Update.
Nov 22 02:49:12 localhost systemd[1]: Finished Create Volatile Files and Directories.
Nov 22 02:49:12 localhost systemd[1]: Starting Security Auditing Service...
Nov 22 02:49:12 localhost systemd[1]: Starting RPC Bind...
Nov 22 02:49:12 localhost systemd[1]: Starting Rebuild Journal Catalog...
Nov 22 02:49:12 localhost auditd[704]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 22 02:49:12 localhost auditd[704]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 22 02:49:12 localhost systemd[1]: Finished Rebuild Journal Catalog.
Nov 22 02:49:12 localhost systemd[1]: Started RPC Bind.
Nov 22 02:49:13 localhost augenrules[709]: /sbin/augenrules: No change
Nov 22 02:49:13 localhost augenrules[724]: No rules
Nov 22 02:49:13 localhost augenrules[724]: enabled 1
Nov 22 02:49:13 localhost augenrules[724]: failure 1
Nov 22 02:49:13 localhost augenrules[724]: pid 704
Nov 22 02:49:13 localhost augenrules[724]: rate_limit 0
Nov 22 02:49:13 localhost augenrules[724]: backlog_limit 8192
Nov 22 02:49:13 localhost augenrules[724]: lost 0
Nov 22 02:49:13 localhost augenrules[724]: backlog 0
Nov 22 02:49:13 localhost augenrules[724]: backlog_wait_time 60000
Nov 22 02:49:13 localhost augenrules[724]: backlog_wait_time_actual 0
Nov 22 02:49:13 localhost augenrules[724]: enabled 1
Nov 22 02:49:13 localhost augenrules[724]: failure 1
Nov 22 02:49:13 localhost augenrules[724]: pid 704
Nov 22 02:49:13 localhost augenrules[724]: rate_limit 0
Nov 22 02:49:13 localhost augenrules[724]: backlog_limit 8192
Nov 22 02:49:13 localhost augenrules[724]: lost 0
Nov 22 02:49:13 localhost augenrules[724]: backlog 0
Nov 22 02:49:13 localhost augenrules[724]: backlog_wait_time 60000
Nov 22 02:49:13 localhost augenrules[724]: backlog_wait_time_actual 0
Nov 22 02:49:13 localhost augenrules[724]: enabled 1
Nov 22 02:49:13 localhost augenrules[724]: failure 1
Nov 22 02:49:13 localhost augenrules[724]: pid 704
Nov 22 02:49:13 localhost augenrules[724]: rate_limit 0
Nov 22 02:49:13 localhost augenrules[724]: backlog_limit 8192
Nov 22 02:49:13 localhost augenrules[724]: lost 0
Nov 22 02:49:13 localhost augenrules[724]: backlog 2
Nov 22 02:49:13 localhost augenrules[724]: backlog_wait_time 60000
Nov 22 02:49:13 localhost augenrules[724]: backlog_wait_time_actual 0
Nov 22 02:49:13 localhost systemd[1]: Started Security Auditing Service.
Nov 22 02:49:13 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 22 02:49:13 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 22 02:49:13 localhost systemd[1]: Finished Rebuild Hardware Database.
Nov 22 02:49:13 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 22 02:49:13 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 22 02:49:13 localhost systemd[1]: Starting Update is Completed...
Nov 22 02:49:13 localhost systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Nov 22 02:49:13 localhost systemd[1]: Finished Update is Completed.
Nov 22 02:49:13 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 22 02:49:13 localhost systemd[1]: Reached target System Initialization.
Nov 22 02:49:13 localhost systemd[1]: Started dnf makecache --timer.
Nov 22 02:49:13 localhost systemd[1]: Started Daily rotation of log files.
Nov 22 02:49:13 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 22 02:49:13 localhost systemd[1]: Reached target Timer Units.
Nov 22 02:49:13 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 22 02:49:13 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 22 02:49:13 localhost systemd[1]: Reached target Socket Units.
Nov 22 02:49:13 localhost systemd[1]: Starting D-Bus System Message Bus...
Nov 22 02:49:13 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 22 02:49:13 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 22 02:49:13 localhost systemd[1]: Starting Load Kernel Module configfs...
Nov 22 02:49:13 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 22 02:49:13 localhost systemd[1]: Finished Load Kernel Module configfs.
Nov 22 02:49:13 localhost systemd-udevd[742]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 02:49:13 localhost systemd[1]: Started D-Bus System Message Bus.
Nov 22 02:49:13 localhost systemd[1]: Reached target Basic System.
Nov 22 02:49:13 localhost systemd[1]: Starting NTP client/server...
Nov 22 02:49:13 localhost dbus-broker-lau[760]: Ready
Nov 22 02:49:13 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 22 02:49:13 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 22 02:49:13 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 22 02:49:13 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 22 02:49:13 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 22 02:49:13 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 22 02:49:13 localhost chronyd[786]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 22 02:49:13 localhost chronyd[786]: Loaded 0 symmetric keys
Nov 22 02:49:13 localhost chronyd[786]: Using right/UTC timezone to obtain leap second data
Nov 22 02:49:13 localhost chronyd[786]: Loaded seccomp filter (level 2)
Nov 22 02:49:13 localhost systemd[1]: Starting IPv4 firewall with iptables...
Nov 22 02:49:13 localhost systemd[1]: Started irqbalance daemon.
Nov 22 02:49:13 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 22 02:49:13 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 02:49:13 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 02:49:13 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 02:49:13 localhost systemd[1]: Reached target sshd-keygen.target.
Nov 22 02:49:13 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 22 02:49:13 localhost systemd[1]: Reached target User and Group Name Lookups.
Nov 22 02:49:13 localhost systemd[1]: Starting User Login Management...
Nov 22 02:49:13 localhost kernel: kvm_amd: TSC scaling supported
Nov 22 02:49:13 localhost kernel: kvm_amd: Nested Virtualization enabled
Nov 22 02:49:13 localhost kernel: kvm_amd: Nested Paging enabled
Nov 22 02:49:13 localhost kernel: kvm_amd: LBR virtualization supported
Nov 22 02:49:13 localhost systemd[1]: Started NTP client/server.
Nov 22 02:49:13 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 22 02:49:13 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 22 02:49:13 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 22 02:49:13 localhost kernel: Console: switching to colour dummy device 80x25
Nov 22 02:49:13 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 22 02:49:13 localhost kernel: [drm] features: -context_init
Nov 22 02:49:13 localhost kernel: [drm] number of scanouts: 1
Nov 22 02:49:13 localhost kernel: [drm] number of cap sets: 0
Nov 22 02:49:13 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 22 02:49:13 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 22 02:49:13 localhost kernel: Console: switching to colour frame buffer device 128x48
Nov 22 02:49:13 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 22 02:49:13 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 22 02:49:13 localhost systemd-logind[799]: New seat seat0.
Nov 22 02:49:13 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 22 02:49:13 localhost systemd-logind[799]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 22 02:49:13 localhost systemd-logind[799]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 22 02:49:13 localhost systemd[1]: Started User Login Management.
Nov 22 02:49:13 localhost iptables.init[785]: iptables: Applying firewall rules: [  OK  ]
Nov 22 02:49:13 localhost systemd[1]: Finished IPv4 firewall with iptables.
Nov 22 02:49:14 localhost cloud-init[841]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 22 Nov 2025 02:49:14 +0000. Up 6.89 seconds.
Nov 22 02:49:14 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Nov 22 02:49:14 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Nov 22 02:49:14 localhost systemd[1]: run-cloud\x2dinit-tmp-tmps3w2_suj.mount: Deactivated successfully.
Nov 22 02:49:14 localhost systemd[1]: Starting Hostname Service...
Nov 22 02:49:14 localhost systemd[1]: Started Hostname Service.
Nov 22 02:49:14 np0005531666.novalocal systemd-hostnamed[855]: Hostname set to <np0005531666.novalocal> (static)
Nov 22 02:49:14 np0005531666.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 22 02:49:14 np0005531666.novalocal systemd[1]: Reached target Preparation for Network.
Nov 22 02:49:14 np0005531666.novalocal systemd[1]: Starting Network Manager...
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9255] NetworkManager (version 1.54.1-1.el9) is starting... (boot:bbdf02b3-deb9-47de-b411-3c25d6aa93d1)
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9261] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9391] manager[0x5582b712b080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9438] hostname: hostname: using hostnamed
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9439] hostname: static hostname changed from (none) to "np0005531666.novalocal"
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9442] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9609] manager[0x5582b712b080]: rfkill: Wi-Fi hardware radio set enabled
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9610] manager[0x5582b712b080]: rfkill: WWAN hardware radio set enabled
Nov 22 02:49:14 np0005531666.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9681] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9682] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9682] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9683] manager: Networking is enabled by state file
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9685] settings: Loaded settings plugin: keyfile (internal)
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9721] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9745] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9767] dhcp: init: Using DHCP client 'internal'
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9769] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9782] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9802] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9809] device (lo): Activation: starting connection 'lo' (704fb092-bceb-43d7-a199-2a71be4392ac)
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9817] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9820] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9843] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9847] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9849] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9851] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9852] device (eth0): carrier: link connected
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9855] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9861] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9866] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9869] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9870] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9872] manager: NetworkManager state is now CONNECTING
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9873] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9880] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 02:49:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779754.9882] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 02:49:14 np0005531666.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 02:49:14 np0005531666.novalocal systemd[1]: Started Network Manager.
Nov 22 02:49:15 np0005531666.novalocal systemd[1]: Reached target Network.
Nov 22 02:49:15 np0005531666.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 22 02:49:15 np0005531666.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 22 02:49:15 np0005531666.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 02:49:15 np0005531666.novalocal NetworkManager[859]: <info>  [1763779755.0182] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 22 02:49:15 np0005531666.novalocal NetworkManager[859]: <info>  [1763779755.0184] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 22 02:49:15 np0005531666.novalocal NetworkManager[859]: <info>  [1763779755.0190] device (lo): Activation: successful, device activated.
Nov 22 02:49:15 np0005531666.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Nov 22 02:49:15 np0005531666.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 22 02:49:15 np0005531666.novalocal systemd[1]: Reached target NFS client services.
Nov 22 02:49:15 np0005531666.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Nov 22 02:49:15 np0005531666.novalocal systemd[1]: Reached target Remote File Systems.
Nov 22 02:49:15 np0005531666.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 22 02:49:16 np0005531666.novalocal NetworkManager[859]: <info>  [1763779756.4450] dhcp4 (eth0): state changed new lease, address=38.102.83.177
Nov 22 02:49:16 np0005531666.novalocal NetworkManager[859]: <info>  [1763779756.4474] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 22 02:49:16 np0005531666.novalocal NetworkManager[859]: <info>  [1763779756.4517] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 02:49:16 np0005531666.novalocal NetworkManager[859]: <info>  [1763779756.4560] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 02:49:16 np0005531666.novalocal NetworkManager[859]: <info>  [1763779756.4564] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 02:49:16 np0005531666.novalocal NetworkManager[859]: <info>  [1763779756.4571] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 02:49:16 np0005531666.novalocal NetworkManager[859]: <info>  [1763779756.4577] device (eth0): Activation: successful, device activated.
Nov 22 02:49:16 np0005531666.novalocal NetworkManager[859]: <info>  [1763779756.4586] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 22 02:49:16 np0005531666.novalocal NetworkManager[859]: <info>  [1763779756.4591] manager: startup complete
Nov 22 02:49:16 np0005531666.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 22 02:49:16 np0005531666.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 22 Nov 2025 02:49:16 +0000. Up 9.44 seconds.
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: |  eth0  | True |        38.102.83.177         | 255.255.255.0 | global | fa:16:3e:0d:e4:96 |
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: |  eth0  | True | fe80::f816:3eff:fe0d:e496/64 |       .       |  link  | fa:16:3e:0d:e4:96 |
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Nov 22 02:49:16 np0005531666.novalocal cloud-init[923]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 22 02:49:17 np0005531666.novalocal useradd[990]: new group: name=cloud-user, GID=1001
Nov 22 02:49:17 np0005531666.novalocal useradd[990]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Nov 22 02:49:17 np0005531666.novalocal useradd[990]: add 'cloud-user' to group 'adm'
Nov 22 02:49:17 np0005531666.novalocal useradd[990]: add 'cloud-user' to group 'systemd-journal'
Nov 22 02:49:17 np0005531666.novalocal useradd[990]: add 'cloud-user' to shadow group 'adm'
Nov 22 02:49:17 np0005531666.novalocal useradd[990]: add 'cloud-user' to shadow group 'systemd-journal'
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: Generating public/private rsa key pair.
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: The key fingerprint is:
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: SHA256:hQUR2FI4xwPwlBf5jnjGdlkmKDPzh6dp0Na1EyTKP4w root@np0005531666.novalocal
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: The key's randomart image is:
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: +---[RSA 3072]----+
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |    ..oO**.      |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |     o* Bo. .    |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |      .*.=.o     |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |      = +.o =    |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |       XSO * o   |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |      o E @ o    |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |       * * . .   |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |        +        |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |       .         |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: +----[SHA256]-----+
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: Generating public/private ecdsa key pair.
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: The key fingerprint is:
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: SHA256:JIVYweDzAKDVuXqob/NoYdxU6HlBGkI4RriCH4+aBy4 root@np0005531666.novalocal
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: The key's randomart image is:
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: +---[ECDSA 256]---+
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |+==.oO+o.        |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |=o +=o=.         |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |+o .=+...        |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |+ . =+.o         |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |.o O .. S        |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |. O +            |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |.* o             |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |E.=.             |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |.=oo.            |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: +----[SHA256]-----+
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: Generating public/private ed25519 key pair.
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: The key fingerprint is:
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: SHA256:m5mUKVvXsPI3zHudqTDllRG6tvuLBkxbiMbs6M6wMt8 root@np0005531666.novalocal
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: The key's randomart image is:
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: +--[ED25519 256]--+
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |               . |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |              . .|
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |        o o .. . |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |         B * .. o|
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |      . S = ++ o |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |       * X *+ o  |
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |     .o = .o*o .o|
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |   o  =.   .o++o.|
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: |    +o.E    o=oo.|
Nov 22 02:49:18 np0005531666.novalocal cloud-init[923]: +----[SHA256]-----+
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Reached target Cloud-config availability.
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Reached target Network is Online.
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Starting Crash recovery kernel arming...
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Starting System Logging Service...
Nov 22 02:49:18 np0005531666.novalocal sm-notify[1006]: Version 2.5.4 starting
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Starting OpenSSH server daemon...
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Starting Permit User Sessions...
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Started Notify NFS peers of a restart.
Nov 22 02:49:18 np0005531666.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Nov 22 02:49:18 np0005531666.novalocal sshd[1008]: Server listening on :: port 22.
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Started OpenSSH server daemon.
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Finished Permit User Sessions.
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Started Command Scheduler.
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Started Getty on tty1.
Nov 22 02:49:18 np0005531666.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Nov 22 02:49:18 np0005531666.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 22 02:49:18 np0005531666.novalocal crond[1011]: (CRON) STARTUP (1.5.7)
Nov 22 02:49:18 np0005531666.novalocal crond[1011]: (CRON) INFO (Syslog will be used instead of sendmail.)
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Started Serial Getty on ttyS0.
Nov 22 02:49:18 np0005531666.novalocal crond[1011]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 2% if used.)
Nov 22 02:49:18 np0005531666.novalocal crond[1011]: (CRON) INFO (running with inotify support)
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Reached target Login Prompts.
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Started System Logging Service.
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Reached target Multi-User System.
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 22 02:49:18 np0005531666.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 02:49:18 np0005531666.novalocal kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Nov 22 02:49:18 np0005531666.novalocal kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-639.el9.x86_64kdump.img
Nov 22 02:49:18 np0005531666.novalocal cloud-init[1110]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 22 Nov 2025 02:49:18 +0000. Up 11.48 seconds.
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Nov 22 02:49:18 np0005531666.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Nov 22 02:49:19 np0005531666.novalocal sshd-session[1184]: Connection closed by 38.102.83.114 port 57764 [preauth]
Nov 22 02:49:19 np0005531666.novalocal sshd-session[1207]: Unable to negotiate with 38.102.83.114 port 57768: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Nov 22 02:49:19 np0005531666.novalocal sshd-session[1225]: Unable to negotiate with 38.102.83.114 port 57782: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Nov 22 02:49:19 np0005531666.novalocal sshd-session[1234]: Unable to negotiate with 38.102.83.114 port 57786: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Nov 22 02:49:19 np0005531666.novalocal sshd-session[1240]: Connection reset by 38.102.83.114 port 57800 [preauth]
Nov 22 02:49:19 np0005531666.novalocal sshd-session[1254]: Connection reset by 38.102.83.114 port 57808 [preauth]
Nov 22 02:49:19 np0005531666.novalocal sshd-session[1265]: Unable to negotiate with 38.102.83.114 port 57812: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Nov 22 02:49:19 np0005531666.novalocal sshd-session[1219]: Connection closed by 38.102.83.114 port 57778 [preauth]
Nov 22 02:49:19 np0005531666.novalocal sshd-session[1272]: Unable to negotiate with 38.102.83.114 port 57824: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Nov 22 02:49:19 np0005531666.novalocal dracut[1287]: dracut-057-102.git20250818.el9
Nov 22 02:49:19 np0005531666.novalocal cloud-init[1302]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 22 Nov 2025 02:49:19 +0000. Up 11.93 seconds.
Nov 22 02:49:19 np0005531666.novalocal cloud-init[1305]: #############################################################
Nov 22 02:49:19 np0005531666.novalocal cloud-init[1306]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 22 02:49:19 np0005531666.novalocal cloud-init[1308]: 256 SHA256:JIVYweDzAKDVuXqob/NoYdxU6HlBGkI4RriCH4+aBy4 root@np0005531666.novalocal (ECDSA)
Nov 22 02:49:19 np0005531666.novalocal cloud-init[1310]: 256 SHA256:m5mUKVvXsPI3zHudqTDllRG6tvuLBkxbiMbs6M6wMt8 root@np0005531666.novalocal (ED25519)
Nov 22 02:49:19 np0005531666.novalocal cloud-init[1312]: 3072 SHA256:hQUR2FI4xwPwlBf5jnjGdlkmKDPzh6dp0Na1EyTKP4w root@np0005531666.novalocal (RSA)
Nov 22 02:49:19 np0005531666.novalocal cloud-init[1313]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 22 02:49:19 np0005531666.novalocal cloud-init[1314]: #############################################################
Nov 22 02:49:19 np0005531666.novalocal dracut[1289]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/47e3724e-7a1b-439a-9543-b98c9a290709 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-639.el9.x86_64kdump.img 5.14.0-639.el9.x86_64
Nov 22 02:49:19 np0005531666.novalocal cloud-init[1302]: Cloud-init v. 24.4-7.el9 finished at Sat, 22 Nov 2025 02:49:19 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.11 seconds
Nov 22 02:49:19 np0005531666.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Nov 22 02:49:19 np0005531666.novalocal systemd[1]: Reached target Cloud-init target.
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: Module 'resume' will not be installed, because it's in the list to be omitted!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: memstrack is not available
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 22 02:49:20 np0005531666.novalocal dracut[1289]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 22 02:49:21 np0005531666.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 22 02:49:21 np0005531666.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 22 02:49:21 np0005531666.novalocal dracut[1289]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 22 02:49:21 np0005531666.novalocal dracut[1289]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 22 02:49:21 np0005531666.novalocal dracut[1289]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 22 02:49:21 np0005531666.novalocal dracut[1289]: memstrack is not available
Nov 22 02:49:21 np0005531666.novalocal dracut[1289]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 22 02:49:21 np0005531666.novalocal dracut[1289]: *** Including module: systemd ***
Nov 22 02:49:21 np0005531666.novalocal dracut[1289]: *** Including module: fips ***
Nov 22 02:49:21 np0005531666.novalocal dracut[1289]: *** Including module: systemd-initrd ***
Nov 22 02:49:21 np0005531666.novalocal dracut[1289]: *** Including module: i18n ***
Nov 22 02:49:21 np0005531666.novalocal dracut[1289]: *** Including module: drm ***
Nov 22 02:49:22 np0005531666.novalocal dracut[1289]: *** Including module: prefixdevname ***
Nov 22 02:49:22 np0005531666.novalocal dracut[1289]: *** Including module: kernel-modules ***
Nov 22 02:49:22 np0005531666.novalocal kernel: block vda: the capability attribute has been deprecated.
Nov 22 02:49:22 np0005531666.novalocal chronyd[786]: Selected source 174.138.193.90 (2.centos.pool.ntp.org)
Nov 22 02:49:22 np0005531666.novalocal chronyd[786]: System clock TAI offset set to 37 seconds
Nov 22 02:49:22 np0005531666.novalocal dracut[1289]: *** Including module: kernel-modules-extra ***
Nov 22 02:49:22 np0005531666.novalocal dracut[1289]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Nov 22 02:49:22 np0005531666.novalocal dracut[1289]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Nov 22 02:49:22 np0005531666.novalocal dracut[1289]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Nov 22 02:49:22 np0005531666.novalocal dracut[1289]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Nov 22 02:49:22 np0005531666.novalocal dracut[1289]: *** Including module: qemu ***
Nov 22 02:49:22 np0005531666.novalocal dracut[1289]: *** Including module: fstab-sys ***
Nov 22 02:49:22 np0005531666.novalocal dracut[1289]: *** Including module: rootfs-block ***
Nov 22 02:49:22 np0005531666.novalocal dracut[1289]: *** Including module: terminfo ***
Nov 22 02:49:23 np0005531666.novalocal dracut[1289]: *** Including module: udev-rules ***
Nov 22 02:49:23 np0005531666.novalocal dracut[1289]: Skipping udev rule: 91-permissions.rules
Nov 22 02:49:23 np0005531666.novalocal dracut[1289]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 22 02:49:23 np0005531666.novalocal dracut[1289]: *** Including module: virtiofs ***
Nov 22 02:49:23 np0005531666.novalocal dracut[1289]: *** Including module: dracut-systemd ***
Nov 22 02:49:23 np0005531666.novalocal dracut[1289]: *** Including module: usrmount ***
Nov 22 02:49:23 np0005531666.novalocal dracut[1289]: *** Including module: base ***
Nov 22 02:49:23 np0005531666.novalocal dracut[1289]: *** Including module: fs-lib ***
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]: *** Including module: kdumpbase ***
Nov 22 02:49:24 np0005531666.novalocal irqbalance[793]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 22 02:49:24 np0005531666.novalocal irqbalance[793]: IRQ 25 affinity is now unmanaged
Nov 22 02:49:24 np0005531666.novalocal irqbalance[793]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 22 02:49:24 np0005531666.novalocal irqbalance[793]: IRQ 31 affinity is now unmanaged
Nov 22 02:49:24 np0005531666.novalocal irqbalance[793]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 22 02:49:24 np0005531666.novalocal irqbalance[793]: IRQ 28 affinity is now unmanaged
Nov 22 02:49:24 np0005531666.novalocal irqbalance[793]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 22 02:49:24 np0005531666.novalocal irqbalance[793]: IRQ 32 affinity is now unmanaged
Nov 22 02:49:24 np0005531666.novalocal irqbalance[793]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 22 02:49:24 np0005531666.novalocal irqbalance[793]: IRQ 30 affinity is now unmanaged
Nov 22 02:49:24 np0005531666.novalocal irqbalance[793]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 22 02:49:24 np0005531666.novalocal irqbalance[793]: IRQ 29 affinity is now unmanaged
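
irqbalance emits the message pairs above when a write to /proc/irq/N/smp_affinity is rejected by the kernel, after which it marks the IRQ unmanaged; on KVM guests this is typical for virtio interrupts whose affinity the kernel controls itself. A hedged sketch of the same probe (IRQ numbers from the log; requires root, and the exact errno can vary by kernel):

    def affinity_is_writable(irq: int) -> bool:
        """Try to rewrite an IRQ's current affinity mask unchanged."""
        path = f"/proc/irq/{irq}/smp_affinity"
        try:
            with open(path) as f:
                mask = f.read().strip()
            with open(path, "w") as f:
                f.write(mask)
            return True
        except OSError:
            return False  # the "Operation not permitted" case logged above

    for irq in (25, 28, 29, 30, 31, 32):  # IRQs from the log above
        print(irq, affinity_is_writable(irq))
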
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:   microcode_ctl module: mangling fw_dir
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: configuration "intel" is ignored
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 22 02:49:24 np0005531666.novalocal dracut[1289]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 22 02:49:25 np0005531666.novalocal dracut[1289]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 22 02:49:25 np0005531666.novalocal dracut[1289]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 22 02:49:25 np0005531666.novalocal dracut[1289]: *** Including module: openssl ***
Nov 22 02:49:25 np0005531666.novalocal dracut[1289]: *** Including module: shutdown ***
Nov 22 02:49:25 np0005531666.novalocal dracut[1289]: *** Including module: squash ***
Nov 22 02:49:25 np0005531666.novalocal dracut[1289]: *** Including modules done ***
Nov 22 02:49:25 np0005531666.novalocal dracut[1289]: *** Installing kernel module dependencies ***
Nov 22 02:49:26 np0005531666.novalocal dracut[1289]: *** Installing kernel module dependencies done ***
Nov 22 02:49:26 np0005531666.novalocal dracut[1289]: *** Resolving executable dependencies ***
Nov 22 02:49:26 np0005531666.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 02:49:28 np0005531666.novalocal dracut[1289]: *** Resolving executable dependencies done ***
Nov 22 02:49:28 np0005531666.novalocal dracut[1289]: *** Generating early-microcode cpio image ***
Nov 22 02:49:28 np0005531666.novalocal dracut[1289]: *** Store current command line parameters ***
Nov 22 02:49:28 np0005531666.novalocal dracut[1289]: Stored kernel commandline:
Nov 22 02:49:28 np0005531666.novalocal dracut[1289]: No dracut internal kernel commandline stored in the initramfs
Nov 22 02:49:28 np0005531666.novalocal dracut[1289]: *** Install squash loader ***
Nov 22 02:49:29 np0005531666.novalocal dracut[1289]: *** Squashing the files inside the initramfs ***
Nov 22 02:49:30 np0005531666.novalocal dracut[1289]: *** Squashing the files inside the initramfs done ***
Nov 22 02:49:30 np0005531666.novalocal dracut[1289]: *** Creating image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' ***
Nov 22 02:49:30 np0005531666.novalocal dracut[1289]: *** Hardlinking files ***
Nov 22 02:49:30 np0005531666.novalocal dracut[1289]: Mode:           real
Nov 22 02:49:30 np0005531666.novalocal dracut[1289]: Files:          50
Nov 22 02:49:30 np0005531666.novalocal dracut[1289]: Linked:         0 files
Nov 22 02:49:30 np0005531666.novalocal dracut[1289]: Compared:       0 xattrs
Nov 22 02:49:30 np0005531666.novalocal dracut[1289]: Compared:       0 files
Nov 22 02:49:30 np0005531666.novalocal dracut[1289]: Saved:          0 B
Nov 22 02:49:30 np0005531666.novalocal dracut[1289]: Duration:       0.000517 seconds
Nov 22 02:49:30 np0005531666.novalocal dracut[1289]: *** Hardlinking files done ***
Nov 22 02:49:31 np0005531666.novalocal dracut[1289]: *** Creating initramfs image file '/boot/initramfs-5.14.0-639.el9.x86_64kdump.img' done ***
Nov 22 02:49:31 np0005531666.novalocal kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Nov 22 02:49:31 np0005531666.novalocal kdumpctl[1015]: kdump: Starting kdump: [OK]
Nov 22 02:49:31 np0005531666.novalocal systemd[1]: Finished Crash recovery kernel arming.
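
At this point kdumpctl has built /boot/initramfs-5.14.0-639.el9.x86_64kdump.img and loaded the crash kernel via kexec. One way to confirm the armed state afterwards (a sketch: /sys/kernel/kexec_crash_loaded reads 1 once a crash kernel is loaded, and kdumpctl status is the RHEL-supplied check):

    import subprocess

    # 1 here corresponds to "kexec: loaded kdump kernel" above
    with open("/sys/kernel/kexec_crash_loaded") as f:
        print("crash kernel loaded:", f.read().strip())

    # exits 0 and reports the service state when kdump is operational
    subprocess.run(["kdumpctl", "status"], check=False)
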
Nov 22 02:49:31 np0005531666.novalocal systemd[1]: Startup finished in 1.570s (kernel) + 2.837s (initrd) + 19.883s (userspace) = 24.291s.
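
The three phases in the line above are printed rounded to milliseconds, which is most likely why they do not quite sum to the stated total; a quick check:

    kernel, initrd, userspace = 1.570, 2.837, 19.883
    print(f"{kernel + initrd + userspace:.3f}s")
    # 24.290s vs. the logged 24.291s: a 1 ms gap consistent with
    # each phase being rounded independently of the total
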
Nov 22 02:49:40 np0005531666.novalocal sshd-session[4297]: Accepted publickey for zuul from 38.102.83.114 port 40478 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Nov 22 02:49:40 np0005531666.novalocal systemd[1]: Created slice User Slice of UID 1000.
Nov 22 02:49:40 np0005531666.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 22 02:49:40 np0005531666.novalocal systemd-logind[799]: New session 1 of user zuul.
Nov 22 02:49:40 np0005531666.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 22 02:49:40 np0005531666.novalocal systemd[1]: Starting User Manager for UID 1000...
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Queued start job for default target Main User Target.
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Created slice User Application Slice.
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Reached target Paths.
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Reached target Timers.
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Starting D-Bus User Message Bus Socket...
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Starting Create User's Volatile Files and Directories...
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Listening on D-Bus User Message Bus Socket.
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Reached target Sockets.
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Finished Create User's Volatile Files and Directories.
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Reached target Basic System.
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Reached target Main User Target.
Nov 22 02:49:40 np0005531666.novalocal systemd[4301]: Startup finished in 160ms.
Nov 22 02:49:40 np0005531666.novalocal systemd[1]: Started User Manager for UID 1000.
Nov 22 02:49:40 np0005531666.novalocal systemd[1]: Started Session 1 of User zuul.
Nov 22 02:49:40 np0005531666.novalocal sshd-session[4297]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 02:49:41 np0005531666.novalocal python3[4383]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 02:49:43 np0005531666.novalocal python3[4411]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 02:49:44 np0005531666.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 02:49:50 np0005531666.novalocal python3[4471]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 02:49:50 np0005531666.novalocal python3[4511]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 22 02:49:52 np0005531666.novalocal python3[4537]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCyC9FkOek1uBQECyrQZI+UxKm4yx1Kr0ugrhnZY+5dHzAtdra3wKYwVSiCZF5AX6amXhWupJauOAoFCwxjd/zfTT/524/Css09Cr7OSkvjc5XyWqVZTor7iCVMTJ7zhHl9yP79jCRDCiUT4b58/pnWjcBhiyH28VF7zN2FlPunUXISJRCs/G0xbWqXKMFpWR97W0+Q/S8wZ+R9Fl5fBuQHSzSWutdJNXLbjz5ly8FnT0cypSiV5M+9rrXvCUxYy2OwKuiYyjwvwkHSASpVXFrj+p0Eu6KKg68J0lmC9B433MDmRf04EDH0EoacYXi1960IMs/autaz9wTnqvWjuXxIzZ4kXnCq6UfW35cRO8V782LMTSLEHsRCEwbM+eXBodClgEUr3V1XE8lpE6oWuC0IFUjQno8roDF0X1ooO88mRAzWO1ulFg21pw8wRwoSICYRyAqgv+AIdZ0h882lG1D2p/5UzRjTjkDUlokLp5yOw39i9ks84OQJrQyTEGMvolk= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:49:52 np0005531666.novalocal python3[4561]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:49:53 np0005531666.novalocal python3[4660]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:49:53 np0005531666.novalocal python3[4731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763779793.138741-207-253327339417252/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=ca57695f2efe48aba1200717ff614cff_id_rsa follow=False checksum=ee9bda573904a4f5fb14f41a9d9fd1fa223fc140 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:49:54 np0005531666.novalocal python3[4854]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:49:54 np0005531666.novalocal python3[4925]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763779794.1989639-240-21805650387186/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=ca57695f2efe48aba1200717ff614cff_id_rsa.pub follow=False checksum=bbf3003e8d25e812e9646a467244f67c3ca57a67 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:49:56 np0005531666.novalocal python3[4973]: ansible-ping Invoked with data=pong
Nov 22 02:49:57 np0005531666.novalocal python3[4997]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 02:49:59 np0005531666.novalocal python3[5055]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 22 02:49:59 np0005531666.novalocal python3[5087]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:00 np0005531666.novalocal python3[5111]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:00 np0005531666.novalocal python3[5135]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:00 np0005531666.novalocal python3[5159]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:01 np0005531666.novalocal python3[5183]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:01 np0005531666.novalocal python3[5207]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
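
Ansible logs file modes as decimal integers, so the values in these tasks are ordinary octal permissions in disguise: mode=493 is 0o755, mode=448 (used earlier for /home/zuul/.ssh) is 0o700, mode=384 is 0o600, and mode=420 is 0o644. The conversion:

    for mode in (493, 448, 384, 420, 511, 288):
        print(mode, oct(mode))
    # 493 -> 0o755, 448 -> 0o700, 384 -> 0o600,
    # 420 -> 0o644, 511 -> 0o777, 288 -> 0o440
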
Nov 22 02:50:02 np0005531666.novalocal sudo[5231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtzfgfqihgmximkgmvzztncpyyivtxxw ; /usr/bin/python3'
Nov 22 02:50:02 np0005531666.novalocal sudo[5231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:50:03 np0005531666.novalocal python3[5233]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:03 np0005531666.novalocal sudo[5231]: pam_unix(sudo:session): session closed for user root
Nov 22 02:50:03 np0005531666.novalocal sudo[5309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npdofyfrrigbugjnltuvglpqgonkcbnr ; /usr/bin/python3'
Nov 22 02:50:03 np0005531666.novalocal sudo[5309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:50:03 np0005531666.novalocal python3[5311]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:50:03 np0005531666.novalocal sudo[5309]: pam_unix(sudo:session): session closed for user root
Nov 22 02:50:04 np0005531666.novalocal sudo[5382]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhqbjglvypjqummyvjkgogghefcytfoq ; /usr/bin/python3'
Nov 22 02:50:04 np0005531666.novalocal sudo[5382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:50:04 np0005531666.novalocal python3[5384]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1763779803.226805-21-10215817574128/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:04 np0005531666.novalocal sudo[5382]: pam_unix(sudo:session): session closed for user root
Nov 22 02:50:04 np0005531666.novalocal python3[5432]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:05 np0005531666.novalocal python3[5456]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:05 np0005531666.novalocal python3[5480]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:05 np0005531666.novalocal python3[5504]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:06 np0005531666.novalocal python3[5528]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:06 np0005531666.novalocal python3[5552]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:06 np0005531666.novalocal python3[5576]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:07 np0005531666.novalocal python3[5600]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:07 np0005531666.novalocal python3[5624]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:07 np0005531666.novalocal python3[5648]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:07 np0005531666.novalocal python3[5672]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:08 np0005531666.novalocal python3[5696]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:08 np0005531666.novalocal python3[5720]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:08 np0005531666.novalocal python3[5744]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:09 np0005531666.novalocal python3[5768]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:09 np0005531666.novalocal python3[5792]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:09 np0005531666.novalocal python3[5816]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:10 np0005531666.novalocal python3[5840]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:10 np0005531666.novalocal python3[5864]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:10 np0005531666.novalocal python3[5888]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:11 np0005531666.novalocal python3[5912]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:11 np0005531666.novalocal python3[5936]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:11 np0005531666.novalocal python3[5960]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:11 np0005531666.novalocal python3[5984]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:12 np0005531666.novalocal python3[6008]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 02:50:12 np0005531666.novalocal python3[6032]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
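
Each ansible-authorized_key task above installs one developer key into /home/zuul/.ssh/authorized_keys with state=present and exclusive=False, i.e. append-if-missing rather than replace. A minimal sketch of that idempotent behavior (the path and key literal below are placeholders, not values from the log, and the real module also normalizes key options and comments):

    from pathlib import Path

    def add_authorized_key(auth_file: Path, key: str) -> bool:
        """Append `key` unless already present; return True if a write happened."""
        auth_file.parent.mkdir(mode=0o700, parents=True, exist_ok=True)
        existing = auth_file.read_text().splitlines() if auth_file.exists() else []
        if key in existing:
            return False  # state=present already satisfied
        with auth_file.open("a") as f:
            f.write(key + "\n")
        return True

    # hypothetical key for illustration only
    add_authorized_key(Path("/home/zuul/.ssh/authorized_keys"),
                       "ssh-ed25519 AAAAC3Example example@host")
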
Nov 22 02:50:14 np0005531666.novalocal sudo[6056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bykvlupgycifgzvmpowbcnzqiggivoqf ; /usr/bin/python3'
Nov 22 02:50:14 np0005531666.novalocal sudo[6056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:50:14 np0005531666.novalocal python3[6058]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 22 02:50:14 np0005531666.novalocal systemd[1]: Starting Time & Date Service...
Nov 22 02:50:14 np0005531666.novalocal systemd[1]: Started Time & Date Service.
Nov 22 02:50:14 np0005531666.novalocal systemd-timedated[6060]: Changed time zone to 'UTC' (UTC).
Nov 22 02:50:14 np0005531666.novalocal sudo[6056]: pam_unix(sudo:session): session closed for user root
Nov 22 02:50:15 np0005531666.novalocal sudo[6087]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezpockufgoiixbolfpjrpyzcebydmzso ; /usr/bin/python3'
Nov 22 02:50:15 np0005531666.novalocal sudo[6087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:50:15 np0005531666.novalocal python3[6089]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:15 np0005531666.novalocal sudo[6087]: pam_unix(sudo:session): session closed for user root
Nov 22 02:50:15 np0005531666.novalocal python3[6165]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:50:16 np0005531666.novalocal python3[6236]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1763779815.4694877-153-50364665565000/source _original_basename=tmp69ja6633 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:16 np0005531666.novalocal python3[6336]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:50:17 np0005531666.novalocal python3[6407]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763779816.4099019-183-126438932107782/source _original_basename=tmp4raji43a follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:17 np0005531666.novalocal sudo[6507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygfkibxvmbcgyzemacglzjdzythgbbhq ; /usr/bin/python3'
Nov 22 02:50:17 np0005531666.novalocal sudo[6507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:50:17 np0005531666.novalocal python3[6509]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:50:17 np0005531666.novalocal sudo[6507]: pam_unix(sudo:session): session closed for user root
Nov 22 02:50:18 np0005531666.novalocal sudo[6580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmvpajvphxlckhoeuqdastdeokelwrru ; /usr/bin/python3'
Nov 22 02:50:18 np0005531666.novalocal sudo[6580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:50:18 np0005531666.novalocal python3[6582]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1763779817.5032306-231-64600495879230/source _original_basename=tmpfdpg7s_y follow=False checksum=1bcc824686558cc83916b394196cc422cefa4598 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:18 np0005531666.novalocal sudo[6580]: pam_unix(sudo:session): session closed for user root
Nov 22 02:50:18 np0005531666.novalocal python3[6630]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:50:19 np0005531666.novalocal python3[6656]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:50:19 np0005531666.novalocal sudo[6734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfgmiyeuhzcguhvoswtbtqwizlijlyiv ; /usr/bin/python3'
Nov 22 02:50:19 np0005531666.novalocal sudo[6734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:50:19 np0005531666.novalocal python3[6736]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:50:19 np0005531666.novalocal sudo[6734]: pam_unix(sudo:session): session closed for user root
Nov 22 02:50:19 np0005531666.novalocal sudo[6807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpfofyuaiecafevusudwkmqqfwnaowgc ; /usr/bin/python3'
Nov 22 02:50:19 np0005531666.novalocal sudo[6807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:50:19 np0005531666.novalocal python3[6809]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1763779819.361737-273-85813467672927/source _original_basename=tmpidtt67qt follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:19 np0005531666.novalocal sudo[6807]: pam_unix(sudo:session): session closed for user root
Nov 22 02:50:20 np0005531666.novalocal sudo[6858]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doyfgmwhmiqfzvihpcmgxrjvfqgwqvxv ; /usr/bin/python3'
Nov 22 02:50:20 np0005531666.novalocal sudo[6858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:50:20 np0005531666.novalocal python3[6860]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-783f-02f1-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:50:20 np0005531666.novalocal sudo[6858]: pam_unix(sudo:session): session closed for user root
Nov 22 02:50:21 np0005531666.novalocal python3[6888]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163efc-24cc-783f-02f1-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 22 02:50:22 np0005531666.novalocal python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:39 np0005531666.novalocal sudo[6940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxhutlkjbruavpgqmspomwxsrnqihnuk ; /usr/bin/python3'
Nov 22 02:50:39 np0005531666.novalocal sudo[6940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:50:39 np0005531666.novalocal python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:50:39 np0005531666.novalocal sudo[6940]: pam_unix(sudo:session): session closed for user root
Nov 22 02:50:44 np0005531666.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 22 02:51:14 np0005531666.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 22 02:51:14 np0005531666.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 22 02:51:14 np0005531666.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 22 02:51:14 np0005531666.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 22 02:51:14 np0005531666.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 22 02:51:14 np0005531666.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 22 02:51:14 np0005531666.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 22 02:51:14 np0005531666.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 22 02:51:14 np0005531666.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 22 02:51:14 np0005531666.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 22 02:51:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779874.7614] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 22 02:51:14 np0005531666.novalocal systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 02:51:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779874.7926] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 02:51:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779874.7961] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 22 02:51:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779874.7967] device (eth1): carrier: link connected
Nov 22 02:51:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779874.7969] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 22 02:51:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779874.7977] policy: auto-activating connection 'Wired connection 1' (0a1cf993-6cf3-38b0-82a8-4d66bef49908)
Nov 22 02:51:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779874.7981] device (eth1): Activation: starting connection 'Wired connection 1' (0a1cf993-6cf3-38b0-82a8-4d66bef49908)
Nov 22 02:51:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779874.7982] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 02:51:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779874.7987] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 02:51:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779874.7992] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 02:51:14 np0005531666.novalocal NetworkManager[859]: <info>  [1763779874.7996] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 22 02:51:15 np0005531666.novalocal python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-5591-43a9-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
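
The `ip -j link` call above asks iproute2 for the interface table as JSON, which a playbook can parse directly instead of scraping text. A short sketch of consuming the same output (ifname and operstate are standard keys in iproute2's JSON):

    import json
    import subprocess

    # -j switches iproute2 from the human-readable table to JSON
    out = subprocess.run(["ip", "-j", "link"], capture_output=True,
                         text=True, check=True).stdout
    for link in json.loads(out):
        print(link["ifname"], link["operstate"])  # e.g. eth0 UP, eth1 UP
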
Nov 22 02:51:25 np0005531666.novalocal sudo[7050]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sirlkefiksnxkvunrutdouvpinlpvhig ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 02:51:25 np0005531666.novalocal sudo[7050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:51:25 np0005531666.novalocal python3[7052]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:51:25 np0005531666.novalocal sudo[7050]: pam_unix(sudo:session): session closed for user root
Nov 22 02:51:26 np0005531666.novalocal sudo[7123]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qppistjepvvntxndgafniertwdtroxtz ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 02:51:26 np0005531666.novalocal sudo[7123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:51:26 np0005531666.novalocal python3[7125]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763779885.4379685-102-72320655016255/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=06bd4dfe65cca7ce0057ee31fa8f2d19b58ed2ba backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:51:26 np0005531666.novalocal sudo[7123]: pam_unix(sudo:session): session closed for user root
Nov 22 02:51:26 np0005531666.novalocal sudo[7173]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwzhgajflgagpufhsytjtkrffllqsgcn ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 02:51:26 np0005531666.novalocal sudo[7173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:51:27 np0005531666.novalocal python3[7175]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: Stopped Network Manager Wait Online.
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: Stopping Network Manager Wait Online...
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[859]: <info>  [1763779887.1610] caught SIGTERM, shutting down normally.
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: Stopping Network Manager...
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[859]: <info>  [1763779887.1627] dhcp4 (eth0): canceled DHCP transaction
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[859]: <info>  [1763779887.1628] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[859]: <info>  [1763779887.1628] dhcp4 (eth0): state changed no lease
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[859]: <info>  [1763779887.1631] manager: NetworkManager state is now CONNECTING
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[859]: <info>  [1763779887.1714] dhcp4 (eth1): canceled DHCP transaction
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[859]: <info>  [1763779887.1715] dhcp4 (eth1): state changed no lease
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[859]: <info>  [1763779887.1770] exiting (success)
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: Stopped Network Manager.
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: NetworkManager.service: Consumed 1.245s CPU time, 10.0M memory peak.
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: Starting Network Manager...
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.2448] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:bbdf02b3-deb9-47de-b411-3c25d6aa93d1)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.2453] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.2551] manager[0x562dbbab0070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: Starting Hostname Service...
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: Started Hostname Service.
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3584] hostname: hostname: using hostnamed
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3584] hostname: static hostname changed from (none) to "np0005531666.novalocal"
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3592] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3599] manager[0x562dbbab0070]: rfkill: Wi-Fi hardware radio set enabled
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3599] manager[0x562dbbab0070]: rfkill: WWAN hardware radio set enabled
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3630] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3630] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3631] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3631] manager: Networking is enabled by state file
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3633] settings: Loaded settings plugin: keyfile (internal)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3637] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3662] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3672] dhcp: init: Using DHCP client 'internal'
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3674] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3678] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3682] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3689] device (lo): Activation: starting connection 'lo' (704fb092-bceb-43d7-a199-2a71be4392ac)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3694] device (eth0): carrier: link connected
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3698] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3701] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3701] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3706] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3710] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3715] device (eth1): carrier: link connected
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3718] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3722] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (0a1cf993-6cf3-38b0-82a8-4d66bef49908) (indicated)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3722] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3726] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3731] device (eth1): Activation: starting connection 'Wired connection 1' (0a1cf993-6cf3-38b0-82a8-4d66bef49908)
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: Started Network Manager.
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3745] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3753] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3757] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3759] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3761] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3765] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3768] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3770] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3773] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3778] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3781] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3790] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3794] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3817] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3818] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3824] device (lo): Activation: successful, device activated.
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3830] dhcp4 (eth0): state changed new lease, address=38.102.83.177
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.3838] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 22 02:51:27 np0005531666.novalocal systemd[1]: Starting Network Manager Wait Online...
Nov 22 02:51:27 np0005531666.novalocal sudo[7173]: pam_unix(sudo:session): session closed for user root
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.6179] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.6213] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.6221] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.6226] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.6240] device (eth0): Activation: successful, device activated.
Nov 22 02:51:27 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779887.6251] manager: NetworkManager state is now CONNECTED_GLOBAL
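
The sequence above is NetworkManager's device state machine in full: eth0 is assumed from the existing 'System eth0' profile, walks disconnected -> prepare -> config -> ip-config, picks up a DHCP lease (38.102.83.177), becomes the IPv4 default for routing and DNS, and finishes ip-check -> secondaries -> activated. A minimal sketch for watching the same progression interactively, assuming nmcli is available on the host:

    # Stream NetworkManager state changes as they happen
    nmcli monitor

    # Or query the device's current state and the profile it assumed
    nmcli -f GENERAL.STATE,GENERAL.CONNECTION device show eth0
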
Nov 22 02:51:27 np0005531666.novalocal python3[7240]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-5591-43a9-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:51:32 np0005531666.novalocal chronyd[786]: Selected source 149.56.19.163 (2.centos.pool.ntp.org)
Nov 22 02:51:37 np0005531666.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 02:51:57 np0005531666.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.3479] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 02:52:12 np0005531666.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 02:52:12 np0005531666.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.3887] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.3891] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.3906] device (eth1): Activation: successful, device activated.
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.3916] manager: startup complete
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.3920] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <warn>  [1763779932.3956] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 22 02:52:12 np0005531666.novalocal systemd[1]: Finished Network Manager Wait Online.
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.3969] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.4115] dhcp4 (eth1): canceled DHCP transaction
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.4116] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.4117] dhcp4 (eth1): state changed no lease
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.4139] policy: auto-activating connection 'ci-private-network' (d949c173-9bb4-5028-b379-646626090b3a)
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.4147] device (eth1): Activation: starting connection 'ci-private-network' (d949c173-9bb4-5028-b379-646626090b3a)
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.4148] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.4154] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.4164] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.4177] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.4237] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.4240] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 02:52:12 np0005531666.novalocal NetworkManager[7180]: <info>  [1763779932.4251] device (eth1): Activation: successful, device activated.
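
Note what happened to eth1: the assumed 'Wired connection 1' profile fails with reason 'ip-config-unavailable' once startup completes (its 45-second DHCP transaction from 02:51:27 never produced a lease; 'state changed no lease'), and NetworkManager's policy immediately auto-activates the 'ci-private-network' profile, which comes up without DHCP. A sketch of how such a static fallback profile could be defined; the profile name is taken from the log, but the addressing is hypothetical since the log never records it:

    # Hypothetical definition of the 'ci-private-network' profile (address is a placeholder)
    nmcli connection add type ethernet ifname eth1 con-name ci-private-network \
        ipv4.method manual ipv4.addresses 192.168.122.50/24 \
        connection.autoconnect yes
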
Nov 22 02:52:22 np0005531666.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 02:52:24 np0005531666.novalocal sshd-session[7288]: Connection closed by 168.138.202.218 port 58214
Nov 22 02:52:25 np0005531666.novalocal sshd-session[7289]: Invalid user a from 168.138.202.218 port 58216
Nov 22 02:52:25 np0005531666.novalocal sshd-session[7289]: Connection closed by invalid user a 168.138.202.218 port 58216 [preauth]
Nov 22 02:52:27 np0005531666.novalocal sshd-session[4310]: Received disconnect from 38.102.83.114 port 40478:11: disconnected by user
Nov 22 02:52:27 np0005531666.novalocal sshd-session[4310]: Disconnected from user zuul 38.102.83.114 port 40478
Nov 22 02:52:27 np0005531666.novalocal sshd-session[4297]: pam_unix(sshd:session): session closed for user zuul
Nov 22 02:52:27 np0005531666.novalocal systemd-logind[799]: Session 1 logged out. Waiting for processes to exit.
Nov 22 02:52:34 np0005531666.novalocal sshd-session[7291]: Accepted publickey for zuul from 38.102.83.114 port 52748 ssh2: RSA SHA256:eVsZt2yHpbTNrFfKVGH3GdI61kssxBz29Cce2alCemw
Nov 22 02:52:34 np0005531666.novalocal systemd-logind[799]: New session 3 of user zuul.
Nov 22 02:52:34 np0005531666.novalocal systemd[1]: Started Session 3 of User zuul.
Nov 22 02:52:34 np0005531666.novalocal sshd-session[7291]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 02:52:34 np0005531666.novalocal sudo[7370]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baaxufjfemfbtgalmrhzpcbjtzkpdqbe ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 02:52:34 np0005531666.novalocal sudo[7370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:52:34 np0005531666.novalocal python3[7372]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:52:34 np0005531666.novalocal sudo[7370]: pam_unix(sudo:session): session closed for user root
Nov 22 02:52:34 np0005531666.novalocal sudo[7443]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjwrvtdzjpvmpwcfzotskycchlxkasdt ; OS_CLOUD=vexxhost /usr/bin/python3'
Nov 22 02:52:34 np0005531666.novalocal sudo[7443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:52:35 np0005531666.novalocal python3[7445]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763779954.2629793-267-70908052957163/source _original_basename=tmpxuf3ylah follow=False checksum=9d180f36cd8c1c83bf6976bd7f1657f6617eef93 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:52:35 np0005531666.novalocal sudo[7443]: pam_unix(sudo:session): session closed for user root
Nov 22 02:52:37 np0005531666.novalocal sshd-session[7294]: Connection closed by 38.102.83.114 port 52748
Nov 22 02:52:37 np0005531666.novalocal sshd-session[7291]: pam_unix(sshd:session): session closed for user zuul
Nov 22 02:52:37 np0005531666.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Nov 22 02:52:37 np0005531666.novalocal systemd-logind[799]: Session 3 logged out. Waiting for processes to exit.
Nov 22 02:52:37 np0005531666.novalocal systemd-logind[799]: Removed session 3.
Nov 22 02:52:40 np0005531666.novalocal systemd[4301]: Starting Mark boot as successful...
Nov 22 02:52:40 np0005531666.novalocal systemd[4301]: Finished Mark boot as successful.
Nov 22 02:55:40 np0005531666.novalocal systemd[4301]: Created slice User Background Tasks Slice.
Nov 22 02:55:40 np0005531666.novalocal systemd[4301]: Starting Cleanup of User's Temporary Files and Directories...
Nov 22 02:55:40 np0005531666.novalocal systemd[4301]: Finished Cleanup of User's Temporary Files and Directories.
Nov 22 02:58:30 np0005531666.novalocal sshd-session[7477]: Accepted publickey for zuul from 38.102.83.114 port 33192 ssh2: RSA SHA256:eVsZt2yHpbTNrFfKVGH3GdI61kssxBz29Cce2alCemw
Nov 22 02:58:30 np0005531666.novalocal systemd-logind[799]: New session 4 of user zuul.
Nov 22 02:58:30 np0005531666.novalocal systemd[1]: Started Session 4 of User zuul.
Nov 22 02:58:30 np0005531666.novalocal sshd-session[7477]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 02:58:31 np0005531666.novalocal sudo[7504]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plosbremgqunlnqhjhtvbfgiikimjwlq ; /usr/bin/python3'
Nov 22 02:58:31 np0005531666.novalocal sudo[7504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:31 np0005531666.novalocal python3[7506]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-14f3-b427-000000001cc6-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:58:31 np0005531666.novalocal sudo[7504]: pam_unix(sudo:session): session closed for user root
Nov 22 02:58:31 np0005531666.novalocal sudo[7533]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swkliybsprccededgdydnuqahtuifbdd ; /usr/bin/python3'
Nov 22 02:58:31 np0005531666.novalocal sudo[7533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:31 np0005531666.novalocal python3[7535]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:58:31 np0005531666.novalocal sudo[7533]: pam_unix(sudo:session): session closed for user root
Nov 22 02:58:31 np0005531666.novalocal sudo[7559]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxkirhvbnseaqqsnhqtpklnmuhcxemer ; /usr/bin/python3'
Nov 22 02:58:31 np0005531666.novalocal sudo[7559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:31 np0005531666.novalocal python3[7561]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:58:31 np0005531666.novalocal sudo[7559]: pam_unix(sudo:session): session closed for user root
Nov 22 02:58:32 np0005531666.novalocal sudo[7585]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbasxnbigamsqlduyufjvemztcrurach ; /usr/bin/python3'
Nov 22 02:58:32 np0005531666.novalocal sudo[7585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:32 np0005531666.novalocal python3[7587]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:58:32 np0005531666.novalocal sudo[7585]: pam_unix(sudo:session): session closed for user root
Nov 22 02:58:32 np0005531666.novalocal sudo[7611]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mumcpfceprjqtdozlrkatrrpkhivpgnf ; /usr/bin/python3'
Nov 22 02:58:32 np0005531666.novalocal sudo[7611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:32 np0005531666.novalocal python3[7613]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:58:32 np0005531666.novalocal sudo[7611]: pam_unix(sudo:session): session closed for user root
Nov 22 02:58:32 np0005531666.novalocal sudo[7637]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvpsbcwpukjooseicohgolcvlsxhtmrp ; /usr/bin/python3'
Nov 22 02:58:32 np0005531666.novalocal sudo[7637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:33 np0005531666.novalocal python3[7639]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:58:33 np0005531666.novalocal sudo[7637]: pam_unix(sudo:session): session closed for user root
Nov 22 02:58:33 np0005531666.novalocal sudo[7715]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzuknfmnyqpgyrhdesqpoefbevzvbizo ; /usr/bin/python3'
Nov 22 02:58:33 np0005531666.novalocal sudo[7715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:33 np0005531666.novalocal python3[7717]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 02:58:33 np0005531666.novalocal sudo[7715]: pam_unix(sudo:session): session closed for user root
Nov 22 02:58:33 np0005531666.novalocal sudo[7788]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnrqtozwxxxqmwbqhnwyxwcrlfxgxbca ; /usr/bin/python3'
Nov 22 02:58:33 np0005531666.novalocal sudo[7788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:34 np0005531666.novalocal python3[7790]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763780313.3216107-475-188532564926463/source _original_basename=tmpeld015t9 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:58:34 np0005531666.novalocal sudo[7788]: pam_unix(sudo:session): session closed for user root
Nov 22 02:58:34 np0005531666.novalocal sudo[7838]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-farhbuguucbdnwnonvbgeuappqsiksip ; /usr/bin/python3'
Nov 22 02:58:34 np0005531666.novalocal sudo[7838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:34 np0005531666.novalocal python3[7840]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 02:58:34 np0005531666.novalocal systemd[1]: Reloading.
Nov 22 02:58:35 np0005531666.novalocal systemd-rc-local-generator[7860]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 02:58:35 np0005531666.novalocal sudo[7838]: pam_unix(sudo:session): session closed for user root
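
This task block creates /etc/systemd/system.conf.d, installs an override.conf whose contents ansible hides (content=NOT_LOGGING_PARAMETER), and reloads the manager via the systemd_service module (daemon_reload=True, hence 'systemd[1]: Reloading.'). Given that the next tasks wait for /sys/fs/cgroup/*/io.max to exist, a plausible, purely hypothetical drop-in would enable IO accounting so the cgroup-v2 io controller is turned on:

    # /etc/systemd/system.conf.d/override.conf -- hypothetical content; the real file is not logged
    [Manager]
    DefaultIOAccounting=yes

followed by the equivalent of the module call:

    systemctl daemon-reload
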
Nov 22 02:58:36 np0005531666.novalocal sudo[7893]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvwzqbvxnoraicuryhsaungwfgncqcff ; /usr/bin/python3'
Nov 22 02:58:36 np0005531666.novalocal sudo[7893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:36 np0005531666.novalocal python3[7895]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 22 02:58:36 np0005531666.novalocal sudo[7893]: pam_unix(sudo:session): session closed for user root
Nov 22 02:58:36 np0005531666.novalocal sudo[7919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvyljvazdbwznrngsluswzvfdgxcdglc ; /usr/bin/python3'
Nov 22 02:58:36 np0005531666.novalocal sudo[7919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:37 np0005531666.novalocal python3[7921]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:58:37 np0005531666.novalocal sudo[7919]: pam_unix(sudo:session): session closed for user root
Nov 22 02:58:37 np0005531666.novalocal sudo[7947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgtmnpjmdshmqlxpcorfoxehahhtdpks ; /usr/bin/python3'
Nov 22 02:58:37 np0005531666.novalocal sudo[7947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:37 np0005531666.novalocal python3[7949]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:58:37 np0005531666.novalocal sudo[7947]: pam_unix(sudo:session): session closed for user root
Nov 22 02:58:37 np0005531666.novalocal sudo[7975]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lechuvgarvavfjmoruslxjuuiyukuxyl ; /usr/bin/python3'
Nov 22 02:58:37 np0005531666.novalocal sudo[7975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:37 np0005531666.novalocal python3[7977]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:58:37 np0005531666.novalocal sudo[7975]: pam_unix(sudo:session): session closed for user root
Nov 22 02:58:37 np0005531666.novalocal sudo[8003]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewegkkmzfdchtqlwiomsgbuaahponzff ; /usr/bin/python3'
Nov 22 02:58:37 np0005531666.novalocal sudo[8003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:37 np0005531666.novalocal python3[8005]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:58:37 np0005531666.novalocal sudo[8003]: pam_unix(sudo:session): session closed for user root
Nov 22 02:58:38 np0005531666.novalocal python3[8032]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-14f3-b427-000000001ccd-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
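
The four writes above program cgroup-v2 IO throttling for device 252:0 (virtio-blk, i.e. /dev/vda, matching the earlier 'lsblk -nd -o MAJ:MIN /dev/vda' task), capping each top-level slice at 18000 read/write IOPS and 262144000 B/s (250 MiB/s) in each direction; the final command reads all four io.max files back to verify. The io.max interface takes "MAJ:MIN key=value ..." lines, so the manual equivalent for one slice is:

    # Cap /dev/vda (252:0) for everything under system.slice
    echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
        > /sys/fs/cgroup/system.slice/io.max
    cat /sys/fs/cgroup/system.slice/io.max   # read the limits back
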
Nov 22 02:58:39 np0005531666.novalocal python3[8062]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 02:58:40 np0005531666.novalocal sshd-session[7480]: Connection closed by 38.102.83.114 port 33192
Nov 22 02:58:40 np0005531666.novalocal sshd-session[7477]: pam_unix(sshd:session): session closed for user zuul
Nov 22 02:58:40 np0005531666.novalocal systemd-logind[799]: Session 4 logged out. Waiting for processes to exit.
Nov 22 02:58:40 np0005531666.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Nov 22 02:58:40 np0005531666.novalocal systemd[1]: session-4.scope: Consumed 4.468s CPU time.
Nov 22 02:58:40 np0005531666.novalocal systemd-logind[799]: Removed session 4.
Nov 22 02:58:42 np0005531666.novalocal sshd-session[8068]: Accepted publickey for zuul from 38.102.83.114 port 45322 ssh2: RSA SHA256:eVsZt2yHpbTNrFfKVGH3GdI61kssxBz29Cce2alCemw
Nov 22 02:58:42 np0005531666.novalocal systemd-logind[799]: New session 5 of user zuul.
Nov 22 02:58:42 np0005531666.novalocal systemd[1]: Started Session 5 of User zuul.
Nov 22 02:58:42 np0005531666.novalocal sshd-session[8068]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 02:58:42 np0005531666.novalocal sudo[8095]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vepsndabckqpuiikmbnwfjusfcvzeawo ; /usr/bin/python3'
Nov 22 02:58:42 np0005531666.novalocal sudo[8095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:58:42 np0005531666.novalocal python3[8097]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
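
The dnf module call boils down to a plain package transaction; the SELinux SID-table conversions that follow are most likely the policy reloads triggered as container-selinux and related dependencies install their policy modules. Command-line equivalent:

    dnf -y install podman buildah
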
Nov 22 02:59:01 np0005531666.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 22 02:59:01 np0005531666.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 02:59:01 np0005531666.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 22 02:59:01 np0005531666.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 02:59:01 np0005531666.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 22 02:59:01 np0005531666.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 02:59:01 np0005531666.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 02:59:01 np0005531666.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 02:59:10 np0005531666.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 22 02:59:10 np0005531666.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 02:59:10 np0005531666.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 22 02:59:10 np0005531666.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 02:59:10 np0005531666.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 22 02:59:10 np0005531666.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 02:59:10 np0005531666.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 02:59:10 np0005531666.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 02:59:19 np0005531666.novalocal kernel: SELinux:  Converting 385 SID table entries...
Nov 22 02:59:19 np0005531666.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 02:59:19 np0005531666.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 22 02:59:19 np0005531666.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 02:59:19 np0005531666.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 22 02:59:19 np0005531666.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 02:59:19 np0005531666.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 02:59:19 np0005531666.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 02:59:21 np0005531666.novalocal setsebool[8165]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 22 02:59:21 np0005531666.novalocal setsebool[8165]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
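
Both booleans are flipped by root mid-transaction, consistent with container/virt policy setup. Done by hand, with -P to persist across reboots (the log does not show whether the change was persistent):

    setsebool -P virt_use_nfs 1
    setsebool -P virt_sandbox_use_all_caps 1
    getsebool virt_use_nfs virt_sandbox_use_all_caps   # both should report 'on'
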
Nov 22 02:59:31 np0005531666.novalocal kernel: SELinux:  Converting 388 SID table entries...
Nov 22 02:59:31 np0005531666.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 02:59:31 np0005531666.novalocal kernel: SELinux:  policy capability open_perms=1
Nov 22 02:59:31 np0005531666.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 02:59:31 np0005531666.novalocal kernel: SELinux:  policy capability always_check_network=0
Nov 22 02:59:31 np0005531666.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 02:59:31 np0005531666.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 02:59:31 np0005531666.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 02:59:51 np0005531666.novalocal dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 22 02:59:51 np0005531666.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 02:59:51 np0005531666.novalocal systemd[1]: Starting man-db-cache-update.service...
Nov 22 02:59:51 np0005531666.novalocal systemd[1]: Reloading.
Nov 22 02:59:52 np0005531666.novalocal systemd-rc-local-generator[8922]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 02:59:52 np0005531666.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 02:59:53 np0005531666.novalocal sudo[8095]: pam_unix(sudo:session): session closed for user root
Nov 22 02:59:54 np0005531666.novalocal python3[10534]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-c329-8399-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 02:59:55 np0005531666.novalocal kernel: evm: overlay not supported
Nov 22 02:59:55 np0005531666.novalocal systemd[4301]: Starting D-Bus User Message Bus...
Nov 22 02:59:55 np0005531666.novalocal dbus-broker-launch[11477]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 22 02:59:55 np0005531666.novalocal dbus-broker-launch[11477]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 22 02:59:55 np0005531666.novalocal systemd[4301]: Started D-Bus User Message Bus.
Nov 22 02:59:55 np0005531666.novalocal dbus-broker-lau[11477]: Ready
Nov 22 02:59:55 np0005531666.novalocal systemd[4301]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 22 02:59:55 np0005531666.novalocal systemd[4301]: Created slice Slice /user.
Nov 22 02:59:55 np0005531666.novalocal systemd[4301]: podman-11365.scope: unit configures an IP firewall, but not running as root.
Nov 22 02:59:55 np0005531666.novalocal systemd[4301]: (This warning is only shown for the first unit using IP firewalling.)
Nov 22 02:59:55 np0005531666.novalocal systemd[4301]: Started podman-11365.scope.
Nov 22 02:59:55 np0005531666.novalocal systemd[4301]: Started podman-pause-5f84c7c8.scope.
Nov 22 02:59:56 np0005531666.novalocal sudo[11992]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltlntnceofhqfbkjmddbfjxlwvupmtua ; /usr/bin/python3'
Nov 22 02:59:56 np0005531666.novalocal sudo[11992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 02:59:56 np0005531666.novalocal python3[12014]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.110:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.110:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 02:59:56 np0005531666.novalocal python3[12014]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Nov 22 02:59:56 np0005531666.novalocal sudo[11992]: pam_unix(sudo:session): session closed for user root
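
The blockinfile task appends a marked block to /etc/containers/registries.conf so podman/buildah will pull from the CI registry at 38.102.83.110:5001 without TLS. Reconstructed from the logged parameters, the resulting block is:

    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.110:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK
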
Nov 22 02:59:56 np0005531666.novalocal sshd-session[8071]: Connection closed by 38.102.83.114 port 45322
Nov 22 02:59:56 np0005531666.novalocal sshd-session[8068]: pam_unix(sshd:session): session closed for user zuul
Nov 22 02:59:56 np0005531666.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Nov 22 02:59:56 np0005531666.novalocal systemd[1]: session-5.scope: Consumed 1min 1.806s CPU time.
Nov 22 02:59:56 np0005531666.novalocal systemd-logind[799]: Session 5 logged out. Waiting for processes to exit.
Nov 22 02:59:56 np0005531666.novalocal systemd-logind[799]: Removed session 5.
Nov 22 03:00:04 np0005531666.novalocal irqbalance[793]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 22 03:00:04 np0005531666.novalocal irqbalance[793]: IRQ 27 affinity is now unmanaged
Nov 22 03:00:15 np0005531666.novalocal sshd-session[18838]: Connection closed by 38.102.83.143 port 39354 [preauth]
Nov 22 03:00:15 np0005531666.novalocal sshd-session[18836]: Connection closed by 38.102.83.143 port 39350 [preauth]
Nov 22 03:00:15 np0005531666.novalocal sshd-session[18839]: Unable to negotiate with 38.102.83.143 port 39356: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 22 03:00:15 np0005531666.novalocal sshd-session[18842]: Unable to negotiate with 38.102.83.143 port 39370: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 22 03:00:15 np0005531666.novalocal sshd-session[18844]: Unable to negotiate with 38.102.83.143 port 39376: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
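
These preauth failures are probes offering only host key types (ssh-ed25519 and the security-key variants) that this server does not present; the sessions that do succeed authenticate against the host's RSA key. A sketch of how to check which host keys sshd actually offers, and how to add an ed25519 one at the stock OpenSSH path if desired:

    # Effective host key configuration (requires root)
    sshd -T | grep -i '^hostkey'

    # Generate the default ed25519 host key if it is missing
    ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ''
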
Nov 22 03:00:19 np0005531666.novalocal sshd-session[20258]: Accepted publickey for zuul from 38.102.83.114 port 42108 ssh2: RSA SHA256:eVsZt2yHpbTNrFfKVGH3GdI61kssxBz29Cce2alCemw
Nov 22 03:00:19 np0005531666.novalocal systemd-logind[799]: New session 6 of user zuul.
Nov 22 03:00:19 np0005531666.novalocal systemd[1]: Started Session 6 of User zuul.
Nov 22 03:00:19 np0005531666.novalocal sshd-session[20258]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:00:20 np0005531666.novalocal python3[20360]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFJQciW36MmDKtbqKGu6DZW2tpL7zKkQMLHPrrX3yeH/ZIAGwzeylhNcJVel2KOlyZyrUhByuIax0i8sBr5EXqY= zuul@np0005531665.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 03:00:20 np0005531666.novalocal sudo[20496]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiuesyybwlamnutowcyuvroxwieeisgx ; /usr/bin/python3'
Nov 22 03:00:20 np0005531666.novalocal sudo[20496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:00:20 np0005531666.novalocal python3[20507]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFJQciW36MmDKtbqKGu6DZW2tpL7zKkQMLHPrrX3yeH/ZIAGwzeylhNcJVel2KOlyZyrUhByuIax0i8sBr5EXqY= zuul@np0005531665.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 03:00:20 np0005531666.novalocal sudo[20496]: pam_unix(sudo:session): session closed for user root
Nov 22 03:00:20 np0005531666.novalocal sudo[20758]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jofkcglfhbworsvlxampyzigkcztuybt ; /usr/bin/python3'
Nov 22 03:00:20 np0005531666.novalocal sudo[20758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:00:21 np0005531666.novalocal python3[20768]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005531666.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 22 03:00:21 np0005531666.novalocal useradd[20833]: new group: name=cloud-admin, GID=1002
Nov 22 03:00:21 np0005531666.novalocal useradd[20833]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Nov 22 03:00:21 np0005531666.novalocal sudo[20758]: pam_unix(sudo:session): session closed for user root
Nov 22 03:00:21 np0005531666.novalocal sudo[20951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tchevmcekjwmpsbyakecklwwdumvcpiw ; /usr/bin/python3'
Nov 22 03:00:21 np0005531666.novalocal sudo[20951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:00:21 np0005531666.novalocal python3[20962]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFJQciW36MmDKtbqKGu6DZW2tpL7zKkQMLHPrrX3yeH/ZIAGwzeylhNcJVel2KOlyZyrUhByuIax0i8sBr5EXqY= zuul@np0005531665.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 22 03:00:21 np0005531666.novalocal sudo[20951]: pam_unix(sudo:session): session closed for user root
Nov 22 03:00:21 np0005531666.novalocal sudo[21197]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maqlrxgmemuoyadwazazsxjcsbhgdwgp ; /usr/bin/python3'
Nov 22 03:00:21 np0005531666.novalocal sudo[21197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:00:22 np0005531666.novalocal python3[21204]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:00:22 np0005531666.novalocal sudo[21197]: pam_unix(sudo:session): session closed for user root
Nov 22 03:00:22 np0005531666.novalocal sudo[21453]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikrfpgncdvtjpwkyfmgqtgpdskyfxebg ; /usr/bin/python3'
Nov 22 03:00:22 np0005531666.novalocal sudo[21453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:00:22 np0005531666.novalocal python3[21463]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763780421.8432581-135-144726607247153/source _original_basename=tmp08994aoa follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:00:22 np0005531666.novalocal sudo[21453]: pam_unix(sudo:session): session closed for user root
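
The cloud-admin user (UID/GID 1002) gets the same zuul public key and a sudoers drop-in whose content ansible does not log. A hypothetical drop-in consistent with passwordless CI administration (the real file may differ), plus the syntax check worth running on any sudoers fragment:

    # /etc/sudoers.d/cloud-admin -- hypothetical content; only the copy is logged
    cloud-admin ALL=(ALL) NOPASSWD:ALL

    # Validate before relying on it
    visudo -cf /etc/sudoers.d/cloud-admin
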
Nov 22 03:00:23 np0005531666.novalocal sudo[21750]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxklrzxhvlcksyajjrfnrwpkaakafgnp ; /usr/bin/python3'
Nov 22 03:00:23 np0005531666.novalocal sudo[21750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:00:23 np0005531666.novalocal python3[21758]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 22 03:00:23 np0005531666.novalocal systemd[1]: Starting Hostname Service...
Nov 22 03:00:23 np0005531666.novalocal systemd[1]: Started Hostname Service.
Nov 22 03:00:23 np0005531666.novalocal systemd-hostnamed[21862]: Changed pretty hostname to 'compute-0'
Nov 22 03:00:23 compute-0 systemd-hostnamed[21862]: Hostname set to <compute-0> (static)
Nov 22 03:00:23 compute-0 NetworkManager[7180]: <info>  [1763780423.6289] hostname: static hostname changed from "np0005531666.novalocal" to "compute-0"
Nov 22 03:00:23 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 03:00:23 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 03:00:23 compute-0 sudo[21750]: pam_unix(sudo:session): session closed for user root
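
The hostname module with use=systemd talks to systemd-hostnamed over D-Bus, which is why hostnamed starts on demand, sets both the pretty and static names, and NetworkManager picks up the change immediately; every subsequent line carries the new compute-0 hostname. Command-line equivalent:

    hostnamectl set-hostname compute-0
    hostnamectl   # confirm the static hostname now in effect
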
Nov 22 03:00:24 compute-0 sshd-session[20304]: Connection closed by 38.102.83.114 port 42108
Nov 22 03:00:24 compute-0 sshd-session[20258]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:00:24 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Nov 22 03:00:24 compute-0 systemd[1]: session-6.scope: Consumed 2.633s CPU time.
Nov 22 03:00:24 compute-0 systemd-logind[799]: Session 6 logged out. Waiting for processes to exit.
Nov 22 03:00:24 compute-0 systemd-logind[799]: Removed session 6.
Nov 22 03:00:24 compute-0 irqbalance[793]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 22 03:00:24 compute-0 irqbalance[793]: IRQ 26 affinity is now unmanaged
Nov 22 03:00:33 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 03:00:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:00:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:00:49 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 7.423s CPU time.
Nov 22 03:00:49 compute-0 systemd[1]: run-r6da24c58624d482d872781ed90707cbb.service: Deactivated successfully.
Nov 22 03:00:53 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 03:01:01 compute-0 CROND[29924]: (root) CMD (run-parts /etc/cron.hourly)
Nov 22 03:01:01 compute-0 run-parts[29927]: (/etc/cron.hourly) starting 0anacron
Nov 22 03:01:01 compute-0 anacron[29935]: Anacron started on 2025-11-22
Nov 22 03:01:01 compute-0 anacron[29935]: Will run job `cron.daily' in 16 min.
Nov 22 03:01:01 compute-0 anacron[29935]: Will run job `cron.weekly' in 36 min.
Nov 22 03:01:01 compute-0 anacron[29935]: Will run job `cron.monthly' in 56 min.
Nov 22 03:01:01 compute-0 anacron[29935]: Jobs will be executed sequentially
Nov 22 03:01:01 compute-0 run-parts[29937]: (/etc/cron.hourly) finished 0anacron
Nov 22 03:01:01 compute-0 CROND[29923]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 22 03:03:26 compute-0 sshd-session[29942]: Received disconnect from 193.46.255.7 port 27400:11:  [preauth]
Nov 22 03:03:26 compute-0 sshd-session[29942]: Disconnected from authenticating user root 193.46.255.7 port 27400 [preauth]
Nov 22 03:04:04 compute-0 sshd-session[29945]: Accepted publickey for zuul from 38.102.83.143 port 35452 ssh2: RSA SHA256:eVsZt2yHpbTNrFfKVGH3GdI61kssxBz29Cce2alCemw
Nov 22 03:04:05 compute-0 systemd-logind[799]: New session 7 of user zuul.
Nov 22 03:04:05 compute-0 systemd[1]: Started Session 7 of User zuul.
Nov 22 03:04:05 compute-0 sshd-session[29945]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:04:05 compute-0 python3[30021]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:04:07 compute-0 sudo[30135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irjnbsllfcqgjmnmxfyfznrrzsigglks ; /usr/bin/python3'
Nov 22 03:04:07 compute-0 sudo[30135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:07 compute-0 python3[30137]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:04:07 compute-0 sudo[30135]: pam_unix(sudo:session): session closed for user root
Nov 22 03:04:07 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 22 03:04:07 compute-0 sudo[30208]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrjkauhaiqbyrxpujfcexnjycaeszgtm ; /usr/bin/python3'
Nov 22 03:04:07 compute-0 sudo[30208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:07 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 22 03:04:07 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 22 03:04:07 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 22 03:04:07 compute-0 python3[30211]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763780647.0247386-33547-100321958650385/source mode=0755 _original_basename=delorean.repo follow=False checksum=1830be8248976a7f714fb01ca8550e92dfc79ad2 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:04:07 compute-0 sudo[30208]: pam_unix(sudo:session): session closed for user root
Nov 22 03:04:08 compute-0 sudo[30236]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wztesqnayponnktrpnzuggmrukihfhwt ; /usr/bin/python3'
Nov 22 03:04:08 compute-0 sudo[30236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:08 compute-0 python3[30238]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:04:08 compute-0 sudo[30236]: pam_unix(sudo:session): session closed for user root
Nov 22 03:04:08 compute-0 sudo[30309]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqitlqisbmzwfuwlxgwupqykocastnqm ; /usr/bin/python3'
Nov 22 03:04:08 compute-0 sudo[30309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:08 compute-0 python3[30311]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763780647.0247386-33547-100321958650385/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:04:08 compute-0 sudo[30309]: pam_unix(sudo:session): session closed for user root
Nov 22 03:04:08 compute-0 sudo[30335]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kteqrxowsfendspqvjmcjytktnawktyy ; /usr/bin/python3'
Nov 22 03:04:08 compute-0 sudo[30335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:08 compute-0 python3[30337]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:04:08 compute-0 sudo[30335]: pam_unix(sudo:session): session closed for user root
Nov 22 03:04:09 compute-0 sudo[30408]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qziksnsjkigpvshknqcbtyvxeplgunat ; /usr/bin/python3'
Nov 22 03:04:09 compute-0 sudo[30408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:09 compute-0 python3[30410]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763780647.0247386-33547-100321958650385/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:04:09 compute-0 sudo[30408]: pam_unix(sudo:session): session closed for user root
Nov 22 03:04:09 compute-0 sudo[30434]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owesnbemalzfgzfbtozelhytbwurvifi ; /usr/bin/python3'
Nov 22 03:04:09 compute-0 sudo[30434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:09 compute-0 python3[30436]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:04:09 compute-0 sudo[30434]: pam_unix(sudo:session): session closed for user root
Nov 22 03:04:10 compute-0 sudo[30507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xunkcovmhbaodjhdzojecjnztbozylbq ; /usr/bin/python3'
Nov 22 03:04:10 compute-0 sudo[30507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:10 compute-0 python3[30509]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763780647.0247386-33547-100321958650385/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:04:10 compute-0 sudo[30507]: pam_unix(sudo:session): session closed for user root
Nov 22 03:04:10 compute-0 sudo[30533]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aurvsiveczcwgkxozqxqezcgkvcbywke ; /usr/bin/python3'
Nov 22 03:04:10 compute-0 sudo[30533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:10 compute-0 python3[30535]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:04:10 compute-0 sudo[30533]: pam_unix(sudo:session): session closed for user root
Nov 22 03:04:10 compute-0 sudo[30606]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygicrjornkfxljvdedzlyctjbqoxfmir ; /usr/bin/python3'
Nov 22 03:04:10 compute-0 sudo[30606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:10 compute-0 python3[30608]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763780647.0247386-33547-100321958650385/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:04:11 compute-0 sudo[30606]: pam_unix(sudo:session): session closed for user root
Nov 22 03:04:11 compute-0 sudo[30632]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haaypdobybfnishryquxufsgtuvnzcvf ; /usr/bin/python3'
Nov 22 03:04:11 compute-0 sudo[30632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:11 compute-0 python3[30634]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:04:11 compute-0 sudo[30632]: pam_unix(sudo:session): session closed for user root
Nov 22 03:04:11 compute-0 sudo[30705]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfjnszogcnwaircxzfsebholhjmzpcsl ; /usr/bin/python3'
Nov 22 03:04:11 compute-0 sudo[30705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:11 compute-0 python3[30707]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763780647.0247386-33547-100321958650385/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:04:11 compute-0 sudo[30705]: pam_unix(sudo:session): session closed for user root
Nov 22 03:04:11 compute-0 sudo[30731]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqxjdbwagpkdmasmjkownzhhascsjapw ; /usr/bin/python3'
Nov 22 03:04:11 compute-0 sudo[30731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:12 compute-0 python3[30733]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:04:12 compute-0 sudo[30731]: pam_unix(sudo:session): session closed for user root
Nov 22 03:04:12 compute-0 sudo[30804]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svfrtvywgimlihuvwwvmmyefchkvunht ; /usr/bin/python3'
Nov 22 03:04:12 compute-0 sudo[30804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:04:12 compute-0 python3[30806]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1763780647.0247386-33547-100321958650385/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=6646317362318a9831d66a1804f6bb7dd1b97cd5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:04:12 compute-0 sudo[30804]: pam_unix(sudo:session): session closed for user root
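Annotation: the three ansible.legacy.copy invocations above stage repo-setup-centos-appstream.repo, repo-setup-centos-baseos.repo and delorean.repo.md5 into /etc/yum.repos.d/. A minimal shell sketch of the same file drop, with destination and mode taken from the logged parameters (source paths are the Ansible temp dir shown in the log):

    # approximate equivalent of the copy tasks above
    install -m 0755 repo-setup-centos-appstream.repo /etc/yum.repos.d/
    install -m 0755 repo-setup-centos-baseos.repo    /etc/yum.repos.d/
    install -m 0755 delorean.repo.md5                /etc/yum.repos.d/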
Nov 22 03:04:14 compute-0 sshd-session[30832]: Connection closed by 192.168.122.11 port 58474 [preauth]
Nov 22 03:04:14 compute-0 sshd-session[30831]: Connection closed by 192.168.122.11 port 58458 [preauth]
Nov 22 03:04:14 compute-0 sshd-session[30833]: Unable to negotiate with 192.168.122.11 port 58482: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Nov 22 03:04:14 compute-0 sshd-session[30834]: Unable to negotiate with 192.168.122.11 port 58494: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Nov 22 03:04:14 compute-0 sshd-session[30835]: Unable to negotiate with 192.168.122.11 port 58496: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
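Annotation: the five preauth entries above show 192.168.122.11 opening connections that each offer a single host key algorithm (ssh-ed25519 and the sk-* FIDO variants) for which this host has no matching host key, so negotiation fails before authentication. This pattern is consistent with a host key scan rather than a misconfiguration; a hedged way to provoke the same server-side message from a client:

    # request only an ed25519 host key; fails identically if the server has none
    ssh-keyscan -t ed25519 compute-0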
Nov 22 03:04:24 compute-0 python3[30864]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:09:23 compute-0 sshd-session[29948]: Received disconnect from 38.102.83.143 port 35452:11: disconnected by user
Nov 22 03:09:23 compute-0 sshd-session[29948]: Disconnected from user zuul 38.102.83.143 port 35452
Nov 22 03:09:23 compute-0 sshd-session[29945]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:09:23 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Nov 22 03:09:23 compute-0 systemd[1]: session-7.scope: Consumed 6.123s CPU time.
Nov 22 03:09:23 compute-0 systemd-logind[799]: Session 7 logged out. Waiting for processes to exit.
Nov 22 03:09:23 compute-0 systemd-logind[799]: Removed session 7.
Nov 22 03:15:11 compute-0 sshd-session[30870]: Accepted publickey for zuul from 192.168.122.30 port 50212 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:15:11 compute-0 systemd-logind[799]: New session 8 of user zuul.
Nov 22 03:15:11 compute-0 systemd[1]: Started Session 8 of User zuul.
Nov 22 03:15:11 compute-0 sshd-session[30870]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:15:13 compute-0 python3.9[31023]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:15:14 compute-0 sudo[31202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhyrxqgavzlhnndcupbvvebnsuecztqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781313.8690526-32-4671974650137/AnsiballZ_command.py'
Nov 22 03:15:14 compute-0 sudo[31202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:15:14 compute-0 python3.9[31204]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
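Annotation: the inline script above fetches the repo-setup tool from openstack-k8s-operators, installs it into a throwaway venv, and runs repo-setup current-podified -b antelope to lay down the delorean/CentOS repo files that dnf enumerates later in this log. A quick post-hoc sanity check (sketch; repo-name patterns taken from the dnf makecache output below):

    dnf -q repolist --enabled | grep -E 'delorean|dlrn-antelope|centos9'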
Nov 22 03:15:22 compute-0 sudo[31202]: pam_unix(sudo:session): session closed for user root
Nov 22 03:15:22 compute-0 sshd-session[30873]: Connection closed by 192.168.122.30 port 50212
Nov 22 03:15:22 compute-0 sshd-session[30870]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:15:22 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Nov 22 03:15:22 compute-0 systemd[1]: session-8.scope: Consumed 8.394s CPU time.
Nov 22 03:15:22 compute-0 systemd-logind[799]: Session 8 logged out. Waiting for processes to exit.
Nov 22 03:15:22 compute-0 systemd-logind[799]: Removed session 8.
Nov 22 03:15:38 compute-0 sshd-session[31261]: Accepted publickey for zuul from 192.168.122.30 port 52670 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:15:38 compute-0 systemd-logind[799]: New session 9 of user zuul.
Nov 22 03:15:38 compute-0 systemd[1]: Started Session 9 of User zuul.
Nov 22 03:15:38 compute-0 sshd-session[31261]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:15:39 compute-0 python3.9[31414]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 22 03:15:40 compute-0 python3.9[31588]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:15:41 compute-0 sudo[31738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwkbzakvuiyzvdyggdyletpnyfaaykii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781341.1801546-45-38086572342362/AnsiballZ_command.py'
Nov 22 03:15:41 compute-0 sudo[31738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:15:41 compute-0 python3.9[31740]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:15:41 compute-0 sudo[31738]: pam_unix(sudo:session): session closed for user root
Nov 22 03:15:43 compute-0 sudo[31891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kknivcczbummteadizvnkcfrvkctratg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781342.5910773-57-154803960873318/AnsiballZ_stat.py'
Nov 22 03:15:43 compute-0 sudo[31891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:15:43 compute-0 python3.9[31893]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:15:43 compute-0 sudo[31891]: pam_unix(sudo:session): session closed for user root
Nov 22 03:15:44 compute-0 sudo[32043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nerzlrrogacscqdmvhmwvptmwxctxxfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781343.5919354-65-244350581637530/AnsiballZ_file.py'
Nov 22 03:15:44 compute-0 sudo[32043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:15:44 compute-0 python3.9[32045]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:15:44 compute-0 sudo[32043]: pam_unix(sudo:session): session closed for user root
Nov 22 03:15:44 compute-0 sudo[32195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ossgcbvnoakweevqkaoxmsdetljcilml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781344.5385299-73-222248646750639/AnsiballZ_stat.py'
Nov 22 03:15:44 compute-0 sudo[32195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:15:45 compute-0 python3.9[32197]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:15:45 compute-0 sudo[32195]: pam_unix(sudo:session): session closed for user root
Nov 22 03:15:45 compute-0 sudo[32318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcqhzdwyxluzaymnwfvqwqhqoacaugpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781344.5385299-73-222248646750639/AnsiballZ_copy.py'
Nov 22 03:15:45 compute-0 sudo[32318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:15:45 compute-0 python3.9[32320]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763781344.5385299-73-222248646750639/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:15:46 compute-0 sudo[32318]: pam_unix(sudo:session): session closed for user root
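Annotation: bootc.fact is installed executable (mode 755) into /etc/ansible/facts.d, so later setup runs with fact_path=/etc/ansible/facts.d execute it and expose its output as ansible_local.bootc. The shipped content is masked (NOT_LOGGING_PARAMETER); a minimal sketch of what such an executable fact typically looks like, contents assumed:

    #!/bin/sh
    # /etc/ansible/facts.d/bootc.fact - executable facts must print JSON on stdout
    if command -v bootc >/dev/null 2>&1; then
        echo '{"available": true}'
    else
        echo '{"available": false}'
    fi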
Nov 22 03:15:46 compute-0 sudo[32470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxbthttksnecrdmumaypelfyuigzeskh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781346.2058516-88-216314531601137/AnsiballZ_setup.py'
Nov 22 03:15:46 compute-0 sudo[32470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:15:46 compute-0 python3.9[32472]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:15:47 compute-0 sudo[32470]: pam_unix(sudo:session): session closed for user root
Nov 22 03:15:47 compute-0 sudo[32626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uusnaymbfryjbuvrxngoqwesnwvqbhtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781347.2821138-96-54256765725155/AnsiballZ_file.py'
Nov 22 03:15:47 compute-0 sudo[32626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:15:48 compute-0 python3.9[32628]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:15:48 compute-0 sudo[32626]: pam_unix(sudo:session): session closed for user root
Nov 22 03:15:48 compute-0 sudo[32779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stztnmazfliqlzbumyoknvbsblrsddbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781348.2939725-105-29373659267623/AnsiballZ_file.py'
Nov 22 03:15:48 compute-0 sudo[32779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:15:48 compute-0 python3.9[32781]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:15:48 compute-0 sudo[32779]: pam_unix(sudo:session): session closed for user root
Nov 22 03:15:49 compute-0 python3.9[32931]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:15:53 compute-0 python3.9[33184]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
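Annotation: the lineinfile call above targets /proc/cmdline with create=False; since /proc/cmdline is read-only, this is plausibly an assertion (e.g. under check mode) that the kernel was booted with cloud-init=disabled rather than a real edit. The same check in shell:

    # verify the kernel command line carries cloud-init=disabled
    grep -q 'cloud-init=disabled' /proc/cmdline && echo present || echo missing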
Nov 22 03:15:54 compute-0 python3.9[33334]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:15:55 compute-0 python3.9[33489]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:15:56 compute-0 sudo[33645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmdlcofwqebnhqarsuygbjrvmrtmggae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781356.4878256-153-247648444997668/AnsiballZ_setup.py'
Nov 22 03:15:56 compute-0 sudo[33645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:15:57 compute-0 python3.9[33647]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:15:57 compute-0 sudo[33645]: pam_unix(sudo:session): session closed for user root
Nov 22 03:15:57 compute-0 sudo[33729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaotqfbfhljytnrkbljjdfrlzsbwntgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781356.4878256-153-247648444997668/AnsiballZ_dnf.py'
Nov 22 03:15:57 compute-0 sudo[33729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:15:58 compute-0 python3.9[33731]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
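Annotation: the dnf task above reduces to a plain package install; an equivalent one-liner with the package list copied from the invocation:

    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts \
        grubby sos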
Nov 22 03:16:40 compute-0 systemd[1]: Starting dnf makecache...
Nov 22 03:16:40 compute-0 dnf[33898]: Failed determining last makecache time.
Nov 22 03:16:40 compute-0 systemd[1]: Reloading.
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-openstack-barbican-42b4c41831408a8e323 148 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 143 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-openstack-cinder-1c00d6490d88e436f26ef 178 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-python-stevedore-c4acc5639fd2329372142 156 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 systemd-rc-local-generator[33932]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-python-observabilityclient-2f31846d73c 181 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-os-net-config-bbae2ed8a159b0435a473f38 155 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 186 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-python-designate-tests-tempest-347fdbc 195 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-openstack-glance-1fd12c29b339f30fe823e 196 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 195 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-openstack-manila-3c01b7181572c95dac462 196 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-python-whitebox-neutron-tests-tempest- 196 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-openstack-octavia-ba397f07a7331190208c 204 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-openstack-watcher-c014f81a8647287f6dcc 204 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-python-tcib-1124124ec06aadbac34f0d340b 206 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 202 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-openstack-swift-dc98a8463506ac520c469a 183 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-python-tempestconf-8515371b7cceebd4282 194 kB/s | 3.0 kB     00:00
Nov 22 03:16:40 compute-0 dnf[33898]: delorean-openstack-heat-ui-013accbfd179753bc3f0 200 kB/s | 3.0 kB     00:00
Nov 22 03:16:41 compute-0 dnf[33898]: CentOS Stream 9 - BaseOS                         73 kB/s | 7.3 kB     00:00
Nov 22 03:16:41 compute-0 systemd[1]: Reloading.
Nov 22 03:16:41 compute-0 systemd-rc-local-generator[33986]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:16:41 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 22 03:16:41 compute-0 dnf[33898]: CentOS Stream 9 - AppStream                      21 kB/s | 7.4 kB     00:00
Nov 22 03:16:41 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 22 03:16:41 compute-0 systemd[1]: Reloading.
Nov 22 03:16:41 compute-0 systemd-rc-local-generator[34028]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:16:41 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 22 03:16:41 compute-0 dnf[33898]: CentOS Stream 9 - CRB                            46 kB/s | 7.2 kB     00:00
Nov 22 03:16:41 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Nov 22 03:16:41 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Nov 22 03:16:41 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Nov 22 03:16:42 compute-0 dnf[33898]: CentOS Stream 9 - Extras packages                24 kB/s | 8.3 kB     00:00
Nov 22 03:16:42 compute-0 dnf[33898]: dlrn-antelope-testing                           182 kB/s | 3.0 kB     00:00
Nov 22 03:16:42 compute-0 dnf[33898]: dlrn-antelope-build-deps                        204 kB/s | 3.0 kB     00:00
Nov 22 03:16:42 compute-0 dnf[33898]: centos9-rabbitmq                                124 kB/s | 3.0 kB     00:00
Nov 22 03:16:42 compute-0 dnf[33898]: centos9-storage                                 125 kB/s | 3.0 kB     00:00
Nov 22 03:16:42 compute-0 dnf[33898]: centos9-opstools                                117 kB/s | 3.0 kB     00:00
Nov 22 03:16:42 compute-0 dnf[33898]: NFV SIG OpenvSwitch                             109 kB/s | 3.0 kB     00:00
Nov 22 03:16:42 compute-0 dnf[33898]: repo-setup-centos-appstream                     180 kB/s | 4.4 kB     00:00
Nov 22 03:16:42 compute-0 dnf[33898]: repo-setup-centos-baseos                        143 kB/s | 3.9 kB     00:00
Nov 22 03:16:42 compute-0 dnf[33898]: repo-setup-centos-highavailability              143 kB/s | 3.9 kB     00:00
Nov 22 03:16:42 compute-0 dnf[33898]: repo-setup-centos-powertools                    184 kB/s | 4.3 kB     00:00
Nov 22 03:16:42 compute-0 dnf[33898]: Extra Packages for Enterprise Linux 9 - x86_64  151 kB/s |  35 kB     00:00
Nov 22 03:16:43 compute-0 dnf[33898]: Metadata cache created.
Nov 22 03:16:43 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 22 03:16:43 compute-0 systemd[1]: Finished dnf makecache.
Nov 22 03:16:43 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.802s CPU time.
Nov 22 03:17:01 compute-0 anacron[29935]: Job `cron.daily' started
Nov 22 03:17:01 compute-0 anacron[29935]: Job `cron.daily' terminated
Nov 22 03:17:46 compute-0 kernel: SELinux:  Converting 2719 SID table entries...
Nov 22 03:17:46 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:17:46 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:17:46 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:17:46 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:17:46 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:17:46 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:17:46 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:17:46 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 22 03:17:46 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:17:46 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:17:46 compute-0 systemd[1]: Reloading.
Nov 22 03:17:47 compute-0 systemd-rc-local-generator[34385]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:17:47 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:17:47 compute-0 sudo[33729]: pam_unix(sudo:session): session closed for user root
Nov 22 03:17:48 compute-0 sudo[35294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szohqfkbgnoxelmgbkfxpymstrjyytyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781467.7645884-165-183670360042984/AnsiballZ_command.py'
Nov 22 03:17:48 compute-0 sudo[35294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:17:48 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:17:48 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:17:48 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.412s CPU time.
Nov 22 03:17:48 compute-0 systemd[1]: run-r9bad119ad8914517b0e918b405197418.service: Deactivated successfully.
Nov 22 03:17:48 compute-0 python3.9[35296]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:17:49 compute-0 sudo[35294]: pam_unix(sudo:session): session closed for user root
Nov 22 03:17:50 compute-0 sudo[35577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etjbjupudruybbczdwtvlluhfoxeirao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781469.6540022-173-6021595499584/AnsiballZ_selinux.py'
Nov 22 03:17:50 compute-0 sudo[35577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:17:50 compute-0 python3.9[35579]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 22 03:17:50 compute-0 sudo[35577]: pam_unix(sudo:session): session closed for user root
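Annotation: ansible.posix.selinux with policy=targeted state=enforcing persists the mode in /etc/selinux/config and applies it to the running kernel. A rough shell equivalent:

    # persist the mode, then switch the running system
    sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config
    setenforce 1
    getenforce   # should print: Enforcing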
Nov 22 03:17:51 compute-0 sudo[35729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkbggtzjskinqxjzmvyntidesekfpvfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781470.958521-184-167009805016045/AnsiballZ_command.py'
Nov 22 03:17:51 compute-0 sudo[35729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:17:51 compute-0 python3.9[35731]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 22 03:17:52 compute-0 sudo[35729]: pam_unix(sudo:session): session closed for user root
Nov 22 03:17:52 compute-0 sudo[35882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znzazupuynpmojhxkgoyfcnltuwqzrtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781472.6124744-192-281299418888107/AnsiballZ_file.py'
Nov 22 03:17:52 compute-0 sudo[35882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:17:54 compute-0 python3.9[35884]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:17:54 compute-0 sudo[35882]: pam_unix(sudo:session): session closed for user root
Nov 22 03:17:54 compute-0 sudo[36034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhryuvlhtqjxqeqrhqqkywrivexqdcoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781474.2982466-200-49471107098774/AnsiballZ_mount.py'
Nov 22 03:17:54 compute-0 sudo[36034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:17:55 compute-0 python3.9[36036]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 22 03:17:55 compute-0 sudo[36034]: pam_unix(sudo:session): session closed for user root
Nov 22 03:17:56 compute-0 sudo[36186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kicsnfgiveffwdpmcaujcdadgyxbhojs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781475.7971015-228-195606807254551/AnsiballZ_file.py'
Nov 22 03:17:56 compute-0 sudo[36186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:17:56 compute-0 python3.9[36188]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:17:56 compute-0 sudo[36186]: pam_unix(sudo:session): session closed for user root
Nov 22 03:17:56 compute-0 sudo[36338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drivjmkbjkxjpmsvdurkmujqsvcbkkfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781476.5108855-236-183130041976604/AnsiballZ_stat.py'
Nov 22 03:17:56 compute-0 sudo[36338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:17:57 compute-0 python3.9[36340]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:17:57 compute-0 sudo[36338]: pam_unix(sudo:session): session closed for user root
Nov 22 03:17:57 compute-0 sudo[36461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xunktrxkjqngemonjdsveyoqahefexda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781476.5108855-236-183130041976604/AnsiballZ_copy.py'
Nov 22 03:17:57 compute-0 sudo[36461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:17:57 compute-0 python3.9[36463]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763781476.5108855-236-183130041976604/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=45ba49572078f7d059c2266cdeaaa0793c1b0c16 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:17:57 compute-0 sudo[36461]: pam_unix(sudo:session): session closed for user root
Nov 22 03:17:58 compute-0 sudo[36613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wimiynkcsqdmltkssacrylurwounhraj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781478.1475177-260-268429203729591/AnsiballZ_stat.py'
Nov 22 03:17:58 compute-0 sudo[36613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:17:58 compute-0 python3.9[36615]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:17:58 compute-0 sudo[36613]: pam_unix(sudo:session): session closed for user root
Nov 22 03:17:59 compute-0 sudo[36765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krhvicvzjfflszvbtbcavcqdhydxfygl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781478.8726847-268-196072474076580/AnsiballZ_command.py'
Nov 22 03:17:59 compute-0 sudo[36765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:17:59 compute-0 python3.9[36767]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:17:59 compute-0 sudo[36765]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:00 compute-0 sudo[36918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdrmaoybljabgccyaieydxbwbgqlfjus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781479.6825013-276-20495247616554/AnsiballZ_file.py'
Nov 22 03:18:00 compute-0 sudo[36918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:00 compute-0 python3.9[36920]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:18:00 compute-0 sudo[36918]: pam_unix(sudo:session): session closed for user root
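Annotation: the two tasks above seed the LVM devices file. vgimportdevices --all records any existing PVs into /etc/lvm/devices/system.devices, and the follow-up touch guarantees the file exists (with mode 0600) even on a host with no PVs, so LVM's device filtering has a file to consult. In shell:

    /usr/sbin/vgimportdevices --all
    touch /etc/lvm/devices/system.devices
    chmod 0600 /etc/lvm/devices/system.devices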
Nov 22 03:18:01 compute-0 sudo[37070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsmzhukgjlawsqegphncoqfdfybtbdlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781480.8193288-287-114154686885351/AnsiballZ_getent.py'
Nov 22 03:18:01 compute-0 sudo[37070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:01 compute-0 python3.9[37072]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 22 03:18:01 compute-0 sudo[37070]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:01 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:18:01 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:18:02 compute-0 sudo[37224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfetnnkxeiaixlzaglogdnwwhqxzgtav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781481.6358533-295-270965448845518/AnsiballZ_group.py'
Nov 22 03:18:02 compute-0 sudo[37224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:05 compute-0 python3.9[37226]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 03:18:05 compute-0 groupadd[37227]: group added to /etc/group: name=qemu, GID=107
Nov 22 03:18:05 compute-0 groupadd[37227]: group added to /etc/gshadow: name=qemu
Nov 22 03:18:05 compute-0 groupadd[37227]: new group: name=qemu, GID=107
Nov 22 03:18:05 compute-0 sudo[37224]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:06 compute-0 sudo[37382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeohfvqbuluqehymkivqnkinnevjcdrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781485.594321-303-207153579336683/AnsiballZ_user.py'
Nov 22 03:18:06 compute-0 sudo[37382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:06 compute-0 python3.9[37384]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 03:18:06 compute-0 useradd[37386]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Nov 22 03:18:06 compute-0 sudo[37382]: pam_unix(sudo:session): session closed for user root
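Annotation: getent first probes for a qemu account, then group and user are created with the fixed uid/gid 107, the conventional static qemu allocation on RHEL-family systems. Shell equivalent:

    getent group  qemu >/dev/null || groupadd -g 107 qemu
    getent passwd qemu >/dev/null || \
        useradd -u 107 -g qemu -s /sbin/nologin -c 'qemu user' qemu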
Nov 22 03:18:06 compute-0 sudo[37542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuitfpajoxshvptksrjafrnoktqidmmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781486.6396105-311-232840899057697/AnsiballZ_getent.py'
Nov 22 03:18:06 compute-0 sudo[37542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:07 compute-0 python3.9[37544]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 22 03:18:07 compute-0 sudo[37542]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:07 compute-0 sudo[37695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qilcfjkyygxeptwtsgcjnrjdiblvqfdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781487.347865-319-145827800917701/AnsiballZ_group.py'
Nov 22 03:18:07 compute-0 sudo[37695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:07 compute-0 python3.9[37697]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 03:18:07 compute-0 groupadd[37698]: group added to /etc/group: name=hugetlbfs, GID=42477
Nov 22 03:18:07 compute-0 groupadd[37698]: group added to /etc/gshadow: name=hugetlbfs
Nov 22 03:18:07 compute-0 groupadd[37698]: new group: name=hugetlbfs, GID=42477
Nov 22 03:18:07 compute-0 sudo[37695]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:08 compute-0 sudo[37853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfwimwlibsxcrcnxlexptrxsavsyyjjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781488.1064692-328-227877920677717/AnsiballZ_file.py'
Nov 22 03:18:08 compute-0 sudo[37853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:08 compute-0 python3.9[37855]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 22 03:18:08 compute-0 sudo[37853]: pam_unix(sudo:session): session closed for user root
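Annotation: /var/lib/vhost_sockets is created for qemu with an explicit SELinux user and type (system_u:virt_cache_t), presumably so vhost-user sockets placed there are usable by both Open vSwitch and qemu. A rough equivalent:

    mkdir -p /var/lib/vhost_sockets
    chown qemu:qemu /var/lib/vhost_sockets
    chmod 0755 /var/lib/vhost_sockets
    chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets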
Nov 22 03:18:09 compute-0 sudo[38005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vntaoklnwabqkyvwdtfnetamvujlgmna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781488.912944-339-280756442078112/AnsiballZ_dnf.py'
Nov 22 03:18:09 compute-0 sudo[38005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:09 compute-0 python3.9[38007]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:18:11 compute-0 sudo[38005]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:12 compute-0 sudo[38158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwfesjrigrqrtnyakpxlqjbdnbzwyxah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781491.8166344-347-214269080301185/AnsiballZ_file.py'
Nov 22 03:18:12 compute-0 sudo[38158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:12 compute-0 python3.9[38160]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:18:12 compute-0 sudo[38158]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:12 compute-0 sudo[38310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyojgckkdprbdxohswytkkiestmhohbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781492.4672318-355-141778299236202/AnsiballZ_stat.py'
Nov 22 03:18:12 compute-0 sudo[38310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:13 compute-0 python3.9[38312]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:18:13 compute-0 sudo[38310]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:13 compute-0 sudo[38433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbtfcejqpvsbatehlecplmmzqbepqmvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781492.4672318-355-141778299236202/AnsiballZ_copy.py'
Nov 22 03:18:13 compute-0 sudo[38433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:13 compute-0 python3.9[38435]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763781492.4672318-355-141778299236202/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:18:13 compute-0 sudo[38433]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:14 compute-0 sudo[38585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duzkvynyaexcpvtdqhgkmoxypgvbfjvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781493.8176656-370-148043565585695/AnsiballZ_systemd.py'
Nov 22 03:18:14 compute-0 sudo[38585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:14 compute-0 python3.9[38587]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:18:14 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 22 03:18:14 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 22 03:18:14 compute-0 kernel: Bridge firewalling registered
Nov 22 03:18:14 compute-0 systemd-modules-load[38591]: Inserted module 'br_netfilter'
Nov 22 03:18:14 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 22 03:18:14 compute-0 sudo[38585]: pam_unix(sudo:session): session closed for user root
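Annotation: restarting systemd-modules-load.service picks up the freshly written /etc/modules-load.d/99-edpm.conf, and the kernel messages confirm br_netfilter was inserted (making bridged traffic visible to ip/nftables). The file's full contents are not logged; a minimal sketch consistent with the observed insert:

    # br_netfilter is the one module observed; the real file may list more
    printf 'br_netfilter\n' > /etc/modules-load.d/99-edpm.conf
    systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter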
Nov 22 03:18:15 compute-0 sudo[38745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xikxaocxufdtxrxfmayieyuekmwxsuyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781495.0789459-378-113386206051511/AnsiballZ_stat.py'
Nov 22 03:18:15 compute-0 sudo[38745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:15 compute-0 python3.9[38747]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:18:15 compute-0 sudo[38745]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:16 compute-0 sudo[38868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvqljvevbkasjrpwqrbwgdanqeiceshv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781495.0789459-378-113386206051511/AnsiballZ_copy.py'
Nov 22 03:18:16 compute-0 sudo[38868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:16 compute-0 python3.9[38870]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763781495.0789459-378-113386206051511/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:18:16 compute-0 sudo[38868]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:16 compute-0 sudo[39020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnkyianlvgkadkfmedtihldqvlethcma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781496.5639172-396-218168897112573/AnsiballZ_dnf.py'
Nov 22 03:18:16 compute-0 sudo[39020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:17 compute-0 python3.9[39022]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:18:21 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Nov 22 03:18:21 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Nov 22 03:18:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:18:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:18:21 compute-0 systemd[1]: Reloading.
Nov 22 03:18:21 compute-0 systemd-rc-local-generator[39086]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:18:22 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:18:22 compute-0 sudo[39020]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:23 compute-0 python3.9[40181]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:18:24 compute-0 python3.9[41071]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 22 03:18:24 compute-0 python3.9[41777]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:18:25 compute-0 sudo[42683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsmrgrrntcuvjkpimthblkyttaswvyac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781505.099019-435-255167185677466/AnsiballZ_command.py'
Nov 22 03:18:25 compute-0 sudo[42683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:25 compute-0 python3.9[42705]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:18:25 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 03:18:26 compute-0 systemd[1]: Starting Authorization Manager...
Nov 22 03:18:26 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 03:18:26 compute-0 polkitd[43398]: Started polkitd version 0.117
Nov 22 03:18:26 compute-0 polkitd[43398]: Loading rules from directory /etc/polkit-1/rules.d
Nov 22 03:18:26 compute-0 polkitd[43398]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 22 03:18:26 compute-0 polkitd[43398]: Finished loading, compiling and executing 2 rules
Nov 22 03:18:26 compute-0 systemd[1]: Started Authorization Manager.
Nov 22 03:18:26 compute-0 polkitd[43398]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 22 03:18:26 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:18:26 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:18:26 compute-0 systemd[1]: man-db-cache-update.service: Consumed 5.529s CPU time.
Nov 22 03:18:26 compute-0 systemd[1]: run-r2b5a24c6e414452d8eae3afe8a6b0dd1.service: Deactivated successfully.
Nov 22 03:18:26 compute-0 sudo[42683]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:26 compute-0 sudo[43567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znaitscnpqfuzvepxhtmtpyeosqooyfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781506.5395923-444-261303892821830/AnsiballZ_systemd.py'
Nov 22 03:18:26 compute-0 sudo[43567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:27 compute-0 python3.9[43569]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:18:27 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 22 03:18:27 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 22 03:18:27 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 22 03:18:27 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 03:18:27 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 03:18:27 compute-0 sudo[43567]: pam_unix(sudo:session): session closed for user root
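Annotation: tuned is installed, switched to the throughput-performance profile, then enabled and restarted. Equivalent shell, following the order in the log:

    dnf -y install tuned tuned-profiles-cpu-partitioning
    tuned-adm profile throughput-performance
    systemctl enable --now tuned
    tuned-adm active   # -> Current active profile: throughput-performance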
Nov 22 03:18:28 compute-0 python3.9[43730]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 22 03:18:30 compute-0 sudo[43880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhfvwwidnhuyiwvwvukjualocbbqucjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781510.2052119-501-208290395702961/AnsiballZ_systemd.py'
Nov 22 03:18:30 compute-0 sudo[43880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:30 compute-0 python3.9[43882]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:18:30 compute-0 systemd[1]: Reloading.
Nov 22 03:18:31 compute-0 systemd-rc-local-generator[43913]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:18:31 compute-0 sudo[43880]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:31 compute-0 sudo[44070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kczdgkdwkxdorgttssjvppmwikyjtrir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781511.3112493-501-10831223630034/AnsiballZ_systemd.py'
Nov 22 03:18:31 compute-0 sudo[44070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:31 compute-0 python3.9[44072]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:18:32 compute-0 systemd[1]: Reloading.
Nov 22 03:18:32 compute-0 systemd-rc-local-generator[44103]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:18:32 compute-0 sudo[44070]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:32 compute-0 sudo[44260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfivoeblxzuvaqjbhrhyrslufbepjrmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781512.519853-517-99981707911384/AnsiballZ_command.py'
Nov 22 03:18:32 compute-0 sudo[44260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:33 compute-0 python3.9[44262]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:18:33 compute-0 sudo[44260]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:33 compute-0 sudo[44413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldmwmmdqeyoydqnuywrzaksbxvkoieuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781513.2481158-525-13933294200119/AnsiballZ_command.py'
Nov 22 03:18:33 compute-0 sudo[44413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:33 compute-0 python3.9[44415]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:18:33 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 22 03:18:33 compute-0 sudo[44413]: pam_unix(sudo:session): session closed for user root
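Annotation: taken together, the swap steps in this session (dd at 03:17:51, the mode fix at 03:17:54, the fstab entry at 03:17:55, mkswap and swapon above) amount to the following; the fstab line is reconstructed approximately from the ansible.posix.mount parameters:

    dd if=/dev/zero of=/swap count=1024 bs=1M    # skipped when /swap exists (creates=/swap)
    chmod 0600 /swap
    echo '/swap none swap sw 0 0' >> /etc/fstab  # roughly what state=present writes
    mkswap /swap
    swapon /swap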
Nov 22 03:18:34 compute-0 sudo[44566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqinctanolyllvadcmyhodecrumxpgdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781514.0249283-533-79144055635248/AnsiballZ_command.py'
Nov 22 03:18:34 compute-0 sudo[44566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:34 compute-0 python3.9[44568]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:18:36 compute-0 sudo[44566]: pam_unix(sudo:session): session closed for user root
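Annotation: update-ca-trust regenerates the consolidated trust stores from /etc/pki/ca-trust/source/anchors, absorbing the tls-ca-bundle.pem copied in at 03:17:57. One heuristic check that the extracted bundle grew (sketch):

    update-ca-trust
    # the extracted system bundle should now include the new anchors
    grep -c 'BEGIN CERTIFICATE' /etc/pki/tls/certs/ca-bundle.crt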
Nov 22 03:18:36 compute-0 sudo[44728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hthiaybgesnsfmmmlfpevsmdancarbkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781516.4859922-541-50004524591639/AnsiballZ_command.py'
Nov 22 03:18:36 compute-0 sudo[44728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:36 compute-0 python3.9[44730]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:18:36 compute-0 sudo[44728]: pam_unix(sudo:session): session closed for user root
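Annotation: after ksm.service and ksmtuned.service were stopped and disabled (03:18:30-03:18:32), writing 2 to /sys/kernel/mm/ksm/run stops KSM and un-merges pages that were already shared (writing 0 would only stop scanning). Shell:

    systemctl disable --now ksm.service ksmtuned.service
    echo 2 > /sys/kernel/mm/ksm/run
    cat /sys/kernel/mm/ksm/pages_shared   # expect 0 once unmerging completes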
Nov 22 03:18:37 compute-0 sudo[44881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sajmtlybyqehwhgsfhxxggfszbafscsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781517.1667492-549-169328101560448/AnsiballZ_systemd.py'
Nov 22 03:18:37 compute-0 sudo[44881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:37 compute-0 python3.9[44883]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:18:37 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 22 03:18:37 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Nov 22 03:18:37 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Nov 22 03:18:37 compute-0 systemd[1]: Starting Apply Kernel Variables...
Nov 22 03:18:37 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 22 03:18:37 compute-0 systemd[1]: Finished Apply Kernel Variables.
Nov 22 03:18:37 compute-0 sudo[44881]: pam_unix(sudo:session): session closed for user root
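Annotation: restarting systemd-sysctl.service re-applies every file under /etc/sysctl.d, including the new 99-edpm.conf whose contents are masked above. The same effect from a shell:

    sysctl --system   # reloads /etc/sysctl.d/*.conf, /run/sysctl.d, etc.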
Nov 22 03:18:38 compute-0 sshd-session[31264]: Connection closed by 192.168.122.30 port 52670
Nov 22 03:18:38 compute-0 sshd-session[31261]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:18:38 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Nov 22 03:18:38 compute-0 systemd[1]: session-9.scope: Consumed 2min 17.338s CPU time.
Nov 22 03:18:38 compute-0 systemd-logind[799]: Session 9 logged out. Waiting for processes to exit.
Nov 22 03:18:38 compute-0 systemd-logind[799]: Removed session 9.
Nov 22 03:18:43 compute-0 sshd-session[44913]: Accepted publickey for zuul from 192.168.122.30 port 35192 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:18:43 compute-0 systemd-logind[799]: New session 10 of user zuul.
Nov 22 03:18:43 compute-0 systemd[1]: Started Session 10 of User zuul.
Nov 22 03:18:43 compute-0 sshd-session[44913]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:18:44 compute-0 python3.9[45066]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:18:46 compute-0 sudo[45220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaeygberagfkpywhfozzryxtofidovor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781525.5375135-36-166985453905693/AnsiballZ_getent.py'
Nov 22 03:18:46 compute-0 sudo[45220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:46 compute-0 python3.9[45223]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 22 03:18:46 compute-0 sudo[45220]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:47 compute-0 sudo[45374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juzlkqoweefyzujkytwimgpgsqjpjnxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781526.5536404-44-280869797885182/AnsiballZ_group.py'
Nov 22 03:18:47 compute-0 sudo[45374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:47 compute-0 python3.9[45376]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 03:18:47 compute-0 groupadd[45377]: group added to /etc/group: name=openvswitch, GID=42476
Nov 22 03:18:47 compute-0 groupadd[45377]: group added to /etc/gshadow: name=openvswitch
Nov 22 03:18:47 compute-0 groupadd[45377]: new group: name=openvswitch, GID=42476
Nov 22 03:18:47 compute-0 sudo[45374]: pam_unix(sudo:session): session closed for user root
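The getent lookup at 03:18:46 checked for an existing openvswitch account; the play then creates the group with a pinned GID, confirmed by groupadd's three audit lines. Sketch reconstructed from the logged arguments (task name assumed):

    - name: Create openvswitch group
      ansible.builtin.group:
        name: openvswitch
        gid: 42476
        state: present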
Nov 22 03:18:48 compute-0 sudo[45532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxysjyljeakxinhfgqoxqzshbleihkbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781527.6131504-52-151543816339348/AnsiballZ_user.py'
Nov 22 03:18:48 compute-0 sudo[45532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:48 compute-0 python3.9[45534]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 03:18:48 compute-0 useradd[45536]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Nov 22 03:18:48 compute-0 useradd[45536]: add 'openvswitch' to group 'hugetlbfs'
Nov 22 03:18:48 compute-0 useradd[45536]: add 'openvswitch' to shadow group 'hugetlbfs'
Nov 22 03:18:48 compute-0 sudo[45532]: pam_unix(sudo:session): session closed for user root
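The matching user is created with the same UID/GID, no login shell, and supplementary membership in hugetlbfs, presumably for hugepage access. Sketch from the logged arguments (task name assumed; defaulted parameters omitted):

    - name: Create openvswitch user
      ansible.builtin.user:
        name: openvswitch
        comment: openvswitch user
        uid: 42476
        group: openvswitch
        groups:
          - hugetlbfs
        shell: /sbin/nologin
        state: present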
Nov 22 03:18:49 compute-0 sudo[45692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqeitcowacihukxxuunscxdfuwdcuzeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781528.8846967-62-103634946359757/AnsiballZ_setup.py'
Nov 22 03:18:49 compute-0 sudo[45692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:49 compute-0 python3.9[45694]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:18:49 compute-0 sudo[45692]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:50 compute-0 sudo[45776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znxvjwiaueesutaxbsvdrtdvxzzreqxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781528.8846967-62-103634946359757/AnsiballZ_dnf.py'
Nov 22 03:18:50 compute-0 sudo[45776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:50 compute-0 python3.9[45778]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 03:18:52 compute-0 sudo[45776]: pam_unix(sudo:session): session closed for user root
Nov 22 03:18:53 compute-0 sudo[45940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvplllbtpjejhovflonwxtdecylzdeqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781533.2651904-76-122439224423158/AnsiballZ_dnf.py'
Nov 22 03:18:53 compute-0 sudo[45940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:18:53 compute-0 python3.9[45942]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:19:04 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Nov 22 03:19:04 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:19:04 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:19:04 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:19:04 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:19:04 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:19:04 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:19:04 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:19:04 compute-0 groupadd[45965]: group added to /etc/group: name=unbound, GID=993
Nov 22 03:19:04 compute-0 groupadd[45965]: group added to /etc/gshadow: name=unbound
Nov 22 03:19:04 compute-0 groupadd[45965]: new group: name=unbound, GID=993
Nov 22 03:19:05 compute-0 useradd[45972]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Nov 22 03:19:05 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 22 03:19:05 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 22 03:19:06 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:19:06 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:19:06 compute-0 systemd[1]: Reloading.
Nov 22 03:19:07 compute-0 systemd-rc-local-generator[46471]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:19:07 compute-0 systemd-sysv-generator[46489]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:19:07 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:19:07 compute-0 sudo[45940]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:19:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:19:07 compute-0 systemd[1]: run-r74871e6767c045c09947520862e0e112.service: Deactivated successfully.
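openvswitch is installed in two phases, a download-only pass followed by the real transaction, which keeps the install step itself short and more predictable. The SELinux SID-table conversion, the unbound account creation, and the man-db cache update above are all side effects of that transaction's RPM scriptlets. A reconstructed pair of tasks (names assumed):

    - name: Pre-fetch openvswitch packages
      ansible.builtin.dnf:
        name: openvswitch
        download_only: true

    - name: Install openvswitch
      ansible.builtin.dnf:
        name: openvswitch
        state: present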
Nov 22 03:19:08 compute-0 sudo[47038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqbhqdrqapucuukrlzfnbslgrkloubvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781547.8710327-84-154905943492412/AnsiballZ_systemd.py'
Nov 22 03:19:08 compute-0 sudo[47038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:08 compute-0 python3.9[47040]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:19:08 compute-0 systemd[1]: Reloading.
Nov 22 03:19:09 compute-0 systemd-rc-local-generator[47066]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:19:09 compute-0 systemd-sysv-generator[47071]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:19:09 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Nov 22 03:19:09 compute-0 chown[47082]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 22 03:19:09 compute-0 ovs-ctl[47087]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 22 03:19:09 compute-0 ovs-ctl[47087]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 22 03:19:09 compute-0 ovs-ctl[47087]: Starting ovsdb-server [  OK  ]
Nov 22 03:19:09 compute-0 ovs-vsctl[47136]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 22 03:19:09 compute-0 ovs-vsctl[47155]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"7d76f7df-fc3b-449d-b505-65b8b0ef9c3a\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 22 03:19:09 compute-0 ovs-ctl[47087]: Configuring Open vSwitch system IDs [  OK  ]
Nov 22 03:19:09 compute-0 ovs-ctl[47087]: Enabling remote OVSDB managers [  OK  ]
Nov 22 03:19:09 compute-0 ovs-vsctl[47161]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 22 03:19:09 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Nov 22 03:19:09 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 22 03:19:09 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 22 03:19:09 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 22 03:19:09 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Nov 22 03:19:09 compute-0 ovs-ctl[47206]: Inserting openvswitch module [  OK  ]
Nov 22 03:19:10 compute-0 ovs-ctl[47175]: Starting ovs-vswitchd [  OK  ]
Nov 22 03:19:10 compute-0 ovs-vsctl[47226]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 22 03:19:10 compute-0 ovs-ctl[47175]: Enabling remote OVSDB managers [  OK  ]
Nov 22 03:19:10 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 22 03:19:10 compute-0 systemd[1]: Starting Open vSwitch...
Nov 22 03:19:10 compute-0 systemd[1]: Finished Open vSwitch.
Nov 22 03:19:10 compute-0 sudo[47038]: pam_unix(sudo:session): session closed for user root
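With the package in place the service is enabled and started. The chown complaint about /run/openvswitch appears benign here: ovs-ctl goes on to create the missing conf.db, start ovsdb-server and ovs-vswitchd, insert the openvswitch kernel module, and stamp the system-id, after which all three units report success. Sketch from the logged arguments (task name assumed):

    - name: Enable and start Open vSwitch
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        masked: false
        state: started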
Nov 22 03:19:11 compute-0 python3.9[47378]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:19:12 compute-0 sudo[47528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtephnbaciwcofpwpbjjcxaopyuyyvov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781551.4033475-102-89781644547304/AnsiballZ_sefcontext.py'
Nov 22 03:19:12 compute-0 sudo[47528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:12 compute-0 python3.9[47530]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 22 03:19:13 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Nov 22 03:19:13 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:19:13 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:19:13 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:19:13 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:19:13 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:19:13 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:19:13 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:19:13 compute-0 sudo[47528]: pam_unix(sudo:session): session closed for user root
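The play then registers a persistent SELinux file-context rule so everything under /var/lib/edpm-config is labeled container_file_t (writable from containers); the second SID-table conversion above is the policy reload that picks the rule up. Sketch (task name assumed):

    - name: Label the edpm-config tree for container access
      community.general.sefcontext:
        target: "/var/lib/edpm-config(/.*)?"
        setype: container_file_t
        selevel: s0
        state: present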
Nov 22 03:19:14 compute-0 python3.9[47685]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:19:15 compute-0 sudo[47841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjdynmcacxqvjapxuwjnpkiujkmjwepv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781555.3668542-120-217335253772856/AnsiballZ_dnf.py'
Nov 22 03:19:15 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 22 03:19:15 compute-0 sudo[47841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:16 compute-0 python3.9[47843]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:19:17 compute-0 sudo[47841]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:17 compute-0 sudo[47994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcgxewymeobyjcwoxxwquzwpagwxjjsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781557.5040083-128-15708914694609/AnsiballZ_command.py'
Nov 22 03:19:17 compute-0 sudo[47994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:18 compute-0 python3.9[47996]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:19:18 compute-0 sudo[47994]: pam_unix(sudo:session): session closed for user root
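After the bulk install of the base tooling at 03:19:16, rpm -V re-verifies every package in the set against the RPM database. A sketch of the verification task (name assumed; rpm -V exits non-zero when any file deviates, so the real play presumably registers the result rather than letting the task fail):

    - name: Verify base packages against the RPM database
      ansible.builtin.command: >-
        rpm -V driverctl lvm2 crudini jq nftables NetworkManager
        openstack-selinux python3-libselinux python3-pyyaml rsync
        tmpwatch sysstat iproute-tc ksmtuned systemd-container
        crypto-policies-scripts grubby sos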
Nov 22 03:19:19 compute-0 sudo[48281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxyzpqtpvusddjckbjejtvgiolnpijpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781559.1527863-136-238620111958423/AnsiballZ_file.py'
Nov 22 03:19:19 compute-0 sudo[48281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:19 compute-0 python3.9[48283]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 03:19:19 compute-0 sudo[48281]: pam_unix(sudo:session): session closed for user root
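The directory covered by the new file-context rule is then created with a matching label and restrictive permissions. Sketch from the logged arguments (task name assumed):

    - name: Create /var/lib/edpm-config
      ansible.builtin.file:
        path: /var/lib/edpm-config
        state: directory
        mode: "0750"
        setype: container_file_t
        selevel: s0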
Nov 22 03:19:20 compute-0 python3.9[48433]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:19:21 compute-0 sudo[48585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoiiuracagmqvyelddbnhguibtiuumcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781560.9480515-152-276106117268401/AnsiballZ_dnf.py'
Nov 22 03:19:21 compute-0 sudo[48585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:21 compute-0 python3.9[48587]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:19:23 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:19:23 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:19:23 compute-0 systemd[1]: Reloading.
Nov 22 03:19:23 compute-0 systemd-rc-local-generator[48628]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:19:23 compute-0 systemd-sysv-generator[48631]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:19:24 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:19:24 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:19:24 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:19:24 compute-0 systemd[1]: run-r966f7a2f487f4e55bb53e7a5b059826c.service: Deactivated successfully.
Nov 22 03:19:24 compute-0 sudo[48585]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:25 compute-0 sudo[48903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqtnpdmdmqtevmdttpcuaqblwfynweif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781564.644671-160-117805389414964/AnsiballZ_systemd.py'
Nov 22 03:19:25 compute-0 sudo[48903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:25 compute-0 python3.9[48905]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:19:25 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 22 03:19:25 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Nov 22 03:19:25 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Nov 22 03:19:25 compute-0 systemd[1]: Stopping Network Manager...
Nov 22 03:19:25 compute-0 NetworkManager[7180]: <info>  [1763781565.3624] caught SIGTERM, shutting down normally.
Nov 22 03:19:25 compute-0 NetworkManager[7180]: <info>  [1763781565.3655] dhcp4 (eth0): canceled DHCP transaction
Nov 22 03:19:25 compute-0 NetworkManager[7180]: <info>  [1763781565.3656] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 03:19:25 compute-0 NetworkManager[7180]: <info>  [1763781565.3656] dhcp4 (eth0): state changed no lease
Nov 22 03:19:25 compute-0 NetworkManager[7180]: <info>  [1763781565.3663] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 03:19:25 compute-0 NetworkManager[7180]: <info>  [1763781565.3776] exiting (success)
Nov 22 03:19:25 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 03:19:25 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 03:19:25 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 22 03:19:25 compute-0 systemd[1]: Stopped Network Manager.
Nov 22 03:19:25 compute-0 systemd[1]: NetworkManager.service: Consumed 15.538s CPU time, 4.1M memory peak, read 0B from disk, written 30.0K to disk.
Nov 22 03:19:25 compute-0 systemd[1]: Starting Network Manager...
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.4700] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:bbdf02b3-deb9-47de-b411-3c25d6aa93d1)
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.4701] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.4773] manager[0x559fbb1b9090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 22 03:19:25 compute-0 systemd[1]: Starting Hostname Service...
Nov 22 03:19:25 compute-0 systemd[1]: Started Hostname Service.
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5631] hostname: hostname: using hostnamed
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5632] hostname: static hostname changed from (none) to "compute-0"
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5640] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5647] manager[0x559fbb1b9090]: rfkill: Wi-Fi hardware radio set enabled
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5647] manager[0x559fbb1b9090]: rfkill: WWAN hardware radio set enabled
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5680] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5695] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5696] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5697] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5698] manager: Networking is enabled by state file
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5701] settings: Loaded settings plugin: keyfile (internal)
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5706] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5745] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5757] dhcp: init: Using DHCP client 'internal'
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5760] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5765] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5771] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5779] device (lo): Activation: starting connection 'lo' (704fb092-bceb-43d7-a199-2a71be4392ac)
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5785] device (eth0): carrier: link connected
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5789] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5793] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5793] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5799] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5805] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5811] device (eth1): carrier: link connected
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5814] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5818] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (d949c173-9bb4-5028-b379-646626090b3a) (indicated)
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5818] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5823] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5829] device (eth1): Activation: starting connection 'ci-private-network' (d949c173-9bb4-5028-b379-646626090b3a)
Nov 22 03:19:25 compute-0 systemd[1]: Started Network Manager.
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5845] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5857] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5860] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5862] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5863] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5866] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5868] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5870] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5873] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5896] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5902] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5919] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 22 03:19:25 compute-0 systemd[1]: Starting Network Manager Wait Online...
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5947] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5968] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5973] dhcp4 (eth0): state changed new lease, address=38.102.83.177
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5979] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.5988] device (lo): Activation: successful, device activated.
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.6008] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.6115] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.6126] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.6131] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.6137] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.6142] device (eth1): Activation: successful, device activated.
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.6159] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.6161] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.6167] manager: NetworkManager state is now CONNECTED_SITE
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.6175] device (eth0): Activation: successful, device activated.
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.6188] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 22 03:19:25 compute-0 NetworkManager[48916]: <info>  [1763781565.6215] manager: startup complete
Nov 22 03:19:25 compute-0 systemd[1]: Finished Network Manager Wait Online.
Nov 22 03:19:25 compute-0 sudo[48903]: pam_unix(sudo:session): session closed for user root
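After NetworkManager-ovs is installed, NetworkManager is restarted so the OVS device plugin is loaded (the new instance lists NMOvsFactory among its plugins). The restart appears effectively hitless: lo, eth0 and eth1 are re-assumed, the eth0 DHCP lease for 38.102.83.177 is re-acquired within a second, and wait-online finishes immediately. Sketch of the restart task from the logged arguments (name assumed):

    - name: Restart NetworkManager to load the OVS plugin
      ansible.builtin.systemd:
        name: NetworkManager
        state: restarted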
Nov 22 03:19:26 compute-0 sudo[49129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdegnjcgbildxmelohpjbybxxahdqjls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781565.876277-168-136558062039376/AnsiballZ_dnf.py'
Nov 22 03:19:26 compute-0 sudo[49129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:26 compute-0 python3.9[49131]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:19:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:19:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:19:32 compute-0 systemd[1]: Reloading.
Nov 22 03:19:32 compute-0 systemd-sysv-generator[49186]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:19:32 compute-0 systemd-rc-local-generator[49183]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:19:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:19:32 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:19:32 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:19:32 compute-0 systemd[1]: run-r3683e03d64f5453abb78f99582695ca5.service: Deactivated successfully.
Nov 22 03:19:33 compute-0 sudo[49129]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:33 compute-0 sudo[49590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zemfpaxmadwyurrosajpgyknawbwadpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781573.5223188-180-154154631641825/AnsiballZ_stat.py'
Nov 22 03:19:33 compute-0 sudo[49590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:33 compute-0 python3.9[49592]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:19:34 compute-0 sudo[49590]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:34 compute-0 sudo[49742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcdiksbzdrjcansflugysgfsxvbcayly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781574.2403843-189-17663865157671/AnsiballZ_ini_file.py'
Nov 22 03:19:34 compute-0 sudo[49742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:35 compute-0 python3.9[49744]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:19:35 compute-0 sudo[49742]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:35 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 03:19:35 compute-0 sudo[49896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jisooexfvmpjbkxeedtzktfbyyetbbek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781575.4088843-199-271137612250702/AnsiballZ_ini_file.py'
Nov 22 03:19:35 compute-0 sudo[49896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:36 compute-0 python3.9[49898]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:19:36 compute-0 sudo[49896]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:36 compute-0 sudo[50048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scrphqyirsgpphbdqvdmsnnrmjtjyzgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781576.19016-199-11684715915624/AnsiballZ_ini_file.py'
Nov 22 03:19:36 compute-0 sudo[50048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:36 compute-0 python3.9[50050]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:19:36 compute-0 sudo[50048]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:37 compute-0 sudo[50200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlzbvdudvvigewawuepcipqchaipjqvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781577.004123-214-176857046522766/AnsiballZ_ini_file.py'
Nov 22 03:19:37 compute-0 sudo[50200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:37 compute-0 python3.9[50202]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:19:37 compute-0 sudo[50200]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:38 compute-0 sudo[50352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xypzgfvpukzufudvnlnwdvsmsrzirnne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781577.7863595-214-238503430794248/AnsiballZ_ini_file.py'
Nov 22 03:19:38 compute-0 sudo[50352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:38 compute-0 python3.9[50354]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:19:38 compute-0 sudo[50352]: pam_unix(sudo:session): session closed for user root
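The five ini_file tasks above pin NetworkManager's behavior ahead of os-net-config: no-auto-default=* stops NM from generating default DHCP profiles for unconfigured NICs, while any dns= and rc-manager= overrides left by cloud-init are removed from both NetworkManager.conf and 99-cloud-init.conf. Two representative tasks reconstructed from the logged arguments (names assumed):

    - name: Stop NM auto-generating default connections
      community.general.ini_file:
        path: /etc/NetworkManager/NetworkManager.conf
        section: main
        option: no-auto-default
        value: "*"
        no_extra_spaces: true
        backup: true
        mode: "0644"

    - name: Drop the cloud-init dns override
      community.general.ini_file:
        path: /etc/NetworkManager/conf.d/99-cloud-init.conf
        section: main
        option: dns
        state: absent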
Nov 22 03:19:38 compute-0 sudo[50504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnidbnaamnrxtfwualamwthwwwyekhro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781578.4900925-229-141758547197000/AnsiballZ_stat.py'
Nov 22 03:19:38 compute-0 sudo[50504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:38 compute-0 python3.9[50506]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:19:38 compute-0 sudo[50504]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:39 compute-0 sudo[50627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgwatjunroekbyzxsbrvldbfldvewffg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781578.4900925-229-141758547197000/AnsiballZ_copy.py'
Nov 22 03:19:39 compute-0 sudo[50627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:39 compute-0 python3.9[50629]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1763781578.4900925-229-141758547197000/.source _original_basename=.fxi0mg52 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:19:39 compute-0 sudo[50627]: pam_unix(sudo:session): session closed for user root
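A dhclient-enter-hooks script is installed into /etc/dhcp with mode 0755; its content travels from the controller, so only the destination and mode are visible in the log. Sketch (task name and controller-side source path are assumptions):

    - name: Install dhclient enter hooks
      ansible.builtin.copy:
        src: dhclient-enter-hooks        # controller-side file name assumed
        dest: /etc/dhcp/dhclient-enter-hooks
        mode: "0755"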
Nov 22 03:19:40 compute-0 sudo[50779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufmxalzqiwllrrkheadhfdancrxtflcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781580.0117574-244-170464432861937/AnsiballZ_file.py'
Nov 22 03:19:40 compute-0 sudo[50779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:40 compute-0 python3.9[50781]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:19:40 compute-0 sudo[50779]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:41 compute-0 sudo[50931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iliblqxhrbgutbvtqyigaxuxtxualawt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781580.853221-252-78101355736195/AnsiballZ_edpm_os_net_config_mappings.py'
Nov 22 03:19:41 compute-0 sudo[50931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:41 compute-0 python3.9[50933]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 22 03:19:41 compute-0 sudo[50931]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:42 compute-0 sudo[51083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjppptpvebhkkqhseiwzlozdajcnlvdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781581.899921-261-51565285024747/AnsiballZ_file.py'
Nov 22 03:19:42 compute-0 sudo[51083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:42 compute-0 python3.9[51085]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:19:42 compute-0 sudo[51083]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:43 compute-0 sudo[51235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejrqeqjifphdqvvppxqsvwltetbbnkyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781582.8882377-271-21069596017035/AnsiballZ_stat.py'
Nov 22 03:19:43 compute-0 sudo[51235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:43 compute-0 sudo[51235]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:43 compute-0 sudo[51358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klipjgkspumvjidvcbwedhrnewozuevl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781582.8882377-271-21069596017035/AnsiballZ_copy.py'
Nov 22 03:19:43 compute-0 sudo[51358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:44 compute-0 sudo[51358]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:44 compute-0 sudo[51510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlkucluoqhmdeddfergpwnykevwhulyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781584.3696265-286-198887819168204/AnsiballZ_slurp.py'
Nov 22 03:19:44 compute-0 sudo[51510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:45 compute-0 python3.9[51512]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 22 03:19:45 compute-0 sudo[51510]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:46 compute-0 sudo[51685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmusiarumvfukzzwsqfvlnshuebbqpjc ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781585.3795192-295-112275947776097/async_wrapper.py j589542713072 300 /home/zuul/.ansible/tmp/ansible-tmp-1763781585.3795192-295-112275947776097/AnsiballZ_edpm_os_net_config.py _'
Nov 22 03:19:46 compute-0 sudo[51685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:46 compute-0 ansible-async_wrapper.py[51687]: Invoked with j589542713072 300 /home/zuul/.ansible/tmp/ansible-tmp-1763781585.3795192-295-112275947776097/AnsiballZ_edpm_os_net_config.py _
Nov 22 03:19:46 compute-0 ansible-async_wrapper.py[51690]: Starting module and watcher
Nov 22 03:19:46 compute-0 ansible-async_wrapper.py[51690]: Start watching 51691 (300)
Nov 22 03:19:46 compute-0 ansible-async_wrapper.py[51691]: Start module (51691)
Nov 22 03:19:46 compute-0 ansible-async_wrapper.py[51687]: Return async_wrapper task started.
Nov 22 03:19:46 compute-0 sudo[51685]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:46 compute-0 python3.9[51692]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
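After slurping the freshly written /etc/os-net-config/config.yaml back at 03:19:45, the play launches the actual network rewrite as an asynchronous job (the async_wrapper's 300 is the async timeout in seconds), presumably so the controller can ride out the connectivity interruption and poll for the result. The module ships with edpm-ansible; the FQCN and poll value below are assumptions, while the arguments are as logged:

    - name: Apply os-net-config network configuration
      osp.edpm.edpm_os_net_config:       # collection namespace assumed
        config_file: /etc/os-net-config/config.yaml
        cleanup: true
        debug: true
        detailed_exit_codes: true
        safe_defaults: false
        use_nmstate: true
      async: 300
      poll: 0    # fire-and-forget plus a later async_status poll; assumed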
Nov 22 03:19:47 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 22 03:19:47 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 22 03:19:47 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 22 03:19:47 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 22 03:19:47 compute-0 kernel: cfg80211: failed to load regulatory.db
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.8222] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.8250] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9085] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9087] audit: op="connection-add" uuid="faaa9333-97a8-4d0a-840d-0d033303934b" name="br-ex-br" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9109] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9111] audit: op="connection-add" uuid="98f68b47-0405-4b0e-8e98-d52399a272ba" name="br-ex-port" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9132] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9134] audit: op="connection-add" uuid="a4625e05-f912-430c-b308-f006f678bd0a" name="eth1-port" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9155] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9157] audit: op="connection-add" uuid="49ca63b9-dad4-4ee6-8aa5-5989e9de5f55" name="vlan20-port" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9178] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9180] audit: op="connection-add" uuid="2f564cf8-c244-4e45-8cbf-3ff3c65b3dd8" name="vlan21-port" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9200] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9202] audit: op="connection-add" uuid="d9013601-a4c2-4dd1-940d-9cb54f603cf6" name="vlan22-port" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9221] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9224] audit: op="connection-add" uuid="fcccfe22-43e2-4e78-92ec-8915b0d0f900" name="vlan23-port" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9258] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,connection.timestamp" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9288] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9290] audit: op="connection-add" uuid="1b400552-c431-4695-85ea-5df552ec169a" name="br-ex-if" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9347] audit: op="connection-update" uuid="d949c173-9bb4-5028-b379-646626090b3a" name="ci-private-network" args="ovs-interface.type,ovs-external-ids.data,ipv4.routes,ipv4.method,ipv4.routing-rules,ipv4.addresses,ipv4.dns,ipv4.never-default,ipv6.routes,ipv6.method,ipv6.routing-rules,ipv6.addr-gen-mode,ipv6.addresses,ipv6.dns,connection.master,connection.port-type,connection.timestamp,connection.controller,connection.slave-type" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9381] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9385] audit: op="connection-add" uuid="0881fbc8-fdcf-4cc7-b039-95ffc7be49fc" name="vlan20-if" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9418] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9420] audit: op="connection-add" uuid="1f14c6b0-34d4-47a8-8608-f5cf15c9ef18" name="vlan21-if" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9451] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9454] audit: op="connection-add" uuid="5f9b0ec4-6fb7-4bcb-b6c7-9447d52188b5" name="vlan22-if" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9486] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9488] audit: op="connection-add" uuid="d96a82bb-9097-4741-b0f9-62b58fa51b78" name="vlan23-if" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9513] audit: op="connection-delete" uuid="0a1cf993-6cf3-38b0-82a8-4d66bef49908" name="Wired connection 1" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9535] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9551] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9558] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (faaa9333-97a8-4d0a-840d-0d033303934b)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9560] audit: op="connection-activate" uuid="faaa9333-97a8-4d0a-840d-0d033303934b" name="br-ex-br" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9563] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9574] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9581] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (98f68b47-0405-4b0e-8e98-d52399a272ba)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9583] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9594] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9601] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (a4625e05-f912-430c-b308-f006f678bd0a)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9605] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9616] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9622] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (49ca63b9-dad4-4ee6-8aa5-5989e9de5f55)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9626] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9636] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9643] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (2f564cf8-c244-4e45-8cbf-3ff3c65b3dd8)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9647] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9658] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9664] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (d9013601-a4c2-4dd1-940d-9cb54f603cf6)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9668] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9679] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9687] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (fcccfe22-43e2-4e78-92ec-8915b0d0f900)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9687] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9689] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9690] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9696] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9699] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9703] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (1b400552-c431-4695-85ea-5df552ec169a)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9703] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9706] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9707] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9708] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9709] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9719] device (eth1): disconnecting for new activation request.
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9719] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9722] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9730] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9740] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9758] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9764] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9768] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (0881fbc8-fdcf-4cc7-b039-95ffc7be49fc)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9769] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9771] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9773] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9774] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9777] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9781] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9785] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (1f14c6b0-34d4-47a8-8608-f5cf15c9ef18)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9786] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9788] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9790] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9791] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9794] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9798] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9802] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (5f9b0ec4-6fb7-4bcb-b6c7-9447d52188b5)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9803] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9806] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9809] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9810] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9813] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9819] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9824] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (d96a82bb-9097-4741-b0f9-62b58fa51b78)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9824] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9827] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9830] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9831] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9833] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9849] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority" pid=51693 uid=0 result="success"
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9851] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9855] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9857] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9865] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9869] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9873] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9878] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9880] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9886] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 kernel: ovs-system: entered promiscuous mode
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9891] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9895] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9897] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9903] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 kernel: Timeout policy base is empty
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9908] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9912] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9914] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 systemd-udevd[51698]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9921] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9926] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9929] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9931] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9936] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9941] dhcp4 (eth0): canceled DHCP transaction
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9941] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9942] dhcp4 (eth0): state changed no lease
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9943] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9957] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 22 03:19:48 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 22 03:19:48 compute-0 NetworkManager[48916]: <info>  [1763781588.9961] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51693 uid=0 result="fail" reason="Device is not activated"
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0006] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0015] dhcp4 (eth0): state changed new lease, address=38.102.83.177
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0019] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0026] device (eth1): disconnecting for new activation request.
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0027] audit: op="connection-activate" uuid="d949c173-9bb4-5028-b379-646626090b3a" name="ci-private-network" pid=51693 uid=0 result="success"
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0027] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0142] device (eth1): Activation: starting connection 'ci-private-network' (d949c173-9bb4-5028-b379-646626090b3a)
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0148] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0193] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0206] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0212] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0215] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0219] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0222] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0226] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51693 uid=0 result="success"
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0227] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0228] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0230] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0231] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0232] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0235] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0237] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0242] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0245] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0248] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0252] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0255] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0259] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0263] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0268] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0272] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 kernel: br-ex: entered promiscuous mode
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0277] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0281] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0286] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0292] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0296] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0338] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0339] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0343] device (eth1): Activation: successful, device activated.
Nov 22 03:19:49 compute-0 kernel: vlan22: entered promiscuous mode
Nov 22 03:19:49 compute-0 systemd-udevd[51699]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:19:49 compute-0 kernel: vlan21: entered promiscuous mode
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0461] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 22 03:19:49 compute-0 systemd-udevd[51697]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0475] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0502] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0503] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0509] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0523] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0532] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 kernel: vlan20: entered promiscuous mode
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0593] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0598] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0602] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 kernel: vlan23: entered promiscuous mode
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0615] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0640] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0672] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0673] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0677] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0689] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0698] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0707] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0716] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0741] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0742] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0748] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0777] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0778] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 22 03:19:49 compute-0 NetworkManager[48916]: <info>  [1763781589.0783] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
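The cascade above is NetworkManager's three-level Open vSwitch model: one ovs-bridge profile (br-ex-br), ovs-port profiles attached to it (br-ex-port, eth1-port, vlan20-port through vlan23-port), and an ovs-interface or ethernet profile inside each port (br-ex-if, vlan20-if through vlan23-if, plus eth1 via 'ci-private-network'); every layer walks the same prepare -> config -> ip-config -> ip-check -> secondaries -> activated state machine. A minimal nmcli sketch that would build an equivalent profile set, reusing the connection names from the log (the VLAN tags and disabled IP methods are assumptions, not taken from the log):

    nmcli conn add type ovs-bridge    con-name br-ex-br    conn.interface br-ex
    nmcli conn add type ovs-port      con-name br-ex-port  conn.interface br-ex master br-ex-br
    nmcli conn add type ovs-interface con-name br-ex-if    conn.interface br-ex \
          slave-type ovs-port master br-ex-port ipv4.method disabled ipv6.method disabled
    nmcli conn add type ovs-port      con-name eth1-port   conn.interface eth1 master br-ex-br
    nmcli conn add type ethernet      con-name ci-private-network conn.interface eth1 \
          slave-type ovs-port master eth1-port
    # one tagged port plus internal interface per VLAN; repeat for 21-23
    nmcli conn add type ovs-port      con-name vlan20-port conn.interface vlan20 \
          master br-ex-br ovs-port.tag 20
    nmcli conn add type ovs-interface con-name vlan20-if   conn.interface vlan20 \
          slave-type ovs-port master vlan20-port ipv4.method disabled ipv6.method disabled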
Nov 22 03:19:50 compute-0 NetworkManager[48916]: <info>  [1763781590.2211] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51693 uid=0 result="success"
Nov 22 03:19:50 compute-0 sudo[52049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txwagbwfqahcuayxssruglmvgonmwpjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781589.6788704-295-78900533219552/AnsiballZ_async_status.py'
Nov 22 03:19:50 compute-0 sudo[52049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:50 compute-0 NetworkManager[48916]: <info>  [1763781590.4196] checkpoint[0x559fbb18f950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 22 03:19:50 compute-0 NetworkManager[48916]: <info>  [1763781590.4200] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51693 uid=0 result="success"
Nov 22 03:19:50 compute-0 python3.9[52051]: ansible-ansible.legacy.async_status Invoked with jid=j589542713072.51687 mode=status _async_dir=/root/.ansible_async
Nov 22 03:19:50 compute-0 sudo[52049]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:50 compute-0 NetworkManager[48916]: <info>  [1763781590.9780] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51693 uid=0 result="success"
Nov 22 03:19:50 compute-0 NetworkManager[48916]: <info>  [1763781590.9807] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51693 uid=0 result="success"
Nov 22 03:19:51 compute-0 NetworkManager[48916]: <info>  [1763781591.3062] audit: op="networking-control" arg="global-dns-configuration" pid=51693 uid=0 result="success"
Nov 22 03:19:51 compute-0 NetworkManager[48916]: <info>  [1763781591.3094] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 22 03:19:51 compute-0 NetworkManager[48916]: <info>  [1763781591.3121] audit: op="networking-control" arg="global-dns-configuration" pid=51693 uid=0 result="success"
Nov 22 03:19:51 compute-0 NetworkManager[48916]: <info>  [1763781591.3163] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51693 uid=0 result="success"
Nov 22 03:19:51 compute-0 ansible-async_wrapper.py[51690]: 51691 still running (300)
Nov 22 03:19:51 compute-0 NetworkManager[48916]: <info>  [1763781591.5197] checkpoint[0x559fbb18fa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 22 03:19:51 compute-0 NetworkManager[48916]: <info>  [1763781591.5205] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51693 uid=0 result="success"
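The checkpoint audit entries bracket the whole rewiring: the caller (pid 51693) creates a checkpoint before touching connections, keeps extending its rollback timeout while it works, and destroys it on success so NetworkManager never rolls the changes back; had connectivity been lost, the timeout would have fired and restored the pre-checkpoint state. The same calls over the D-Bus checkpoint API, sketched with busctl (the 60-second window is an assumption):

    # empty device array = checkpoint every managed device; flags=0 (none)
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointCreate aouu 0 60 0
    # push the rollback deadline out another 60 s while work continues
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointAdjustRollbackTimeout ou \
        /org/freedesktop/NetworkManager/Checkpoint/1 60
    # commit: destroying the checkpoint cancels the pending rollback
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointDestroy o \
        /org/freedesktop/NetworkManager/Checkpoint/1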
Nov 22 03:19:51 compute-0 ansible-async_wrapper.py[51691]: Module complete (51691)
Nov 22 03:19:53 compute-0 sudo[52156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eikzyxfgvyjlvkbdmkjmvmsarqbacypj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781589.6788704-295-78900533219552/AnsiballZ_async_status.py'
Nov 22 03:19:53 compute-0 sudo[52156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:54 compute-0 python3.9[52158]: ansible-ansible.legacy.async_status Invoked with jid=j589542713072.51687 mode=status _async_dir=/root/.ansible_async
Nov 22 03:19:54 compute-0 sudo[52156]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:54 compute-0 sudo[52255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azoocwohetqoteluazkrqcrkxshzgvvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781589.6788704-295-78900533219552/AnsiballZ_async_status.py'
Nov 22 03:19:54 compute-0 sudo[52255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:54 compute-0 python3.9[52257]: ansible-ansible.legacy.async_status Invoked with jid=j589542713072.51687 mode=cleanup _async_dir=/root/.ansible_async
Nov 22 03:19:54 compute-0 sudo[52255]: pam_unix(sudo:session): session closed for user root
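The async_wrapper heartbeat ('51691 still running (300)'), the 'Module complete' line, and the status/status/cleanup triple above are Ansible's fire-and-forget pattern: the network job was launched with async and poll 0, and a separate async_status task polls the job record under /root/.ansible_async until it reports finished, then removes it. Sketched as ad-hoc commands (host pattern, module, flags, and timeout are assumptions):

    # launch detached: -B caps the runtime, -P 0 returns immediately with a jid
    ansible compute-0 -b -m ansible.builtin.command -a 'os-net-config --detailed-exit-codes' -B 1800 -P 0
    # poll until finished=1, then drop the job record
    ansible compute-0 -b -m ansible.builtin.async_status -a 'jid=<JID> mode=status'
    ansible compute-0 -b -m ansible.builtin.async_status -a 'jid=<JID> mode=cleanup'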
Nov 22 03:19:55 compute-0 sudo[52407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgnmgonnupitafrtscrnlhwfuechqtgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781594.9760768-322-50316550066188/AnsiballZ_stat.py'
Nov 22 03:19:55 compute-0 sudo[52407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:55 compute-0 python3.9[52409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:19:55 compute-0 sudo[52407]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:55 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 22 03:19:55 compute-0 sudo[52532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuvbsozjqccnmrlrymppgtkammqudpoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781594.9760768-322-50316550066188/AnsiballZ_copy.py'
Nov 22 03:19:55 compute-0 sudo[52532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:56 compute-0 python3.9[52534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763781594.9760768-322-50316550066188/.source.returncode _original_basename=.g80t5qs0 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:19:56 compute-0 sudo[52532]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:56 compute-0 ansible-async_wrapper.py[51690]: Done in kid B.
Nov 22 03:19:56 compute-0 sudo[52684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biuujpbsbrpzmruitjmrovlywgsdvcmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781596.3969758-338-231531987273044/AnsiballZ_stat.py'
Nov 22 03:19:56 compute-0 sudo[52684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:56 compute-0 python3.9[52686]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:19:56 compute-0 sudo[52684]: pam_unix(sudo:session): session closed for user root
Nov 22 03:19:57 compute-0 sudo[52808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftpvivkprpnyedsouyvrpqnsssbzmfcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781596.3969758-338-231531987273044/AnsiballZ_copy.py'
Nov 22 03:19:57 compute-0 sudo[52808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:57 compute-0 python3.9[52810]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763781596.3969758-338-231531987273044/.source.cfg _original_basename=.fq1p3vp7 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:19:57 compute-0 sudo[52808]: pam_unix(sudo:session): session closed for user root
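99-edpm-disable-network-config.cfg is dropped under /etc/cloud/cloud.cfg.d/ so cloud-init stops rendering interface configuration on later boots, now that os-net-config owns it. The payload is not in the log (content=NOT_LOGGING_PARAMETER), but cloud-init's documented switch for this is a single stanza, so the file plausibly reads:

    network:
      config: disabled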
Nov 22 03:19:57 compute-0 sudo[52960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qulpjsnqtyyanfjvotmfznwwvkeyepil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781597.6360023-353-176090969138454/AnsiballZ_systemd.py'
Nov 22 03:19:57 compute-0 sudo[52960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:19:58 compute-0 python3.9[52962]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:19:58 compute-0 systemd[1]: Reloading Network Manager...
Nov 22 03:19:58 compute-0 NetworkManager[48916]: <info>  [1763781598.3020] audit: op="reload" arg="0" pid=52966 uid=0 result="success"
Nov 22 03:19:58 compute-0 NetworkManager[48916]: <info>  [1763781598.3030] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 22 03:19:58 compute-0 systemd[1]: Reloaded Network Manager.
Nov 22 03:19:58 compute-0 sudo[52960]: pam_unix(sudo:session): session closed for user root
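state=reloaded on the systemd module maps to 'systemctl reload NetworkManager', i.e. a SIGHUP: the config signal line above shows NetworkManager re-reading NetworkManager.conf and its conf.d drop-ins without restarting, which would have torn down the OVS devices activated seconds earlier. The manual equivalents:

    systemctl reload NetworkManager    # SIGHUP: re-read config, keep device state
    nmcli general reload conf          # narrower alternative: configuration only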
Nov 22 03:19:58 compute-0 sshd-session[44916]: Connection closed by 192.168.122.30 port 35192
Nov 22 03:19:58 compute-0 sshd-session[44913]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:19:58 compute-0 systemd-logind[799]: Session 10 logged out. Waiting for processes to exit.
Nov 22 03:19:58 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Nov 22 03:19:58 compute-0 systemd[1]: session-10.scope: Consumed 55.150s CPU time.
Nov 22 03:19:58 compute-0 systemd-logind[799]: Removed session 10.
Nov 22 03:20:04 compute-0 sshd-session[52997]: Accepted publickey for zuul from 192.168.122.30 port 55878 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:20:04 compute-0 systemd-logind[799]: New session 11 of user zuul.
Nov 22 03:20:04 compute-0 systemd[1]: Started Session 11 of User zuul.
Nov 22 03:20:04 compute-0 sshd-session[52997]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:20:05 compute-0 python3.9[53150]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:20:06 compute-0 python3.9[53305]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:20:08 compute-0 python3.9[53498]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:20:08 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 22 03:20:08 compute-0 sshd-session[53000]: Connection closed by 192.168.122.30 port 55878
Nov 22 03:20:08 compute-0 sshd-session[52997]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:20:08 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Nov 22 03:20:08 compute-0 systemd[1]: session-11.scope: Consumed 2.627s CPU time.
Nov 22 03:20:08 compute-0 systemd-logind[799]: Session 11 logged out. Waiting for processes to exit.
Nov 22 03:20:08 compute-0 systemd-logind[799]: Removed session 11.
Nov 22 03:20:13 compute-0 sshd-session[53528]: Accepted publickey for zuul from 192.168.122.30 port 42074 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:20:13 compute-0 systemd-logind[799]: New session 12 of user zuul.
Nov 22 03:20:13 compute-0 systemd[1]: Started Session 12 of User zuul.
Nov 22 03:20:13 compute-0 sshd-session[53528]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:20:14 compute-0 python3.9[53681]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:20:15 compute-0 python3.9[53835]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:20:16 compute-0 sudo[53990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnekpqkowazdokwkoyzwbrvcjtinsyms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781616.3103166-40-97440533385416/AnsiballZ_setup.py'
Nov 22 03:20:16 compute-0 sudo[53990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:16 compute-0 python3.9[53992]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:20:17 compute-0 sudo[53990]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:17 compute-0 sudo[54074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dydrjjmjffchsjkfykqjosvdbtmyjsoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781616.3103166-40-97440533385416/AnsiballZ_dnf.py'
Nov 22 03:20:17 compute-0 sudo[54074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:17 compute-0 python3.9[54076]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:20:19 compute-0 sudo[54074]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:19 compute-0 sudo[54228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdklzvdkcwqfjbvanbbquanetyetsbxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781619.2423387-52-230062207783140/AnsiballZ_setup.py'
Nov 22 03:20:19 compute-0 sudo[54228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:19 compute-0 python3.9[54230]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:20:20 compute-0 sudo[54228]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:21 compute-0 sudo[54423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdzuvlilnmjzbjqwfymddrdjnpteaccl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781620.6245036-63-1402989137923/AnsiballZ_file.py'
Nov 22 03:20:21 compute-0 sudo[54423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:21 compute-0 python3.9[54425]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:20:21 compute-0 sudo[54423]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:22 compute-0 sudo[54575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suftpmglwmaezrijrwvwudtykindtiue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781621.5190287-71-46297856378436/AnsiballZ_command.py'
Nov 22 03:20:22 compute-0 sudo[54575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:22 compute-0 python3.9[54577]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:20:22 compute-0 podman[54578]: 2025-11-22 03:20:22.290709944 +0000 UTC m=+0.056847245 system refresh
Nov 22 03:20:22 compute-0 sudo[54575]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:22 compute-0 sudo[54738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esqxfwhvwtmybknvsrkkxsgttmrechte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781622.494858-79-86081342519452/AnsiballZ_stat.py'
Nov 22 03:20:22 compute-0 sudo[54738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:23 compute-0 python3.9[54740]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:20:23 compute-0 sudo[54738]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:20:23 compute-0 sudo[54861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyafjoaxjlhpjkdolipqaoysgjmlwlxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781622.494858-79-86081342519452/AnsiballZ_copy.py'
Nov 22 03:20:23 compute-0 sudo[54861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:23 compute-0 python3.9[54863]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763781622.494858-79-86081342519452/.source.json follow=False _original_basename=podman_network_config.j2 checksum=6eaf022a4a9731985698aac54c8b705acf9310d7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:20:23 compute-0 sudo[54861]: pam_unix(sudo:session): session closed for user root
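podman.json replaces the default 'podman' network definition that netavark consumes; the earlier 'podman network inspect podman' (which triggered the 'system refresh' on first use) read the built-in default before this file overrode it. The deployed content is not logged; a representative minimal netavark bridge definition, with the subnet and interface name as assumptions:

    {
      "name": "podman",
      "driver": "bridge",
      "network_interface": "podman0",
      "subnets": [
        { "subnet": "10.88.0.0/16", "gateway": "10.88.0.1" }
      ],
      "ipv6_enabled": false,
      "internal": false,
      "dns_enabled": false
    }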
Nov 22 03:20:24 compute-0 sudo[55013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvtnbgaosckbsdfetrpvvwfchdbowdlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781624.0877774-94-48311906973735/AnsiballZ_stat.py'
Nov 22 03:20:24 compute-0 sudo[55013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:24 compute-0 python3.9[55015]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:20:24 compute-0 sudo[55013]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:25 compute-0 sudo[55136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cblxhfljsolsbfgkposvlgpwbawuxgga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781624.0877774-94-48311906973735/AnsiballZ_copy.py'
Nov 22 03:20:25 compute-0 sudo[55136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:25 compute-0 python3.9[55138]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763781624.0877774-94-48311906973735/.source.conf follow=False _original_basename=registries.conf.j2 checksum=ab0610e0f472dc1e1d78a5bc4899a6884e6f2bfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:20:25 compute-0 sudo[55136]: pam_unix(sudo:session): session closed for user root
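20-edpm-podman-registries.conf goes under registries.conf.d, which podman merges on top of the main registries.conf; entries follow the containers-registries.conf(5) version-2 TOML format. The actual content is not logged; a representative drop-in (registry names are assumptions):

    unqualified-search-registries = ["registry.redhat.io", "quay.io"]

    [[registry]]
    prefix = "quay.io"
    location = "quay.io"
    insecure = false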
Nov 22 03:20:25 compute-0 sudo[55288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwipgvbqfvsqsbfjofeydxzikkaoxmmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781625.487921-110-175067859317507/AnsiballZ_ini_file.py'
Nov 22 03:20:25 compute-0 sudo[55288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:26 compute-0 python3.9[55290]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:20:26 compute-0 sudo[55288]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:26 compute-0 sudo[55440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gidhypkecwsmlezmzxekealdmxumlzbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781626.3768222-110-224170095648602/AnsiballZ_ini_file.py'
Nov 22 03:20:26 compute-0 sudo[55440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:26 compute-0 python3.9[55442]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:20:26 compute-0 sudo[55440]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:27 compute-0 sudo[55592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovrcgidtqzvvxonblvsreydkunayipls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781627.119302-110-100620542337387/AnsiballZ_ini_file.py'
Nov 22 03:20:27 compute-0 sudo[55592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:27 compute-0 python3.9[55594]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:20:27 compute-0 sudo[55592]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:28 compute-0 sudo[55744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slbykqtvmqwzshagwnrtkjfsqqzzgbka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781627.7482333-110-53014652544159/AnsiballZ_ini_file.py'
Nov 22 03:20:28 compute-0 sudo[55744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:28 compute-0 python3.9[55746]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:20:28 compute-0 sudo[55744]: pam_unix(sudo:session): session closed for user root
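Taken together, the four ini_file invocations leave /etc/containers/containers.conf in this state, read directly off the logged section/option/value arguments:

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"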
Nov 22 03:20:28 compute-0 sudo[55896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-newmtwshzftztfxjuucdjjzwvbkvcszh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781628.4612148-141-206049919070714/AnsiballZ_dnf.py'
Nov 22 03:20:28 compute-0 sudo[55896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:29 compute-0 python3.9[55898]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:20:30 compute-0 sudo[55896]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:31 compute-0 sudo[56049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iddaztycguvclqyskbqpujukymwoexel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781630.8391962-152-116056145848849/AnsiballZ_setup.py'
Nov 22 03:20:31 compute-0 sudo[56049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:31 compute-0 python3.9[56051]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:20:31 compute-0 sudo[56049]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:32 compute-0 sudo[56203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qivhqnuetptghlflulbbyslmzoyjgzas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781631.8075356-160-264193043127424/AnsiballZ_stat.py'
Nov 22 03:20:32 compute-0 sudo[56203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:32 compute-0 python3.9[56205]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:20:32 compute-0 sudo[56203]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:33 compute-0 sudo[56355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olltaanczxejbqdpwgnrkpvgnzfkiqfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781632.6741471-169-106429420245356/AnsiballZ_stat.py'
Nov 22 03:20:33 compute-0 sudo[56355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:33 compute-0 python3.9[56357]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:20:33 compute-0 sudo[56355]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:33 compute-0 sudo[56507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eutvfmuvvvmbmoqznmsumkaimuvrorsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781633.6548054-179-139622238276582/AnsiballZ_command.py'
Nov 22 03:20:33 compute-0 sudo[56507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:34 compute-0 python3.9[56509]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:20:34 compute-0 sudo[56507]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:35 compute-0 sudo[56660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwlddisokqoflfjuaulppdjqlpwmnbuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781634.4681437-189-151376431818822/AnsiballZ_service_facts.py'
Nov 22 03:20:35 compute-0 sudo[56660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:35 compute-0 python3.9[56662]: ansible-service_facts Invoked
Nov 22 03:20:35 compute-0 network[56679]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:20:35 compute-0 network[56680]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:20:35 compute-0 network[56681]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:20:38 compute-0 sudo[56660]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:39 compute-0 sudo[56964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viamadjeetmodhmmkwadjlrynctalxoz ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1763781639.1047263-204-90500450518830/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1763781639.1047263-204-90500450518830/args'
Nov 22 03:20:39 compute-0 sudo[56964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:39 compute-0 sudo[56964]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:40 compute-0 sudo[57131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnhagnuyosghlpsxmloabzwzfxkthwmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781639.889235-215-140325386701283/AnsiballZ_dnf.py'
Nov 22 03:20:40 compute-0 sudo[57131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:40 compute-0 python3.9[57133]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:20:41 compute-0 sudo[57131]: pam_unix(sudo:session): session closed for user root
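
The dnf task above (name=['chrony'] state=present, defaults otherwise) corresponds to a plain package install; roughly:

    $ dnf install -y chrony
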
Nov 22 03:20:42 compute-0 sudo[57284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlfrlqxcnydzzcihmrxgjysawmzgzcum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781642.0401514-228-232483805090942/AnsiballZ_package_facts.py'
Nov 22 03:20:42 compute-0 sudo[57284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:43 compute-0 python3.9[57286]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 22 03:20:43 compute-0 sudo[57284]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:44 compute-0 sudo[57436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eolvmtldmicykervnfkwurcbeesmheih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781643.7572455-238-131775936269168/AnsiballZ_stat.py'
Nov 22 03:20:44 compute-0 sudo[57436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:44 compute-0 python3.9[57438]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:20:44 compute-0 sudo[57436]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:44 compute-0 sudo[57561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiaqpwpqmvlnvyliqbsozzdgxxgrtnff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781643.7572455-238-131775936269168/AnsiballZ_copy.py'
Nov 22 03:20:44 compute-0 sudo[57561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:45 compute-0 python3.9[57563]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763781643.7572455-238-131775936269168/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:20:45 compute-0 sudo[57561]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:45 compute-0 sudo[57715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkzccxlhaltnlhfedimundkkaxfditvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781645.4103005-253-162192361032098/AnsiballZ_stat.py'
Nov 22 03:20:45 compute-0 sudo[57715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:45 compute-0 python3.9[57717]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:20:46 compute-0 sudo[57715]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:46 compute-0 sudo[57840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owrwenrobaqfiozvcdrpdzjzrxvkfoyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781645.4103005-253-162192361032098/AnsiballZ_copy.py'
Nov 22 03:20:46 compute-0 sudo[57840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:46 compute-0 python3.9[57842]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763781645.4103005-253-162192361032098/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:20:46 compute-0 sudo[57840]: pam_unix(sudo:session): session closed for user root
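
Both copy tasks log the SHA-1 of the content they deployed, so the files can be spot-checked against the journal later; a sketch, assuming the files have not been modified since:

    $ sha1sum /etc/chrony.conf /etc/sysconfig/chronyd
    cfb003e56d02d0d2c65555452eb1a05073fecdad  /etc/chrony.conf
    dd196b1ff1f915b23eebc37ec77405b5dd3df76c  /etc/sysconfig/chronyd
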
Nov 22 03:20:47 compute-0 sudo[57994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrpaxazpwyucwtozklqkfrbutdqmsqla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781647.2277641-274-180816492211998/AnsiballZ_lineinfile.py'
Nov 22 03:20:47 compute-0 sudo[57994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:47 compute-0 python3.9[57996]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:20:47 compute-0 sudo[57994]: pam_unix(sudo:session): session closed for user root
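
The lineinfile invocation (regexp=^PEERNTP=, line=PEERNTP=no, create=True) is an idempotent replace-or-append; a plain-shell sketch of the same effect, ignoring the backup handling:

    f=/etc/sysconfig/network
    if grep -q '^PEERNTP=' "$f"; then
        sed -i 's/^PEERNTP=.*/PEERNTP=no/' "$f"   # replace the existing setting
    else
        echo 'PEERNTP=no' >> "$f"                 # append if absent
    fi
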
Nov 22 03:20:48 compute-0 sudo[58148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htdceabtcelhvfufkcbxjysklkkuwjsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781648.4889596-289-237184265125932/AnsiballZ_setup.py'
Nov 22 03:20:48 compute-0 sudo[58148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:50 compute-0 python3.9[58150]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:20:50 compute-0 sudo[58148]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:50 compute-0 sudo[58232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cosviqkfzocdiqnajqgvfnlzgervwzxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781648.4889596-289-237184265125932/AnsiballZ_systemd.py'
Nov 22 03:20:50 compute-0 sudo[58232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:51 compute-0 python3.9[58234]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:20:51 compute-0 sudo[58232]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:52 compute-0 sudo[58386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdzzpnfdcrakynavjkdeubkfaocogxbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781651.8731136-305-271206379214577/AnsiballZ_setup.py'
Nov 22 03:20:52 compute-0 sudo[58386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:52 compute-0 python3.9[58388]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:20:52 compute-0 sudo[58386]: pam_unix(sudo:session): session closed for user root
Nov 22 03:20:53 compute-0 sudo[58470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kooxwrdheyzcrbbdskqsxfzjwvfkitgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781651.8731136-305-271206379214577/AnsiballZ_systemd.py'
Nov 22 03:20:53 compute-0 sudo[58470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:20:53 compute-0 python3.9[58472]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:20:53 compute-0 chronyd[786]: chronyd exiting
Nov 22 03:20:53 compute-0 systemd[1]: Stopping NTP client/server...
Nov 22 03:20:53 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Nov 22 03:20:53 compute-0 systemd[1]: Stopped NTP client/server.
Nov 22 03:20:53 compute-0 systemd[1]: Starting NTP client/server...
Nov 22 03:20:53 compute-0 chronyd[58481]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 22 03:20:53 compute-0 chronyd[58481]: Frequency -25.763 +/- 0.103 ppm read from /var/lib/chrony/drift
Nov 22 03:20:53 compute-0 chronyd[58481]: Loaded seccomp filter (level 2)
Nov 22 03:20:53 compute-0 systemd[1]: Started NTP client/server.
Nov 22 03:20:53 compute-0 sudo[58470]: pam_unix(sudo:session): session closed for user root
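
After the 03:20:53 restart, chronyd came up with the drift file and seccomp filter loaded; sync status could then be confirmed with the standard chrony tooling, e.g.:

    $ systemctl is-active chronyd
    $ chronyc tracking    # reports reference ID, stratum and offset once a source is selected
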
Nov 22 03:20:54 compute-0 sshd-session[53531]: Connection closed by 192.168.122.30 port 42074
Nov 22 03:20:54 compute-0 sshd-session[53528]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:20:54 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Nov 22 03:20:54 compute-0 systemd[1]: session-12.scope: Consumed 28.795s CPU time.
Nov 22 03:20:54 compute-0 systemd-logind[799]: Session 12 logged out. Waiting for processes to exit.
Nov 22 03:20:54 compute-0 systemd-logind[799]: Removed session 12.
Nov 22 03:20:59 compute-0 sshd-session[58507]: Accepted publickey for zuul from 192.168.122.30 port 39274 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:20:59 compute-0 systemd-logind[799]: New session 13 of user zuul.
Nov 22 03:20:59 compute-0 systemd[1]: Started Session 13 of User zuul.
Nov 22 03:20:59 compute-0 sshd-session[58507]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:21:00 compute-0 sudo[58660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eguegupblqdbnlistanmubferreoqesv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781659.80557-22-133095700094835/AnsiballZ_file.py'
Nov 22 03:21:00 compute-0 sudo[58660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:00 compute-0 python3.9[58662]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:00 compute-0 sudo[58660]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:01 compute-0 sudo[58812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuwmdvrdzagelfucluqrlbhimgjljegi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781660.7747526-34-20063748647038/AnsiballZ_stat.py'
Nov 22 03:21:01 compute-0 sudo[58812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:01 compute-0 python3.9[58814]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:01 compute-0 sudo[58812]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:02 compute-0 sudo[58935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcjchjweiluyjoqwwbtftvhrmtrkhdog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781660.7747526-34-20063748647038/AnsiballZ_copy.py'
Nov 22 03:21:02 compute-0 sudo[58935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:02 compute-0 python3.9[58937]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763781660.7747526-34-20063748647038/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:02 compute-0 sudo[58935]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:02 compute-0 sshd-session[58510]: Connection closed by 192.168.122.30 port 39274
Nov 22 03:21:02 compute-0 sshd-session[58507]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:21:02 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Nov 22 03:21:02 compute-0 systemd[1]: session-13.scope: Consumed 1.962s CPU time.
Nov 22 03:21:02 compute-0 systemd-logind[799]: Session 13 logged out. Waiting for processes to exit.
Nov 22 03:21:02 compute-0 systemd-logind[799]: Removed session 13.
Nov 22 03:21:08 compute-0 sshd-session[58962]: Accepted publickey for zuul from 192.168.122.30 port 33580 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:21:08 compute-0 systemd-logind[799]: New session 14 of user zuul.
Nov 22 03:21:08 compute-0 systemd[1]: Started Session 14 of User zuul.
Nov 22 03:21:08 compute-0 sshd-session[58962]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:21:09 compute-0 python3.9[59115]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:21:10 compute-0 sudo[59269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moshgjweqlijlpbcxnwffpnfpwibryni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781670.3479917-33-80898722147062/AnsiballZ_file.py'
Nov 22 03:21:10 compute-0 sudo[59269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:11 compute-0 python3.9[59271]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:11 compute-0 sudo[59269]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:11 compute-0 sudo[59444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geylcqxvgontesubrxrxqkggtbdluick ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781671.3326707-41-252615525644558/AnsiballZ_stat.py'
Nov 22 03:21:11 compute-0 sudo[59444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:12 compute-0 python3.9[59446]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:12 compute-0 sudo[59444]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:12 compute-0 sudo[59567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvjnuzsujrbvehyaoifjmnjeqdukoeoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781671.3326707-41-252615525644558/AnsiballZ_copy.py'
Nov 22 03:21:12 compute-0 sudo[59567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:13 compute-0 python3.9[59569]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1763781671.3326707-41-252615525644558/.source.json _original_basename=.jqlaxgdz follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:13 compute-0 sudo[59567]: pam_unix(sudo:session): session closed for user root
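
The logged checksum bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f is the SHA-1 of the two-byte string '{}', i.e. /root/.config/containers/auth.json was deployed as an empty JSON object (no registry credentials yet). Easy to verify:

    $ printf '{}' | sha1sum
    bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f  -
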
Nov 22 03:21:13 compute-0 sudo[59719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myrpwjhlvyrfwzilenlslpnxgkihdzgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781673.4382725-64-52757703881380/AnsiballZ_stat.py'
Nov 22 03:21:13 compute-0 sudo[59719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:14 compute-0 python3.9[59721]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:14 compute-0 sudo[59719]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:14 compute-0 sudo[59842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwowdqocdedtolcmyxzodhbqjtedttfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781673.4382725-64-52757703881380/AnsiballZ_copy.py'
Nov 22 03:21:14 compute-0 sudo[59842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:14 compute-0 python3.9[59844]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763781673.4382725-64-52757703881380/.source _original_basename=.t4l6533p follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:14 compute-0 sudo[59842]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:15 compute-0 sudo[59994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvggbegzgmoaeeaxmwxbimthjchiaauv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781674.9981391-80-23698209978367/AnsiballZ_file.py'
Nov 22 03:21:15 compute-0 sudo[59994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:15 compute-0 python3.9[59996]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:21:15 compute-0 sudo[59994]: pam_unix(sudo:session): session closed for user root
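
At the filesystem level, setype=container_file_t on the file task is comparable to a recursive context change (the log does not show whether a persistent fcontext rule also exists); a sketch:

    $ chcon -R -t container_file_t /var/local/libexec
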
Nov 22 03:21:16 compute-0 sudo[60146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-codnxzhwidoomxosvmyrvfijgtxyeawa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781675.8118706-88-141801808145763/AnsiballZ_stat.py'
Nov 22 03:21:16 compute-0 sudo[60146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:16 compute-0 python3.9[60148]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:16 compute-0 sudo[60146]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:16 compute-0 sudo[60269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmuymzilliccgruwdtcynzzkmvbgpvir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781675.8118706-88-141801808145763/AnsiballZ_copy.py'
Nov 22 03:21:16 compute-0 sudo[60269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:16 compute-0 python3.9[60271]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763781675.8118706-88-141801808145763/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:21:17 compute-0 sudo[60269]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:17 compute-0 sudo[60421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owvakslrkbdfmzdcyyjaebjqpiwjxhnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781677.1190088-88-98159723407291/AnsiballZ_stat.py'
Nov 22 03:21:17 compute-0 sudo[60421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:17 compute-0 python3.9[60423]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:17 compute-0 sudo[60421]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:17 compute-0 sudo[60544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxrhbjafdbleolrqcqvaitgtepjdpnth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781677.1190088-88-98159723407291/AnsiballZ_copy.py'
Nov 22 03:21:17 compute-0 sudo[60544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:18 compute-0 python3.9[60546]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763781677.1190088-88-98159723407291/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:21:18 compute-0 sudo[60544]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:18 compute-0 sudo[60696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkxxopjykvmjngakzeyvchsgnsmtxrih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781678.3744962-117-137912861845896/AnsiballZ_file.py'
Nov 22 03:21:18 compute-0 sudo[60696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:18 compute-0 python3.9[60698]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:18 compute-0 sudo[60696]: pam_unix(sudo:session): session closed for user root
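
mode=420 in the entry above is not a typo: it is the decimal rendering of octal 0644 (an unquoted YAML mode reaches Ansible as an integer). The conversion:

    $ printf '%o\n' 420
    644
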
Nov 22 03:21:19 compute-0 sudo[60848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtpsqnvgmesegxrvlzjmmvsgjyhvapkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781679.0950758-125-4269644883215/AnsiballZ_stat.py'
Nov 22 03:21:19 compute-0 sudo[60848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:19 compute-0 python3.9[60850]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:19 compute-0 sudo[60848]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:20 compute-0 sudo[60971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cionqwiiklevyzlznctkfhlmcxmvovcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781679.0950758-125-4269644883215/AnsiballZ_copy.py'
Nov 22 03:21:20 compute-0 sudo[60971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:20 compute-0 python3.9[60973]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763781679.0950758-125-4269644883215/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:20 compute-0 sudo[60971]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:20 compute-0 sudo[61123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-potusnhyrbopfbnyfnbnrikcvdadozww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781680.51003-140-66263865485318/AnsiballZ_stat.py'
Nov 22 03:21:20 compute-0 sudo[61123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:21 compute-0 python3.9[61125]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:21 compute-0 sudo[61123]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:21 compute-0 sudo[61246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkjskyimfyowhuykpotsibijyycjmnfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781680.51003-140-66263865485318/AnsiballZ_copy.py'
Nov 22 03:21:21 compute-0 sudo[61246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:21 compute-0 python3.9[61248]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763781680.51003-140-66263865485318/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:21 compute-0 sudo[61246]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:22 compute-0 sudo[61398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovrydptzrrgigzyahdtactkjmwpfkjhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781681.9289072-155-187659352519491/AnsiballZ_systemd.py'
Nov 22 03:21:22 compute-0 sudo[61398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:22 compute-0 python3.9[61400]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:21:22 compute-0 systemd[1]: Reloading.
Nov 22 03:21:23 compute-0 systemd-rc-local-generator[61428]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:21:23 compute-0 systemd-sysv-generator[61431]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Nov 22 03:21:23 compute-0 systemd[1]: Reloading.
Nov 22 03:21:23 compute-0 systemd-rc-local-generator[61465]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:21:23 compute-0 systemd-sysv-generator[61468]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Nov 22 03:21:23 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Nov 22 03:21:23 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Nov 22 03:21:23 compute-0 sudo[61398]: pam_unix(sudo:session): session closed for user root
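
The systemd task at 03:21:22 combines a daemon-reload with enabling and starting the new unit, which likely accounts for the two Reloading. passes above; done by hand, approximately:

    $ systemctl daemon-reload
    $ systemctl enable --now edpm-container-shutdown.service
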
Nov 22 03:21:24 compute-0 sudo[61625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilohmtrjvqnumjqmejyaciieyhscecxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781683.688535-163-10268798380686/AnsiballZ_stat.py'
Nov 22 03:21:24 compute-0 sudo[61625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:24 compute-0 python3.9[61627]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:24 compute-0 sudo[61625]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:24 compute-0 sudo[61748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irykwamjjexjzkedbmedfqrjozksswwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781683.688535-163-10268798380686/AnsiballZ_copy.py'
Nov 22 03:21:24 compute-0 sudo[61748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:24 compute-0 python3.9[61750]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763781683.688535-163-10268798380686/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:24 compute-0 sudo[61748]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:25 compute-0 sudo[61900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eflakprjomzbxlrdwmzycnnexryruvwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781685.1351614-178-253316102211426/AnsiballZ_stat.py'
Nov 22 03:21:25 compute-0 sudo[61900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:25 compute-0 python3.9[61902]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:25 compute-0 sudo[61900]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:26 compute-0 sudo[62023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcnfflgnkdusfzmlaerhqjlklblsikhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781685.1351614-178-253316102211426/AnsiballZ_copy.py'
Nov 22 03:21:26 compute-0 sudo[62023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:26 compute-0 python3.9[62025]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763781685.1351614-178-253316102211426/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:26 compute-0 sudo[62023]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:26 compute-0 sudo[62175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fftykumzhfsamvzupswipyknkjelwxjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781686.50135-193-66161227360849/AnsiballZ_systemd.py'
Nov 22 03:21:26 compute-0 sudo[62175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:27 compute-0 python3.9[62177]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:21:27 compute-0 systemd[1]: Reloading.
Nov 22 03:21:27 compute-0 systemd-rc-local-generator[62203]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:21:27 compute-0 systemd-sysv-generator[62207]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Nov 22 03:21:27 compute-0 systemd[1]: Reloading.
Nov 22 03:21:27 compute-0 systemd-sysv-generator[62242]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Nov 22 03:21:27 compute-0 systemd-rc-local-generator[62239]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:21:27 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 03:21:27 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 03:21:27 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 03:21:27 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 03:21:27 compute-0 sudo[62175]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:28 compute-0 python3.9[62402]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:21:28 compute-0 network[62419]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Nov 22 03:21:28 compute-0 network[62420]: 'network-scripts' will be removed from the distribution in the near future.
Nov 22 03:21:28 compute-0 network[62421]: It is advised to switch to 'NetworkManager' for network management.
Nov 22 03:21:32 compute-0 sudo[62681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efjxajessgkwssbbuqdgocvsxhyrrwqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781692.2550323-209-47478166908025/AnsiballZ_systemd.py'
Nov 22 03:21:32 compute-0 sudo[62681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:32 compute-0 python3.9[62683]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:21:33 compute-0 systemd[1]: Reloading.
Nov 22 03:21:33 compute-0 systemd-rc-local-generator[62710]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:21:33 compute-0 systemd-sysv-generator[62714]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Nov 22 03:21:33 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 22 03:21:33 compute-0 iptables.init[62723]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 22 03:21:33 compute-0 iptables.init[62723]: iptables: Flushing firewall rules: [  OK  ]
Nov 22 03:21:33 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Nov 22 03:21:33 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 22 03:21:33 compute-0 sudo[62681]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:34 compute-0 sudo[62917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rflxngxxyaufoiyoggdmgzytpwmyqxdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781693.8577502-209-52141061270040/AnsiballZ_systemd.py'
Nov 22 03:21:34 compute-0 sudo[62917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:34 compute-0 python3.9[62919]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:21:34 compute-0 sudo[62917]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:35 compute-0 sudo[63071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eschnpjvfmmxlapwfbxjlanrccqbfldd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781694.940507-225-205678196154918/AnsiballZ_systemd.py'
Nov 22 03:21:35 compute-0 sudo[63071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:35 compute-0 python3.9[63073]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:21:35 compute-0 systemd[1]: Reloading.
Nov 22 03:21:35 compute-0 systemd-sysv-generator[63107]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Nov 22 03:21:35 compute-0 systemd-rc-local-generator[63101]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:21:35 compute-0 systemd[1]: Starting Netfilter Tables...
Nov 22 03:21:35 compute-0 systemd[1]: Finished Netfilter Tables.
Nov 22 03:21:36 compute-0 sudo[63071]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:36 compute-0 sudo[63264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvferyiymlkcrlikuuisujkuwqypvjqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781696.251749-233-179270037036885/AnsiballZ_command.py'
Nov 22 03:21:36 compute-0 sudo[63264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:37 compute-0 python3.9[63266]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:21:37 compute-0 sudo[63264]: pam_unix(sudo:session): session closed for user root
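
Taken together, the tasks from 03:21:32 to 03:21:37 cut the node over from the legacy iptables services to nftables; a condensed manual equivalent:

    $ systemctl disable --now iptables.service ip6tables.service
    $ systemctl enable --now nftables.service
    $ nft flush ruleset    # start from an empty ruleset before loading the generated files
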
Nov 22 03:21:37 compute-0 sudo[63417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-widhucsebidyvbatjaveouwqdvcdsckp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781697.573284-247-260398640529632/AnsiballZ_stat.py'
Nov 22 03:21:37 compute-0 sudo[63417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:38 compute-0 python3.9[63419]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:38 compute-0 sudo[63417]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:38 compute-0 sudo[63542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgavjfqlsardklplphsqpbtkdkdiffxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781697.573284-247-260398640529632/AnsiballZ_copy.py'
Nov 22 03:21:38 compute-0 sudo[63542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:38 compute-0 python3.9[63544]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763781697.573284-247-260398640529632/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:38 compute-0 sudo[63542]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:39 compute-0 sudo[63695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrrgqmjvpzzmcuysdfuilhxkyzebllmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781699.1119099-262-174630803985497/AnsiballZ_systemd.py'
Nov 22 03:21:39 compute-0 sudo[63695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:39 compute-0 python3.9[63697]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:21:39 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Nov 22 03:21:39 compute-0 sshd[1008]: Received SIGHUP; restarting.
Nov 22 03:21:39 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Nov 22 03:21:39 compute-0 sshd[1008]: Server listening on 0.0.0.0 port 22.
Nov 22 03:21:39 compute-0 sshd[1008]: Server listening on :: port 22.
Nov 22 03:21:39 compute-0 sudo[63695]: pam_unix(sudo:session): session closed for user root
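
The sshd_config copy used validate=/usr/sbin/sshd -T -f %s, so the rendered file was syntax-checked before being moved into place, and the reload then re-read it without dropping established sessions. Manually:

    $ /usr/sbin/sshd -T -f /etc/ssh/sshd_config > /dev/null && systemctl reload sshd
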
Nov 22 03:21:40 compute-0 sudo[63851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eychdbxxulccvuskgljqkwowveufirxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781700.0302083-270-252985843624586/AnsiballZ_file.py'
Nov 22 03:21:40 compute-0 sudo[63851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:40 compute-0 python3.9[63853]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:40 compute-0 sudo[63851]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:41 compute-0 sudo[64003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxwpbhprpgcnsapqzqteodwlmsbmiiyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781700.8506606-278-138440256701943/AnsiballZ_stat.py'
Nov 22 03:21:41 compute-0 sudo[64003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:41 compute-0 python3.9[64005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:41 compute-0 sudo[64003]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:41 compute-0 sudo[64126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbjbtozkiytidmuoqnxfvjufkgjfvlsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781700.8506606-278-138440256701943/AnsiballZ_copy.py'
Nov 22 03:21:41 compute-0 sudo[64126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:42 compute-0 python3.9[64128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763781700.8506606-278-138440256701943/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:42 compute-0 sudo[64126]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:42 compute-0 sudo[64278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtzzstltzbcjebdrsuztyemtuktgzygl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781702.3563814-296-247681468005508/AnsiballZ_timezone.py'
Nov 22 03:21:42 compute-0 sudo[64278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:43 compute-0 python3.9[64280]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 22 03:21:43 compute-0 systemd[1]: Starting Time & Date Service...
Nov 22 03:21:43 compute-0 systemd[1]: Started Time & Date Service.
Nov 22 03:21:43 compute-0 sudo[64278]: pam_unix(sudo:session): session closed for user root
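
community.general.timezone works through systemd-timedated, which is why the Time & Date Service starts on demand above; the direct equivalent is:

    $ timedatectl set-timezone UTC
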
Nov 22 03:21:43 compute-0 sudo[64434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unaenpsepqnlsvymxcgbntltfqmsxufr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781703.4547396-305-280142222107188/AnsiballZ_file.py'
Nov 22 03:21:43 compute-0 sudo[64434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:43 compute-0 python3.9[64436]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:44 compute-0 sudo[64434]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:44 compute-0 sudo[64586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxwtyyybxkarkfzuycypqqaymulbbnkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781704.2407432-313-262760640718957/AnsiballZ_stat.py'
Nov 22 03:21:44 compute-0 sudo[64586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:44 compute-0 python3.9[64588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:44 compute-0 sudo[64586]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:45 compute-0 sudo[64709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlwrjegkeigozbwteapoynamhjlifqoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781704.2407432-313-262760640718957/AnsiballZ_copy.py'
Nov 22 03:21:45 compute-0 sudo[64709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:45 compute-0 python3.9[64711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763781704.2407432-313-262760640718957/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:45 compute-0 sudo[64709]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:45 compute-0 sudo[64861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuxhtncitvlkrcuarqckavnfuvfolzye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781705.6417403-328-21905667890544/AnsiballZ_stat.py'
Nov 22 03:21:45 compute-0 sudo[64861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:46 compute-0 python3.9[64863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:46 compute-0 sudo[64861]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:46 compute-0 sudo[64984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtbgtfyxrnbmthlyzlzlzbnvgajwopad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781705.6417403-328-21905667890544/AnsiballZ_copy.py'
Nov 22 03:21:46 compute-0 sudo[64984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:46 compute-0 python3.9[64986]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763781705.6417403-328-21905667890544/.source.yaml _original_basename=.dkrf6pcc follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:46 compute-0 sudo[64984]: pam_unix(sudo:session): session closed for user root
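
As with auth.json earlier, the checksum identifies the content: 97d170e1550eee4afc0af065b78cda302a97674c is the SHA-1 of '[]', so edpm-nftables-user-rules.yaml was deployed as an empty YAML list (no user-defined rules). Verification:

    $ printf '[]' | sha1sum
    97d170e1550eee4afc0af065b78cda302a97674c  -
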
Nov 22 03:21:47 compute-0 sudo[65136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fujapmfvmqtyjuuprpbrlayqyvbclqhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781706.9444926-343-3392674618555/AnsiballZ_stat.py'
Nov 22 03:21:47 compute-0 sudo[65136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:47 compute-0 python3.9[65138]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:47 compute-0 sudo[65136]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:48 compute-0 sudo[65260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxjwxmutggqvsiwvztnmofylqskftzqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781706.9444926-343-3392674618555/AnsiballZ_copy.py'
Nov 22 03:21:48 compute-0 sudo[65260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:48 compute-0 python3.9[65262]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763781706.9444926-343-3392674618555/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:48 compute-0 sudo[65260]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:49 compute-0 sudo[65412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seflgcmenowsflsuiwemaawzrbczgjbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781708.7201025-358-218642973346493/AnsiballZ_command.py'
Nov 22 03:21:49 compute-0 sudo[65412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:49 compute-0 python3.9[65414]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:21:49 compute-0 sudo[65412]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:49 compute-0 sudo[65565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rltosqkjllxkvrwnqsgwggvtxagvnlyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781709.467634-366-42568480934415/AnsiballZ_command.py'
Nov 22 03:21:49 compute-0 sudo[65565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:50 compute-0 python3.9[65567]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:21:50 compute-0 sudo[65565]: pam_unix(sudo:session): session closed for user root
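
Loading the generated base ruleset and dumping it back out as JSON, as the two command tasks above do, is directly reproducible:

    $ nft -f /etc/nftables/iptables.nft
    $ nft -j list ruleset
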
Nov 22 03:21:50 compute-0 sudo[65718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfpkjlokyqwatsvmruspduyeyiwgdram ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763781710.2351067-374-252622616928168/AnsiballZ_edpm_nftables_from_files.py'
Nov 22 03:21:50 compute-0 sudo[65718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:51 compute-0 python3[65720]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 03:21:51 compute-0 sudo[65718]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:51 compute-0 sudo[65870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzqbmwddivnxjymkcfyrstzhnhyqgdlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781711.2734513-382-158391348164300/AnsiballZ_stat.py'
Nov 22 03:21:51 compute-0 sudo[65870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:51 compute-0 python3.9[65872]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:51 compute-0 sudo[65870]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:52 compute-0 sudo[65993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqtieeoiiofhysuakxjwywkiknbvsdsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781711.2734513-382-158391348164300/AnsiballZ_copy.py'
Nov 22 03:21:52 compute-0 sudo[65993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:52 compute-0 python3.9[65995]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763781711.2734513-382-158391348164300/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:52 compute-0 sudo[65993]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:53 compute-0 sudo[66145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iicgxzixhzihmdfjylfxyadizfezxmxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781712.6993356-397-15674909645350/AnsiballZ_stat.py'
Nov 22 03:21:53 compute-0 sudo[66145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:53 compute-0 python3.9[66147]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:53 compute-0 sudo[66145]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:53 compute-0 sudo[66268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqnupgetnakpxsatzcuhbwtthmytyztg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781712.6993356-397-15674909645350/AnsiballZ_copy.py'
Nov 22 03:21:53 compute-0 sudo[66268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:54 compute-0 python3.9[66270]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763781712.6993356-397-15674909645350/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:54 compute-0 sudo[66268]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:54 compute-0 sudo[66420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzdadtmdwyfpmroaghvanwzqjxispzup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781714.2612891-412-12650886323929/AnsiballZ_stat.py'
Nov 22 03:21:54 compute-0 sudo[66420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:54 compute-0 python3.9[66422]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:54 compute-0 sudo[66420]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:55 compute-0 sudo[66543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjzuvibzbioobycmnotlfppwnbgrwpai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781714.2612891-412-12650886323929/AnsiballZ_copy.py'
Nov 22 03:21:55 compute-0 sudo[66543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:55 compute-0 python3.9[66545]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763781714.2612891-412-12650886323929/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:55 compute-0 sudo[66543]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:56 compute-0 sudo[66695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmsydmlabhdgaqhsyrcemgvssxrktqbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781715.6990142-427-7011108001787/AnsiballZ_stat.py'
Nov 22 03:21:56 compute-0 sudo[66695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:56 compute-0 python3.9[66697]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:56 compute-0 sudo[66695]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:56 compute-0 sudo[66818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsgjtxtxcdmnbxvrbxghhxpmjknqcsib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781715.6990142-427-7011108001787/AnsiballZ_copy.py'
Nov 22 03:21:56 compute-0 sudo[66818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:56 compute-0 python3.9[66820]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763781715.6990142-427-7011108001787/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:56 compute-0 sudo[66818]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:57 compute-0 sudo[66970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lboreypnsrjzmnmqmucmprdllmfdnveh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781717.02007-442-138669401116328/AnsiballZ_stat.py'
Nov 22 03:21:57 compute-0 sudo[66970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:57 compute-0 python3.9[66972]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:21:57 compute-0 sudo[66970]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:58 compute-0 sudo[67093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqkafopgtyjpaezaawjlowbtocizsace ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781717.02007-442-138669401116328/AnsiballZ_copy.py'
Nov 22 03:21:58 compute-0 sudo[67093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:58 compute-0 python3.9[67095]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763781717.02007-442-138669401116328/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:58 compute-0 sudo[67093]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:58 compute-0 sudo[67245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vntugattrcoejnqwnkbunriybijghvvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781718.43619-457-224347037669431/AnsiballZ_file.py'
Nov 22 03:21:58 compute-0 sudo[67245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:58 compute-0 python3.9[67247]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:21:58 compute-0 sudo[67245]: pam_unix(sudo:session): session closed for user root
Nov 22 03:21:59 compute-0 sudo[67397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aycwotbtzjubmscncaltmoitzzeasrct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781719.1369557-465-39528403256403/AnsiballZ_command.py'
Nov 22 03:21:59 compute-0 sudo[67397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:21:59 compute-0 python3.9[67399]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:21:59 compute-0 sudo[67397]: pam_unix(sudo:session): session closed for user root
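[annotation] The command task above concatenates the five generated nftables fragments and dry-runs them before anything is loaded; a minimal shell equivalent of that check step, using the exact paths from the log, is:

    # Check-only (-c): parse the combined EDPM ruleset from stdin, load nothing
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -

Ordering matters: chains must be defined before the flush, rule, and jump fragments that reference them, which is why the same order reappears when the rules are actually applied later in the log.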
Nov 22 03:22:00 compute-0 sudo[67556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfmafentgivuhsqcejpayjpwjsvwbltp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781720.0611703-473-105610224102666/AnsiballZ_blockinfile.py'
Nov 22 03:22:00 compute-0 sudo[67556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:00 compute-0 python3.9[67558]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:22:00 compute-0 sudo[67556]: pam_unix(sudo:session): session closed for user root
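[annotation] Per the blockinfile arguments above (marker=# {mark} ANSIBLE MANAGED BLOCK, marker_begin=BEGIN, marker_end=END), the task leaves a managed block like the following in /etc/sysconfig/nftables.conf, and validate=nft -c -f %s means the whole file is syntax-checked before it is saved:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK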
Nov 22 03:22:01 compute-0 sudo[67709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hecptdvqujeybzaxmmazmfzchzugexvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781721.137277-482-220950401436517/AnsiballZ_file.py'
Nov 22 03:22:01 compute-0 sudo[67709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:01 compute-0 python3.9[67711]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:22:01 compute-0 sudo[67709]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:02 compute-0 sudo[67861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzfqxqvwehkmuvqbjfwbhqoqwlcfvxye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781721.8879619-482-236905768677715/AnsiballZ_file.py'
Nov 22 03:22:02 compute-0 sudo[67861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:02 compute-0 python3.9[67863]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:22:02 compute-0 sudo[67861]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:03 compute-0 sudo[68013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pinigmaphlextusiwmwacbysypyyevpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781722.7386696-497-193906889357159/AnsiballZ_mount.py'
Nov 22 03:22:03 compute-0 sudo[68013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:03 compute-0 python3.9[68015]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 03:22:03 compute-0 sudo[68013]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:04 compute-0 sudo[68166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iustqchazavsygqauwzxncyqmtkixgjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781723.764956-497-178108329955655/AnsiballZ_mount.py'
Nov 22 03:22:04 compute-0 sudo[68166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:04 compute-0 python3.9[68168]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 03:22:04 compute-0 sudo[68166]: pam_unix(sudo:session): session closed for user root
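[annotation] The two ansible.posix.mount tasks use state=mounted with boot=True, so each both mounts the filesystem now and persists it. A sketch of the equivalent manual steps; the fstab lines are reconstructed from the module arguments (src=none, fstype=hugetlbfs, opts=pagesize=..., dump=0, passno=0), not quoted from the log:

    # Mount hugetlbfs with explicit page sizes into the directories created above
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # Persisted /etc/fstab entries (assumed layout):
    # none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    # none /dev/hugepages2M hugetlbfs pagesize=2M 0 0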
Nov 22 03:22:04 compute-0 sshd-session[58965]: Connection closed by 192.168.122.30 port 33580
Nov 22 03:22:04 compute-0 sshd-session[58962]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:22:04 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Nov 22 03:22:04 compute-0 systemd[1]: session-14.scope: Consumed 41.942s CPU time.
Nov 22 03:22:04 compute-0 systemd-logind[799]: Session 14 logged out. Waiting for processes to exit.
Nov 22 03:22:04 compute-0 systemd-logind[799]: Removed session 14.
Nov 22 03:22:10 compute-0 sshd-session[68194]: Accepted publickey for zuul from 192.168.122.30 port 36824 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:22:10 compute-0 systemd-logind[799]: New session 15 of user zuul.
Nov 22 03:22:10 compute-0 systemd[1]: Started Session 15 of User zuul.
Nov 22 03:22:10 compute-0 sshd-session[68194]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:22:11 compute-0 sudo[68347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nreftxxmrjszkmiunlrcckhbgeyehtqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781730.3537223-16-236388603639004/AnsiballZ_tempfile.py'
Nov 22 03:22:11 compute-0 sudo[68347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:11 compute-0 python3.9[68349]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 22 03:22:11 compute-0 sudo[68347]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:11 compute-0 sudo[68499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbirjlyatybfaibnvqjoefmfrcftjnyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781731.4599133-28-187481810291784/AnsiballZ_stat.py'
Nov 22 03:22:11 compute-0 sudo[68499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:12 compute-0 python3.9[68501]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:22:12 compute-0 sudo[68499]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:13 compute-0 sudo[68651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gimjmtiyrpdmahphjyepydmtebmppvqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781732.3342707-38-4690736878599/AnsiballZ_setup.py'
Nov 22 03:22:13 compute-0 sudo[68651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:13 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 22 03:22:13 compute-0 python3.9[68653]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:22:13 compute-0 sudo[68651]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:14 compute-0 sudo[68805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huurgjhikiachsczuuxbgjggvbmwypff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781733.6127071-47-42875735269692/AnsiballZ_blockinfile.py'
Nov 22 03:22:14 compute-0 sudo[68805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:14 compute-0 python3.9[68807]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0IeFJb7kQxb2DeQqcdzq1GSLJDpNy68eOR3ZDXuWh33pI4eZUi0JHU3XECYjd3u5pQXNDYzizaDrOCFS2XxTlU6DDLuyibyynXnzR2ly7eQ8G1/oGdqUyc4BszdHthFoULrL5JRW0J8TELiUiV6bMj50x3rMa7zIxoC97SunNaUnpWEj+Ubw1Nu0xbcBsYLa44UTaQEAZlVquM6SowLCqvgeMllgv23QNftiAsZrfPsc1rZ5eJb3MQZkGIWnqC3DFNLh9g9KpGd9E8tgyGWgEU+Xen3UQgKmWy1i6xF89YHD2VdaFIozAhwSf0kt9jGXAwuZ3Q21accnFB94mFTcEGqeP/Zlo7G4XB7fgSQN1kbhOsJUm+7JuHZeSK1WUbhqFog/8SNnQgjnth1o9uesTnW/dJ1306v4DPzUqx3gH8S1pU7LJsZo+KeTsUhfNYaskZlo6XnKQvbxALvPdXjoSHcvCQ0k5NFFrxXZ8jX6v9sSCN4hzWqUcM+NdgrOVAdc=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICQJxsece1Cz2OzI46uQPE380Q0ilq7yAVzYoL0Elw+/
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKjZC6F/EUiZDzyFyU/vKwZTwRGbAqbS375B1AM5JcXnV0pA/6kr/noDVECTxeGpQEDmFInFRHuDu1kYtCCJmE8=
                                             create=True mode=0644 path=/tmp/ansible.tq5_p8hb state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:22:14 compute-0 sudo[68805]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:15 compute-0 sudo[68957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxcinlhvxugtjbwnensovjpectxjukfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781734.5764828-55-68133587619399/AnsiballZ_command.py'
Nov 22 03:22:15 compute-0 sudo[68957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:15 compute-0 python3.9[68959]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.tq5_p8hb' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:22:15 compute-0 sudo[68957]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:16 compute-0 sudo[69111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duvdumqxvzgalrozkcnafhsczydgaocz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781735.6272268-63-222914573217508/AnsiballZ_file.py'
Nov 22 03:22:16 compute-0 sudo[69111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:16 compute-0 python3.9[69113]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.tq5_p8hb state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:22:16 compute-0 sudo[69111]: pam_unix(sudo:session): session closed for user root
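[annotation] Taken together, the tempfile, blockinfile, command, and file tasks in session 15 implement a stage-then-replace update of the system-wide known_hosts. A condensed shell sketch of the same sequence, with the temp path as logged (note that cat > target is a plain overwrite, not an atomic rename):

    # Stage the three host-key lines in a temp file, then replace the target
    tmp=/tmp/ansible.tq5_p8hb            # from ansible.builtin.tempfile
    # (blockinfile writes the rsa/ed25519/ecdsa entries between ANSIBLE markers)
    cat "$tmp" > /etc/ssh/ssh_known_hosts
    rm -f "$tmp"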
Nov 22 03:22:16 compute-0 sshd-session[68197]: Connection closed by 192.168.122.30 port 36824
Nov 22 03:22:16 compute-0 sshd-session[68194]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:22:16 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Nov 22 03:22:16 compute-0 systemd[1]: session-15.scope: Consumed 4.126s CPU time.
Nov 22 03:22:16 compute-0 systemd-logind[799]: Session 15 logged out. Waiting for processes to exit.
Nov 22 03:22:16 compute-0 systemd-logind[799]: Removed session 15.
Nov 22 03:22:22 compute-0 sshd-session[69138]: Accepted publickey for zuul from 192.168.122.30 port 47142 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:22:22 compute-0 systemd-logind[799]: New session 16 of user zuul.
Nov 22 03:22:22 compute-0 systemd[1]: Started Session 16 of User zuul.
Nov 22 03:22:22 compute-0 sshd-session[69138]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:22:23 compute-0 python3.9[69291]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:22:24 compute-0 sudo[69445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwbddzpcrlhmwikmonuwodgdiphrlkfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781743.6704047-32-84744203719962/AnsiballZ_systemd.py'
Nov 22 03:22:24 compute-0 sudo[69445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:24 compute-0 python3.9[69447]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 22 03:22:24 compute-0 sudo[69445]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:25 compute-0 sudo[69599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwzhuligqkmwswhirmjktghralaocsxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781744.8823266-40-219943149543368/AnsiballZ_systemd.py'
Nov 22 03:22:25 compute-0 sudo[69599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:25 compute-0 python3.9[69601]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:22:25 compute-0 sudo[69599]: pam_unix(sudo:session): session closed for user root
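[annotation] The two systemd tasks above (enabled=True, then state=started) map onto the usual pair of systemctl calls:

    # Enable at boot, then start now; `systemctl enable --now sshd` is the one-step form
    systemctl enable sshd
    systemctl start sshd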
Nov 22 03:22:26 compute-0 sudo[69752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gphjugclbjmmjzwdlubhhzpwpanqhxcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781745.852209-49-204482099453810/AnsiballZ_command.py'
Nov 22 03:22:26 compute-0 sudo[69752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:26 compute-0 python3.9[69754]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:22:26 compute-0 sudo[69752]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:27 compute-0 sudo[69905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kblratoyiarcbkgdvuklvhhjxtjswwks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781746.7760134-57-243658174803496/AnsiballZ_stat.py'
Nov 22 03:22:27 compute-0 sudo[69905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:27 compute-0 python3.9[69907]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:22:27 compute-0 sudo[69905]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:28 compute-0 sudo[70059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufnkglihmmslqacucodwbashwfnravgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781747.7699864-65-186905020287137/AnsiballZ_command.py'
Nov 22 03:22:28 compute-0 sudo[70059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:28 compute-0 python3.9[70061]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:22:28 compute-0 sudo[70059]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:29 compute-0 sudo[70214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yikvynrbznbsmwbmopehjmsugydthmfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781748.6043875-73-175197346269243/AnsiballZ_file.py'
Nov 22 03:22:29 compute-0 sudo[70214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:29 compute-0 python3.9[70216]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:22:29 compute-0 sudo[70214]: pam_unix(sudo:session): session closed for user root
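[annotation] /etc/nftables/edpm-rules.nft.changed acts as a change marker: it was touched when the rules were written (03:21:58), stat-ed just above, and the flush/rules/update-jumps fragments are reloaded and the marker deleted only when it exists. The conditional itself is an Ansible `when:` on the stat result and is an assumption here; the log only shows the stat, the reload, and the delete. A shell sketch of the guard:

    # Reload EDPM rules only when the marker left by the copy step exists
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi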
Nov 22 03:22:29 compute-0 sshd-session[69141]: Connection closed by 192.168.122.30 port 47142
Nov 22 03:22:29 compute-0 sshd-session[69138]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:22:29 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Nov 22 03:22:29 compute-0 systemd[1]: session-16.scope: Consumed 5.325s CPU time.
Nov 22 03:22:29 compute-0 systemd-logind[799]: Session 16 logged out. Waiting for processes to exit.
Nov 22 03:22:29 compute-0 systemd-logind[799]: Removed session 16.
Nov 22 03:22:35 compute-0 sshd-session[70241]: Accepted publickey for zuul from 192.168.122.30 port 55994 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:22:35 compute-0 systemd-logind[799]: New session 17 of user zuul.
Nov 22 03:22:35 compute-0 systemd[1]: Started Session 17 of User zuul.
Nov 22 03:22:35 compute-0 sshd-session[70241]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:22:37 compute-0 python3.9[70394]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:22:37 compute-0 sudo[70548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uudfbmyonakwochpkstxvbwkspowdhln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781757.6628919-34-219630252720535/AnsiballZ_setup.py'
Nov 22 03:22:37 compute-0 sudo[70548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:38 compute-0 python3.9[70550]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:22:38 compute-0 sudo[70548]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:38 compute-0 sudo[70632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avhrqstyrwviwppmagfptjobbcthxtyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763781757.6628919-34-219630252720535/AnsiballZ_dnf.py'
Nov 22 03:22:38 compute-0 sudo[70632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:39 compute-0 python3.9[70634]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 03:22:40 compute-0 sudo[70632]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:40 compute-0 sshd-session[70636]: Received disconnect from 80.94.93.233 port 58770:11:  [preauth]
Nov 22 03:22:40 compute-0 sshd-session[70636]: Disconnected from authenticating user root 80.94.93.233 port 58770 [preauth]
Nov 22 03:22:41 compute-0 python3.9[70787]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
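[annotation] needs-restarting -r comes from the yum-utils package installed just above and reports whether a full reboot is required; per the yum-utils convention its exit status carries the answer:

    # Exit 0: no reboot required; exit 1: reboot required
    needs-restarting -r && echo "no reboot needed" || echo "reboot required"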
Nov 22 03:22:42 compute-0 python3.9[70938]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 03:22:43 compute-0 python3.9[71088]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:22:43 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:22:44 compute-0 python3.9[71239]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:22:44 compute-0 sshd-session[70244]: Connection closed by 192.168.122.30 port 55994
Nov 22 03:22:44 compute-0 sshd-session[70241]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:22:44 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Nov 22 03:22:44 compute-0 systemd[1]: session-17.scope: Consumed 6.468s CPU time.
Nov 22 03:22:44 compute-0 systemd-logind[799]: Session 17 logged out. Waiting for processes to exit.
Nov 22 03:22:44 compute-0 systemd-logind[799]: Removed session 17.
Nov 22 03:22:52 compute-0 sshd-session[71264]: Accepted publickey for zuul from 38.102.83.143 port 60052 ssh2: RSA SHA256:eVsZt2yHpbTNrFfKVGH3GdI61kssxBz29Cce2alCemw
Nov 22 03:22:52 compute-0 systemd-logind[799]: New session 18 of user zuul.
Nov 22 03:22:52 compute-0 systemd[1]: Started Session 18 of User zuul.
Nov 22 03:22:52 compute-0 sshd-session[71264]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:22:53 compute-0 sudo[71340]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynhujnfquevuvlannmqavrpompdepepl ; /usr/bin/python3'
Nov 22 03:22:53 compute-0 sudo[71340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:53 compute-0 useradd[71344]: new group: name=ceph-admin, GID=42478
Nov 22 03:22:53 compute-0 useradd[71344]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Nov 22 03:22:53 compute-0 sudo[71340]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:53 compute-0 sudo[71426]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwngrppslxrasobqrleaeceakalgzetu ; /usr/bin/python3'
Nov 22 03:22:53 compute-0 sudo[71426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:54 compute-0 sudo[71426]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:54 compute-0 sudo[71499]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkgwbaroremzoklwtqrdnweafabgfpiy ; /usr/bin/python3'
Nov 22 03:22:54 compute-0 sudo[71499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:54 compute-0 sudo[71499]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:54 compute-0 sudo[71549]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvdtjdruggmimkemnteailwjxphvense ; /usr/bin/python3'
Nov 22 03:22:54 compute-0 sudo[71549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:55 compute-0 sudo[71549]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:55 compute-0 sudo[71575]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ompawkijvqmhqxpivqrmtmgbjbwovpqr ; /usr/bin/python3'
Nov 22 03:22:55 compute-0 sudo[71575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:55 compute-0 sudo[71575]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:55 compute-0 sudo[71601]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgoyklamtmaalcyjsfrplkrfphtqwiiu ; /usr/bin/python3'
Nov 22 03:22:55 compute-0 sudo[71601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:55 compute-0 sudo[71601]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:56 compute-0 sudo[71627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-garvtshbddegjreaxtcuhymggzljpfav ; /usr/bin/python3'
Nov 22 03:22:56 compute-0 sudo[71627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:56 compute-0 sudo[71627]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:56 compute-0 sudo[71705]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwbazkajiipwedvowvsakqoupelgvazl ; /usr/bin/python3'
Nov 22 03:22:56 compute-0 sudo[71705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:56 compute-0 sudo[71705]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:56 compute-0 sudo[71778]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmbpqrcsontpadlvjqkplvzpgxgfkikc ; /usr/bin/python3'
Nov 22 03:22:56 compute-0 sudo[71778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:57 compute-0 sudo[71778]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:57 compute-0 sudo[71880]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raljtasilzvcnkqxhrftjtgnlvfxlvya ; /usr/bin/python3'
Nov 22 03:22:57 compute-0 sudo[71880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:57 compute-0 sudo[71880]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:57 compute-0 sudo[71953]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taksavvrtaqmeppjiqxejhskhdmejxaj ; /usr/bin/python3'
Nov 22 03:22:57 compute-0 sudo[71953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:57 compute-0 sudo[71953]: pam_unix(sudo:session): session closed for user root
Nov 22 03:22:58 compute-0 sudo[72003]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isupnpkwafnxlbhsesmlmtzkbvwqrhip ; /usr/bin/python3'
Nov 22 03:22:58 compute-0 sudo[72003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:22:58 compute-0 python3[72005]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:22:59 compute-0 sudo[72003]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:00 compute-0 sudo[72098]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imoygvrudzmarumtggmdkwredawmokvz ; /usr/bin/python3'
Nov 22 03:23:00 compute-0 sudo[72098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:00 compute-0 python3[72100]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 03:23:01 compute-0 sudo[72098]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:01 compute-0 sudo[72125]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svkwqhfmjlophtxtmlcriqfdnsqfublj ; /usr/bin/python3'
Nov 22 03:23:01 compute-0 sudo[72125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:01 compute-0 python3[72127]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:23:01 compute-0 sudo[72125]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:02 compute-0 sudo[72151]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmjufugpfdruxmwzmnvdgvkpysefrwkp ; /usr/bin/python3'
Nov 22 03:23:02 compute-0 sudo[72151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:02 compute-0 python3[72153]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:23:02 compute-0 kernel: loop: module loaded
Nov 22 03:23:02 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Nov 22 03:23:02 compute-0 sudo[72151]: pam_unix(sudo:session): session closed for user root
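[annotation] The dd invocation with bs=1 count=0 seek=20G writes no data at all; it simply extends the file to 20 GiB, producing a sparse backing image that losetup then exposes as /dev/loop3. The kernel line above confirms the size: 41943040 sectors x 512 B = 20 GiB. As logged:

    # Create a 20 GiB sparse file (no blocks written) and attach it to a loop device
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk

truncate -s 20G /var/lib/ceph-osd-0.img would be an equivalent way to create the sparse file.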
Nov 22 03:23:02 compute-0 sudo[72186]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdrmcpqjkgzznnxxiisaoygxbtiekhvh ; /usr/bin/python3'
Nov 22 03:23:02 compute-0 sudo[72186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:02 compute-0 python3[72188]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:23:02 compute-0 lvm[72191]: PV /dev/loop3 not used.
Nov 22 03:23:02 compute-0 lvm[72200]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 03:23:02 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Nov 22 03:23:02 compute-0 sudo[72186]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:03 compute-0 lvm[72202]:   1 logical volume(s) in volume group "ceph_vg0" now active
Nov 22 03:23:03 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
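[annotation] The pvcreate/vgcreate/lvcreate sequence builds one PV, one VG, and one LV per loop device, with -l +100%FREE allocating every remaining extent to the LV; the lvm[]/systemd lines above are the autoactivation events fired as the new VG appears. A quick verification sketch (expected size is an assumption: 20 GiB minus LVM metadata displays as <20.00g):

    # Confirm the layout the log reports as active
    lvs -o vg_name,lv_name,lv_size ceph_vg0
    # expected: ceph_vg0 ceph_lv0 <20.00g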
Nov 22 03:23:03 compute-0 sudo[72278]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozugyltonxsnnygkcflyhlygfxtbadxr ; /usr/bin/python3'
Nov 22 03:23:03 compute-0 sudo[72278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:03 compute-0 python3[72280]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:23:03 compute-0 sudo[72278]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:03 compute-0 chronyd[58481]: Selected source 198.50.127.72 (pool.ntp.org)
Nov 22 03:23:03 compute-0 sudo[72351]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srrienwdhzcyzzlifzovfubhuuabxnyo ; /usr/bin/python3'
Nov 22 03:23:03 compute-0 sudo[72351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:04 compute-0 python3[72353]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763781783.1705837-36055-67082987751303/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:23:04 compute-0 sudo[72351]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:04 compute-0 sudo[72401]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfqdhitlvwmhzxjwqeclbqwwnlfmfdfq ; /usr/bin/python3'
Nov 22 03:23:04 compute-0 sudo[72401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:04 compute-0 python3[72403]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:23:04 compute-0 systemd[1]: Reloading.
Nov 22 03:23:05 compute-0 systemd-sysv-generator[72436]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:23:05 compute-0 systemd-rc-local-generator[72432]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:23:05 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 22 03:23:05 compute-0 bash[72443]: /dev/loop3: [64513]:4328014 (/var/lib/ceph-osd-0.img)
Nov 22 03:23:05 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 22 03:23:05 compute-0 lvm[72444]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 03:23:05 compute-0 lvm[72444]: VG ceph_vg0 finished
Nov 22 03:23:05 compute-0 sudo[72401]: pam_unix(sudo:session): session closed for user root
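[annotation] The unit installed above is templated from ceph-osd-losetup.service.j2, whose contents are not in the log. Given that starting it prints the loop mapping via bash (the bash[72443] line) and that Description must be "Ceph OSD losetup" per the "Starting Ceph OSD losetup..." message, a hypothetical reconstruction, for illustration only, is:

    # Hypothetical unit body; the actual template may differ
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Print the mapping if already attached, otherwise attach the backing file
    ExecStart=/usr/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'
    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now ceph-osd-losetup-0.service

The same pattern repeats below for loop4/ceph-osd-losetup-1 and loop5/ceph-osd-losetup-2, so the units survive reboots that would otherwise drop the loop attachments.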
Nov 22 03:23:05 compute-0 sudo[72468]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcjlaegnbiolsfogsctospkuduaktksh ; /usr/bin/python3'
Nov 22 03:23:05 compute-0 sudo[72468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:05 compute-0 python3[72470]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 03:23:07 compute-0 sudo[72468]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:07 compute-0 sudo[72495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lngzfazsxjeacputmowxjqewhvnzjyyc ; /usr/bin/python3'
Nov 22 03:23:07 compute-0 sudo[72495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:07 compute-0 python3[72497]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:23:07 compute-0 sudo[72495]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:07 compute-0 sudo[72521]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iddbjcwsrnxkcwzivkdankmkllnakyrk ; /usr/bin/python3'
Nov 22 03:23:07 compute-0 sudo[72521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:07 compute-0 python3[72523]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:23:07 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Nov 22 03:23:07 compute-0 sudo[72521]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:07 compute-0 sudo[72553]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tozkrqasrrhonxjxdyjsmllpcnfksums ; /usr/bin/python3'
Nov 22 03:23:07 compute-0 sudo[72553]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:08 compute-0 python3[72555]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:23:08 compute-0 lvm[72558]: PV /dev/loop4 not used.
Nov 22 03:23:08 compute-0 lvm[72568]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 03:23:08 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Nov 22 03:23:08 compute-0 sudo[72553]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:08 compute-0 lvm[72570]:   1 logical volume(s) in volume group "ceph_vg1" now active
Nov 22 03:23:08 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Nov 22 03:23:08 compute-0 sudo[72647]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzotedqpiurzbaxgagtvcgqorpnzrugh ; /usr/bin/python3'
Nov 22 03:23:08 compute-0 sudo[72647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:08 compute-0 python3[72649]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:23:08 compute-0 sudo[72647]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:09 compute-0 sudo[72720]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjdvfvkkvgiwpgmconpnfyuqcouaxhlb ; /usr/bin/python3'
Nov 22 03:23:09 compute-0 sudo[72720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:09 compute-0 python3[72722]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763781788.2201178-36082-280138224079419/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:23:09 compute-0 sudo[72720]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:09 compute-0 sudo[72770]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfrqqtceqqsiycgdfbhctopczeoeidmf ; /usr/bin/python3'
Nov 22 03:23:09 compute-0 sudo[72770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:09 compute-0 python3[72772]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:23:09 compute-0 systemd[1]: Reloading.
Nov 22 03:23:09 compute-0 systemd-rc-local-generator[72798]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:23:09 compute-0 systemd-sysv-generator[72803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:23:10 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 22 03:23:10 compute-0 bash[72813]: /dev/loop4: [64513]:4328057 (/var/lib/ceph-osd-1.img)
Nov 22 03:23:10 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 22 03:23:10 compute-0 lvm[72814]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 03:23:10 compute-0 lvm[72814]: VG ceph_vg1 finished
Nov 22 03:23:10 compute-0 sudo[72770]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:10 compute-0 sudo[72838]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrtntlwofykeppglmosmatprtfqurxrg ; /usr/bin/python3'
Nov 22 03:23:10 compute-0 sudo[72838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:10 compute-0 python3[72840]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 03:23:11 compute-0 sudo[72838]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:12 compute-0 sudo[72865]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcigcwahxbtgvcndydwshetnvpbwmego ; /usr/bin/python3'
Nov 22 03:23:12 compute-0 sudo[72865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:12 compute-0 python3[72867]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:23:12 compute-0 sudo[72865]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:12 compute-0 sudo[72891]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxfwrtfdnkbheuotlaazpotzdsiugpqo ; /usr/bin/python3'
Nov 22 03:23:12 compute-0 sudo[72891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:12 compute-0 python3[72893]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:23:12 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Nov 22 03:23:12 compute-0 sudo[72891]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:12 compute-0 sudo[72923]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqzkkzkvxcwfpvogbjhxpaskxqntwnnc ; /usr/bin/python3'
Nov 22 03:23:12 compute-0 sudo[72923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:12 compute-0 python3[72925]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:23:12 compute-0 lvm[72928]: PV /dev/loop5 not used.
Nov 22 03:23:13 compute-0 lvm[72930]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 03:23:13 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Nov 22 03:23:13 compute-0 lvm[72937]:   1 logical volume(s) in volume group "ceph_vg2" now active
Nov 22 03:23:13 compute-0 lvm[72941]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 03:23:13 compute-0 lvm[72941]: VG ceph_vg2 finished
Nov 22 03:23:13 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Nov 22 03:23:13 compute-0 sudo[72923]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:13 compute-0 sudo[73017]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toqhsjbczzfkoykhorsrayirbuobfyop ; /usr/bin/python3'
Nov 22 03:23:13 compute-0 sudo[73017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:13 compute-0 python3[73019]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:23:13 compute-0 sudo[73017]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:13 compute-0 sudo[73090]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hflkexchzxbhjkyrhjoegejqrhtqgacq ; /usr/bin/python3'
Nov 22 03:23:13 compute-0 sudo[73090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:14 compute-0 python3[73092]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763781793.1580362-36109-143051294873987/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:23:14 compute-0 sudo[73090]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:14 compute-0 sudo[73140]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tftnamecqangljimsayslpvevhmowdob ; /usr/bin/python3'
Nov 22 03:23:14 compute-0 sudo[73140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:14 compute-0 python3[73142]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:23:14 compute-0 systemd[1]: Reloading.
Nov 22 03:23:14 compute-0 systemd-rc-local-generator[73168]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:23:14 compute-0 systemd-sysv-generator[73175]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:23:14 compute-0 systemd[1]: Starting Ceph OSD losetup...
Nov 22 03:23:14 compute-0 bash[73182]: /dev/loop5: [64513]:4328405 (/var/lib/ceph-osd-2.img)
Nov 22 03:23:14 compute-0 systemd[1]: Finished Ceph OSD losetup.
Nov 22 03:23:14 compute-0 lvm[73183]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 03:23:14 compute-0 lvm[73183]: VG ceph_vg2 finished
Nov 22 03:23:14 compute-0 sudo[73140]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:16 compute-0 python3[73207]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:23:18 compute-0 sudo[73298]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbhxnfefidsbbrjislkjopelwyccvnsf ; /usr/bin/python3'
Nov 22 03:23:18 compute-0 sudo[73298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:19 compute-0 python3[73300]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 22 03:23:20 compute-0 groupadd[73306]: group added to /etc/group: name=cephadm, GID=992
Nov 22 03:23:20 compute-0 groupadd[73306]: group added to /etc/gshadow: name=cephadm
Nov 22 03:23:20 compute-0 groupadd[73306]: new group: name=cephadm, GID=992
Nov 22 03:23:20 compute-0 useradd[73313]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Nov 22 03:23:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:23:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:23:20 compute-0 sudo[73298]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:21 compute-0 sudo[73408]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spowrmczsofrngpvytrtbfggeadllxsx ; /usr/bin/python3'
Nov 22 03:23:21 compute-0 sudo[73408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:21 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:23:21 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:23:21 compute-0 systemd[1]: run-r75e12e7c0fbd459192c31235b186c2b8.service: Deactivated successfully.
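The dnf task above (name=['cephadm'], state=present) is a plain package install; the cephadm group and user that appear at 03:23:20 are created by the package's install scriptlets, not by a separate play. A shell equivalent, for reference:

    # Equivalent of the ansible.legacy.dnf invocation above
    dnf install -y cephadm
    # The package scriptlets then create the service account seen in the log:
    # group cephadm (GID 992), user cephadm (UID 992, home /var/lib/cephadm)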
Nov 22 03:23:21 compute-0 python3[73410]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:23:21 compute-0 sudo[73408]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:21 compute-0 sudo[73437]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keqdbaccjxhttxorfbqrglkkqmpnxukv ; /usr/bin/python3'
Nov 22 03:23:21 compute-0 sudo[73437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:21 compute-0 python3[73439]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:21 compute-0 sudo[73437]: pam_unix(sudo:session): session closed for user root
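cephadm ls --no-detail enumerates the cephadm-managed daemons already present on the host as JSON; on a freshly provisioned node it should return an empty list, which is what allows the playbook to proceed to bootstrap:

    /usr/sbin/cephadm ls --no-detail
    # expected on a fresh host: []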
Nov 22 03:23:22 compute-0 sudo[73503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrxlpiyxfrdkmzwjxabkyzkuaacwjcey ; /usr/bin/python3'
Nov 22 03:23:22 compute-0 sudo[73503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:22 compute-0 python3[73505]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:23:22 compute-0 sudo[73503]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:22 compute-0 sudo[73529]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taurxnpudqhoiwftosmaauxbjqydibia ; /usr/bin/python3'
Nov 22 03:23:22 compute-0 sudo[73529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:22 compute-0 python3[73531]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:23:22 compute-0 sudo[73529]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:23 compute-0 sudo[73607]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idghsnouorgbooagcbqdpbhmdafvvvhm ; /usr/bin/python3'
Nov 22 03:23:23 compute-0 sudo[73607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:23 compute-0 python3[73609]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:23:23 compute-0 sudo[73607]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:23 compute-0 sudo[73680]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjpsyztwuooefhsepomzvksjeaomyokx ; /usr/bin/python3'
Nov 22 03:23:23 compute-0 sudo[73680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:24 compute-0 python3[73682]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763781803.0825102-36256-4684983383112/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:23:24 compute-0 sudo[73680]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:24 compute-0 sudo[73782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqqpvtgsnqvhzwkloojfruloovelfzzg ; /usr/bin/python3'
Nov 22 03:23:24 compute-0 sudo[73782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:24 compute-0 python3[73784]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:23:24 compute-0 sudo[73782]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:25 compute-0 sudo[73855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-regmgtszvtbieytwkquklwnmokwrruej ; /usr/bin/python3'
Nov 22 03:23:25 compute-0 sudo[73855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:25 compute-0 python3[73857]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763781804.2421381-36274-232654600844203/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:23:25 compute-0 sudo[73855]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:25 compute-0 sudo[73905]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olhkutwuujxbicmlhcyxhqyemjoqzpce ; /usr/bin/python3'
Nov 22 03:23:25 compute-0 sudo[73905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:25 compute-0 python3[73907]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:23:25 compute-0 sudo[73905]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:25 compute-0 sudo[73933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhsnswuircfoakpolstepdqkoohibsns ; /usr/bin/python3'
Nov 22 03:23:25 compute-0 sudo[73933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:25 compute-0 python3[73935]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:23:25 compute-0 sudo[73933]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:26 compute-0 sudo[73961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkqwnhulhrzmuvsqukrxhkfpetkokzys ; /usr/bin/python3'
Nov 22 03:23:26 compute-0 sudo[73961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:26 compute-0 python3[73963]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:23:26 compute-0 sudo[73961]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:26 compute-0 sudo[73989]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muqxiddptxofjcnmfephalzpxvatqlsu ; /usr/bin/python3'
Nov 22 03:23:26 compute-0 sudo[73989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:23:26 compute-0 python3[73991]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config /home/ceph-admin/assimilate_ceph.conf --single-host-defaults --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
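Reflowed for readability, the bootstrap command is (every value below is verbatim from the log record above):

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld --skip-prepare-host \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 7adcc38b-6484-5de6-b879-33a0309153df \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --single-host-defaults \
        --skip-monitoring-stack --skip-dashboard \
        --mon-ip 192.168.122.100

The --config file is the assimilate_ceph.conf copied into place at 03:23:25, and the SSH keypair is the one whose presence was stat-checked just before this task.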
Nov 22 03:23:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:26 compute-0 sshd-session[74008]: Accepted publickey for ceph-admin from 192.168.122.100 port 42594 ssh2: RSA SHA256:bZHzsxC2in/GWELjLpA7rZP25bRiryB+0z/4eCNUI/0
Nov 22 03:23:26 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 22 03:23:26 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 22 03:23:26 compute-0 systemd-logind[799]: New session 19 of user ceph-admin.
Nov 22 03:23:26 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 22 03:23:26 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 22 03:23:26 compute-0 systemd[74012]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:23:27 compute-0 systemd[74012]: Queued start job for default target Main User Target.
Nov 22 03:23:27 compute-0 systemd[74012]: Created slice User Application Slice.
Nov 22 03:23:27 compute-0 systemd[74012]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 22 03:23:27 compute-0 systemd[74012]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 03:23:27 compute-0 systemd[74012]: Reached target Paths.
Nov 22 03:23:27 compute-0 systemd[74012]: Reached target Timers.
Nov 22 03:23:27 compute-0 systemd[74012]: Starting D-Bus User Message Bus Socket...
Nov 22 03:23:27 compute-0 systemd[74012]: Starting Create User's Volatile Files and Directories...
Nov 22 03:23:27 compute-0 systemd[74012]: Listening on D-Bus User Message Bus Socket.
Nov 22 03:23:27 compute-0 systemd[74012]: Reached target Sockets.
Nov 22 03:23:27 compute-0 systemd[74012]: Finished Create User's Volatile Files and Directories.
Nov 22 03:23:27 compute-0 systemd[74012]: Reached target Basic System.
Nov 22 03:23:27 compute-0 systemd[74012]: Reached target Main User Target.
Nov 22 03:23:27 compute-0 systemd[74012]: Startup finished in 120ms.
Nov 22 03:23:27 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 22 03:23:27 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Nov 22 03:23:27 compute-0 sshd-session[74008]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:23:27 compute-0 sudo[74029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Nov 22 03:23:27 compute-0 sudo[74029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:23:27 compute-0 sudo[74029]: pam_unix(sudo:session): session closed for user root
Nov 22 03:23:27 compute-0 sshd-session[74028]: Received disconnect from 192.168.122.100 port 42594:11: disconnected by user
Nov 22 03:23:27 compute-0 sshd-session[74028]: Disconnected from user ceph-admin 192.168.122.100 port 42594
Nov 22 03:23:27 compute-0 sshd-session[74008]: pam_unix(sshd:session): session closed for user ceph-admin
Nov 22 03:23:27 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Nov 22 03:23:27 compute-0 systemd-logind[799]: Session 19 logged out. Waiting for processes to exit.
Nov 22 03:23:27 compute-0 systemd-logind[799]: Removed session 19.
Nov 22 03:23:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1343165072-merged.mount: Deactivated successfully.
Nov 22 03:23:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat1343165072-lower\x2dmapped.mount: Deactivated successfully.
Nov 22 03:23:37 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Nov 22 03:23:37 compute-0 systemd[74012]: Activating special unit Exit the Session...
Nov 22 03:23:37 compute-0 systemd[74012]: Stopped target Main User Target.
Nov 22 03:23:37 compute-0 systemd[74012]: Stopped target Basic System.
Nov 22 03:23:37 compute-0 systemd[74012]: Stopped target Paths.
Nov 22 03:23:37 compute-0 systemd[74012]: Stopped target Sockets.
Nov 22 03:23:37 compute-0 systemd[74012]: Stopped target Timers.
Nov 22 03:23:37 compute-0 systemd[74012]: Stopped Mark boot as successful after the user session has run 2 minutes.
Nov 22 03:23:37 compute-0 systemd[74012]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 22 03:23:37 compute-0 systemd[74012]: Closed D-Bus User Message Bus Socket.
Nov 22 03:23:37 compute-0 systemd[74012]: Stopped Create User's Volatile Files and Directories.
Nov 22 03:23:37 compute-0 systemd[74012]: Removed slice User Application Slice.
Nov 22 03:23:37 compute-0 systemd[74012]: Reached target Shutdown.
Nov 22 03:23:37 compute-0 systemd[74012]: Finished Exit the Session.
Nov 22 03:23:37 compute-0 systemd[74012]: Reached target Exit the Session.
Nov 22 03:23:37 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Nov 22 03:23:37 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Nov 22 03:23:37 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Nov 22 03:23:37 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Nov 22 03:23:37 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Nov 22 03:23:37 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Nov 22 03:23:37 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Nov 22 03:23:41 compute-0 podman[74066]: 2025-11-22 03:23:41.273091171 +0000 UTC m=+13.930818133 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:41 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:41 compute-0 podman[74125]: 2025-11-22 03:23:41.365123998 +0000 UTC m=+0.060253636 container create b5948fdb1c3fcc1a951d6c3586b9df194f9c3a85c9435930e9713da83137d7ee (image=quay.io/ceph/ceph:v18, name=objective_booth, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:23:41 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 22 03:23:41 compute-0 systemd[1]: Started libpod-conmon-b5948fdb1c3fcc1a951d6c3586b9df194f9c3a85c9435930e9713da83137d7ee.scope.
Nov 22 03:23:41 compute-0 podman[74125]: 2025-11-22 03:23:41.341254686 +0000 UTC m=+0.036384404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:41 compute-0 podman[74125]: 2025-11-22 03:23:41.517316447 +0000 UTC m=+0.212446165 container init b5948fdb1c3fcc1a951d6c3586b9df194f9c3a85c9435930e9713da83137d7ee (image=quay.io/ceph/ceph:v18, name=objective_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:23:41 compute-0 podman[74125]: 2025-11-22 03:23:41.529736036 +0000 UTC m=+0.224865704 container start b5948fdb1c3fcc1a951d6c3586b9df194f9c3a85c9435930e9713da83137d7ee (image=quay.io/ceph/ceph:v18, name=objective_booth, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:23:41 compute-0 podman[74125]: 2025-11-22 03:23:41.535269452 +0000 UTC m=+0.230399180 container attach b5948fdb1c3fcc1a951d6c3586b9df194f9c3a85c9435930e9713da83137d7ee (image=quay.io/ceph/ceph:v18, name=objective_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:23:41 compute-0 objective_booth[74141]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 22 03:23:41 compute-0 systemd[1]: libpod-b5948fdb1c3fcc1a951d6c3586b9df194f9c3a85c9435930e9713da83137d7ee.scope: Deactivated successfully.
Nov 22 03:23:41 compute-0 podman[74146]: 2025-11-22 03:23:41.911604176 +0000 UTC m=+0.041757406 container died b5948fdb1c3fcc1a951d6c3586b9df194f9c3a85c9435930e9713da83137d7ee (image=quay.io/ceph/ceph:v18, name=objective_booth, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa18024676e0d0400f3783f46ed63114f6a0c10bebdc9e924b36eebdf3629b49-merged.mount: Deactivated successfully.
Nov 22 03:23:42 compute-0 podman[74146]: 2025-11-22 03:23:42.097903009 +0000 UTC m=+0.228056199 container remove b5948fdb1c3fcc1a951d6c3586b9df194f9c3a85c9435930e9713da83137d7ee (image=quay.io/ceph/ceph:v18, name=objective_booth, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:23:42 compute-0 systemd[1]: libpod-conmon-b5948fdb1c3fcc1a951d6c3586b9df194f9c3a85c9435930e9713da83137d7ee.scope: Deactivated successfully.
Nov 22 03:23:42 compute-0 podman[74161]: 2025-11-22 03:23:42.188993701 +0000 UTC m=+0.057386751 container create 0f79b73365305f6d5686e0111ec205bd5703452f09778106383205a259ce7aff (image=quay.io/ceph/ceph:v18, name=optimistic_cannon, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:23:42 compute-0 systemd[1]: Started libpod-conmon-0f79b73365305f6d5686e0111ec205bd5703452f09778106383205a259ce7aff.scope.
Nov 22 03:23:42 compute-0 podman[74161]: 2025-11-22 03:23:42.16064897 +0000 UTC m=+0.029042060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:42 compute-0 podman[74161]: 2025-11-22 03:23:42.301347745 +0000 UTC m=+0.169740865 container init 0f79b73365305f6d5686e0111ec205bd5703452f09778106383205a259ce7aff (image=quay.io/ceph/ceph:v18, name=optimistic_cannon, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:23:42 compute-0 podman[74161]: 2025-11-22 03:23:42.312178412 +0000 UTC m=+0.180571462 container start 0f79b73365305f6d5686e0111ec205bd5703452f09778106383205a259ce7aff (image=quay.io/ceph/ceph:v18, name=optimistic_cannon, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:23:42 compute-0 podman[74161]: 2025-11-22 03:23:42.318100509 +0000 UTC m=+0.186493559 container attach 0f79b73365305f6d5686e0111ec205bd5703452f09778106383205a259ce7aff (image=quay.io/ceph/ceph:v18, name=optimistic_cannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:23:42 compute-0 optimistic_cannon[74177]: 167 167
Nov 22 03:23:42 compute-0 systemd[1]: libpod-0f79b73365305f6d5686e0111ec205bd5703452f09778106383205a259ce7aff.scope: Deactivated successfully.
Nov 22 03:23:42 compute-0 podman[74161]: 2025-11-22 03:23:42.320698788 +0000 UTC m=+0.189091838 container died 0f79b73365305f6d5686e0111ec205bd5703452f09778106383205a259ce7aff (image=quay.io/ceph/ceph:v18, name=optimistic_cannon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c055fd61558b5b9203b22ef67a8eae614600d5ea3149c818a2d13e25cdabdb2a-merged.mount: Deactivated successfully.
Nov 22 03:23:42 compute-0 podman[74161]: 2025-11-22 03:23:42.394184514 +0000 UTC m=+0.262577524 container remove 0f79b73365305f6d5686e0111ec205bd5703452f09778106383205a259ce7aff (image=quay.io/ceph/ceph:v18, name=optimistic_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:42 compute-0 systemd[1]: libpod-conmon-0f79b73365305f6d5686e0111ec205bd5703452f09778106383205a259ce7aff.scope: Deactivated successfully.
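The two throwaway containers above are cephadm probing the freshly pulled image: objective_booth captures the version banner ("ceph version 18.2.7 ... reef (stable)"), and optimistic_cannon prints "167 167", the ceph UID/GID inside the image. Minimal reproductions under those assumptions (the exact in-container commands are not logged):

    # Version probe
    podman run --rm quay.io/ceph/ceph:v18 ceph --version
    # UID/GID probe; 167 is the ceph user and group in the image
    podman run --rm --entrypoint stat quay.io/ceph/ceph:v18 -c '%u %g' /var/lib/ceph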
Nov 22 03:23:42 compute-0 podman[74195]: 2025-11-22 03:23:42.501234478 +0000 UTC m=+0.074567285 container create d9aa9d079bd82f24218e816d27d0d07d21e94cfacf46a46bb2c9e7d282d19dd0 (image=quay.io/ceph/ceph:v18, name=elastic_bartik, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:23:42 compute-0 systemd[1]: Started libpod-conmon-d9aa9d079bd82f24218e816d27d0d07d21e94cfacf46a46bb2c9e7d282d19dd0.scope.
Nov 22 03:23:42 compute-0 podman[74195]: 2025-11-22 03:23:42.469035915 +0000 UTC m=+0.042368772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:42 compute-0 podman[74195]: 2025-11-22 03:23:42.621844651 +0000 UTC m=+0.195177498 container init d9aa9d079bd82f24218e816d27d0d07d21e94cfacf46a46bb2c9e7d282d19dd0 (image=quay.io/ceph/ceph:v18, name=elastic_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 03:23:42 compute-0 podman[74195]: 2025-11-22 03:23:42.627968464 +0000 UTC m=+0.201301261 container start d9aa9d079bd82f24218e816d27d0d07d21e94cfacf46a46bb2c9e7d282d19dd0 (image=quay.io/ceph/ceph:v18, name=elastic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:23:42 compute-0 podman[74195]: 2025-11-22 03:23:42.632272418 +0000 UTC m=+0.205605225 container attach d9aa9d079bd82f24218e816d27d0d07d21e94cfacf46a46bb2c9e7d282d19dd0 (image=quay.io/ceph/ceph:v18, name=elastic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:23:42 compute-0 elastic_bartik[74211]: AQC+LCFp7w2uJhAA8+vbNwcIJeP4tQujC4QdLA==
Nov 22 03:23:42 compute-0 systemd[1]: libpod-d9aa9d079bd82f24218e816d27d0d07d21e94cfacf46a46bb2c9e7d282d19dd0.scope: Deactivated successfully.
Nov 22 03:23:42 compute-0 podman[74195]: 2025-11-22 03:23:42.653492879 +0000 UTC m=+0.226825646 container died d9aa9d079bd82f24218e816d27d0d07d21e94cfacf46a46bb2c9e7d282d19dd0 (image=quay.io/ceph/ceph:v18, name=elastic_bartik, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:23:42 compute-0 podman[74195]: 2025-11-22 03:23:42.7143565 +0000 UTC m=+0.287689267 container remove d9aa9d079bd82f24218e816d27d0d07d21e94cfacf46a46bb2c9e7d282d19dd0 (image=quay.io/ceph/ceph:v18, name=elastic_bartik, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 22 03:23:42 compute-0 systemd[1]: libpod-conmon-d9aa9d079bd82f24218e816d27d0d07d21e94cfacf46a46bb2c9e7d282d19dd0.scope: Deactivated successfully.
Nov 22 03:23:42 compute-0 podman[74229]: 2025-11-22 03:23:42.82049399 +0000 UTC m=+0.078119049 container create c14993ee47ea9d892867f4f66412970fda7bff1aa2e3eb392d398cc06b399c3e (image=quay.io/ceph/ceph:v18, name=elegant_albattani, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:23:42 compute-0 podman[74229]: 2025-11-22 03:23:42.776113345 +0000 UTC m=+0.033738444 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:42 compute-0 systemd[1]: Started libpod-conmon-c14993ee47ea9d892867f4f66412970fda7bff1aa2e3eb392d398cc06b399c3e.scope.
Nov 22 03:23:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:42 compute-0 podman[74229]: 2025-11-22 03:23:42.920347255 +0000 UTC m=+0.177972314 container init c14993ee47ea9d892867f4f66412970fda7bff1aa2e3eb392d398cc06b399c3e (image=quay.io/ceph/ceph:v18, name=elegant_albattani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:23:42 compute-0 podman[74229]: 2025-11-22 03:23:42.930893614 +0000 UTC m=+0.188518673 container start c14993ee47ea9d892867f4f66412970fda7bff1aa2e3eb392d398cc06b399c3e (image=quay.io/ceph/ceph:v18, name=elegant_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:23:42 compute-0 podman[74229]: 2025-11-22 03:23:42.945598223 +0000 UTC m=+0.203223332 container attach c14993ee47ea9d892867f4f66412970fda7bff1aa2e3eb392d398cc06b399c3e (image=quay.io/ceph/ceph:v18, name=elegant_albattani, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:23:42 compute-0 elegant_albattani[74246]: AQC+LCFpy5qYORAAjPqNyzUdJICEiF9PSCGM1g==
Nov 22 03:23:42 compute-0 systemd[1]: libpod-c14993ee47ea9d892867f4f66412970fda7bff1aa2e3eb392d398cc06b399c3e.scope: Deactivated successfully.
Nov 22 03:23:43 compute-0 podman[74253]: 2025-11-22 03:23:43.013381938 +0000 UTC m=+0.026943444 container died c14993ee47ea9d892867f4f66412970fda7bff1aa2e3eb392d398cc06b399c3e (image=quay.io/ceph/ceph:v18, name=elegant_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:23:43 compute-0 podman[74253]: 2025-11-22 03:23:43.295062775 +0000 UTC m=+0.308624301 container remove c14993ee47ea9d892867f4f66412970fda7bff1aa2e3eb392d398cc06b399c3e (image=quay.io/ceph/ceph:v18, name=elegant_albattani, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:23:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:43 compute-0 systemd[1]: libpod-conmon-c14993ee47ea9d892867f4f66412970fda7bff1aa2e3eb392d398cc06b399c3e.scope: Deactivated successfully.
Nov 22 03:23:43 compute-0 podman[74268]: 2025-11-22 03:23:43.409520206 +0000 UTC m=+0.076768134 container create 44ac5707a5f4cd140370a225faa77c64b71a25bf6265eeb385cea5bf97aff1e1 (image=quay.io/ceph/ceph:v18, name=loving_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:23:43 compute-0 podman[74268]: 2025-11-22 03:23:43.369692071 +0000 UTC m=+0.036940029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:43 compute-0 systemd[1]: Started libpod-conmon-44ac5707a5f4cd140370a225faa77c64b71a25bf6265eeb385cea5bf97aff1e1.scope.
Nov 22 03:23:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:44 compute-0 podman[74268]: 2025-11-22 03:23:44.469318206 +0000 UTC m=+1.136566204 container init 44ac5707a5f4cd140370a225faa77c64b71a25bf6265eeb385cea5bf97aff1e1 (image=quay.io/ceph/ceph:v18, name=loving_wescoff, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:23:44 compute-0 podman[74268]: 2025-11-22 03:23:44.474249256 +0000 UTC m=+1.141497214 container start 44ac5707a5f4cd140370a225faa77c64b71a25bf6265eeb385cea5bf97aff1e1 (image=quay.io/ceph/ceph:v18, name=loving_wescoff, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:23:44 compute-0 loving_wescoff[74285]: AQDALCFpIl9cHhAAZeIDZjc8Nw/0rf+bPKKYFQ==
Nov 22 03:23:44 compute-0 systemd[1]: libpod-44ac5707a5f4cd140370a225faa77c64b71a25bf6265eeb385cea5bf97aff1e1.scope: Deactivated successfully.
Nov 22 03:23:44 compute-0 podman[74268]: 2025-11-22 03:23:44.576122423 +0000 UTC m=+1.243370441 container attach 44ac5707a5f4cd140370a225faa77c64b71a25bf6265eeb385cea5bf97aff1e1 (image=quay.io/ceph/ceph:v18, name=loving_wescoff, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:23:44 compute-0 podman[74268]: 2025-11-22 03:23:44.577152001 +0000 UTC m=+1.244399959 container died 44ac5707a5f4cd140370a225faa77c64b71a25bf6265eeb385cea5bf97aff1e1 (image=quay.io/ceph/ceph:v18, name=loving_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a99376558353d6336c1f86499b2eb3472957d9f0c3c35bc25af200ecc39752c6-merged.mount: Deactivated successfully.
Nov 22 03:23:44 compute-0 podman[74268]: 2025-11-22 03:23:44.678313719 +0000 UTC m=+1.345561647 container remove 44ac5707a5f4cd140370a225faa77c64b71a25bf6265eeb385cea5bf97aff1e1 (image=quay.io/ceph/ceph:v18, name=loving_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:23:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:44 compute-0 systemd[1]: libpod-conmon-44ac5707a5f4cd140370a225faa77c64b71a25bf6265eeb385cea5bf97aff1e1.scope: Deactivated successfully.
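The next three containers (elastic_bartik, elegant_albattani, loving_wescoff) each emit a single base64 secret (the AQC+LCFp.../AQDALCFp... lines), consistent with cephadm generating the initial keys for the bootstrap cluster. A plausible equivalent of each run (an assumption; the in-container command is not logged):

    podman run --rm quay.io/ceph/ceph:v18 ceph-authtool --gen-print-key
    # prints one freshly generated base64 key per invocation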
Nov 22 03:23:44 compute-0 podman[74304]: 2025-11-22 03:23:44.760624409 +0000 UTC m=+0.058270685 container create 750d9af2fa1ab27371ddd2d48f10e27b77f0511d849c48a5495e8ce0049bf5fd (image=quay.io/ceph/ceph:v18, name=quirky_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:23:44 compute-0 systemd[1]: Started libpod-conmon-750d9af2fa1ab27371ddd2d48f10e27b77f0511d849c48a5495e8ce0049bf5fd.scope.
Nov 22 03:23:44 compute-0 podman[74304]: 2025-11-22 03:23:44.732857473 +0000 UTC m=+0.030503789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/399bd82919a7005c583c741542cf817e45e64176536fca751232d38c507db4fe/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:44 compute-0 podman[74304]: 2025-11-22 03:23:44.886237296 +0000 UTC m=+0.183883612 container init 750d9af2fa1ab27371ddd2d48f10e27b77f0511d849c48a5495e8ce0049bf5fd (image=quay.io/ceph/ceph:v18, name=quirky_perlman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:23:44 compute-0 podman[74304]: 2025-11-22 03:23:44.895770097 +0000 UTC m=+0.193416363 container start 750d9af2fa1ab27371ddd2d48f10e27b77f0511d849c48a5495e8ce0049bf5fd (image=quay.io/ceph/ceph:v18, name=quirky_perlman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:23:44 compute-0 podman[74304]: 2025-11-22 03:23:44.900257287 +0000 UTC m=+0.197903573 container attach 750d9af2fa1ab27371ddd2d48f10e27b77f0511d849c48a5495e8ce0049bf5fd (image=quay.io/ceph/ceph:v18, name=quirky_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:23:44 compute-0 quirky_perlman[74321]: /usr/bin/monmaptool: monmap file /tmp/monmap
Nov 22 03:23:44 compute-0 quirky_perlman[74321]: setting min_mon_release = pacific
Nov 22 03:23:44 compute-0 quirky_perlman[74321]: /usr/bin/monmaptool: set fsid to 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:23:44 compute-0 quirky_perlman[74321]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Nov 22 03:23:44 compute-0 systemd[1]: libpod-750d9af2fa1ab27371ddd2d48f10e27b77f0511d849c48a5495e8ce0049bf5fd.scope: Deactivated successfully.
Nov 22 03:23:44 compute-0 podman[74304]: 2025-11-22 03:23:44.94612019 +0000 UTC m=+0.243766456 container died 750d9af2fa1ab27371ddd2d48f10e27b77f0511d849c48a5495e8ce0049bf5fd (image=quay.io/ceph/ceph:v18, name=quirky_perlman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:23:45 compute-0 podman[74304]: 2025-11-22 03:23:45.021911436 +0000 UTC m=+0.319557682 container remove 750d9af2fa1ab27371ddd2d48f10e27b77f0511d849c48a5495e8ce0049bf5fd (image=quay.io/ceph/ceph:v18, name=quirky_perlman, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:23:45 compute-0 systemd[1]: libpod-conmon-750d9af2fa1ab27371ddd2d48f10e27b77f0511d849c48a5495e8ce0049bf5fd.scope: Deactivated successfully.
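quirky_perlman builds the initial monitor map: its output shows monmaptool creating /tmp/monmap with the bootstrap fsid, one monitor, and min_mon_release = pacific. A sketch of an equivalent invocation (the monitor address is an assumption derived from --mon-ip 192.168.122.100; the real arguments are not logged):

    monmaptool --create --clobber \
        --fsid 7adcc38b-6484-5de6-b879-33a0309153df \
        --addv compute-0 '[v2:192.168.122.100:3300,v1:192.168.122.100:6789]' \
        /tmp/monmap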
Nov 22 03:23:45 compute-0 podman[74340]: 2025-11-22 03:23:45.123762533 +0000 UTC m=+0.068945216 container create 4b7f7b3a949f7844bbff0f4ceba932e39c9a61ba9b4b61c29462743e3f4a0519 (image=quay.io/ceph/ceph:v18, name=funny_engelbart, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 03:23:45 compute-0 systemd[1]: Started libpod-conmon-4b7f7b3a949f7844bbff0f4ceba932e39c9a61ba9b4b61c29462743e3f4a0519.scope.
Nov 22 03:23:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af93e02486d9a67480ba9ca8a2a42505289e0e1f19b10236c109890273684e39/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af93e02486d9a67480ba9ca8a2a42505289e0e1f19b10236c109890273684e39/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af93e02486d9a67480ba9ca8a2a42505289e0e1f19b10236c109890273684e39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af93e02486d9a67480ba9ca8a2a42505289e0e1f19b10236c109890273684e39/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:45 compute-0 podman[74340]: 2025-11-22 03:23:45.097063526 +0000 UTC m=+0.042246249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:45 compute-0 podman[74340]: 2025-11-22 03:23:45.218401099 +0000 UTC m=+0.163583842 container init 4b7f7b3a949f7844bbff0f4ceba932e39c9a61ba9b4b61c29462743e3f4a0519 (image=quay.io/ceph/ceph:v18, name=funny_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Nov 22 03:23:45 compute-0 podman[74340]: 2025-11-22 03:23:45.225201559 +0000 UTC m=+0.170384242 container start 4b7f7b3a949f7844bbff0f4ceba932e39c9a61ba9b4b61c29462743e3f4a0519 (image=quay.io/ceph/ceph:v18, name=funny_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:23:45 compute-0 podman[74340]: 2025-11-22 03:23:45.247786787 +0000 UTC m=+0.192969450 container attach 4b7f7b3a949f7844bbff0f4ceba932e39c9a61ba9b4b61c29462743e3f4a0519 (image=quay.io/ceph/ceph:v18, name=funny_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:23:45 compute-0 systemd[1]: libpod-4b7f7b3a949f7844bbff0f4ceba932e39c9a61ba9b4b61c29462743e3f4a0519.scope: Deactivated successfully.
Nov 22 03:23:45 compute-0 podman[74340]: 2025-11-22 03:23:45.380556023 +0000 UTC m=+0.325738736 container died 4b7f7b3a949f7844bbff0f4ceba932e39c9a61ba9b4b61c29462743e3f4a0519 (image=quay.io/ceph/ceph:v18, name=funny_engelbart, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:23:45 compute-0 podman[74340]: 2025-11-22 03:23:45.500773425 +0000 UTC m=+0.445956108 container remove 4b7f7b3a949f7844bbff0f4ceba932e39c9a61ba9b4b61c29462743e3f4a0519 (image=quay.io/ceph/ceph:v18, name=funny_engelbart, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:23:45 compute-0 systemd[1]: libpod-conmon-4b7f7b3a949f7844bbff0f4ceba932e39c9a61ba9b4b61c29462743e3f4a0519.scope: Deactivated successfully.
Nov 22 03:23:45 compute-0 systemd[1]: Reloading.
Nov 22 03:23:45 compute-0 systemd-rc-local-generator[74422]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:23:45 compute-0 systemd-sysv-generator[74427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:23:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-399bd82919a7005c583c741542cf817e45e64176536fca751232d38c507db4fe-merged.mount: Deactivated successfully.
Nov 22 03:23:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:45 compute-0 systemd[1]: Reloading.
Nov 22 03:23:45 compute-0 systemd-rc-local-generator[74459]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:23:45 compute-0 systemd-sysv-generator[74462]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:23:46 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Nov 22 03:23:46 compute-0 systemd[1]: Reloading.
Nov 22 03:23:46 compute-0 systemd-rc-local-generator[74502]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:23:46 compute-0 systemd-sysv-generator[74507]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:23:46 compute-0 systemd[1]: Reached target Ceph cluster 7adcc38b-6484-5de6-b879-33a0309153df.
Nov 22 03:23:46 compute-0 systemd[1]: Reloading.
Nov 22 03:23:46 compute-0 systemd-rc-local-generator[74540]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:23:46 compute-0 systemd-sysv-generator[74544]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:23:46 compute-0 systemd[1]: Reloading.
Nov 22 03:23:46 compute-0 systemd-rc-local-generator[74583]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:23:46 compute-0 systemd-sysv-generator[74587]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:23:46 compute-0 systemd[1]: Created slice Slice /system/ceph-7adcc38b-6484-5de6-b879-33a0309153df.
Nov 22 03:23:46 compute-0 systemd[1]: Reached target System Time Set.
Nov 22 03:23:46 compute-0 systemd[1]: Reached target System Time Synchronized.
Nov 22 03:23:46 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 7adcc38b-6484-5de6-b879-33a0309153df...
Nov 22 03:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:47 compute-0 podman[74638]: 2025-11-22 03:23:47.272500715 +0000 UTC m=+0.110201450 container create b4c80c7d894e1e83a1bc5168786cb244079dbf3ac72d51fcd716259b9fcc8670 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:23:47 compute-0 podman[74638]: 2025-11-22 03:23:47.198087234 +0000 UTC m=+0.035787989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa91dc61f7d361ee290bc75c3b5324d3633095231c4edaf12fc5124cc5e994c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa91dc61f7d361ee290bc75c3b5324d3633095231c4edaf12fc5124cc5e994c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa91dc61f7d361ee290bc75c3b5324d3633095231c4edaf12fc5124cc5e994c8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa91dc61f7d361ee290bc75c3b5324d3633095231c4edaf12fc5124cc5e994c8/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:47 compute-0 podman[74638]: 2025-11-22 03:23:47.389208914 +0000 UTC m=+0.226909739 container init b4c80c7d894e1e83a1bc5168786cb244079dbf3ac72d51fcd716259b9fcc8670 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:23:47 compute-0 podman[74638]: 2025-11-22 03:23:47.399280161 +0000 UTC m=+0.236980926 container start b4c80c7d894e1e83a1bc5168786cb244079dbf3ac72d51fcd716259b9fcc8670 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:23:47 compute-0 bash[74638]: b4c80c7d894e1e83a1bc5168786cb244079dbf3ac72d51fcd716259b9fcc8670
Nov 22 03:23:47 compute-0 systemd[1]: Started Ceph mon.compute-0 for 7adcc38b-6484-5de6-b879-33a0309153df.
Nov 22 03:23:47 compute-0 ceph-mon[74657]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:23:47 compute-0 ceph-mon[74657]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 22 03:23:47 compute-0 ceph-mon[74657]: pidfile_write: ignore empty --pid-file
Nov 22 03:23:47 compute-0 ceph-mon[74657]: load: jerasure load: lrc 
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Git sha 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: DB SUMMARY
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: DB Session ID:  U2POY6ZR3I7G92JZ0V40
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                                     Options.env: 0x55bdf8f89c40
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                                Options.info_log: 0x55bdf9d2ae80
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                                 Options.wal_dir: 
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                    Options.write_buffer_manager: 0x55bdf9d3ab40
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                               Options.row_cache: None
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                              Options.wal_filter: None
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.wal_compression: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.max_background_jobs: 2
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.max_total_wal_size: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:       Options.compaction_readahead_size: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Compression algorithms supported:
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         kZSTD supported: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         kXpressCompression supported: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         kBZip2Compression supported: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         kLZ4Compression supported: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         kZlibCompression supported: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         kSnappyCompression supported: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:           Options.merge_operator: 
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:        Options.compaction_filter: None
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdf9d2aa80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdf9d231f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:        Options.write_buffer_size: 33554432
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:  Options.max_write_buffer_number: 2
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:          Options.compression: NoCompression
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.num_levels: 7
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 159d9642-0336-4475-a7ed-37f1dd054c28
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781827454855, "job": 1, "event": "recovery_started", "wal_files": [4]}
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781827457595, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "U2POY6ZR3I7G92JZ0V40", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781827457688, "job": 1, "event": "recovery_finished"}
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bdf9d4ce00
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: DB pointer 0x55bdf9e56000
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:23:47 compute-0 ceph-mon[74657]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.7      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.09 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdf9d231f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 03:23:47 compute-0 ceph-mon[74657]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@-1(???) e0 preinit fsid 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(probing) e0 win_standalone_election
Nov 22 03:23:47 compute-0 ceph-mon[74657]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 03:23:47 compute-0 ceph-mon[74657]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 22 03:23:47 compute-0 ceph-mon[74657]: paxos.0).electionLogic(2) init, last seen epoch 2
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 03:23:47 compute-0 ceph-mon[74657]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 03:23:47 compute-0 ceph-mon[74657]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-11-22T03:23:45.301287Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,os=Linux}
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).mds e1 new map
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Nov 22 03:23:47 compute-0 ceph-mon[74657]: log_channel(cluster) log [DBG] : fsmap 
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mkfs 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Nov 22 03:23:47 compute-0 podman[74658]: 2025-11-22 03:23:47.531295147 +0000 UTC m=+0.074940236 container create ef9e4886c3f8c72fca4d7d6f9329ad72fa289f952b2702d4dfb3d37b818fb54a (image=quay.io/ceph/ceph:v18, name=xenodochial_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:23:47 compute-0 ceph-mon[74657]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 22 03:23:47 compute-0 ceph-mon[74657]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 22 03:23:47 compute-0 ceph-mon[74657]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 03:23:47 compute-0 systemd[1]: Started libpod-conmon-ef9e4886c3f8c72fca4d7d6f9329ad72fa289f952b2702d4dfb3d37b818fb54a.scope.
Nov 22 03:23:47 compute-0 podman[74658]: 2025-11-22 03:23:47.507222519 +0000 UTC m=+0.050867638 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e10410c9c11f02cb023cdf8781b4b9f7bb1cb356b3e5dfb58edc6ad4fb987d0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e10410c9c11f02cb023cdf8781b4b9f7bb1cb356b3e5dfb58edc6ad4fb987d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e10410c9c11f02cb023cdf8781b4b9f7bb1cb356b3e5dfb58edc6ad4fb987d0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:47 compute-0 podman[74658]: 2025-11-22 03:23:47.746374991 +0000 UTC m=+0.290020110 container init ef9e4886c3f8c72fca4d7d6f9329ad72fa289f952b2702d4dfb3d37b818fb54a (image=quay.io/ceph/ceph:v18, name=xenodochial_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:23:47 compute-0 podman[74658]: 2025-11-22 03:23:47.753589241 +0000 UTC m=+0.297234330 container start ef9e4886c3f8c72fca4d7d6f9329ad72fa289f952b2702d4dfb3d37b818fb54a (image=quay.io/ceph/ceph:v18, name=xenodochial_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:23:47 compute-0 podman[74658]: 2025-11-22 03:23:47.757185237 +0000 UTC m=+0.300830366 container attach ef9e4886c3f8c72fca4d7d6f9329ad72fa289f952b2702d4dfb3d37b818fb54a (image=quay.io/ceph/ceph:v18, name=xenodochial_stonebraker, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:23:48 compute-0 ceph-mon[74657]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 03:23:48 compute-0 ceph-mon[74657]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2559787760' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:   cluster:
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:     id:     7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:     health: HEALTH_OK
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:  
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:   services:
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:     mon: 1 daemons, quorum compute-0 (age 0.605245s)
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:     mgr: no daemons active
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:     osd: 0 osds: 0 up, 0 in
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:  
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:   data:
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:     pools:   0 pools, 0 pgs
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:     objects: 0 objects, 0 B
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:     usage:   0 B used, 0 B / 0 B avail
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:     pgs:     
Nov 22 03:23:48 compute-0 xenodochial_stonebraker[74712]:  
Nov 22 03:23:48 compute-0 systemd[1]: libpod-ef9e4886c3f8c72fca4d7d6f9329ad72fa289f952b2702d4dfb3d37b818fb54a.scope: Deactivated successfully.
Nov 22 03:23:48 compute-0 podman[74738]: 2025-11-22 03:23:48.16746078 +0000 UTC m=+0.027503320 container died ef9e4886c3f8c72fca4d7d6f9329ad72fa289f952b2702d4dfb3d37b818fb54a (image=quay.io/ceph/ceph:v18, name=xenodochial_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e10410c9c11f02cb023cdf8781b4b9f7bb1cb356b3e5dfb58edc6ad4fb987d0-merged.mount: Deactivated successfully.
Nov 22 03:23:48 compute-0 podman[74738]: 2025-11-22 03:23:48.21578883 +0000 UTC m=+0.075831300 container remove ef9e4886c3f8c72fca4d7d6f9329ad72fa289f952b2702d4dfb3d37b818fb54a (image=quay.io/ceph/ceph:v18, name=xenodochial_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:23:48 compute-0 systemd[1]: libpod-conmon-ef9e4886c3f8c72fca4d7d6f9329ad72fa289f952b2702d4dfb3d37b818fb54a.scope: Deactivated successfully.
Nov 22 03:23:48 compute-0 podman[74753]: 2025-11-22 03:23:48.302459154 +0000 UTC m=+0.054853284 container create 35378ac815f055a55431eb9fe182e8db1758da2b07f583310aa337ebecbfa955 (image=quay.io/ceph/ceph:v18, name=dazzling_mayer, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:23:48 compute-0 systemd[1]: Started libpod-conmon-35378ac815f055a55431eb9fe182e8db1758da2b07f583310aa337ebecbfa955.scope.
Nov 22 03:23:48 compute-0 podman[74753]: 2025-11-22 03:23:48.273329563 +0000 UTC m=+0.025723733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcfad0963d33b8fb90080a0c2526827166d272efa4c22d0c99e89b70b778326/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcfad0963d33b8fb90080a0c2526827166d272efa4c22d0c99e89b70b778326/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcfad0963d33b8fb90080a0c2526827166d272efa4c22d0c99e89b70b778326/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbcfad0963d33b8fb90080a0c2526827166d272efa4c22d0c99e89b70b778326/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:48 compute-0 podman[74753]: 2025-11-22 03:23:48.398759133 +0000 UTC m=+0.151153293 container init 35378ac815f055a55431eb9fe182e8db1758da2b07f583310aa337ebecbfa955 (image=quay.io/ceph/ceph:v18, name=dazzling_mayer, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:23:48 compute-0 podman[74753]: 2025-11-22 03:23:48.411345337 +0000 UTC m=+0.163739487 container start 35378ac815f055a55431eb9fe182e8db1758da2b07f583310aa337ebecbfa955 (image=quay.io/ceph/ceph:v18, name=dazzling_mayer, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 03:23:48 compute-0 podman[74753]: 2025-11-22 03:23:48.416551645 +0000 UTC m=+0.168945815 container attach 35378ac815f055a55431eb9fe182e8db1758da2b07f583310aa337ebecbfa955 (image=quay.io/ceph/ceph:v18, name=dazzling_mayer, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:23:48 compute-0 ceph-mon[74657]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 03:23:48 compute-0 ceph-mon[74657]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 22 03:23:48 compute-0 ceph-mon[74657]: fsmap 
Nov 22 03:23:48 compute-0 ceph-mon[74657]: osdmap e1: 0 total, 0 up, 0 in
Nov 22 03:23:48 compute-0 ceph-mon[74657]: mgrmap e1: no daemons active
Nov 22 03:23:48 compute-0 ceph-mon[74657]: from='client.? 192.168.122.100:0/2559787760' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 03:23:48 compute-0 ceph-mon[74657]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 22 03:23:48 compute-0 ceph-mon[74657]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1945353542' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 03:23:48 compute-0 ceph-mon[74657]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1945353542' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 03:23:48 compute-0 dazzling_mayer[74770]: 
Nov 22 03:23:48 compute-0 dazzling_mayer[74770]: [global]
Nov 22 03:23:48 compute-0 dazzling_mayer[74770]:         fsid = 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:23:48 compute-0 dazzling_mayer[74770]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Nov 22 03:23:48 compute-0 dazzling_mayer[74770]:         osd_crush_chooseleaf_type = 0
Nov 22 03:23:48 compute-0 systemd[1]: libpod-35378ac815f055a55431eb9fe182e8db1758da2b07f583310aa337ebecbfa955.scope: Deactivated successfully.
Nov 22 03:23:48 compute-0 podman[74753]: 2025-11-22 03:23:48.81333075 +0000 UTC m=+0.565724870 container died 35378ac815f055a55431eb9fe182e8db1758da2b07f583310aa337ebecbfa955 (image=quay.io/ceph/ceph:v18, name=dazzling_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:23:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbcfad0963d33b8fb90080a0c2526827166d272efa4c22d0c99e89b70b778326-merged.mount: Deactivated successfully.
Nov 22 03:23:48 compute-0 podman[74753]: 2025-11-22 03:23:48.850081104 +0000 UTC m=+0.602475234 container remove 35378ac815f055a55431eb9fe182e8db1758da2b07f583310aa337ebecbfa955 (image=quay.io/ceph/ceph:v18, name=dazzling_mayer, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:23:48 compute-0 systemd[1]: libpod-conmon-35378ac815f055a55431eb9fe182e8db1758da2b07f583310aa337ebecbfa955.scope: Deactivated successfully.
Nov 22 03:23:48 compute-0 podman[74807]: 2025-11-22 03:23:48.929365662 +0000 UTC m=+0.054718500 container create 723be1801ec3bf0cd2ae6a8d8ee158c03e7141c53e9245ac96ad95c0901a7da8 (image=quay.io/ceph/ceph:v18, name=angry_goldwasser, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:23:48 compute-0 systemd[1]: Started libpod-conmon-723be1801ec3bf0cd2ae6a8d8ee158c03e7141c53e9245ac96ad95c0901a7da8.scope.
Nov 22 03:23:48 compute-0 podman[74807]: 2025-11-22 03:23:48.904019661 +0000 UTC m=+0.029372559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cac6c52aa9511ed6314ec9fe5e1ea771941aa0ff3fa5b8abac6dcbea428a1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cac6c52aa9511ed6314ec9fe5e1ea771941aa0ff3fa5b8abac6dcbea428a1c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cac6c52aa9511ed6314ec9fe5e1ea771941aa0ff3fa5b8abac6dcbea428a1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89cac6c52aa9511ed6314ec9fe5e1ea771941aa0ff3fa5b8abac6dcbea428a1c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:49 compute-0 podman[74807]: 2025-11-22 03:23:49.03802935 +0000 UTC m=+0.163382278 container init 723be1801ec3bf0cd2ae6a8d8ee158c03e7141c53e9245ac96ad95c0901a7da8 (image=quay.io/ceph/ceph:v18, name=angry_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:23:49 compute-0 podman[74807]: 2025-11-22 03:23:49.051583269 +0000 UTC m=+0.176936127 container start 723be1801ec3bf0cd2ae6a8d8ee158c03e7141c53e9245ac96ad95c0901a7da8 (image=quay.io/ceph/ceph:v18, name=angry_goldwasser, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Nov 22 03:23:49 compute-0 podman[74807]: 2025-11-22 03:23:49.056729564 +0000 UTC m=+0.182082482 container attach 723be1801ec3bf0cd2ae6a8d8ee158c03e7141c53e9245ac96ad95c0901a7da8 (image=quay.io/ceph/ceph:v18, name=angry_goldwasser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:23:49 compute-0 ceph-mon[74657]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:23:49 compute-0 ceph-mon[74657]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1405416314' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:23:49 compute-0 systemd[1]: libpod-723be1801ec3bf0cd2ae6a8d8ee158c03e7141c53e9245ac96ad95c0901a7da8.scope: Deactivated successfully.
Nov 22 03:23:49 compute-0 podman[74807]: 2025-11-22 03:23:49.447656094 +0000 UTC m=+0.573008942 container died 723be1801ec3bf0cd2ae6a8d8ee158c03e7141c53e9245ac96ad95c0901a7da8 (image=quay.io/ceph/ceph:v18, name=angry_goldwasser, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-89cac6c52aa9511ed6314ec9fe5e1ea771941aa0ff3fa5b8abac6dcbea428a1c-merged.mount: Deactivated successfully.
Nov 22 03:23:49 compute-0 podman[74807]: 2025-11-22 03:23:49.507815077 +0000 UTC m=+0.633167935 container remove 723be1801ec3bf0cd2ae6a8d8ee158c03e7141c53e9245ac96ad95c0901a7da8 (image=quay.io/ceph/ceph:v18, name=angry_goldwasser, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:23:49 compute-0 systemd[1]: libpod-conmon-723be1801ec3bf0cd2ae6a8d8ee158c03e7141c53e9245ac96ad95c0901a7da8.scope: Deactivated successfully.
Nov 22 03:23:49 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 7adcc38b-6484-5de6-b879-33a0309153df...
Nov 22 03:23:49 compute-0 ceph-mon[74657]: from='client.? 192.168.122.100:0/1945353542' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 03:23:49 compute-0 ceph-mon[74657]: from='client.? 192.168.122.100:0/1945353542' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 03:23:49 compute-0 ceph-mon[74657]: from='client.? 192.168.122.100:0/1405416314' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:23:49 compute-0 ceph-mon[74657]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 22 03:23:49 compute-0 ceph-mon[74657]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 22 03:23:49 compute-0 ceph-mon[74657]: mon.compute-0@0(leader) e1 shutdown
Nov 22 03:23:49 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0[74653]: 2025-11-22T03:23:49.798+0000 7f579e701640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Nov 22 03:23:49 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0[74653]: 2025-11-22T03:23:49.798+0000 7f579e701640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Nov 22 03:23:49 compute-0 ceph-mon[74657]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 03:23:49 compute-0 ceph-mon[74657]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 03:23:49 compute-0 podman[74891]: 2025-11-22 03:23:49.829370431 +0000 UTC m=+0.087027155 container died b4c80c7d894e1e83a1bc5168786cb244079dbf3ac72d51fcd716259b9fcc8670 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa91dc61f7d361ee290bc75c3b5324d3633095231c4edaf12fc5124cc5e994c8-merged.mount: Deactivated successfully.
Nov 22 03:23:49 compute-0 podman[74891]: 2025-11-22 03:23:49.877957838 +0000 UTC m=+0.135614562 container remove b4c80c7d894e1e83a1bc5168786cb244079dbf3ac72d51fcd716259b9fcc8670 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:23:49 compute-0 bash[74891]: ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0
Nov 22 03:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:49 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 22 03:23:50 compute-0 systemd[1]: ceph-7adcc38b-6484-5de6-b879-33a0309153df@mon.compute-0.service: Deactivated successfully.
Nov 22 03:23:50 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 7adcc38b-6484-5de6-b879-33a0309153df.
Nov 22 03:23:50 compute-0 systemd[1]: ceph-7adcc38b-6484-5de6-b879-33a0309153df@mon.compute-0.service: Consumed 1.201s CPU time.
Nov 22 03:23:50 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 7adcc38b-6484-5de6-b879-33a0309153df...
Nov 22 03:23:50 compute-0 podman[74992]: 2025-11-22 03:23:50.308399644 +0000 UTC m=+0.051620488 container create ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e19a54e6909b6034165aedbe9d7e561ff511dd78760d0acb2af73ed2cd38ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e19a54e6909b6034165aedbe9d7e561ff511dd78760d0acb2af73ed2cd38ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e19a54e6909b6034165aedbe9d7e561ff511dd78760d0acb2af73ed2cd38ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e19a54e6909b6034165aedbe9d7e561ff511dd78760d0acb2af73ed2cd38ae/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:50 compute-0 podman[74992]: 2025-11-22 03:23:50.370687324 +0000 UTC m=+0.113908178 container init ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:23:50 compute-0 podman[74992]: 2025-11-22 03:23:50.282715915 +0000 UTC m=+0.025936859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:50 compute-0 podman[74992]: 2025-11-22 03:23:50.383416071 +0000 UTC m=+0.126636925 container start ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 22 03:23:50 compute-0 bash[74992]: ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000
Nov 22 03:23:50 compute-0 systemd[1]: Started Ceph mon.compute-0 for 7adcc38b-6484-5de6-b879-33a0309153df.
Nov 22 03:23:50 compute-0 ceph-mon[75011]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:23:50 compute-0 ceph-mon[75011]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Nov 22 03:23:50 compute-0 ceph-mon[75011]: pidfile_write: ignore empty --pid-file
Nov 22 03:23:50 compute-0 ceph-mon[75011]: load: jerasure load: lrc 
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Git sha 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: DB SUMMARY
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: DB Session ID:  RO0MN4JG72VR0TZSJMKP
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55680 ; 
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                                     Options.env: 0x5574927a8c40
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                                      Options.fs: PosixFileSystem
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                                Options.info_log: 0x5574942a1040
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                                 Options.wal_dir: 
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                    Options.write_buffer_manager: 0x5574942b0b40
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                               Options.row_cache: None
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                              Options.wal_filter: None
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.wal_compression: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.max_background_jobs: 2
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.max_total_wal_size: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:       Options.compaction_readahead_size: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Compression algorithms supported:
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         kZSTD supported: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         kXpressCompression supported: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         kBZip2Compression supported: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         kLZ4Compression supported: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         kZlibCompression supported: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         kSnappyCompression supported: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:           Options.merge_operator: 
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:        Options.compaction_filter: None
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5574942a0c40)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5574942991f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:        Options.write_buffer_size: 33554432
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:  Options.max_write_buffer_number: 2
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:          Options.compression: NoCompression
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.num_levels: 7
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 159d9642-0336-4475-a7ed-37f1dd054c28
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781830441814, "job": 1, "event": "recovery_started", "wal_files": [9]}
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781830444692, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55261, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53801, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51390, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781830, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781830444844, "job": 1, "event": "recovery_finished"}
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5574942c2e00
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: DB pointer 0x55749434c000
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:23:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     21.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 3.79 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 3.79 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5574942991f0#2 capacity: 512.00 MB usage: 1.73 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(2,0.95 KB,0.000181794%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
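
The stats dump above is RocksDB's standard report at DB open; every counter is at or near zero because the store was just created (one L0 flush, ~56 KB, no stalls, empty read-latency histogram). One oddity worth flagging: the block-cache "occupancy" of 18446744073709551615 is exactly 2**64 - 1, which suggests an unsigned 64-bit counter wrapped below zero rather than a real occupancy figure; at this stage it is cosmetic. A one-liner to confirm the arithmetic:

    # 18446744073709551615 is the maximum unsigned 64-bit value, which
    # is why the "occupancy" above reads as a wrapped counter, not a count.
    assert 18446744073709551615 == 2**64 - 1
    print(f"{2**64 - 1:,}")  # 18,446,744,073,709,551,615
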
Nov 22 03:23:50 compute-0 ceph-mon[75011]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mon.compute-0@-1(???) e1 preinit fsid 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mon.compute-0@-1(???).mds e1 new map
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mon.compute-0@0(probing) e1 win_standalone_election
Nov 22 03:23:50 compute-0 ceph-mon[75011]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 03:23:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 03:23:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Nov 22 03:23:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : fsmap 
Nov 22 03:23:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Nov 22 03:23:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Nov 22 03:23:50 compute-0 podman[75012]: 2025-11-22 03:23:50.494354968 +0000 UTC m=+0.070257201 container create 7a6ed23e3486487c73165e26fa8ec6b1cbfa54c70837bb6b14adaf3ef08c7f31 (image=quay.io/ceph/ceph:v18, name=nice_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Nov 22 03:23:50 compute-0 ceph-mon[75011]: monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Nov 22 03:23:50 compute-0 ceph-mon[75011]: fsmap 
Nov 22 03:23:50 compute-0 ceph-mon[75011]: osdmap e1: 0 total, 0 up, 0 in
Nov 22 03:23:50 compute-0 ceph-mon[75011]: mgrmap e1: no daemons active
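
With a single monitor there is no peer to probe, so win_standalone_election puts mon.compute-0 straight into quorum as rank 0, and it immediately logs the epoch-1 maps of an empty cluster: no OSDs, no filesystems, no active mgr. (The map summaries appear twice because journald captures both the log_channel(cluster) lines and the daemon's plain stderr.) The same quorum state can be read back with `ceph quorum_status`; a sketch, assuming the ceph CLI and an admin keyring are reachable on this host:

    import json
    import subprocess

    # A sketch of reading quorum state back from the new monitor.
    out = subprocess.run(
        ["ceph", "quorum_status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    qs = json.loads(out)
    print("leader:", qs["quorum_leader_name"])   # expected: compute-0
    print("quorum:", qs["quorum_names"])         # expected: ["compute-0"]
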
Nov 22 03:23:50 compute-0 systemd[1]: Started libpod-conmon-7a6ed23e3486487c73165e26fa8ec6b1cbfa54c70837bb6b14adaf3ef08c7f31.scope.
Nov 22 03:23:50 compute-0 podman[75012]: 2025-11-22 03:23:50.465059372 +0000 UTC m=+0.040961695 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a649c36f1ed06eb9e54c2c2bf0bb76e4d4c69b04dc3bdb8b7b525f86bd78f5dd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a649c36f1ed06eb9e54c2c2bf0bb76e4d4c69b04dc3bdb8b7b525f86bd78f5dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a649c36f1ed06eb9e54c2c2bf0bb76e4d4c69b04dc3bdb8b7b525f86bd78f5dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
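
The kernel's "supports timestamps until 2038" lines recur for every file podman bind-mounts into a container overlay: they mean the backing XFS filesystem was created without the bigtime feature, so inode timestamps cap at 0x7fffffff seconds (2038-01-19). Harmless here, but easy to check; a sketch, assuming xfsprogs is installed and recent enough to report the bigtime flag:

    import subprocess

    # A sketch: "bigtime=1" in xfs_info output means the filesystem
    # supports post-2038 timestamps. The mount point is an assumption.
    info = subprocess.run(
        ["xfs_info", "/var/lib/containers"],
        check=True, capture_output=True, text=True,
    ).stdout
    print("bigtime enabled" if "bigtime=1" in info
          else "timestamps capped at 2038")
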
Nov 22 03:23:50 compute-0 podman[75012]: 2025-11-22 03:23:50.596751119 +0000 UTC m=+0.172653322 container init 7a6ed23e3486487c73165e26fa8ec6b1cbfa54c70837bb6b14adaf3ef08c7f31 (image=quay.io/ceph/ceph:v18, name=nice_roentgen, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:23:50 compute-0 podman[75012]: 2025-11-22 03:23:50.605943243 +0000 UTC m=+0.181845456 container start 7a6ed23e3486487c73165e26fa8ec6b1cbfa54c70837bb6b14adaf3ef08c7f31 (image=quay.io/ceph/ceph:v18, name=nice_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:23:50 compute-0 podman[75012]: 2025-11-22 03:23:50.612434674 +0000 UTC m=+0.188336907 container attach 7a6ed23e3486487c73165e26fa8ec6b1cbfa54c70837bb6b14adaf3ef08c7f31 (image=quay.io/ceph/ceph:v18, name=nice_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:23:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Nov 22 03:23:51 compute-0 systemd[1]: libpod-7a6ed23e3486487c73165e26fa8ec6b1cbfa54c70837bb6b14adaf3ef08c7f31.scope: Deactivated successfully.
Nov 22 03:23:51 compute-0 conmon[75066]: conmon 7a6ed23e3486487c7316 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a6ed23e3486487c73165e26fa8ec6b1cbfa54c70837bb6b14adaf3ef08c7f31.scope/container/memory.events
Nov 22 03:23:51 compute-0 podman[75012]: 2025-11-22 03:23:51.048030727 +0000 UTC m=+0.623932940 container died 7a6ed23e3486487c73165e26fa8ec6b1cbfa54c70837bb6b14adaf3ef08c7f31 (image=quay.io/ceph/ceph:v18, name=nice_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a649c36f1ed06eb9e54c2c2bf0bb76e4d4c69b04dc3bdb8b7b525f86bd78f5dd-merged.mount: Deactivated successfully.
Nov 22 03:23:51 compute-0 podman[75012]: 2025-11-22 03:23:51.109010852 +0000 UTC m=+0.684913095 container remove 7a6ed23e3486487c73165e26fa8ec6b1cbfa54c70837bb6b14adaf3ef08c7f31 (image=quay.io/ceph/ceph:v18, name=nice_roentgen, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:23:51 compute-0 systemd[1]: libpod-conmon-7a6ed23e3486487c73165e26fa8ec6b1cbfa54c70837bb6b14adaf3ef08c7f31.scope: Deactivated successfully.
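
The whole container lifecycle above (create, start, attach, one mon_command, died, remove) fits inside one second: this is cephadm bootstrap running a throwaway ceph:v18 container per CLI call. This one issued `config set ... public_network`; the next one, just below, sets cluster_network the same way. The equivalent direct call, as a sketch; the "global" section and the CIDR are assumptions, since the audited command elides the value:

    import subprocess

    # Equivalent of the one-shot container above. The section ("global")
    # and the CIDR are assumed; the log shows only the option name.
    subprocess.run(
        ["ceph", "config", "set", "global", "public_network",
         "192.168.122.0/24"],
        check=True,
    )
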
Nov 22 03:23:51 compute-0 podman[75102]: 2025-11-22 03:23:51.198095011 +0000 UTC m=+0.060072292 container create 0739312b38aa0970b6133671398e58c3299d721247d688d10c05a3926f962161 (image=quay.io/ceph/ceph:v18, name=optimistic_bouman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:23:51 compute-0 systemd[1]: Started libpod-conmon-0739312b38aa0970b6133671398e58c3299d721247d688d10c05a3926f962161.scope.
Nov 22 03:23:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e4ae87c02ac6e3f70937b30f65024cb53c443015a93bf4b221d902b52d516a6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e4ae87c02ac6e3f70937b30f65024cb53c443015a93bf4b221d902b52d516a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e4ae87c02ac6e3f70937b30f65024cb53c443015a93bf4b221d902b52d516a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:51 compute-0 podman[75102]: 2025-11-22 03:23:51.17353452 +0000 UTC m=+0.035511851 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:51 compute-0 podman[75102]: 2025-11-22 03:23:51.28874336 +0000 UTC m=+0.150720701 container init 0739312b38aa0970b6133671398e58c3299d721247d688d10c05a3926f962161 (image=quay.io/ceph/ceph:v18, name=optimistic_bouman, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:23:51 compute-0 podman[75102]: 2025-11-22 03:23:51.29853627 +0000 UTC m=+0.160513561 container start 0739312b38aa0970b6133671398e58c3299d721247d688d10c05a3926f962161 (image=quay.io/ceph/ceph:v18, name=optimistic_bouman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:23:51 compute-0 podman[75102]: 2025-11-22 03:23:51.30344431 +0000 UTC m=+0.165421631 container attach 0739312b38aa0970b6133671398e58c3299d721247d688d10c05a3926f962161 (image=quay.io/ceph/ceph:v18, name=optimistic_bouman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:23:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Nov 22 03:23:51 compute-0 systemd[1]: libpod-0739312b38aa0970b6133671398e58c3299d721247d688d10c05a3926f962161.scope: Deactivated successfully.
Nov 22 03:23:51 compute-0 podman[75102]: 2025-11-22 03:23:51.723922533 +0000 UTC m=+0.585899824 container died 0739312b38aa0970b6133671398e58c3299d721247d688d10c05a3926f962161 (image=quay.io/ceph/ceph:v18, name=optimistic_bouman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:23:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e4ae87c02ac6e3f70937b30f65024cb53c443015a93bf4b221d902b52d516a6-merged.mount: Deactivated successfully.
Nov 22 03:23:51 compute-0 podman[75102]: 2025-11-22 03:23:51.776041512 +0000 UTC m=+0.638018763 container remove 0739312b38aa0970b6133671398e58c3299d721247d688d10c05a3926f962161 (image=quay.io/ceph/ceph:v18, name=optimistic_bouman, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:23:51 compute-0 systemd[1]: libpod-conmon-0739312b38aa0970b6133671398e58c3299d721247d688d10c05a3926f962161.scope: Deactivated successfully.
Nov 22 03:23:51 compute-0 systemd[1]: Reloading.
Nov 22 03:23:51 compute-0 systemd-sysv-generator[75184]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:23:51 compute-0 systemd-rc-local-generator[75178]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:23:52 compute-0 systemd[1]: Reloading.
Nov 22 03:23:52 compute-0 systemd-rc-local-generator[75221]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:23:52 compute-0 systemd-sysv-generator[75225]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
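
The two systemd "Reloading." passes bracket cephadm installing the mgr unit; the generator complaints are pre-existing host state (a legacy SysV network initscript and a non-executable /etc/rc.d/rc.local), not anything Ceph did. If rc.local is actually meant to run, the generator only picks it up once the file is executable; a sketch, only worth applying on hosts that use rc.local:

    import os
    import stat

    # A sketch: systemd-rc-local-generator skips /etc/rc.d/rc.local
    # unless it carries the execute bit. Apply only if rc.local is used.
    path = "/etc/rc.d/rc.local"
    os.chmod(path, os.stat(path).st_mode
             | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
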
Nov 22 03:23:52 compute-0 systemd[1]: Starting Ceph mgr.compute-0.wbwfxq for 7adcc38b-6484-5de6-b879-33a0309153df...
Nov 22 03:23:52 compute-0 podman[75275]: 2025-11-22 03:23:52.697480239 +0000 UTC m=+0.068610868 container create b2284a130048c3103be4538cd4c472e11808189f2c2ea589baebbfe7aecf770e (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4213f3df7af513bc3d3a0d4ae4432813607a72a76410bdd36401c46aff10256/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4213f3df7af513bc3d3a0d4ae4432813607a72a76410bdd36401c46aff10256/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4213f3df7af513bc3d3a0d4ae4432813607a72a76410bdd36401c46aff10256/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4213f3df7af513bc3d3a0d4ae4432813607a72a76410bdd36401c46aff10256/merged/var/lib/ceph/mgr/ceph-compute-0.wbwfxq supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:52 compute-0 podman[75275]: 2025-11-22 03:23:52.668259826 +0000 UTC m=+0.039390545 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:52 compute-0 podman[75275]: 2025-11-22 03:23:52.771334364 +0000 UTC m=+0.142464993 container init b2284a130048c3103be4538cd4c472e11808189f2c2ea589baebbfe7aecf770e (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:23:52 compute-0 podman[75275]: 2025-11-22 03:23:52.780498127 +0000 UTC m=+0.151628766 container start b2284a130048c3103be4538cd4c472e11808189f2c2ea589baebbfe7aecf770e (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:23:52 compute-0 bash[75275]: b2284a130048c3103be4538cd4c472e11808189f2c2ea589baebbfe7aecf770e
Nov 22 03:23:52 compute-0 systemd[1]: Started Ceph mgr.compute-0.wbwfxq for 7adcc38b-6484-5de6-b879-33a0309153df.
Nov 22 03:23:52 compute-0 ceph-mgr[75294]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:23:52 compute-0 ceph-mgr[75294]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 22 03:23:52 compute-0 ceph-mgr[75294]: pidfile_write: ignore empty --pid-file
Nov 22 03:23:52 compute-0 podman[75295]: 2025-11-22 03:23:52.883173595 +0000 UTC m=+0.055211282 container create 43060fbf3b56b259db15c08909db09171023bf64f3cb175ba71019b86e61742e (image=quay.io/ceph/ceph:v18, name=happy_wilson, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:23:52 compute-0 systemd[1]: Started libpod-conmon-43060fbf3b56b259db15c08909db09171023bf64f3cb175ba71019b86e61742e.scope.
Nov 22 03:23:52 compute-0 podman[75295]: 2025-11-22 03:23:52.859910539 +0000 UTC m=+0.031948246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:52 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'alerts'
Nov 22 03:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223126d825e9144b404098e612cd0c4a646613244ac976f5f3ac2282a9932334/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223126d825e9144b404098e612cd0c4a646613244ac976f5f3ac2282a9932334/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/223126d825e9144b404098e612cd0c4a646613244ac976f5f3ac2282a9932334/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:52 compute-0 podman[75295]: 2025-11-22 03:23:52.995416947 +0000 UTC m=+0.167454694 container init 43060fbf3b56b259db15c08909db09171023bf64f3cb175ba71019b86e61742e (image=quay.io/ceph/ceph:v18, name=happy_wilson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:23:53 compute-0 podman[75295]: 2025-11-22 03:23:53.00721356 +0000 UTC m=+0.179251247 container start 43060fbf3b56b259db15c08909db09171023bf64f3cb175ba71019b86e61742e (image=quay.io/ceph/ceph:v18, name=happy_wilson, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:23:53 compute-0 podman[75295]: 2025-11-22 03:23:53.011153293 +0000 UTC m=+0.183191000 container attach 43060fbf3b56b259db15c08909db09171023bf64f3cb175ba71019b86e61742e (image=quay.io/ceph/ceph:v18, name=happy_wilson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:23:53 compute-0 ceph-mgr[75294]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 03:23:53 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'balancer'
Nov 22 03:23:53 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:23:53.271+0000 7f2afbc83140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
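
Each "missing NOTIFY_TYPES member" line shows up twice because journald records both the ceph-mgr process and the container unit's stdout. The warning itself is benign: the reef module loader looks for an optional NOTIFY_TYPES class attribute declaring which cluster notifications a module's notify() handler consumes, and several in-tree modules simply never define it. A minimal sketch of that declaration, assuming the reef-era mgr_module API (importable only inside ceph-mgr itself):

    from mgr_module import MgrModule, NotifyType

    class Module(MgrModule):
        # Declaring this silences the loader warning and limits which
        # notifications are delivered; the two types here are examples.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.debug("notify %s %s", notify_type, notify_id)
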
Nov 22 03:23:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:23:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2300494745' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:23:53 compute-0 happy_wilson[75335]: 
Nov 22 03:23:53 compute-0 happy_wilson[75335]: {
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     "fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     "health": {
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "status": "HEALTH_OK",
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "checks": {},
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "mutes": []
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     },
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     "election_epoch": 5,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     "quorum": [
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         0
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     ],
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     "quorum_names": [
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "compute-0"
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     ],
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     "quorum_age": 2,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     "monmap": {
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "epoch": 1,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "min_mon_release_name": "reef",
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "num_mons": 1
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     },
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     "osdmap": {
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "epoch": 1,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "num_osds": 0,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "num_up_osds": 0,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "osd_up_since": 0,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "num_in_osds": 0,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "osd_in_since": 0,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "num_remapped_pgs": 0
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     },
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     "pgmap": {
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "pgs_by_state": [],
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "num_pgs": 0,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "num_pools": 0,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "num_objects": 0,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "data_bytes": 0,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "bytes_used": 0,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "bytes_avail": 0,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "bytes_total": 0
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     },
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     "fsmap": {
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "epoch": 1,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "by_rank": [],
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "up:standby": 0
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     },
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     "mgrmap": {
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "available": false,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "num_standbys": 0,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "modules": [
Nov 22 03:23:53 compute-0 happy_wilson[75335]:             "iostat",
Nov 22 03:23:53 compute-0 happy_wilson[75335]:             "nfs",
Nov 22 03:23:53 compute-0 happy_wilson[75335]:             "restful"
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         ],
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "services": {}
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     },
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     "servicemap": {
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "epoch": 1,
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "modified": "2025-11-22T03:23:47.511353+0000",
Nov 22 03:23:53 compute-0 happy_wilson[75335]:         "services": {}
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     },
Nov 22 03:23:53 compute-0 happy_wilson[75335]:     "progress_events": {}
Nov 22 03:23:53 compute-0 happy_wilson[75335]: }
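
The happy_wilson container is cephadm polling `ceph status --format json-pretty` (matching the audited mon_command above). The output confirms a healthy but empty bootstrap cluster, and "available": false under mgrmap is the field the poll is waiting on; the status calls repeat below until the new mgr registers. A sketch of the same readiness check, assuming the ceph CLI and admin keyring:

    import json
    import subprocess
    import time

    # A sketch of the readiness poll implied by the repeated status
    # calls: block until a mgr reports available.
    while True:
        status = json.loads(subprocess.run(
            ["ceph", "status", "--format", "json"],
            check=True, capture_output=True, text=True,
        ).stdout)
        if status["mgrmap"]["available"]:
            break
        time.sleep(2)
    print("mgr up; health:", status["health"]["status"])
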
Nov 22 03:23:53 compute-0 systemd[1]: libpod-43060fbf3b56b259db15c08909db09171023bf64f3cb175ba71019b86e61742e.scope: Deactivated successfully.
Nov 22 03:23:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2300494745' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:23:53 compute-0 ceph-mgr[75294]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 03:23:53 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'cephadm'
Nov 22 03:23:53 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:23:53.522+0000 7f2afbc83140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 03:23:53 compute-0 podman[75361]: 2025-11-22 03:23:53.52468251 +0000 UTC m=+0.045918287 container died 43060fbf3b56b259db15c08909db09171023bf64f3cb175ba71019b86e61742e (image=quay.io/ceph/ceph:v18, name=happy_wilson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:23:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-223126d825e9144b404098e612cd0c4a646613244ac976f5f3ac2282a9932334-merged.mount: Deactivated successfully.
Nov 22 03:23:53 compute-0 podman[75361]: 2025-11-22 03:23:53.571824549 +0000 UTC m=+0.093060336 container remove 43060fbf3b56b259db15c08909db09171023bf64f3cb175ba71019b86e61742e (image=quay.io/ceph/ceph:v18, name=happy_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:23:53 compute-0 systemd[1]: libpod-conmon-43060fbf3b56b259db15c08909db09171023bf64f3cb175ba71019b86e61742e.scope: Deactivated successfully.
Nov 22 03:23:55 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'crash'
Nov 22 03:23:55 compute-0 podman[75388]: 2025-11-22 03:23:55.659816911 +0000 UTC m=+0.053860256 container create 0577ed845646dc75895dbb0209c1dccc9ccc4d865d8ed924b94bf45e013fad2a (image=quay.io/ceph/ceph:v18, name=strange_faraday, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:23:55 compute-0 systemd[1]: Started libpod-conmon-0577ed845646dc75895dbb0209c1dccc9ccc4d865d8ed924b94bf45e013fad2a.scope.
Nov 22 03:23:55 compute-0 podman[75388]: 2025-11-22 03:23:55.635144939 +0000 UTC m=+0.029188274 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19cddba9754990a05c77b14440b22d9de272ccb961d47a01bfe912806d3a152b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19cddba9754990a05c77b14440b22d9de272ccb961d47a01bfe912806d3a152b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19cddba9754990a05c77b14440b22d9de272ccb961d47a01bfe912806d3a152b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:55 compute-0 podman[75388]: 2025-11-22 03:23:55.755461314 +0000 UTC m=+0.149504639 container init 0577ed845646dc75895dbb0209c1dccc9ccc4d865d8ed924b94bf45e013fad2a (image=quay.io/ceph/ceph:v18, name=strange_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:23:55 compute-0 ceph-mgr[75294]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 03:23:55 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'dashboard'
Nov 22 03:23:55 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:23:55.764+0000 7f2afbc83140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 03:23:55 compute-0 podman[75388]: 2025-11-22 03:23:55.768618092 +0000 UTC m=+0.162661407 container start 0577ed845646dc75895dbb0209c1dccc9ccc4d865d8ed924b94bf45e013fad2a (image=quay.io/ceph/ceph:v18, name=strange_faraday, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:23:55 compute-0 podman[75388]: 2025-11-22 03:23:55.772992978 +0000 UTC m=+0.167036303 container attach 0577ed845646dc75895dbb0209c1dccc9ccc4d865d8ed924b94bf45e013fad2a (image=quay.io/ceph/ceph:v18, name=strange_faraday, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:23:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:23:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3344623990' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:23:56 compute-0 strange_faraday[75405]: 
Nov 22 03:23:56 compute-0 strange_faraday[75405]: {
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     "fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     "health": {
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "status": "HEALTH_OK",
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "checks": {},
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "mutes": []
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     },
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     "election_epoch": 5,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     "quorum": [
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         0
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     ],
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     "quorum_names": [
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "compute-0"
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     ],
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     "quorum_age": 5,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     "monmap": {
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "epoch": 1,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "min_mon_release_name": "reef",
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "num_mons": 1
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     },
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     "osdmap": {
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "epoch": 1,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "num_osds": 0,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "num_up_osds": 0,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "osd_up_since": 0,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "num_in_osds": 0,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "osd_in_since": 0,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "num_remapped_pgs": 0
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     },
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     "pgmap": {
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "pgs_by_state": [],
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "num_pgs": 0,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "num_pools": 0,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "num_objects": 0,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "data_bytes": 0,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "bytes_used": 0,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "bytes_avail": 0,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "bytes_total": 0
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     },
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     "fsmap": {
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "epoch": 1,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "by_rank": [],
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "up:standby": 0
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     },
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     "mgrmap": {
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "available": false,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "num_standbys": 0,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "modules": [
Nov 22 03:23:56 compute-0 strange_faraday[75405]:             "iostat",
Nov 22 03:23:56 compute-0 strange_faraday[75405]:             "nfs",
Nov 22 03:23:56 compute-0 strange_faraday[75405]:             "restful"
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         ],
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "services": {}
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     },
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     "servicemap": {
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "epoch": 1,
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "modified": "2025-11-22T03:23:47.511353+0000",
Nov 22 03:23:56 compute-0 strange_faraday[75405]:         "services": {}
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     },
Nov 22 03:23:56 compute-0 strange_faraday[75405]:     "progress_events": {}
Nov 22 03:23:56 compute-0 strange_faraday[75405]: }
Nov 22 03:23:56 compute-0 systemd[1]: libpod-0577ed845646dc75895dbb0209c1dccc9ccc4d865d8ed924b94bf45e013fad2a.scope: Deactivated successfully.
Nov 22 03:23:56 compute-0 podman[75388]: 2025-11-22 03:23:56.191858458 +0000 UTC m=+0.585901783 container died 0577ed845646dc75895dbb0209c1dccc9ccc4d865d8ed924b94bf45e013fad2a (image=quay.io/ceph/ceph:v18, name=strange_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:23:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-19cddba9754990a05c77b14440b22d9de272ccb961d47a01bfe912806d3a152b-merged.mount: Deactivated successfully.
Nov 22 03:23:56 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3344623990' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:23:56 compute-0 podman[75388]: 2025-11-22 03:23:56.24366673 +0000 UTC m=+0.637710045 container remove 0577ed845646dc75895dbb0209c1dccc9ccc4d865d8ed924b94bf45e013fad2a (image=quay.io/ceph/ceph:v18, name=strange_faraday, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:23:56 compute-0 systemd[1]: libpod-conmon-0577ed845646dc75895dbb0209c1dccc9ccc4d865d8ed924b94bf45e013fad2a.scope: Deactivated successfully.
Nov 22 03:23:57 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'devicehealth'
Nov 22 03:23:57 compute-0 ceph-mgr[75294]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 03:23:57 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'diskprediction_local'
Nov 22 03:23:57 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:23:57.377+0000 7f2afbc83140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 03:23:57 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 22 03:23:57 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 22 03:23:57 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]:   from numpy import show_config as show_numpy_config
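
The scipy UserWarning comes from diskprediction_local: ceph-mgr runs each Python module in its own CPython sub-interpreter, and numpy (pulled in via scipy) does not fully support sub-interpreters, exactly as the warning says. It is noise unless disk failure prediction is wanted; a sketch of switching the module off once the mgr is active, using the standard `ceph mgr module` operation:

    import subprocess

    # A sketch: disabling diskprediction_local removes the numpy
    # sub-interpreter warning. Run only after the mgr is active.
    subprocess.run(
        ["ceph", "mgr", "module", "disable", "diskprediction_local"],
        check=True,
    )
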
Nov 22 03:23:57 compute-0 ceph-mgr[75294]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 03:23:57 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:23:57.861+0000 7f2afbc83140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 03:23:57 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'influx'
Nov 22 03:23:58 compute-0 ceph-mgr[75294]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 03:23:58 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'insights'
Nov 22 03:23:58 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:23:58.077+0000 7f2afbc83140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 03:23:58 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'iostat'
Nov 22 03:23:58 compute-0 podman[75443]: 2025-11-22 03:23:58.33495035 +0000 UTC m=+0.059750253 container create 538010d4215cfa22ed69276b266b033c8819f50db61e7a13638e2248bdf1c7a8 (image=quay.io/ceph/ceph:v18, name=determined_mccarthy, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:23:58 compute-0 systemd[1]: Started libpod-conmon-538010d4215cfa22ed69276b266b033c8819f50db61e7a13638e2248bdf1c7a8.scope.
Nov 22 03:23:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:23:58 compute-0 podman[75443]: 2025-11-22 03:23:58.311645523 +0000 UTC m=+0.036445416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1756f41ab8d2e1888602714b307d2c927c108dcce09675fc19a08d9bf78bac4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1756f41ab8d2e1888602714b307d2c927c108dcce09675fc19a08d9bf78bac4e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1756f41ab8d2e1888602714b307d2c927c108dcce09675fc19a08d9bf78bac4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:23:58 compute-0 podman[75443]: 2025-11-22 03:23:58.420909796 +0000 UTC m=+0.145709719 container init 538010d4215cfa22ed69276b266b033c8819f50db61e7a13638e2248bdf1c7a8 (image=quay.io/ceph/ceph:v18, name=determined_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:23:58 compute-0 podman[75443]: 2025-11-22 03:23:58.431846685 +0000 UTC m=+0.156646568 container start 538010d4215cfa22ed69276b266b033c8819f50db61e7a13638e2248bdf1c7a8 (image=quay.io/ceph/ceph:v18, name=determined_mccarthy, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:23:58 compute-0 podman[75443]: 2025-11-22 03:23:58.43581593 +0000 UTC m=+0.160615843 container attach 538010d4215cfa22ed69276b266b033c8819f50db61e7a13638e2248bdf1c7a8 (image=quay.io/ceph/ceph:v18, name=determined_mccarthy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:23:58 compute-0 ceph-mgr[75294]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 03:23:58 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'k8sevents'
Nov 22 03:23:58 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:23:58.554+0000 7f2afbc83140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 03:23:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:23:58 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/598638735' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]: 
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]: {
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     "fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     "health": {
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "status": "HEALTH_OK",
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "checks": {},
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "mutes": []
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     },
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     "election_epoch": 5,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     "quorum": [
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         0
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     ],
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     "quorum_names": [
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "compute-0"
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     ],
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     "quorum_age": 8,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     "monmap": {
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "epoch": 1,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "min_mon_release_name": "reef",
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "num_mons": 1
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     },
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     "osdmap": {
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "epoch": 1,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "num_osds": 0,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "num_up_osds": 0,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "osd_up_since": 0,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "num_in_osds": 0,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "osd_in_since": 0,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "num_remapped_pgs": 0
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     },
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     "pgmap": {
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "pgs_by_state": [],
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "num_pgs": 0,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "num_pools": 0,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "num_objects": 0,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "data_bytes": 0,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "bytes_used": 0,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "bytes_avail": 0,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "bytes_total": 0
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     },
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     "fsmap": {
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "epoch": 1,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "by_rank": [],
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "up:standby": 0
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     },
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     "mgrmap": {
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "available": false,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "num_standbys": 0,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "modules": [
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:             "iostat",
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:             "nfs",
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:             "restful"
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         ],
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "services": {}
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     },
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     "servicemap": {
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "epoch": 1,
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "modified": "2025-11-22T03:23:47.511353+0000",
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:         "services": {}
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     },
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]:     "progress_events": {}
Nov 22 03:23:58 compute-0 determined_mccarthy[75458]: }
Nov 22 03:23:58 compute-0 systemd[1]: libpod-538010d4215cfa22ed69276b266b033c8819f50db61e7a13638e2248bdf1c7a8.scope: Deactivated successfully.
Nov 22 03:23:58 compute-0 podman[75443]: 2025-11-22 03:23:58.822846887 +0000 UTC m=+0.547646770 container died 538010d4215cfa22ed69276b266b033c8819f50db61e7a13638e2248bdf1c7a8 (image=quay.io/ceph/ceph:v18, name=determined_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-1756f41ab8d2e1888602714b307d2c927c108dcce09675fc19a08d9bf78bac4e-merged.mount: Deactivated successfully.
Nov 22 03:23:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/598638735' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:23:58 compute-0 podman[75443]: 2025-11-22 03:23:58.871233079 +0000 UTC m=+0.596032952 container remove 538010d4215cfa22ed69276b266b033c8819f50db61e7a13638e2248bdf1c7a8 (image=quay.io/ceph/ceph:v18, name=determined_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 03:23:58 compute-0 systemd[1]: libpod-conmon-538010d4215cfa22ed69276b266b033c8819f50db61e7a13638e2248bdf1c7a8.scope: Deactivated successfully.
Nov 22 03:24:00 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'localpool'
Nov 22 03:24:00 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'mds_autoscaler'
Nov 22 03:24:00 compute-0 podman[75498]: 2025-11-22 03:24:00.944671456 +0000 UTC m=+0.049028379 container create 6409708c4e8ba92c674b090d45d8a2a2fe16d9ecfe6b1ce0a4581465c00bd17e (image=quay.io/ceph/ceph:v18, name=youthful_satoshi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:24:00 compute-0 systemd[1]: Started libpod-conmon-6409708c4e8ba92c674b090d45d8a2a2fe16d9ecfe6b1ce0a4581465c00bd17e.scope.
Nov 22 03:24:01 compute-0 podman[75498]: 2025-11-22 03:24:00.923199627 +0000 UTC m=+0.027556550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ac4e639ec85285e6654e63189afd71ee41bff23184e4e533cda3918d9a8796/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ac4e639ec85285e6654e63189afd71ee41bff23184e4e533cda3918d9a8796/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75ac4e639ec85285e6654e63189afd71ee41bff23184e4e533cda3918d9a8796/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:01 compute-0 podman[75498]: 2025-11-22 03:24:01.05585931 +0000 UTC m=+0.160216273 container init 6409708c4e8ba92c674b090d45d8a2a2fe16d9ecfe6b1ce0a4581465c00bd17e (image=quay.io/ceph/ceph:v18, name=youthful_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:24:01 compute-0 podman[75498]: 2025-11-22 03:24:01.062607839 +0000 UTC m=+0.166964732 container start 6409708c4e8ba92c674b090d45d8a2a2fe16d9ecfe6b1ce0a4581465c00bd17e (image=quay.io/ceph/ceph:v18, name=youthful_satoshi, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:24:01 compute-0 podman[75498]: 2025-11-22 03:24:01.066627615 +0000 UTC m=+0.170984538 container attach 6409708c4e8ba92c674b090d45d8a2a2fe16d9ecfe6b1ce0a4581465c00bd17e (image=quay.io/ceph/ceph:v18, name=youthful_satoshi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:24:01 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'mirroring'
Nov 22 03:24:01 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'nfs'
Nov 22 03:24:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:24:01 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/280464181' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]: 
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]: {
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     "fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     "health": {
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "status": "HEALTH_OK",
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "checks": {},
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "mutes": []
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     },
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     "election_epoch": 5,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     "quorum": [
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         0
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     ],
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     "quorum_names": [
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "compute-0"
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     ],
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     "quorum_age": 11,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     "monmap": {
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "epoch": 1,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "min_mon_release_name": "reef",
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "num_mons": 1
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     },
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     "osdmap": {
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "epoch": 1,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "num_osds": 0,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "num_up_osds": 0,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "osd_up_since": 0,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "num_in_osds": 0,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "osd_in_since": 0,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "num_remapped_pgs": 0
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     },
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     "pgmap": {
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "pgs_by_state": [],
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "num_pgs": 0,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "num_pools": 0,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "num_objects": 0,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "data_bytes": 0,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "bytes_used": 0,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "bytes_avail": 0,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "bytes_total": 0
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     },
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     "fsmap": {
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "epoch": 1,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "by_rank": [],
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "up:standby": 0
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     },
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     "mgrmap": {
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "available": false,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "num_standbys": 0,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "modules": [
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:             "iostat",
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:             "nfs",
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:             "restful"
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         ],
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "services": {}
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     },
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     "servicemap": {
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "epoch": 1,
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "modified": "2025-11-22T03:23:47.511353+0000",
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:         "services": {}
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     },
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]:     "progress_events": {}
Nov 22 03:24:01 compute-0 youthful_satoshi[75515]: }
Nov 22 03:24:01 compute-0 systemd[1]: libpod-6409708c4e8ba92c674b090d45d8a2a2fe16d9ecfe6b1ce0a4581465c00bd17e.scope: Deactivated successfully.
Nov 22 03:24:01 compute-0 podman[75498]: 2025-11-22 03:24:01.486688917 +0000 UTC m=+0.591045820 container died 6409708c4e8ba92c674b090d45d8a2a2fe16d9ecfe6b1ce0a4581465c00bd17e (image=quay.io/ceph/ceph:v18, name=youthful_satoshi, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:24:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-75ac4e639ec85285e6654e63189afd71ee41bff23184e4e533cda3918d9a8796-merged.mount: Deactivated successfully.
Nov 22 03:24:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/280464181' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:24:01 compute-0 podman[75498]: 2025-11-22 03:24:01.535056947 +0000 UTC m=+0.639413850 container remove 6409708c4e8ba92c674b090d45d8a2a2fe16d9ecfe6b1ce0a4581465c00bd17e (image=quay.io/ceph/ceph:v18, name=youthful_satoshi, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:24:01 compute-0 systemd[1]: libpod-conmon-6409708c4e8ba92c674b090d45d8a2a2fe16d9ecfe6b1ce0a4581465c00bd17e.scope: Deactivated successfully.
Nov 22 03:24:02 compute-0 ceph-mgr[75294]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 03:24:02 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'orchestrator'
Nov 22 03:24:02 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:02.092+0000 7f2afbc83140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 03:24:02 compute-0 ceph-mgr[75294]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 03:24:02 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'osd_perf_query'
Nov 22 03:24:02 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:02.752+0000 7f2afbc83140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 03:24:03 compute-0 ceph-mgr[75294]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 03:24:03 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'osd_support'
Nov 22 03:24:03 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:03.021+0000 7f2afbc83140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 03:24:03 compute-0 ceph-mgr[75294]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 03:24:03 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'pg_autoscaler'
Nov 22 03:24:03 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:03.253+0000 7f2afbc83140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 03:24:03 compute-0 ceph-mgr[75294]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 03:24:03 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'progress'
Nov 22 03:24:03 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:03.519+0000 7f2afbc83140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 03:24:03 compute-0 podman[75553]: 2025-11-22 03:24:03.606528733 +0000 UTC m=+0.048357982 container create e454d9882ecfccce897abfbbd2025c5be704bc6bd5d273542cdb81af0df8dc50 (image=quay.io/ceph/ceph:v18, name=elated_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:24:03 compute-0 systemd[1]: Started libpod-conmon-e454d9882ecfccce897abfbbd2025c5be704bc6bd5d273542cdb81af0df8dc50.scope.
Nov 22 03:24:03 compute-0 podman[75553]: 2025-11-22 03:24:03.586681308 +0000 UTC m=+0.028510567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bf942c4965b26c29018731b2f33b27eb6e83913a8a2ec02b02a4af1b2c85f3a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bf942c4965b26c29018731b2f33b27eb6e83913a8a2ec02b02a4af1b2c85f3a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bf942c4965b26c29018731b2f33b27eb6e83913a8a2ec02b02a4af1b2c85f3a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:03 compute-0 podman[75553]: 2025-11-22 03:24:03.70914639 +0000 UTC m=+0.150975689 container init e454d9882ecfccce897abfbbd2025c5be704bc6bd5d273542cdb81af0df8dc50 (image=quay.io/ceph/ceph:v18, name=elated_neumann, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:03 compute-0 podman[75553]: 2025-11-22 03:24:03.715853558 +0000 UTC m=+0.157682827 container start e454d9882ecfccce897abfbbd2025c5be704bc6bd5d273542cdb81af0df8dc50 (image=quay.io/ceph/ceph:v18, name=elated_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:03 compute-0 podman[75553]: 2025-11-22 03:24:03.720415128 +0000 UTC m=+0.162244407 container attach e454d9882ecfccce897abfbbd2025c5be704bc6bd5d273542cdb81af0df8dc50 (image=quay.io/ceph/ceph:v18, name=elated_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:24:03 compute-0 ceph-mgr[75294]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 03:24:03 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'prometheus'
Nov 22 03:24:03 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:03.769+0000 7f2afbc83140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 03:24:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:24:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1817403256' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:24:04 compute-0 elated_neumann[75570]: 
Nov 22 03:24:04 compute-0 elated_neumann[75570]: {
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     "fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     "health": {
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "status": "HEALTH_OK",
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "checks": {},
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "mutes": []
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     },
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     "election_epoch": 5,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     "quorum": [
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         0
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     ],
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     "quorum_names": [
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "compute-0"
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     ],
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     "quorum_age": 13,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     "monmap": {
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "epoch": 1,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "min_mon_release_name": "reef",
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "num_mons": 1
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     },
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     "osdmap": {
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "epoch": 1,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "num_osds": 0,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "num_up_osds": 0,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "osd_up_since": 0,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "num_in_osds": 0,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "osd_in_since": 0,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "num_remapped_pgs": 0
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     },
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     "pgmap": {
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "pgs_by_state": [],
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "num_pgs": 0,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "num_pools": 0,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "num_objects": 0,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "data_bytes": 0,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "bytes_used": 0,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "bytes_avail": 0,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "bytes_total": 0
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     },
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     "fsmap": {
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "epoch": 1,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "by_rank": [],
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "up:standby": 0
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     },
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     "mgrmap": {
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "available": false,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "num_standbys": 0,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "modules": [
Nov 22 03:24:04 compute-0 elated_neumann[75570]:             "iostat",
Nov 22 03:24:04 compute-0 elated_neumann[75570]:             "nfs",
Nov 22 03:24:04 compute-0 elated_neumann[75570]:             "restful"
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         ],
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "services": {}
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     },
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     "servicemap": {
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "epoch": 1,
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "modified": "2025-11-22T03:23:47.511353+0000",
Nov 22 03:24:04 compute-0 elated_neumann[75570]:         "services": {}
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     },
Nov 22 03:24:04 compute-0 elated_neumann[75570]:     "progress_events": {}
Nov 22 03:24:04 compute-0 elated_neumann[75570]: }
Nov 22 03:24:04 compute-0 systemd[1]: libpod-e454d9882ecfccce897abfbbd2025c5be704bc6bd5d273542cdb81af0df8dc50.scope: Deactivated successfully.
Nov 22 03:24:04 compute-0 podman[75553]: 2025-11-22 03:24:04.16123432 +0000 UTC m=+0.603063569 container died e454d9882ecfccce897abfbbd2025c5be704bc6bd5d273542cdb81af0df8dc50 (image=quay.io/ceph/ceph:v18, name=elated_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:24:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bf942c4965b26c29018731b2f33b27eb6e83913a8a2ec02b02a4af1b2c85f3a-merged.mount: Deactivated successfully.
Nov 22 03:24:04 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1817403256' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:24:04 compute-0 podman[75553]: 2025-11-22 03:24:04.219672987 +0000 UTC m=+0.661502246 container remove e454d9882ecfccce897abfbbd2025c5be704bc6bd5d273542cdb81af0df8dc50 (image=quay.io/ceph/ceph:v18, name=elated_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:24:04 compute-0 systemd[1]: libpod-conmon-e454d9882ecfccce897abfbbd2025c5be704bc6bd5d273542cdb81af0df8dc50.scope: Deactivated successfully.
Nov 22 03:24:04 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:04.775+0000 7f2afbc83140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 03:24:04 compute-0 ceph-mgr[75294]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 03:24:04 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'rbd_support'
Nov 22 03:24:05 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:05.073+0000 7f2afbc83140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 03:24:05 compute-0 ceph-mgr[75294]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 03:24:05 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'restful'
Nov 22 03:24:05 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'rgw'
Nov 22 03:24:06 compute-0 podman[75609]: 2025-11-22 03:24:06.305568385 +0000 UTC m=+0.061393637 container create 1c21ade45edcf9261317c16b6eedbdc3e3517e136cbe13f4beca914ad64263c3 (image=quay.io/ceph/ceph:v18, name=nervous_curran, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:24:06 compute-0 systemd[1]: Started libpod-conmon-1c21ade45edcf9261317c16b6eedbdc3e3517e136cbe13f4beca914ad64263c3.scope.
Nov 22 03:24:06 compute-0 podman[75609]: 2025-11-22 03:24:06.272785526 +0000 UTC m=+0.028610828 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a1b94ba831bd8ff2f90972f9c52bb50b1f1f524ee0e42fa18463df268d09ad3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a1b94ba831bd8ff2f90972f9c52bb50b1f1f524ee0e42fa18463df268d09ad3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a1b94ba831bd8ff2f90972f9c52bb50b1f1f524ee0e42fa18463df268d09ad3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:06 compute-0 podman[75609]: 2025-11-22 03:24:06.397842937 +0000 UTC m=+0.153668169 container init 1c21ade45edcf9261317c16b6eedbdc3e3517e136cbe13f4beca914ad64263c3 (image=quay.io/ceph/ceph:v18, name=nervous_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:24:06 compute-0 podman[75609]: 2025-11-22 03:24:06.409285901 +0000 UTC m=+0.165111113 container start 1c21ade45edcf9261317c16b6eedbdc3e3517e136cbe13f4beca914ad64263c3 (image=quay.io/ceph/ceph:v18, name=nervous_curran, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:24:06 compute-0 podman[75609]: 2025-11-22 03:24:06.412586458 +0000 UTC m=+0.168411760 container attach 1c21ade45edcf9261317c16b6eedbdc3e3517e136cbe13f4beca914ad64263c3 (image=quay.io/ceph/ceph:v18, name=nervous_curran, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 03:24:06 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:06.463+0000 7f2afbc83140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 03:24:06 compute-0 ceph-mgr[75294]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 03:24:06 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'rook'
Nov 22 03:24:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:24:06 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351220711' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:24:06 compute-0 nervous_curran[75625]: 
Nov 22 03:24:06 compute-0 nervous_curran[75625]: {
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     "fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     "health": {
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "status": "HEALTH_OK",
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "checks": {},
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "mutes": []
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     },
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     "election_epoch": 5,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     "quorum": [
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         0
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     ],
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     "quorum_names": [
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "compute-0"
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     ],
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     "quorum_age": 16,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     "monmap": {
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "epoch": 1,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "min_mon_release_name": "reef",
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "num_mons": 1
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     },
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     "osdmap": {
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "epoch": 1,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "num_osds": 0,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "num_up_osds": 0,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "osd_up_since": 0,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "num_in_osds": 0,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "osd_in_since": 0,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "num_remapped_pgs": 0
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     },
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     "pgmap": {
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "pgs_by_state": [],
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "num_pgs": 0,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "num_pools": 0,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "num_objects": 0,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "data_bytes": 0,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "bytes_used": 0,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "bytes_avail": 0,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "bytes_total": 0
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     },
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     "fsmap": {
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "epoch": 1,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "by_rank": [],
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "up:standby": 0
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     },
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     "mgrmap": {
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "available": false,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "num_standbys": 0,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "modules": [
Nov 22 03:24:06 compute-0 nervous_curran[75625]:             "iostat",
Nov 22 03:24:06 compute-0 nervous_curran[75625]:             "nfs",
Nov 22 03:24:06 compute-0 nervous_curran[75625]:             "restful"
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         ],
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "services": {}
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     },
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     "servicemap": {
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "epoch": 1,
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "modified": "2025-11-22T03:23:47.511353+0000",
Nov 22 03:24:06 compute-0 nervous_curran[75625]:         "services": {}
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     },
Nov 22 03:24:06 compute-0 nervous_curran[75625]:     "progress_events": {}
Nov 22 03:24:06 compute-0 nervous_curran[75625]: }
Nov 22 03:24:06 compute-0 systemd[1]: libpod-1c21ade45edcf9261317c16b6eedbdc3e3517e136cbe13f4beca914ad64263c3.scope: Deactivated successfully.
Nov 22 03:24:06 compute-0 podman[75609]: 2025-11-22 03:24:06.785542532 +0000 UTC m=+0.541367784 container died 1c21ade45edcf9261317c16b6eedbdc3e3517e136cbe13f4beca914ad64263c3 (image=quay.io/ceph/ceph:v18, name=nervous_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a1b94ba831bd8ff2f90972f9c52bb50b1f1f524ee0e42fa18463df268d09ad3-merged.mount: Deactivated successfully.
Nov 22 03:24:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/351220711' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:24:06 compute-0 podman[75609]: 2025-11-22 03:24:06.833539973 +0000 UTC m=+0.589365185 container remove 1c21ade45edcf9261317c16b6eedbdc3e3517e136cbe13f4beca914ad64263c3 (image=quay.io/ceph/ceph:v18, name=nervous_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:24:06 compute-0 systemd[1]: libpod-conmon-1c21ade45edcf9261317c16b6eedbdc3e3517e136cbe13f4beca914ad64263c3.scope: Deactivated successfully.
Nov 22 03:24:08 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:08.508+0000 7f2afbc83140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 03:24:08 compute-0 ceph-mgr[75294]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 03:24:08 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'selftest'
Nov 22 03:24:08 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:08.743+0000 7f2afbc83140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 03:24:08 compute-0 ceph-mgr[75294]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 03:24:08 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'snap_schedule'
Nov 22 03:24:08 compute-0 podman[75661]: 2025-11-22 03:24:08.916065551 +0000 UTC m=+0.054700169 container create 53cd4dd1e2ac14c758baaf01119d3487a9cbe76ce9b0dcce14294f46885c03cd (image=quay.io/ceph/ceph:v18, name=cool_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:24:08 compute-0 systemd[1]: Started libpod-conmon-53cd4dd1e2ac14c758baaf01119d3487a9cbe76ce9b0dcce14294f46885c03cd.scope.
Nov 22 03:24:08 compute-0 podman[75661]: 2025-11-22 03:24:08.89215395 +0000 UTC m=+0.030788568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:09 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:08.999+0000 7f2afbc83140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 03:24:09 compute-0 ceph-mgr[75294]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 03:24:09 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'stats'
Nov 22 03:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b894db068360da89cf1b4da1506a29d730b492a0a00ef73d92c873174d4e49/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b894db068360da89cf1b4da1506a29d730b492a0a00ef73d92c873174d4e49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b894db068360da89cf1b4da1506a29d730b492a0a00ef73d92c873174d4e49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:09 compute-0 podman[75661]: 2025-11-22 03:24:09.019527724 +0000 UTC m=+0.158162392 container init 53cd4dd1e2ac14c758baaf01119d3487a9cbe76ce9b0dcce14294f46885c03cd (image=quay.io/ceph/ceph:v18, name=cool_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:09 compute-0 podman[75661]: 2025-11-22 03:24:09.02892832 +0000 UTC m=+0.167562908 container start 53cd4dd1e2ac14c758baaf01119d3487a9cbe76ce9b0dcce14294f46885c03cd (image=quay.io/ceph/ceph:v18, name=cool_borg, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:24:09 compute-0 podman[75661]: 2025-11-22 03:24:09.03360234 +0000 UTC m=+0.172237028 container attach 53cd4dd1e2ac14c758baaf01119d3487a9cbe76ce9b0dcce14294f46885c03cd (image=quay.io/ceph/ceph:v18, name=cool_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:24:09 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'status'
Nov 22 03:24:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:24:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3803400768' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:24:09 compute-0 cool_borg[75678]: 
Nov 22 03:24:09 compute-0 cool_borg[75678]: {
Nov 22 03:24:09 compute-0 cool_borg[75678]:     "fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:24:09 compute-0 cool_borg[75678]:     "health": {
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "status": "HEALTH_OK",
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "checks": {},
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "mutes": []
Nov 22 03:24:09 compute-0 cool_borg[75678]:     },
Nov 22 03:24:09 compute-0 cool_borg[75678]:     "election_epoch": 5,
Nov 22 03:24:09 compute-0 cool_borg[75678]:     "quorum": [
Nov 22 03:24:09 compute-0 cool_borg[75678]:         0
Nov 22 03:24:09 compute-0 cool_borg[75678]:     ],
Nov 22 03:24:09 compute-0 cool_borg[75678]:     "quorum_names": [
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "compute-0"
Nov 22 03:24:09 compute-0 cool_borg[75678]:     ],
Nov 22 03:24:09 compute-0 cool_borg[75678]:     "quorum_age": 18,
Nov 22 03:24:09 compute-0 cool_borg[75678]:     "monmap": {
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "epoch": 1,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "min_mon_release_name": "reef",
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "num_mons": 1
Nov 22 03:24:09 compute-0 cool_borg[75678]:     },
Nov 22 03:24:09 compute-0 cool_borg[75678]:     "osdmap": {
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "epoch": 1,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "num_osds": 0,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "num_up_osds": 0,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "osd_up_since": 0,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "num_in_osds": 0,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "osd_in_since": 0,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "num_remapped_pgs": 0
Nov 22 03:24:09 compute-0 cool_borg[75678]:     },
Nov 22 03:24:09 compute-0 cool_borg[75678]:     "pgmap": {
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "pgs_by_state": [],
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "num_pgs": 0,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "num_pools": 0,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "num_objects": 0,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "data_bytes": 0,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "bytes_used": 0,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "bytes_avail": 0,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "bytes_total": 0
Nov 22 03:24:09 compute-0 cool_borg[75678]:     },
Nov 22 03:24:09 compute-0 cool_borg[75678]:     "fsmap": {
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "epoch": 1,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "by_rank": [],
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "up:standby": 0
Nov 22 03:24:09 compute-0 cool_borg[75678]:     },
Nov 22 03:24:09 compute-0 cool_borg[75678]:     "mgrmap": {
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "available": false,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "num_standbys": 0,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "modules": [
Nov 22 03:24:09 compute-0 cool_borg[75678]:             "iostat",
Nov 22 03:24:09 compute-0 cool_borg[75678]:             "nfs",
Nov 22 03:24:09 compute-0 cool_borg[75678]:             "restful"
Nov 22 03:24:09 compute-0 cool_borg[75678]:         ],
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "services": {}
Nov 22 03:24:09 compute-0 cool_borg[75678]:     },
Nov 22 03:24:09 compute-0 cool_borg[75678]:     "servicemap": {
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "epoch": 1,
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "modified": "2025-11-22T03:23:47.511353+0000",
Nov 22 03:24:09 compute-0 cool_borg[75678]:         "services": {}
Nov 22 03:24:09 compute-0 cool_borg[75678]:     },
Nov 22 03:24:09 compute-0 cool_borg[75678]:     "progress_events": {}
Nov 22 03:24:09 compute-0 cool_borg[75678]: }
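The JSON block above is the output of a "ceph status --format json-pretty" call that the bootstrap dispatched through a short-lived container (cool_borg); the matching audit line below is the "status" dispatch from client.admin. A minimal sketch of pulling the interesting fields out of one such dump, assuming the text has been captured into a string (the names raw and parse_status are illustrative, not anything the bootstrap itself uses):

    import json

    def parse_status(raw: str) -> None:
        # One "ceph status --format json-pretty" dump, as printed above.
        s = json.loads(raw)
        health = s["health"]["status"]        # "HEALTH_OK" in this capture
        mgr_up = s["mgrmap"]["available"]     # false until the active mgr finishes starting
        osds = s["osdmap"]["num_osds"]        # still 0 at this stage of the bootstrap
        print(f"health={health} mgr_available={mgr_up} osds={osds}")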
Nov 22 03:24:09 compute-0 systemd[1]: libpod-53cd4dd1e2ac14c758baaf01119d3487a9cbe76ce9b0dcce14294f46885c03cd.scope: Deactivated successfully.
Nov 22 03:24:09 compute-0 podman[75661]: 2025-11-22 03:24:09.444158457 +0000 UTC m=+0.582793035 container died 53cd4dd1e2ac14c758baaf01119d3487a9cbe76ce9b0dcce14294f46885c03cd (image=quay.io/ceph/ceph:v18, name=cool_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:24:09 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3803400768' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:24:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3b894db068360da89cf1b4da1506a29d730b492a0a00ef73d92c873174d4e49-merged.mount: Deactivated successfully.
Nov 22 03:24:09 compute-0 podman[75661]: 2025-11-22 03:24:09.50217066 +0000 UTC m=+0.640805258 container remove 53cd4dd1e2ac14c758baaf01119d3487a9cbe76ce9b0dcce14294f46885c03cd (image=quay.io/ceph/ceph:v18, name=cool_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:24:09 compute-0 systemd[1]: libpod-conmon-53cd4dd1e2ac14c758baaf01119d3487a9cbe76ce9b0dcce14294f46885c03cd.scope: Deactivated successfully.
Nov 22 03:24:09 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:09.524+0000 7f2afbc83140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 03:24:09 compute-0 ceph-mgr[75294]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 03:24:09 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'telegraf'
Nov 22 03:24:09 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:09.752+0000 7f2afbc83140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 03:24:09 compute-0 ceph-mgr[75294]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 03:24:09 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'telemetry'
Nov 22 03:24:10 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:10.375+0000 7f2afbc83140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 03:24:10 compute-0 ceph-mgr[75294]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 03:24:10 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'test_orchestrator'
Nov 22 03:24:11 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:11.048+0000 7f2afbc83140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 03:24:11 compute-0 ceph-mgr[75294]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 03:24:11 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'volumes'
Nov 22 03:24:11 compute-0 podman[75716]: 2025-11-22 03:24:11.659599231 +0000 UTC m=+0.121777008 container create d6865d93ef3f9a8091b2abdf157cf0bd1e9e2fc20642708c6d41c44885fc455d (image=quay.io/ceph/ceph:v18, name=clever_swanson, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:24:11 compute-0 podman[75716]: 2025-11-22 03:24:11.581629173 +0000 UTC m=+0.043807000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:11 compute-0 systemd[1]: Started libpod-conmon-d6865d93ef3f9a8091b2abdf157cf0bd1e9e2fc20642708c6d41c44885fc455d.scope.
Nov 22 03:24:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:11 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:11.736+0000 7f2afbc83140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 03:24:11 compute-0 ceph-mgr[75294]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 03:24:11 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'zabbix'
Nov 22 03:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71da11e0680b2eb2c65e8769981a018857152692ef3da3a8f8475e087c63369b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71da11e0680b2eb2c65e8769981a018857152692ef3da3a8f8475e087c63369b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71da11e0680b2eb2c65e8769981a018857152692ef3da3a8f8475e087c63369b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:11 compute-0 podman[75716]: 2025-11-22 03:24:11.769557384 +0000 UTC m=+0.231735141 container init d6865d93ef3f9a8091b2abdf157cf0bd1e9e2fc20642708c6d41c44885fc455d (image=quay.io/ceph/ceph:v18, name=clever_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:24:11 compute-0 podman[75716]: 2025-11-22 03:24:11.778930697 +0000 UTC m=+0.241108434 container start d6865d93ef3f9a8091b2abdf157cf0bd1e9e2fc20642708c6d41c44885fc455d (image=quay.io/ceph/ceph:v18, name=clever_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:24:11 compute-0 podman[75716]: 2025-11-22 03:24:11.783523599 +0000 UTC m=+0.245701366 container attach d6865d93ef3f9a8091b2abdf157cf0bd1e9e2fc20642708c6d41c44885fc455d (image=quay.io/ceph/ceph:v18, name=clever_swanson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:24:11 compute-0 ceph-mgr[75294]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 03:24:11 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:11.979+0000 7f2afbc83140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 03:24:11 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wbwfxq
Nov 22 03:24:11 compute-0 ceph-mgr[75294]: ms_deliver_dispatch: unhandled message 0x5640bda391e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 22 03:24:11 compute-0 ceph-mgr[75294]: mgr handle_mgr_map Activating!
Nov 22 03:24:11 compute-0 ceph-mgr[75294]: mgr handle_mgr_map I am now activating
Nov 22 03:24:11 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.wbwfxq(active, starting, since 0.0148114s)
Nov 22 03:24:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 22 03:24:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 22 03:24:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).mds e1 all = 1
Nov 22 03:24:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 22 03:24:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 03:24:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 22 03:24:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 22 03:24:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 22 03:24:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 03:24:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wbwfxq", "id": "compute-0.wbwfxq"} v 0) v1
Nov 22 03:24:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wbwfxq", "id": "compute-0.wbwfxq"}]: dispatch
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: balancer
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [balancer INFO root] Starting
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:12 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Manager daemon compute-0.wbwfxq is now available
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: crash
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:24:12
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [balancer INFO root] No pools available
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: devicehealth
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [devicehealth INFO root] Starting
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: iostat
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: nfs
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: orchestrator
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: pg_autoscaler
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: progress
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [progress INFO root] Loading...
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [progress INFO root] No stored events to load
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [progress INFO root] Loaded [] historic events
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [progress INFO root] Loaded OSDMap, ready.
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [rbd_support INFO root] recovery thread starting
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [rbd_support INFO root] starting setup
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: rbd_support
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: restful
Nov 22 03:24:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wbwfxq/mirror_snapshot_schedule"} v 0) v1
Nov 22 03:24:12 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wbwfxq/mirror_snapshot_schedule"}]: dispatch
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [restful INFO root] server_addr: :: server_port: 8003
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [restful WARNING root] server not running: no certificate configured
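The restful warning above means the module parsed its server_addr/server_port settings (port 8003) but will not serve HTTPS until a certificate exists. On a Reef cluster the usual fix is the module's own self-signed-cert helper; a sketch, assuming a host with a working client.admin keyring:

    import subprocess

    # Generate a self-signed certificate for the restful module so it can start.
    # "restful create-self-signed-cert" is the helper the module documents for this;
    # treat the exact spelling as an assumption if your release differs.
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)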
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: status
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: telemetry
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [rbd_support INFO root] PerfHandler: starting
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TaskHandler: starting
Nov 22 03:24:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Nov 22 03:24:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wbwfxq/trash_purge_schedule"} v 0) v1
Nov 22 03:24:12 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wbwfxq/trash_purge_schedule"}]: dispatch
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: volumes
Nov 22 03:24:12 compute-0 ceph-mon[75011]: Activating manager daemon compute-0.wbwfxq
Nov 22 03:24:12 compute-0 ceph-mon[75011]: mgrmap e2: compute-0.wbwfxq(active, starting, since 0.0148114s)
Nov 22 03:24:12 compute-0 ceph-mon[75011]: from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 22 03:24:12 compute-0 ceph-mon[75011]: from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 03:24:12 compute-0 ceph-mon[75011]: from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 22 03:24:12 compute-0 ceph-mon[75011]: from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 03:24:12 compute-0 ceph-mon[75011]: from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wbwfxq", "id": "compute-0.wbwfxq"}]: dispatch
Nov 22 03:24:12 compute-0 ceph-mon[75011]: Manager daemon compute-0.wbwfxq is now available
Nov 22 03:24:12 compute-0 ceph-mon[75011]: from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wbwfxq/mirror_snapshot_schedule"}]: dispatch
Nov 22 03:24:12 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 22 03:24:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Nov 22 03:24:12 compute-0 ceph-mgr[75294]: [rbd_support INFO root] setup complete
Nov 22 03:24:12 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Nov 22 03:24:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:24:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462693762' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:24:12 compute-0 clever_swanson[75733]: 
Nov 22 03:24:12 compute-0 clever_swanson[75733]: {
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     "fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     "health": {
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "status": "HEALTH_OK",
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "checks": {},
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "mutes": []
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     },
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     "election_epoch": 5,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     "quorum": [
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         0
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     ],
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     "quorum_names": [
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "compute-0"
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     ],
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     "quorum_age": 21,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     "monmap": {
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "epoch": 1,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "min_mon_release_name": "reef",
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "num_mons": 1
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     },
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     "osdmap": {
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "epoch": 1,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "num_osds": 0,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "num_up_osds": 0,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "osd_up_since": 0,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "num_in_osds": 0,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "osd_in_since": 0,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "num_remapped_pgs": 0
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     },
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     "pgmap": {
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "pgs_by_state": [],
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "num_pgs": 0,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "num_pools": 0,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "num_objects": 0,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "data_bytes": 0,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "bytes_used": 0,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "bytes_avail": 0,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "bytes_total": 0
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     },
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     "fsmap": {
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "epoch": 1,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "by_rank": [],
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "up:standby": 0
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     },
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     "mgrmap": {
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "available": false,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "num_standbys": 0,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "modules": [
Nov 22 03:24:12 compute-0 clever_swanson[75733]:             "iostat",
Nov 22 03:24:12 compute-0 clever_swanson[75733]:             "nfs",
Nov 22 03:24:12 compute-0 clever_swanson[75733]:             "restful"
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         ],
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "services": {}
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     },
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     "servicemap": {
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "epoch": 1,
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "modified": "2025-11-22T03:23:47.511353+0000",
Nov 22 03:24:12 compute-0 clever_swanson[75733]:         "services": {}
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     },
Nov 22 03:24:12 compute-0 clever_swanson[75733]:     "progress_events": {}
Nov 22 03:24:12 compute-0 clever_swanson[75733]: }
Nov 22 03:24:12 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:12 compute-0 systemd[1]: libpod-d6865d93ef3f9a8091b2abdf157cf0bd1e9e2fc20642708c6d41c44885fc455d.scope: Deactivated successfully.
Nov 22 03:24:12 compute-0 podman[75716]: 2025-11-22 03:24:12.294694604 +0000 UTC m=+0.756872381 container died d6865d93ef3f9a8091b2abdf157cf0bd1e9e2fc20642708c6d41c44885fc455d (image=quay.io/ceph/ceph:v18, name=clever_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:24:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-71da11e0680b2eb2c65e8769981a018857152692ef3da3a8f8475e087c63369b-merged.mount: Deactivated successfully.
Nov 22 03:24:12 compute-0 podman[75716]: 2025-11-22 03:24:12.354251378 +0000 UTC m=+0.816429125 container remove d6865d93ef3f9a8091b2abdf157cf0bd1e9e2fc20642708c6d41c44885fc455d (image=quay.io/ceph/ceph:v18, name=clever_swanson, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:24:12 compute-0 systemd[1]: libpod-conmon-d6865d93ef3f9a8091b2abdf157cf0bd1e9e2fc20642708c6d41c44885fc455d.scope: Deactivated successfully.
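Each ceph CLI invocation during bootstrap runs in a disposable quay.io/ceph/ceph:v18 container, which is why every status call above produces a full create, init, start, attach, died, remove cycle (clever_swanson here, cool_borg before it). A rough stand-in for one such round trip, assuming podman is available and simplifying the per-file bind mounts shown in the log down to the /etc/ceph directory (the exact cephadm invocation differs):

    import subprocess

    # Approximation of one bootstrap round trip: run the ceph CLI inside a
    # throwaway container that podman removes as soon as the command exits.
    out = subprocess.run(
        ["podman", "run", "--rm", "--net=host",
         "-v", "/etc/ceph:/etc/ceph:z",
         "quay.io/ceph/ceph:v18",
         "ceph", "status", "--format", "json-pretty"],
        capture_output=True, text=True, check=True,
    ).stdout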
Nov 22 03:24:13 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.wbwfxq(active, since 1.03306s)
Nov 22 03:24:13 compute-0 ceph-mon[75011]: from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wbwfxq/trash_purge_schedule"}]: dispatch
Nov 22 03:24:13 compute-0 ceph-mon[75011]: from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:13 compute-0 ceph-mon[75011]: from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:13 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1462693762' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:24:13 compute-0 ceph-mon[75011]: from='mgr.14102 192.168.122.100:0/2296809889' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:13 compute-0 ceph-mon[75011]: mgrmap e3: compute-0.wbwfxq(active, since 1.03306s)
Nov 22 03:24:13 compute-0 ceph-mgr[75294]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:24:14 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.wbwfxq(active, since 2s)
Nov 22 03:24:14 compute-0 podman[75850]: 2025-11-22 03:24:14.44635609 +0000 UTC m=+0.059475976 container create 18cd25f8cccabe97be60018c22f860e9b8fba80b8ceebc80d2ec3e39ddb44531 (image=quay.io/ceph/ceph:v18, name=brave_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 03:24:14 compute-0 systemd[1]: Started libpod-conmon-18cd25f8cccabe97be60018c22f860e9b8fba80b8ceebc80d2ec3e39ddb44531.scope.
Nov 22 03:24:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cb9c0b9a91f4589db28eda1ddf0ac3575a8be1da42f9d4d3cb95f06d10b6629/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cb9c0b9a91f4589db28eda1ddf0ac3575a8be1da42f9d4d3cb95f06d10b6629/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cb9c0b9a91f4589db28eda1ddf0ac3575a8be1da42f9d4d3cb95f06d10b6629/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:14 compute-0 podman[75850]: 2025-11-22 03:24:14.42409657 +0000 UTC m=+0.037216436 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:14 compute-0 podman[75850]: 2025-11-22 03:24:14.548169378 +0000 UTC m=+0.161289324 container init 18cd25f8cccabe97be60018c22f860e9b8fba80b8ceebc80d2ec3e39ddb44531 (image=quay.io/ceph/ceph:v18, name=brave_banzai, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:24:14 compute-0 podman[75850]: 2025-11-22 03:24:14.557798683 +0000 UTC m=+0.170918569 container start 18cd25f8cccabe97be60018c22f860e9b8fba80b8ceebc80d2ec3e39ddb44531 (image=quay.io/ceph/ceph:v18, name=brave_banzai, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:24:14 compute-0 podman[75850]: 2025-11-22 03:24:14.562597447 +0000 UTC m=+0.175717393 container attach 18cd25f8cccabe97be60018c22f860e9b8fba80b8ceebc80d2ec3e39ddb44531 (image=quay.io/ceph/ceph:v18, name=brave_banzai, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:24:15 compute-0 ceph-mon[75011]: mgrmap e4: compute-0.wbwfxq(active, since 2s)
Nov 22 03:24:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Nov 22 03:24:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1632671202' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:24:15 compute-0 brave_banzai[75866]: 
Nov 22 03:24:15 compute-0 brave_banzai[75866]: {
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     "fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     "health": {
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "status": "HEALTH_OK",
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "checks": {},
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "mutes": []
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     },
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     "election_epoch": 5,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     "quorum": [
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         0
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     ],
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     "quorum_names": [
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "compute-0"
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     ],
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     "quorum_age": 24,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     "monmap": {
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "epoch": 1,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "min_mon_release_name": "reef",
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "num_mons": 1
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     },
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     "osdmap": {
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "epoch": 1,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "num_osds": 0,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "num_up_osds": 0,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "osd_up_since": 0,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "num_in_osds": 0,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "osd_in_since": 0,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "num_remapped_pgs": 0
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     },
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     "pgmap": {
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "pgs_by_state": [],
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "num_pgs": 0,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "num_pools": 0,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "num_objects": 0,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "data_bytes": 0,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "bytes_used": 0,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "bytes_avail": 0,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "bytes_total": 0
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     },
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     "fsmap": {
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "epoch": 1,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "by_rank": [],
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "up:standby": 0
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     },
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     "mgrmap": {
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "available": true,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "num_standbys": 0,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "modules": [
Nov 22 03:24:15 compute-0 brave_banzai[75866]:             "iostat",
Nov 22 03:24:15 compute-0 brave_banzai[75866]:             "nfs",
Nov 22 03:24:15 compute-0 brave_banzai[75866]:             "restful"
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         ],
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "services": {}
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     },
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     "servicemap": {
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "epoch": 1,
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "modified": "2025-11-22T03:23:47.511353+0000",
Nov 22 03:24:15 compute-0 brave_banzai[75866]:         "services": {}
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     },
Nov 22 03:24:15 compute-0 brave_banzai[75866]:     "progress_events": {}
Nov 22 03:24:15 compute-0 brave_banzai[75866]: }
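The three status dumps are about three seconds apart (quorum_age 18, 21, 24) and the polling stops once mgrmap.available flips to true in this last one: the bootstrap is simply waiting for the new mgr to become active. A sketch of that wait loop, where ceph_status is assumed to be any callable returning one parsed status dict (for instance the podman round trip sketched earlier):

    import time

    def wait_for_mgr(ceph_status, interval: float = 3.0, timeout: float = 120.0) -> dict:
        # Poll "ceph status" until the active mgr reports itself available.
        deadline = time.time() + timeout
        while time.time() < deadline:
            s = ceph_status()
            if s["mgrmap"]["available"]:
                return s
            time.sleep(interval)
        raise TimeoutError("mgr never became available")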
Nov 22 03:24:15 compute-0 systemd[1]: libpod-18cd25f8cccabe97be60018c22f860e9b8fba80b8ceebc80d2ec3e39ddb44531.scope: Deactivated successfully.
Nov 22 03:24:15 compute-0 podman[75850]: 2025-11-22 03:24:15.202459826 +0000 UTC m=+0.815579732 container died 18cd25f8cccabe97be60018c22f860e9b8fba80b8ceebc80d2ec3e39ddb44531 (image=quay.io/ceph/ceph:v18, name=brave_banzai, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:24:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cb9c0b9a91f4589db28eda1ddf0ac3575a8be1da42f9d4d3cb95f06d10b6629-merged.mount: Deactivated successfully.
Nov 22 03:24:15 compute-0 podman[75850]: 2025-11-22 03:24:15.259609041 +0000 UTC m=+0.872728917 container remove 18cd25f8cccabe97be60018c22f860e9b8fba80b8ceebc80d2ec3e39ddb44531 (image=quay.io/ceph/ceph:v18, name=brave_banzai, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:24:15 compute-0 systemd[1]: libpod-conmon-18cd25f8cccabe97be60018c22f860e9b8fba80b8ceebc80d2ec3e39ddb44531.scope: Deactivated successfully.
Nov 22 03:24:15 compute-0 podman[75906]: 2025-11-22 03:24:15.328559368 +0000 UTC m=+0.038366894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:15 compute-0 podman[75906]: 2025-11-22 03:24:15.420699026 +0000 UTC m=+0.130506501 container create 707e7436823f2479151bc04957443cc9ab69a04990674632f383df3408c775e4 (image=quay.io/ceph/ceph:v18, name=blissful_chandrasekhar, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:24:15 compute-0 systemd[1]: Started libpod-conmon-707e7436823f2479151bc04957443cc9ab69a04990674632f383df3408c775e4.scope.
Nov 22 03:24:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162d1ac25824cbee4cfd026cfe1e79a053001e21f26ebdd5747930afc4193ba8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162d1ac25824cbee4cfd026cfe1e79a053001e21f26ebdd5747930afc4193ba8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162d1ac25824cbee4cfd026cfe1e79a053001e21f26ebdd5747930afc4193ba8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/162d1ac25824cbee4cfd026cfe1e79a053001e21f26ebdd5747930afc4193ba8/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:15 compute-0 podman[75906]: 2025-11-22 03:24:15.559896869 +0000 UTC m=+0.269704375 container init 707e7436823f2479151bc04957443cc9ab69a04990674632f383df3408c775e4 (image=quay.io/ceph/ceph:v18, name=blissful_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:24:15 compute-0 podman[75906]: 2025-11-22 03:24:15.56579734 +0000 UTC m=+0.275604826 container start 707e7436823f2479151bc04957443cc9ab69a04990674632f383df3408c775e4 (image=quay.io/ceph/ceph:v18, name=blissful_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:24:15 compute-0 podman[75906]: 2025-11-22 03:24:15.571373029 +0000 UTC m=+0.281180515 container attach 707e7436823f2479151bc04957443cc9ab69a04990674632f383df3408c775e4 (image=quay.io/ceph/ceph:v18, name=blissful_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:15 compute-0 ceph-mgr[75294]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:24:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 22 03:24:16 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2990890833' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 03:24:16 compute-0 systemd[1]: libpod-707e7436823f2479151bc04957443cc9ab69a04990674632f383df3408c775e4.scope: Deactivated successfully.
Nov 22 03:24:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1632671202' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Nov 22 03:24:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2990890833' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 03:24:16 compute-0 podman[75949]: 2025-11-22 03:24:16.138037412 +0000 UTC m=+0.033331520 container died 707e7436823f2479151bc04957443cc9ab69a04990674632f383df3408c775e4 (image=quay.io/ceph/ceph:v18, name=blissful_chandrasekhar, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:24:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-162d1ac25824cbee4cfd026cfe1e79a053001e21f26ebdd5747930afc4193ba8-merged.mount: Deactivated successfully.
Nov 22 03:24:16 compute-0 podman[75949]: 2025-11-22 03:24:16.185318263 +0000 UTC m=+0.080612371 container remove 707e7436823f2479151bc04957443cc9ab69a04990674632f383df3408c775e4 (image=quay.io/ceph/ceph:v18, name=blissful_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:24:16 compute-0 systemd[1]: libpod-conmon-707e7436823f2479151bc04957443cc9ab69a04990674632f383df3408c775e4.scope: Deactivated successfully.
Nov 22 03:24:16 compute-0 podman[75964]: 2025-11-22 03:24:16.268190929 +0000 UTC m=+0.047101943 container create 2a2e78ee3dcf2f55b6ba5670571e7d198d427de2ee1899726d6c1408ee43a8c7 (image=quay.io/ceph/ceph:v18, name=relaxed_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:24:16 compute-0 systemd[1]: Started libpod-conmon-2a2e78ee3dcf2f55b6ba5670571e7d198d427de2ee1899726d6c1408ee43a8c7.scope.
Nov 22 03:24:16 compute-0 podman[75964]: 2025-11-22 03:24:16.246909501 +0000 UTC m=+0.025820495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6938711a6579d7bd05c96615c2ed94fd4455735316ffc4e6da6e567340bea2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6938711a6579d7bd05c96615c2ed94fd4455735316ffc4e6da6e567340bea2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf6938711a6579d7bd05c96615c2ed94fd4455735316ffc4e6da6e567340bea2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:16 compute-0 podman[75964]: 2025-11-22 03:24:16.370066964 +0000 UTC m=+0.148978048 container init 2a2e78ee3dcf2f55b6ba5670571e7d198d427de2ee1899726d6c1408ee43a8c7 (image=quay.io/ceph/ceph:v18, name=relaxed_mcclintock, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:16 compute-0 podman[75964]: 2025-11-22 03:24:16.381296533 +0000 UTC m=+0.160207537 container start 2a2e78ee3dcf2f55b6ba5670571e7d198d427de2ee1899726d6c1408ee43a8c7 (image=quay.io/ceph/ceph:v18, name=relaxed_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:24:16 compute-0 podman[75964]: 2025-11-22 03:24:16.385129053 +0000 UTC m=+0.164040117 container attach 2a2e78ee3dcf2f55b6ba5670571e7d198d427de2ee1899726d6c1408ee43a8c7 (image=quay.io/ceph/ceph:v18, name=relaxed_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:24:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Nov 22 03:24:16 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2808934129' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 22 03:24:17 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2808934129' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Nov 22 03:24:17 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2808934129' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 22 03:24:17 compute-0 ceph-mgr[75294]: mgr handle_mgr_map respawning because set of enabled modules changed!
Nov 22 03:24:17 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.wbwfxq(active, since 5s)
Nov 22 03:24:17 compute-0 systemd[1]: libpod-2a2e78ee3dcf2f55b6ba5670571e7d198d427de2ee1899726d6c1408ee43a8c7.scope: Deactivated successfully.
Nov 22 03:24:17 compute-0 podman[75964]: 2025-11-22 03:24:17.144715202 +0000 UTC m=+0.923626196 container died 2a2e78ee3dcf2f55b6ba5670571e7d198d427de2ee1899726d6c1408ee43a8c7 (image=quay.io/ceph/ceph:v18, name=relaxed_mcclintock, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:24:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf6938711a6579d7bd05c96615c2ed94fd4455735316ffc4e6da6e567340bea2-merged.mount: Deactivated successfully.
Nov 22 03:24:17 compute-0 podman[75964]: 2025-11-22 03:24:17.19828575 +0000 UTC m=+0.977196774 container remove 2a2e78ee3dcf2f55b6ba5670571e7d198d427de2ee1899726d6c1408ee43a8c7 (image=quay.io/ceph/ceph:v18, name=relaxed_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:24:17 compute-0 systemd[1]: libpod-conmon-2a2e78ee3dcf2f55b6ba5670571e7d198d427de2ee1899726d6c1408ee43a8c7.scope: Deactivated successfully.
Nov 22 03:24:17 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: ignoring --setuser ceph since I am not root
Nov 22 03:24:17 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: ignoring --setgroup ceph since I am not root
Nov 22 03:24:17 compute-0 ceph-mgr[75294]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 22 03:24:17 compute-0 ceph-mgr[75294]: pidfile_write: ignore empty --pid-file
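Enabling a mgr module changes the set of modules the daemon must load, so the active mgr respawns on purpose ("respawning because set of enabled modules changed!" above); the version banner and pidfile lines here are the restarted process coming back up before it reloads every python module. A sketch of driving that from the CLI and waiting out the respawn, assuming "ceph mgr dump" reports "available" and the enabled "modules" list as it does on Reef:

    import json
    import subprocess
    import time

    def enable_mgr_module(name: str, timeout: float = 60.0) -> None:
        # The enable command returns quickly; the mgr then respawns, so poll
        # "ceph mgr dump" until it is back with the module in its enabled set.
        subprocess.run(["ceph", "mgr", "module", "enable", name], check=True)
        deadline = time.time() + timeout
        while time.time() < deadline:
            dump = json.loads(subprocess.run(
                ["ceph", "mgr", "dump"],
                capture_output=True, text=True, check=True,
            ).stdout)
            if dump.get("available") and name in dump.get("modules", []):
                return
            time.sleep(2)
        raise TimeoutError(f"mgr did not come back with module {name!r}")

    enable_mgr_module("cephadm")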
Nov 22 03:24:17 compute-0 podman[76019]: 2025-11-22 03:24:17.302458976 +0000 UTC m=+0.073836364 container create 2b793654dee6db063de40a3c1adefee2683dec1a95e75fa158f7f07ab06505e9 (image=quay.io/ceph/ceph:v18, name=bold_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:24:17 compute-0 systemd[1]: Started libpod-conmon-2b793654dee6db063de40a3c1adefee2683dec1a95e75fa158f7f07ab06505e9.scope.
Nov 22 03:24:17 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:17 compute-0 podman[76019]: 2025-11-22 03:24:17.273562009 +0000 UTC m=+0.044939457 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b45034ca3766975a915d7f0ece9bb4f275c83d8cb7a3c9ba47ce45ffae7c9993/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b45034ca3766975a915d7f0ece9bb4f275c83d8cb7a3c9ba47ce45ffae7c9993/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b45034ca3766975a915d7f0ece9bb4f275c83d8cb7a3c9ba47ce45ffae7c9993/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:17 compute-0 podman[76019]: 2025-11-22 03:24:17.392579262 +0000 UTC m=+0.163956690 container init 2b793654dee6db063de40a3c1adefee2683dec1a95e75fa158f7f07ab06505e9 (image=quay.io/ceph/ceph:v18, name=bold_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:24:17 compute-0 podman[76019]: 2025-11-22 03:24:17.398613678 +0000 UTC m=+0.169991036 container start 2b793654dee6db063de40a3c1adefee2683dec1a95e75fa158f7f07ab06505e9 (image=quay.io/ceph/ceph:v18, name=bold_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:24:17 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'alerts'
Nov 22 03:24:17 compute-0 podman[76019]: 2025-11-22 03:24:17.403002276 +0000 UTC m=+0.174379704 container attach 2b793654dee6db063de40a3c1adefee2683dec1a95e75fa158f7f07ab06505e9 (image=quay.io/ceph/ceph:v18, name=bold_bohr, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:24:17 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:17.674+0000 7fd442101140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 03:24:17 compute-0 ceph-mgr[75294]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 03:24:17 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'balancer'
Nov 22 03:24:17 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:17.902+0000 7fd442101140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 03:24:17 compute-0 ceph-mgr[75294]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 03:24:17 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'cephadm'
Nov 22 03:24:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 22 03:24:17 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3502605179' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 03:24:17 compute-0 bold_bohr[76059]: {
Nov 22 03:24:17 compute-0 bold_bohr[76059]:     "epoch": 5,
Nov 22 03:24:17 compute-0 bold_bohr[76059]:     "available": true,
Nov 22 03:24:17 compute-0 bold_bohr[76059]:     "active_name": "compute-0.wbwfxq",
Nov 22 03:24:17 compute-0 bold_bohr[76059]:     "num_standby": 0
Nov 22 03:24:17 compute-0 bold_bohr[76059]: }
Nov 22 03:24:17 compute-0 systemd[1]: libpod-2b793654dee6db063de40a3c1adefee2683dec1a95e75fa158f7f07ab06505e9.scope: Deactivated successfully.
Nov 22 03:24:17 compute-0 podman[76019]: 2025-11-22 03:24:17.980895589 +0000 UTC m=+0.752272937 container died 2b793654dee6db063de40a3c1adefee2683dec1a95e75fa158f7f07ab06505e9 (image=quay.io/ceph/ceph:v18, name=bold_bohr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b45034ca3766975a915d7f0ece9bb4f275c83d8cb7a3c9ba47ce45ffae7c9993-merged.mount: Deactivated successfully.
Nov 22 03:24:18 compute-0 podman[76019]: 2025-11-22 03:24:18.025970237 +0000 UTC m=+0.797347595 container remove 2b793654dee6db063de40a3c1adefee2683dec1a95e75fa158f7f07ab06505e9 (image=quay.io/ceph/ceph:v18, name=bold_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:18 compute-0 systemd[1]: libpod-conmon-2b793654dee6db063de40a3c1adefee2683dec1a95e75fa158f7f07ab06505e9.scope: Deactivated successfully.
Nov 22 03:24:18 compute-0 podman[76097]: 2025-11-22 03:24:18.092771055 +0000 UTC m=+0.046103021 container create a4fbf61d2e58a05f0a28c333865836151137b4e46fff7f4ec1c7fa14e97193b3 (image=quay.io/ceph/ceph:v18, name=relaxed_dhawan, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:24:18 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2808934129' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Nov 22 03:24:18 compute-0 ceph-mon[75011]: mgrmap e5: compute-0.wbwfxq(active, since 5s)
Nov 22 03:24:18 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3502605179' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 03:24:18 compute-0 systemd[1]: Started libpod-conmon-a4fbf61d2e58a05f0a28c333865836151137b4e46fff7f4ec1c7fa14e97193b3.scope.
Nov 22 03:24:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735a6e0c8a93cc29d17667d06963f5e82989608129612e0d05a974bf4cb9a3c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735a6e0c8a93cc29d17667d06963f5e82989608129612e0d05a974bf4cb9a3c3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/735a6e0c8a93cc29d17667d06963f5e82989608129612e0d05a974bf4cb9a3c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:18 compute-0 podman[76097]: 2025-11-22 03:24:18.072636379 +0000 UTC m=+0.025968395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:18 compute-0 podman[76097]: 2025-11-22 03:24:18.16900065 +0000 UTC m=+0.122332646 container init a4fbf61d2e58a05f0a28c333865836151137b4e46fff7f4ec1c7fa14e97193b3 (image=quay.io/ceph/ceph:v18, name=relaxed_dhawan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:24:18 compute-0 podman[76097]: 2025-11-22 03:24:18.17777312 +0000 UTC m=+0.131105116 container start a4fbf61d2e58a05f0a28c333865836151137b4e46fff7f4ec1c7fa14e97193b3 (image=quay.io/ceph/ceph:v18, name=relaxed_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:24:18 compute-0 podman[76097]: 2025-11-22 03:24:18.18176398 +0000 UTC m=+0.135095946 container attach a4fbf61d2e58a05f0a28c333865836151137b4e46fff7f4ec1c7fa14e97193b3 (image=quay.io/ceph/ceph:v18, name=relaxed_dhawan, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:24:19 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'crash'
Nov 22 03:24:19 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:19.922+0000 7fd442101140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 03:24:19 compute-0 ceph-mgr[75294]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 03:24:19 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'dashboard'
Nov 22 03:24:21 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'devicehealth'
Nov 22 03:24:21 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:21.536+0000 7fd442101140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 03:24:21 compute-0 ceph-mgr[75294]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 03:24:21 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'diskprediction_local'
Nov 22 03:24:22 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 22 03:24:22 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 22 03:24:22 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]:   from numpy import show_config as show_numpy_config
Nov 22 03:24:22 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:22.044+0000 7fd442101140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 03:24:22 compute-0 ceph-mgr[75294]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 03:24:22 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'influx'
Nov 22 03:24:22 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:22.279+0000 7fd442101140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 03:24:22 compute-0 ceph-mgr[75294]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 03:24:22 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'insights'
Nov 22 03:24:22 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'iostat'
Nov 22 03:24:22 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:22.804+0000 7fd442101140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 03:24:22 compute-0 ceph-mgr[75294]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Nov 22 03:24:22 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'k8sevents'
Nov 22 03:24:24 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'localpool'
Nov 22 03:24:24 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'mds_autoscaler'
Nov 22 03:24:25 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'mirroring'
Nov 22 03:24:25 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'nfs'
Nov 22 03:24:26 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:26.333+0000 7fd442101140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 03:24:26 compute-0 ceph-mgr[75294]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Nov 22 03:24:26 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'orchestrator'
Nov 22 03:24:26 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:26.973+0000 7fd442101140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 03:24:26 compute-0 ceph-mgr[75294]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Nov 22 03:24:26 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'osd_perf_query'
Nov 22 03:24:27 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:27.234+0000 7fd442101140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 03:24:27 compute-0 ceph-mgr[75294]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Nov 22 03:24:27 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'osd_support'
Nov 22 03:24:27 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:27.465+0000 7fd442101140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 03:24:27 compute-0 ceph-mgr[75294]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Nov 22 03:24:27 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'pg_autoscaler'
Nov 22 03:24:27 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:27.729+0000 7fd442101140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 03:24:27 compute-0 ceph-mgr[75294]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Nov 22 03:24:27 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'progress'
Nov 22 03:24:27 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:27.946+0000 7fd442101140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 03:24:27 compute-0 ceph-mgr[75294]: mgr[py] Module progress has missing NOTIFY_TYPES member
Nov 22 03:24:27 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'prometheus'
Nov 22 03:24:28 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:28.933+0000 7fd442101140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 03:24:28 compute-0 ceph-mgr[75294]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Nov 22 03:24:28 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'rbd_support'
Nov 22 03:24:29 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:29.230+0000 7fd442101140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 03:24:29 compute-0 ceph-mgr[75294]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Nov 22 03:24:29 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'restful'
Nov 22 03:24:29 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'rgw'
Nov 22 03:24:30 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:30.651+0000 7fd442101140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 03:24:30 compute-0 ceph-mgr[75294]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Nov 22 03:24:30 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'rook'
Nov 22 03:24:32 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:32.691+0000 7fd442101140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 03:24:32 compute-0 ceph-mgr[75294]: mgr[py] Module rook has missing NOTIFY_TYPES member
Nov 22 03:24:32 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'selftest'
Nov 22 03:24:32 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:32.930+0000 7fd442101140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 03:24:32 compute-0 ceph-mgr[75294]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Nov 22 03:24:32 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'snap_schedule'
Nov 22 03:24:33 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:33.174+0000 7fd442101140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 03:24:33 compute-0 ceph-mgr[75294]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Nov 22 03:24:33 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'stats'
Nov 22 03:24:33 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'status'
Nov 22 03:24:33 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:33.695+0000 7fd442101140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 03:24:33 compute-0 ceph-mgr[75294]: mgr[py] Module status has missing NOTIFY_TYPES member
Nov 22 03:24:33 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'telegraf'
Nov 22 03:24:33 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:33.916+0000 7fd442101140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 03:24:33 compute-0 ceph-mgr[75294]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Nov 22 03:24:33 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'telemetry'
Nov 22 03:24:34 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:34.496+0000 7fd442101140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 03:24:34 compute-0 ceph-mgr[75294]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Nov 22 03:24:34 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'test_orchestrator'
Nov 22 03:24:35 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:35.150+0000 7fd442101140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 03:24:35 compute-0 ceph-mgr[75294]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Nov 22 03:24:35 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'volumes'
Nov 22 03:24:35 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:35.826+0000 7fd442101140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 03:24:35 compute-0 ceph-mgr[75294]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Nov 22 03:24:35 compute-0 ceph-mgr[75294]: mgr[py] Loading python module 'zabbix'
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Active manager daemon compute-0.wbwfxq restarted
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.wbwfxq
Nov 22 03:24:36 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T03:24:36.060+0000 7fd442101140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: ms_deliver_dispatch: unhandled message 0x55734d7ef1e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr handle_mgr_map Activating!
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr handle_mgr_map I am now activating
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.wbwfxq(active, starting, since 0.013237s)
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.wbwfxq", "id": "compute-0.wbwfxq"} v 0) v1
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wbwfxq", "id": "compute-0.wbwfxq"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).mds e1 all = 1
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: balancer
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Manager daemon compute-0.wbwfxq is now available
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Starting
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:24:36
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [balancer INFO root] No pools available
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Nov 22 03:24:36 compute-0 ceph-mon[75011]: Active manager daemon compute-0.wbwfxq restarted
Nov 22 03:24:36 compute-0 ceph-mon[75011]: Activating manager daemon compute-0.wbwfxq
Nov 22 03:24:36 compute-0 ceph-mon[75011]: osdmap e2: 0 total, 0 up, 0 in
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mgrmap e6: compute-0.wbwfxq(active, starting, since 0.013237s)
Nov 22 03:24:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mgr metadata", "who": "compute-0.wbwfxq", "id": "compute-0.wbwfxq"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mds metadata"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mon metadata"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mon[75011]: Manager daemon compute-0.wbwfxq is now available
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: cephadm
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: crash
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: devicehealth
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: iostat
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [devicehealth INFO root] Starting
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: nfs
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: orchestrator
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: pg_autoscaler
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: progress
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [progress INFO root] Loading...
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [progress INFO root] No stored events to load
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [progress INFO root] Loaded [] historic events
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [progress INFO root] Loaded OSDMap, ready.
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] recovery thread starting
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] starting setup
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: rbd_support
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: restful
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: status
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [restful INFO root] server_addr: :: server_port: 8003
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wbwfxq/mirror_snapshot_schedule"} v 0) v1
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wbwfxq/mirror_snapshot_schedule"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: telemetry
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] PerfHandler: starting
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [restful WARNING root] server not running: no certificate configured
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TaskHandler: starting
Nov 22 03:24:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wbwfxq/trash_purge_schedule"} v 0) v1
Nov 22 03:24:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wbwfxq/trash_purge_schedule"}]: dispatch
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] setup complete
Nov 22 03:24:36 compute-0 ceph-mgr[75294]: mgr load Constructed class from module: volumes
Nov 22 03:24:37 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.wbwfxq(active, since 1.02368s)
Nov 22 03:24:37 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 22 03:24:37 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 22 03:24:37 compute-0 relaxed_dhawan[76114]: {
Nov 22 03:24:37 compute-0 relaxed_dhawan[76114]:     "mgrmap_epoch": 7,
Nov 22 03:24:37 compute-0 relaxed_dhawan[76114]:     "initialized": true
Nov 22 03:24:37 compute-0 relaxed_dhawan[76114]: }
Nov 22 03:24:37 compute-0 systemd[1]: libpod-a4fbf61d2e58a05f0a28c333865836151137b4e46fff7f4ec1c7fa14e97193b3.scope: Deactivated successfully.
Nov 22 03:24:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Nov 22 03:24:37 compute-0 podman[76097]: 2025-11-22 03:24:37.119674649 +0000 UTC m=+19.073006615 container died a4fbf61d2e58a05f0a28c333865836151137b4e46fff7f4ec1c7fa14e97193b3 (image=quay.io/ceph/ceph:v18, name=relaxed_dhawan, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 03:24:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Nov 22 03:24:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:37 compute-0 ceph-mon[75011]: Found migration_current of "None". Setting to last migration.
Nov 22 03:24:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:24:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:24:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wbwfxq/mirror_snapshot_schedule"}]: dispatch
Nov 22 03:24:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.wbwfxq/trash_purge_schedule"}]: dispatch
Nov 22 03:24:37 compute-0 ceph-mon[75011]: mgrmap e7: compute-0.wbwfxq(active, since 1.02368s)
Nov 22 03:24:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-735a6e0c8a93cc29d17667d06963f5e82989608129612e0d05a974bf4cb9a3c3-merged.mount: Deactivated successfully.
Nov 22 03:24:37 compute-0 podman[76097]: 2025-11-22 03:24:37.172569341 +0000 UTC m=+19.125901317 container remove a4fbf61d2e58a05f0a28c333865836151137b4e46fff7f4ec1c7fa14e97193b3 (image=quay.io/ceph/ceph:v18, name=relaxed_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:37 compute-0 systemd[1]: libpod-conmon-a4fbf61d2e58a05f0a28c333865836151137b4e46fff7f4ec1c7fa14e97193b3.scope: Deactivated successfully.
Nov 22 03:24:37 compute-0 podman[76275]: 2025-11-22 03:24:37.243048154 +0000 UTC m=+0.044231592 container create aebdbac5117d0d8c2f19469d2e7be4af6192762447cf336ea09d014925d603b3 (image=quay.io/ceph/ceph:v18, name=youthful_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:24:37 compute-0 systemd[1]: Started libpod-conmon-aebdbac5117d0d8c2f19469d2e7be4af6192762447cf336ea09d014925d603b3.scope.
Nov 22 03:24:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d60308a2e757dbf5cea05cf2a4ba469c1463944633ef3ce858be4220cb9a4b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d60308a2e757dbf5cea05cf2a4ba469c1463944633ef3ce858be4220cb9a4b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d60308a2e757dbf5cea05cf2a4ba469c1463944633ef3ce858be4220cb9a4b4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:37 compute-0 podman[76275]: 2025-11-22 03:24:37.226397109 +0000 UTC m=+0.027580567 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:37 compute-0 podman[76275]: 2025-11-22 03:24:37.332975415 +0000 UTC m=+0.134158953 container init aebdbac5117d0d8c2f19469d2e7be4af6192762447cf336ea09d014925d603b3 (image=quay.io/ceph/ceph:v18, name=youthful_einstein, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:37 compute-0 podman[76275]: 2025-11-22 03:24:37.339412315 +0000 UTC m=+0.140595793 container start aebdbac5117d0d8c2f19469d2e7be4af6192762447cf336ea09d014925d603b3 (image=quay.io/ceph/ceph:v18, name=youthful_einstein, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:37 compute-0 podman[76275]: 2025-11-22 03:24:37.343616339 +0000 UTC m=+0.144799907 container attach aebdbac5117d0d8c2f19469d2e7be4af6192762447cf336ea09d014925d603b3 (image=quay.io/ceph/ceph:v18, name=youthful_einstein, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:24:37 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Nov 22 03:24:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 03:24:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:24:37 compute-0 systemd[1]: libpod-aebdbac5117d0d8c2f19469d2e7be4af6192762447cf336ea09d014925d603b3.scope: Deactivated successfully.
Nov 22 03:24:37 compute-0 podman[76275]: 2025-11-22 03:24:37.927408254 +0000 UTC m=+0.728591692 container died aebdbac5117d0d8c2f19469d2e7be4af6192762447cf336ea09d014925d603b3 (image=quay.io/ceph/ceph:v18, name=youthful_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d60308a2e757dbf5cea05cf2a4ba469c1463944633ef3ce858be4220cb9a4b4-merged.mount: Deactivated successfully.
Nov 22 03:24:37 compute-0 ceph-mgr[75294]: [cephadm INFO cherrypy.error] [22/Nov/2025:03:24:37] ENGINE Bus STARTING
Nov 22 03:24:37 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : [22/Nov/2025:03:24:37] ENGINE Bus STARTING
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:24:38 compute-0 podman[76275]: 2025-11-22 03:24:38.087971171 +0000 UTC m=+0.889154609 container remove aebdbac5117d0d8c2f19469d2e7be4af6192762447cf336ea09d014925d603b3 (image=quay.io/ceph/ceph:v18, name=youthful_einstein, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: [cephadm INFO cherrypy.error] [22/Nov/2025:03:24:38] ENGINE Serving on https://192.168.122.100:7150
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : [22/Nov/2025:03:24:38] ENGINE Serving on https://192.168.122.100:7150
Nov 22 03:24:38 compute-0 systemd[1]: libpod-conmon-aebdbac5117d0d8c2f19469d2e7be4af6192762447cf336ea09d014925d603b3.scope: Deactivated successfully.
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: [cephadm INFO cherrypy.error] [22/Nov/2025:03:24:38] ENGINE Client ('192.168.122.100', 35036) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : [22/Nov/2025:03:24:38] ENGINE Client ('192.168.122.100', 35036) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 22 03:24:38 compute-0 ceph-mon[75011]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Nov 22 03:24:38 compute-0 ceph-mon[75011]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Nov 22 03:24:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:24:38 compute-0 podman[76343]: 2025-11-22 03:24:38.177643381 +0000 UTC m=+0.063736043 container create 4225c4b0023b9647beec8eec9bf8931c960878c2330c8a7a37787eb086cfd5ad (image=quay.io/ceph/ceph:v18, name=nostalgic_edison, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: [cephadm INFO cherrypy.error] [22/Nov/2025:03:24:38] ENGINE Serving on http://192.168.122.100:8765
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : [22/Nov/2025:03:24:38] ENGINE Serving on http://192.168.122.100:8765
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: [cephadm INFO cherrypy.error] [22/Nov/2025:03:24:38] ENGINE Bus STARTED
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : [22/Nov/2025:03:24:38] ENGINE Bus STARTED
Nov 22 03:24:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 03:24:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:24:38 compute-0 systemd[1]: Started libpod-conmon-4225c4b0023b9647beec8eec9bf8931c960878c2330c8a7a37787eb086cfd5ad.scope.
Nov 22 03:24:38 compute-0 podman[76343]: 2025-11-22 03:24:38.147945063 +0000 UTC m=+0.034037826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0c586b7cdb6bc4c15ec403299b0a40815e703af80488d4384d2ae33c310fd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0c586b7cdb6bc4c15ec403299b0a40815e703af80488d4384d2ae33c310fd2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0c586b7cdb6bc4c15ec403299b0a40815e703af80488d4384d2ae33c310fd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:38 compute-0 podman[76343]: 2025-11-22 03:24:38.270618325 +0000 UTC m=+0.156711088 container init 4225c4b0023b9647beec8eec9bf8931c960878c2330c8a7a37787eb086cfd5ad (image=quay.io/ceph/ceph:v18, name=nostalgic_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:24:38 compute-0 podman[76343]: 2025-11-22 03:24:38.279832079 +0000 UTC m=+0.165924732 container start 4225c4b0023b9647beec8eec9bf8931c960878c2330c8a7a37787eb086cfd5ad (image=quay.io/ceph/ceph:v18, name=nostalgic_edison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:24:38 compute-0 podman[76343]: 2025-11-22 03:24:38.285622741 +0000 UTC m=+0.171715484 container attach 4225c4b0023b9647beec8eec9bf8931c960878c2330c8a7a37787eb086cfd5ad (image=quay.io/ceph/ceph:v18, name=nostalgic_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Nov 22 03:24:38 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: [cephadm INFO root] Set ssh ssh_user
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Nov 22 03:24:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Nov 22 03:24:38 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: [cephadm INFO root] Set ssh ssh_config
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Nov 22 03:24:38 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Nov 22 03:24:38 compute-0 nostalgic_edison[76370]: ssh user set to ceph-admin. sudo will be used
Nov 22 03:24:38 compute-0 systemd[1]: libpod-4225c4b0023b9647beec8eec9bf8931c960878c2330c8a7a37787eb086cfd5ad.scope: Deactivated successfully.
Nov 22 03:24:38 compute-0 podman[76343]: 2025-11-22 03:24:38.837612374 +0000 UTC m=+0.723705067 container died 4225c4b0023b9647beec8eec9bf8931c960878c2330c8a7a37787eb086cfd5ad (image=quay.io/ceph/ceph:v18, name=nostalgic_edison, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d0c586b7cdb6bc4c15ec403299b0a40815e703af80488d4384d2ae33c310fd2-merged.mount: Deactivated successfully.
Nov 22 03:24:38 compute-0 podman[76343]: 2025-11-22 03:24:38.920481704 +0000 UTC m=+0.806574407 container remove 4225c4b0023b9647beec8eec9bf8931c960878c2330c8a7a37787eb086cfd5ad (image=quay.io/ceph/ceph:v18, name=nostalgic_edison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:24:38 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.wbwfxq(active, since 2s)
Nov 22 03:24:38 compute-0 systemd[1]: libpod-conmon-4225c4b0023b9647beec8eec9bf8931c960878c2330c8a7a37787eb086cfd5ad.scope: Deactivated successfully.
Nov 22 03:24:39 compute-0 podman[76408]: 2025-11-22 03:24:39.027729714 +0000 UTC m=+0.073919799 container create 504673e2ec1a676c1578f8d3a006108bbb9e2c8be3d942867e909690e93fde4d (image=quay.io/ceph/ceph:v18, name=suspicious_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:39 compute-0 systemd[1]: Started libpod-conmon-504673e2ec1a676c1578f8d3a006108bbb9e2c8be3d942867e909690e93fde4d.scope.
Nov 22 03:24:39 compute-0 podman[76408]: 2025-11-22 03:24:38.991239685 +0000 UTC m=+0.037429870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce2731f15ad87aac239a6a19ffa7b59733b706d955c9f51144b2c7988b480837/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce2731f15ad87aac239a6a19ffa7b59733b706d955c9f51144b2c7988b480837/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce2731f15ad87aac239a6a19ffa7b59733b706d955c9f51144b2c7988b480837/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce2731f15ad87aac239a6a19ffa7b59733b706d955c9f51144b2c7988b480837/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce2731f15ad87aac239a6a19ffa7b59733b706d955c9f51144b2c7988b480837/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:39 compute-0 podman[76408]: 2025-11-22 03:24:39.204748242 +0000 UTC m=+0.250938357 container init 504673e2ec1a676c1578f8d3a006108bbb9e2c8be3d942867e909690e93fde4d (image=quay.io/ceph/ceph:v18, name=suspicious_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:24:39 compute-0 podman[76408]: 2025-11-22 03:24:39.214179087 +0000 UTC m=+0.260369172 container start 504673e2ec1a676c1578f8d3a006108bbb9e2c8be3d942867e909690e93fde4d (image=quay.io/ceph/ceph:v18, name=suspicious_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:24:39 compute-0 ceph-mon[75011]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:39 compute-0 ceph-mon[75011]: [22/Nov/2025:03:24:37] ENGINE Bus STARTING
Nov 22 03:24:39 compute-0 ceph-mon[75011]: [22/Nov/2025:03:24:38] ENGINE Serving on https://192.168.122.100:7150
Nov 22 03:24:39 compute-0 ceph-mon[75011]: [22/Nov/2025:03:24:38] ENGINE Client ('192.168.122.100', 35036) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Nov 22 03:24:39 compute-0 ceph-mon[75011]: [22/Nov/2025:03:24:38] ENGINE Serving on http://192.168.122.100:8765
Nov 22 03:24:39 compute-0 ceph-mon[75011]: [22/Nov/2025:03:24:38] ENGINE Bus STARTED
Nov 22 03:24:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:24:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:39 compute-0 podman[76408]: 2025-11-22 03:24:39.251685809 +0000 UTC m=+0.297875904 container attach 504673e2ec1a676c1578f8d3a006108bbb9e2c8be3d942867e909690e93fde4d (image=quay.io/ceph/ceph:v18, name=suspicious_moser, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:24:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:39 compute-0 ceph-mon[75011]: mgrmap e8: compute-0.wbwfxq(active, since 2s)
Nov 22 03:24:39 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Nov 22 03:24:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:39 compute-0 ceph-mgr[75294]: [cephadm INFO root] Set ssh ssh_identity_key
Nov 22 03:24:39 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Nov 22 03:24:39 compute-0 ceph-mgr[75294]: [cephadm INFO root] Set ssh private key
Nov 22 03:24:39 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Set ssh private key
Nov 22 03:24:39 compute-0 systemd[1]: libpod-504673e2ec1a676c1578f8d3a006108bbb9e2c8be3d942867e909690e93fde4d.scope: Deactivated successfully.
Nov 22 03:24:39 compute-0 podman[76408]: 2025-11-22 03:24:39.779220589 +0000 UTC m=+0.825410674 container died 504673e2ec1a676c1578f8d3a006108bbb9e2c8be3d942867e909690e93fde4d (image=quay.io/ceph/ceph:v18, name=suspicious_moser, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 03:24:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce2731f15ad87aac239a6a19ffa7b59733b706d955c9f51144b2c7988b480837-merged.mount: Deactivated successfully.
Nov 22 03:24:39 compute-0 podman[76408]: 2025-11-22 03:24:39.821825552 +0000 UTC m=+0.868015637 container remove 504673e2ec1a676c1578f8d3a006108bbb9e2c8be3d942867e909690e93fde4d (image=quay.io/ceph/ceph:v18, name=suspicious_moser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:24:39 compute-0 systemd[1]: libpod-conmon-504673e2ec1a676c1578f8d3a006108bbb9e2c8be3d942867e909690e93fde4d.scope: Deactivated successfully.
Nov 22 03:24:39 compute-0 podman[76462]: 2025-11-22 03:24:39.925647841 +0000 UTC m=+0.073435707 container create 3e02adfa44e86892a1c37027fff0819e3124f32ea339dadc587ddd0e9521f643 (image=quay.io/ceph/ceph:v18, name=lucid_colden, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:39 compute-0 systemd[1]: Started libpod-conmon-3e02adfa44e86892a1c37027fff0819e3124f32ea339dadc587ddd0e9521f643.scope.
Nov 22 03:24:39 compute-0 podman[76462]: 2025-11-22 03:24:39.892505022 +0000 UTC m=+0.040292958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1b294f93c68d0ef7684cc42c1938db667dc796b0a7b9bc8fbb65236c1c1784/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1b294f93c68d0ef7684cc42c1938db667dc796b0a7b9bc8fbb65236c1c1784/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1b294f93c68d0ef7684cc42c1938db667dc796b0a7b9bc8fbb65236c1c1784/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1b294f93c68d0ef7684cc42c1938db667dc796b0a7b9bc8fbb65236c1c1784/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1b294f93c68d0ef7684cc42c1938db667dc796b0a7b9bc8fbb65236c1c1784/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:40 compute-0 podman[76462]: 2025-11-22 03:24:40.016119284 +0000 UTC m=+0.163907130 container init 3e02adfa44e86892a1c37027fff0819e3124f32ea339dadc587ddd0e9521f643 (image=quay.io/ceph/ceph:v18, name=lucid_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:24:40 compute-0 podman[76462]: 2025-11-22 03:24:40.025973712 +0000 UTC m=+0.173761578 container start 3e02adfa44e86892a1c37027fff0819e3124f32ea339dadc587ddd0e9521f643 (image=quay.io/ceph/ceph:v18, name=lucid_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:24:40 compute-0 podman[76462]: 2025-11-22 03:24:40.03180747 +0000 UTC m=+0.179595346 container attach 3e02adfa44e86892a1c37027fff0819e3124f32ea339dadc587ddd0e9521f643 (image=quay.io/ceph/ceph:v18, name=lucid_colden, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:24:40 compute-0 ceph-mgr[75294]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:24:40 compute-0 ceph-mon[75011]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:40 compute-0 ceph-mon[75011]: Set ssh ssh_user
Nov 22 03:24:40 compute-0 ceph-mon[75011]: Set ssh ssh_config
Nov 22 03:24:40 compute-0 ceph-mon[75011]: ssh user set to ceph-admin. sudo will be used
Nov 22 03:24:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019921736 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:24:40 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Nov 22 03:24:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:40 compute-0 ceph-mgr[75294]: [cephadm INFO root] Set ssh ssh_identity_pub
Nov 22 03:24:40 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Nov 22 03:24:40 compute-0 systemd[1]: libpod-3e02adfa44e86892a1c37027fff0819e3124f32ea339dadc587ddd0e9521f643.scope: Deactivated successfully.
Nov 22 03:24:40 compute-0 podman[76462]: 2025-11-22 03:24:40.592557656 +0000 UTC m=+0.740345542 container died 3e02adfa44e86892a1c37027fff0819e3124f32ea339dadc587ddd0e9521f643 (image=quay.io/ceph/ceph:v18, name=lucid_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f1b294f93c68d0ef7684cc42c1938db667dc796b0a7b9bc8fbb65236c1c1784-merged.mount: Deactivated successfully.
Nov 22 03:24:40 compute-0 podman[76462]: 2025-11-22 03:24:40.649380479 +0000 UTC m=+0.797168315 container remove 3e02adfa44e86892a1c37027fff0819e3124f32ea339dadc587ddd0e9521f643 (image=quay.io/ceph/ceph:v18, name=lucid_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:40 compute-0 systemd[1]: libpod-conmon-3e02adfa44e86892a1c37027fff0819e3124f32ea339dadc587ddd0e9521f643.scope: Deactivated successfully.
Nov 22 03:24:40 compute-0 podman[76514]: 2025-11-22 03:24:40.737578389 +0000 UTC m=+0.056681878 container create d98e86240559fb37230f60e4c2989f9c3c4e91bdc30f099528882151c92a0e96 (image=quay.io/ceph/ceph:v18, name=eloquent_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:24:40 compute-0 systemd[1]: Started libpod-conmon-d98e86240559fb37230f60e4c2989f9c3c4e91bdc30f099528882151c92a0e96.scope.
Nov 22 03:24:40 compute-0 podman[76514]: 2025-11-22 03:24:40.709153564 +0000 UTC m=+0.028257093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678abefa2819b505fdb5013b1fe308deb3fad0b034db490b72a960c978ec9822/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678abefa2819b505fdb5013b1fe308deb3fad0b034db490b72a960c978ec9822/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678abefa2819b505fdb5013b1fe308deb3fad0b034db490b72a960c978ec9822/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:40 compute-0 podman[76514]: 2025-11-22 03:24:40.826039741 +0000 UTC m=+0.145143200 container init d98e86240559fb37230f60e4c2989f9c3c4e91bdc30f099528882151c92a0e96 (image=quay.io/ceph/ceph:v18, name=eloquent_kepler, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:24:40 compute-0 podman[76514]: 2025-11-22 03:24:40.83236764 +0000 UTC m=+0.151471109 container start d98e86240559fb37230f60e4c2989f9c3c4e91bdc30f099528882151c92a0e96 (image=quay.io/ceph/ceph:v18, name=eloquent_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:24:40 compute-0 podman[76514]: 2025-11-22 03:24:40.835818735 +0000 UTC m=+0.154922194 container attach d98e86240559fb37230f60e4c2989f9c3c4e91bdc30f099528882151c92a0e96 (image=quay.io/ceph/ceph:v18, name=eloquent_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:24:41 compute-0 ceph-mon[75011]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:41 compute-0 ceph-mon[75011]: Set ssh ssh_identity_key
Nov 22 03:24:41 compute-0 ceph-mon[75011]: Set ssh private key
Nov 22 03:24:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:41 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:41 compute-0 eloquent_kepler[76531]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzvHkpFwFKmsWwKAX4W6Jql7CymaMeQ3LfTQY77AEVxRX936shlQl4SnimEA9M5ail+XleZDZepk8RVLlo6e7s8SQkEvIFWyfVpLuHySXqTWFtCcKzt5yy05jm7Q/xf4qqqag1aROCCMlpTHKQQJywXUFwRqYz5qfj1VzGKjBzDGxgbYB+RXQW7ys1qpVWNl/VxPYqB1vncKSWQSBGbQ1APrwt9GN3uiz2fSwv4JeevCzzQKHjYYdhOZ/DgCgBLM0/Sb1ItdCdw8y1aJ506sJLafANc3R65f/wgZpNzyI+DOFdSH2+y3hS1zlVToIyaDlR/OxT5Uu2twUcE65lHdffJ3ZppZ728KRjHUYpsyRxrYYjB+aXv6G0AcpMESxVcNmR5o1JoeMCZUOUtlGrVYiiJCvo0GysgimXIIE36duqDLNi5FvDrVPmwX7T2P8/EFaMf14b0nlA3SV6CFC8PJleYQtiUeW4PD/yMHPWnW8aanv/BxCwoQ/XioxqDq5tb78= zuul@controller
Nov 22 03:24:41 compute-0 systemd[1]: libpod-d98e86240559fb37230f60e4c2989f9c3c4e91bdc30f099528882151c92a0e96.scope: Deactivated successfully.
Nov 22 03:24:41 compute-0 podman[76514]: 2025-11-22 03:24:41.370862301 +0000 UTC m=+0.689965750 container died d98e86240559fb37230f60e4c2989f9c3c4e91bdc30f099528882151c92a0e96 (image=quay.io/ceph/ceph:v18, name=eloquent_kepler, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-678abefa2819b505fdb5013b1fe308deb3fad0b034db490b72a960c978ec9822-merged.mount: Deactivated successfully.
Nov 22 03:24:41 compute-0 podman[76514]: 2025-11-22 03:24:41.419678142 +0000 UTC m=+0.738781621 container remove d98e86240559fb37230f60e4c2989f9c3c4e91bdc30f099528882151c92a0e96 (image=quay.io/ceph/ceph:v18, name=eloquent_kepler, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:24:41 compute-0 systemd[1]: libpod-conmon-d98e86240559fb37230f60e4c2989f9c3c4e91bdc30f099528882151c92a0e96.scope: Deactivated successfully.
Nov 22 03:24:41 compute-0 podman[76567]: 2025-11-22 03:24:41.496833702 +0000 UTC m=+0.052526914 container create b061da9a7ff0b068c2d17cd2af28fc2ba381bf8ed4fce94949a6dff2e872060f (image=quay.io/ceph/ceph:v18, name=charming_perlman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 03:24:41 compute-0 systemd[1]: Started libpod-conmon-b061da9a7ff0b068c2d17cd2af28fc2ba381bf8ed4fce94949a6dff2e872060f.scope.
Nov 22 03:24:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7bed22596f534a98c15c4ea1c326db69df3978df00331d7b959ceef6e01490/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7bed22596f534a98c15c4ea1c326db69df3978df00331d7b959ceef6e01490/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e7bed22596f534a98c15c4ea1c326db69df3978df00331d7b959ceef6e01490/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:41 compute-0 podman[76567]: 2025-11-22 03:24:41.474043578 +0000 UTC m=+0.029736800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:41 compute-0 podman[76567]: 2025-11-22 03:24:41.584541835 +0000 UTC m=+0.140235027 container init b061da9a7ff0b068c2d17cd2af28fc2ba381bf8ed4fce94949a6dff2e872060f (image=quay.io/ceph/ceph:v18, name=charming_perlman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:24:41 compute-0 podman[76567]: 2025-11-22 03:24:41.590826748 +0000 UTC m=+0.146519910 container start b061da9a7ff0b068c2d17cd2af28fc2ba381bf8ed4fce94949a6dff2e872060f (image=quay.io/ceph/ceph:v18, name=charming_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:24:41 compute-0 podman[76567]: 2025-11-22 03:24:41.601926108 +0000 UTC m=+0.157619270 container attach b061da9a7ff0b068c2d17cd2af28fc2ba381bf8ed4fce94949a6dff2e872060f (image=quay.io/ceph/ceph:v18, name=charming_perlman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:24:42 compute-0 ceph-mgr[75294]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:24:42 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:42 compute-0 ceph-mon[75011]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:42 compute-0 ceph-mon[75011]: Set ssh ssh_identity_pub
Nov 22 03:24:42 compute-0 sshd-session[76609]: Accepted publickey for ceph-admin from 192.168.122.100 port 38378 ssh2: RSA SHA256:bZHzsxC2in/GWELjLpA7rZP25bRiryB+0z/4eCNUI/0
Nov 22 03:24:42 compute-0 systemd-logind[799]: New session 21 of user ceph-admin.
Nov 22 03:24:42 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Nov 22 03:24:42 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Nov 22 03:24:42 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Nov 22 03:24:42 compute-0 systemd[1]: Starting User Manager for UID 42477...
Nov 22 03:24:42 compute-0 systemd[76613]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:24:42 compute-0 sshd-session[76617]: Accepted publickey for ceph-admin from 192.168.122.100 port 38394 ssh2: RSA SHA256:bZHzsxC2in/GWELjLpA7rZP25bRiryB+0z/4eCNUI/0
Nov 22 03:24:42 compute-0 systemd-logind[799]: New session 23 of user ceph-admin.
Nov 22 03:24:42 compute-0 systemd[76613]: Queued start job for default target Main User Target.
Nov 22 03:24:42 compute-0 systemd[76613]: Created slice User Application Slice.
Nov 22 03:24:42 compute-0 systemd[76613]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 22 03:24:42 compute-0 systemd[76613]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 03:24:42 compute-0 systemd[76613]: Reached target Paths.
Nov 22 03:24:42 compute-0 systemd[76613]: Reached target Timers.
Nov 22 03:24:42 compute-0 systemd[76613]: Starting D-Bus User Message Bus Socket...
Nov 22 03:24:42 compute-0 systemd[76613]: Starting Create User's Volatile Files and Directories...
Nov 22 03:24:42 compute-0 systemd[76613]: Finished Create User's Volatile Files and Directories.
Nov 22 03:24:42 compute-0 systemd[76613]: Listening on D-Bus User Message Bus Socket.
Nov 22 03:24:42 compute-0 systemd[76613]: Reached target Sockets.
Nov 22 03:24:42 compute-0 systemd[76613]: Reached target Basic System.
Nov 22 03:24:42 compute-0 systemd[76613]: Reached target Main User Target.
Nov 22 03:24:42 compute-0 systemd[76613]: Startup finished in 157ms.
Nov 22 03:24:42 compute-0 systemd[1]: Started User Manager for UID 42477.
Nov 22 03:24:42 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Nov 22 03:24:42 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Nov 22 03:24:42 compute-0 sshd-session[76609]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:24:42 compute-0 sshd-session[76617]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:24:42 compute-0 sudo[76634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:42 compute-0 sudo[76634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:42 compute-0 sudo[76634]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:42 compute-0 sudo[76659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:24:42 compute-0 sudo[76659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:42 compute-0 sudo[76659]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:43 compute-0 sshd-session[76684]: Accepted publickey for ceph-admin from 192.168.122.100 port 38398 ssh2: RSA SHA256:bZHzsxC2in/GWELjLpA7rZP25bRiryB+0z/4eCNUI/0
Nov 22 03:24:43 compute-0 systemd-logind[799]: New session 24 of user ceph-admin.
Nov 22 03:24:43 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Nov 22 03:24:43 compute-0 sshd-session[76684]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:24:43 compute-0 sudo[76688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:43 compute-0 sudo[76688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:43 compute-0 sudo[76688]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:43 compute-0 sudo[76713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 22 03:24:43 compute-0 sudo[76713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:43 compute-0 ceph-mon[75011]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:43 compute-0 ceph-mon[75011]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:43 compute-0 sudo[76713]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:43 compute-0 sshd-session[76738]: Accepted publickey for ceph-admin from 192.168.122.100 port 38410 ssh2: RSA SHA256:bZHzsxC2in/GWELjLpA7rZP25bRiryB+0z/4eCNUI/0
Nov 22 03:24:43 compute-0 systemd-logind[799]: New session 25 of user ceph-admin.
Nov 22 03:24:43 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Nov 22 03:24:43 compute-0 sshd-session[76738]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:24:43 compute-0 sudo[76742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:43 compute-0 sudo[76742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:43 compute-0 sudo[76742]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:43 compute-0 sudo[76767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 22 03:24:43 compute-0 sudo[76767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:43 compute-0 sudo[76767]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:43 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Nov 22 03:24:43 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Nov 22 03:24:43 compute-0 sshd-session[76792]: Accepted publickey for ceph-admin from 192.168.122.100 port 38422 ssh2: RSA SHA256:bZHzsxC2in/GWELjLpA7rZP25bRiryB+0z/4eCNUI/0
Nov 22 03:24:43 compute-0 systemd-logind[799]: New session 26 of user ceph-admin.
Nov 22 03:24:43 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Nov 22 03:24:43 compute-0 sshd-session[76792]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:24:44 compute-0 ceph-mgr[75294]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:24:44 compute-0 sudo[76796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:44 compute-0 sudo[76796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:44 compute-0 sudo[76796]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:44 compute-0 sudo[76821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:24:44 compute-0 sudo[76821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:44 compute-0 sudo[76821]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:44 compute-0 ceph-mon[75011]: Deploying cephadm binary to compute-0
Nov 22 03:24:44 compute-0 sshd-session[76846]: Accepted publickey for ceph-admin from 192.168.122.100 port 38436 ssh2: RSA SHA256:bZHzsxC2in/GWELjLpA7rZP25bRiryB+0z/4eCNUI/0
Nov 22 03:24:44 compute-0 systemd-logind[799]: New session 27 of user ceph-admin.
Nov 22 03:24:44 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Nov 22 03:24:44 compute-0 sshd-session[76846]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:24:44 compute-0 sudo[76850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:44 compute-0 sudo[76850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:44 compute-0 sudo[76850]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:44 compute-0 sudo[76875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:24:44 compute-0 sudo[76875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:44 compute-0 sudo[76875]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:44 compute-0 sshd-session[76900]: Accepted publickey for ceph-admin from 192.168.122.100 port 38444 ssh2: RSA SHA256:bZHzsxC2in/GWELjLpA7rZP25bRiryB+0z/4eCNUI/0
Nov 22 03:24:44 compute-0 systemd-logind[799]: New session 28 of user ceph-admin.
Nov 22 03:24:45 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Nov 22 03:24:45 compute-0 sshd-session[76900]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:24:45 compute-0 sudo[76904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:45 compute-0 sudo[76904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:45 compute-0 sudo[76904]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:45 compute-0 sudo[76929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 22 03:24:45 compute-0 sudo[76929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:45 compute-0 sudo[76929]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:45 compute-0 sshd-session[76954]: Accepted publickey for ceph-admin from 192.168.122.100 port 38446 ssh2: RSA SHA256:bZHzsxC2in/GWELjLpA7rZP25bRiryB+0z/4eCNUI/0
Nov 22 03:24:45 compute-0 systemd-logind[799]: New session 29 of user ceph-admin.
Nov 22 03:24:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053046 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:24:45 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Nov 22 03:24:45 compute-0 sshd-session[76954]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:24:45 compute-0 sudo[76958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:45 compute-0 sudo[76958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:45 compute-0 sudo[76958]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:45 compute-0 sudo[76983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:24:45 compute-0 sudo[76983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:45 compute-0 sudo[76983]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:45 compute-0 sshd-session[77008]: Accepted publickey for ceph-admin from 192.168.122.100 port 38462 ssh2: RSA SHA256:bZHzsxC2in/GWELjLpA7rZP25bRiryB+0z/4eCNUI/0
Nov 22 03:24:45 compute-0 systemd-logind[799]: New session 30 of user ceph-admin.
Nov 22 03:24:45 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Nov 22 03:24:45 compute-0 sshd-session[77008]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:24:46 compute-0 ceph-mgr[75294]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:24:46 compute-0 sudo[77012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:46 compute-0 sudo[77012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:46 compute-0 sudo[77012]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:46 compute-0 sudo[77037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new
Nov 22 03:24:46 compute-0 sudo[77037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:46 compute-0 sudo[77037]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:46 compute-0 sshd-session[77062]: Accepted publickey for ceph-admin from 192.168.122.100 port 38474 ssh2: RSA SHA256:bZHzsxC2in/GWELjLpA7rZP25bRiryB+0z/4eCNUI/0
Nov 22 03:24:46 compute-0 systemd-logind[799]: New session 31 of user ceph-admin.
Nov 22 03:24:46 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Nov 22 03:24:46 compute-0 sshd-session[77062]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:24:47 compute-0 sshd-session[77089]: Accepted publickey for ceph-admin from 192.168.122.100 port 38490 ssh2: RSA SHA256:bZHzsxC2in/GWELjLpA7rZP25bRiryB+0z/4eCNUI/0
Nov 22 03:24:47 compute-0 systemd-logind[799]: New session 32 of user ceph-admin.
Nov 22 03:24:47 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Nov 22 03:24:47 compute-0 sshd-session[77089]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:24:47 compute-0 sudo[77093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:47 compute-0 sudo[77093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:47 compute-0 sudo[77093]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:47 compute-0 sudo[77118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d.new /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d
Nov 22 03:24:47 compute-0 sudo[77118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:47 compute-0 sudo[77118]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:47 compute-0 sshd-session[77143]: Accepted publickey for ceph-admin from 192.168.122.100 port 38500 ssh2: RSA SHA256:bZHzsxC2in/GWELjLpA7rZP25bRiryB+0z/4eCNUI/0
Nov 22 03:24:47 compute-0 systemd-logind[799]: New session 33 of user ceph-admin.
Nov 22 03:24:47 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Nov 22 03:24:47 compute-0 sshd-session[77143]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Nov 22 03:24:47 compute-0 sudo[77147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:47 compute-0 sudo[77147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:47 compute-0 sudo[77147]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:47 compute-0 sudo[77172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 22 03:24:47 compute-0 sudo[77172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:47 compute-0 sudo[77172]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 03:24:49 compute-0 ceph-mgr[75294]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:24:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:49 compute-0 ceph-mgr[75294]: [cephadm INFO root] Added host compute-0
Nov 22 03:24:49 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 22 03:24:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 03:24:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:24:49 compute-0 charming_perlman[76583]: Added host 'compute-0' with addr '192.168.122.100'
Nov 22 03:24:49 compute-0 systemd[1]: libpod-b061da9a7ff0b068c2d17cd2af28fc2ba381bf8ed4fce94949a6dff2e872060f.scope: Deactivated successfully.
Nov 22 03:24:49 compute-0 podman[76567]: 2025-11-22 03:24:49.357336666 +0000 UTC m=+7.913029848 container died b061da9a7ff0b068c2d17cd2af28fc2ba381bf8ed4fce94949a6dff2e872060f (image=quay.io/ceph/ceph:v18, name=charming_perlman, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:49 compute-0 sudo[77217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:49 compute-0 sudo[77217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e7bed22596f534a98c15c4ea1c326db69df3978df00331d7b959ceef6e01490-merged.mount: Deactivated successfully.
Nov 22 03:24:49 compute-0 sudo[77217]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:49 compute-0 podman[76567]: 2025-11-22 03:24:49.421955757 +0000 UTC m=+7.977648929 container remove b061da9a7ff0b068c2d17cd2af28fc2ba381bf8ed4fce94949a6dff2e872060f (image=quay.io/ceph/ceph:v18, name=charming_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:24:49 compute-0 systemd[1]: libpod-conmon-b061da9a7ff0b068c2d17cd2af28fc2ba381bf8ed4fce94949a6dff2e872060f.scope: Deactivated successfully.
Nov 22 03:24:49 compute-0 sudo[77254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:24:49 compute-0 sudo[77254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:49 compute-0 sudo[77254]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:49 compute-0 podman[77266]: 2025-11-22 03:24:49.493499465 +0000 UTC m=+0.043524816 container create 9b25b4650948560d853d4b5f507d6a50ad2d1da576f55456b75ec7a89627d8e7 (image=quay.io/ceph/ceph:v18, name=zen_kirch, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:24:49 compute-0 systemd[1]: Started libpod-conmon-9b25b4650948560d853d4b5f507d6a50ad2d1da576f55456b75ec7a89627d8e7.scope.
Nov 22 03:24:49 compute-0 sudo[77293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:49 compute-0 sudo[77293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:49 compute-0 sudo[77293]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:49 compute-0 podman[77266]: 2025-11-22 03:24:49.476843027 +0000 UTC m=+0.026868398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33b6b604163b0299d2d90d53a6098680de559d9f695293174211caacfb2e4a6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33b6b604163b0299d2d90d53a6098680de559d9f695293174211caacfb2e4a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33b6b604163b0299d2d90d53a6098680de559d9f695293174211caacfb2e4a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:49 compute-0 podman[77266]: 2025-11-22 03:24:49.594392143 +0000 UTC m=+0.144417574 container init 9b25b4650948560d853d4b5f507d6a50ad2d1da576f55456b75ec7a89627d8e7 (image=quay.io/ceph/ceph:v18, name=zen_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:24:49 compute-0 podman[77266]: 2025-11-22 03:24:49.607215712 +0000 UTC m=+0.157241063 container start 9b25b4650948560d853d4b5f507d6a50ad2d1da576f55456b75ec7a89627d8e7 (image=quay.io/ceph/ceph:v18, name=zen_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:49 compute-0 podman[77266]: 2025-11-22 03:24:49.611192966 +0000 UTC m=+0.161218417 container attach 9b25b4650948560d853d4b5f507d6a50ad2d1da576f55456b75ec7a89627d8e7 (image=quay.io/ceph/ceph:v18, name=zen_kirch, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:24:49 compute-0 sudo[77323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph:v18 --timeout 895 inspect-image
Nov 22 03:24:49 compute-0 sudo[77323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:49 compute-0 podman[77376]: 2025-11-22 03:24:49.898673477 +0000 UTC m=+0.040180651 container create f5e149c5ffc1cbfabef1cc77b4106c0b1c05e8d3956d9776f1f712cef25c90c3 (image=quay.io/ceph/ceph:v18, name=crazy_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:24:49 compute-0 systemd[1]: Started libpod-conmon-f5e149c5ffc1cbfabef1cc77b4106c0b1c05e8d3956d9776f1f712cef25c90c3.scope.
Nov 22 03:24:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:49 compute-0 podman[77376]: 2025-11-22 03:24:49.975006828 +0000 UTC m=+0.116513972 container init f5e149c5ffc1cbfabef1cc77b4106c0b1c05e8d3956d9776f1f712cef25c90c3 (image=quay.io/ceph/ceph:v18, name=crazy_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:24:49 compute-0 podman[77376]: 2025-11-22 03:24:49.881258345 +0000 UTC m=+0.022765479 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:49 compute-0 podman[77376]: 2025-11-22 03:24:49.982799498 +0000 UTC m=+0.124306652 container start f5e149c5ffc1cbfabef1cc77b4106c0b1c05e8d3956d9776f1f712cef25c90c3 (image=quay.io/ceph/ceph:v18, name=crazy_rosalind, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:24:49 compute-0 podman[77376]: 2025-11-22 03:24:49.986347445 +0000 UTC m=+0.127854619 container attach f5e149c5ffc1cbfabef1cc77b4106c0b1c05e8d3956d9776f1f712cef25c90c3 (image=quay.io/ceph/ceph:v18, name=crazy_rosalind, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:50 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:50 compute-0 ceph-mgr[75294]: [cephadm INFO root] Saving service mon spec with placement count:5
Nov 22 03:24:50 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Nov 22 03:24:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 03:24:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:50 compute-0 zen_kirch[77318]: Scheduled mon update...
Nov 22 03:24:50 compute-0 systemd[1]: libpod-9b25b4650948560d853d4b5f507d6a50ad2d1da576f55456b75ec7a89627d8e7.scope: Deactivated successfully.
Nov 22 03:24:50 compute-0 podman[77266]: 2025-11-22 03:24:50.154883499 +0000 UTC m=+0.704908860 container died 9b25b4650948560d853d4b5f507d6a50ad2d1da576f55456b75ec7a89627d8e7 (image=quay.io/ceph/ceph:v18, name=zen_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a33b6b604163b0299d2d90d53a6098680de559d9f695293174211caacfb2e4a6-merged.mount: Deactivated successfully.
Nov 22 03:24:50 compute-0 podman[77266]: 2025-11-22 03:24:50.200265788 +0000 UTC m=+0.750291149 container remove 9b25b4650948560d853d4b5f507d6a50ad2d1da576f55456b75ec7a89627d8e7 (image=quay.io/ceph/ceph:v18, name=zen_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:24:50 compute-0 systemd[1]: libpod-conmon-9b25b4650948560d853d4b5f507d6a50ad2d1da576f55456b75ec7a89627d8e7.scope: Deactivated successfully.
Nov 22 03:24:50 compute-0 podman[77430]: 2025-11-22 03:24:50.268659267 +0000 UTC m=+0.046173730 container create 5c8dd67cceb306d22a75159078b9e0c08b2a49100eff63a7d773455013b9961e (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:50 compute-0 crazy_rosalind[77411]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Nov 22 03:24:50 compute-0 systemd[1]: Started libpod-conmon-5c8dd67cceb306d22a75159078b9e0c08b2a49100eff63a7d773455013b9961e.scope.
Nov 22 03:24:50 compute-0 systemd[1]: libpod-f5e149c5ffc1cbfabef1cc77b4106c0b1c05e8d3956d9776f1f712cef25c90c3.scope: Deactivated successfully.
Nov 22 03:24:50 compute-0 podman[77376]: 2025-11-22 03:24:50.300493628 +0000 UTC m=+0.442000762 container died f5e149c5ffc1cbfabef1cc77b4106c0b1c05e8d3956d9776f1f712cef25c90c3 (image=quay.io/ceph/ceph:v18, name=crazy_rosalind, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:24:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbdb792e1b21ecbed8425f7e1ed46c35240be1733d6c36b787fd05229db8e52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbdb792e1b21ecbed8425f7e1ed46c35240be1733d6c36b787fd05229db8e52/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dbdb792e1b21ecbed8425f7e1ed46c35240be1733d6c36b787fd05229db8e52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:50 compute-0 ceph-mon[75011]: Added host compute-0
Nov 22 03:24:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:24:50 compute-0 ceph-mon[75011]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:50 compute-0 ceph-mon[75011]: Saving service mon spec with placement count:5
Nov 22 03:24:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:50 compute-0 podman[77430]: 2025-11-22 03:24:50.251480147 +0000 UTC m=+0.028994630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:50 compute-0 podman[77376]: 2025-11-22 03:24:50.355496999 +0000 UTC m=+0.497004133 container remove f5e149c5ffc1cbfabef1cc77b4106c0b1c05e8d3956d9776f1f712cef25c90c3 (image=quay.io/ceph/ceph:v18, name=crazy_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:24:50 compute-0 systemd[1]: libpod-conmon-f5e149c5ffc1cbfabef1cc77b4106c0b1c05e8d3956d9776f1f712cef25c90c3.scope: Deactivated successfully.
Nov 22 03:24:50 compute-0 podman[77430]: 2025-11-22 03:24:50.367694881 +0000 UTC m=+0.145209364 container init 5c8dd67cceb306d22a75159078b9e0c08b2a49100eff63a7d773455013b9961e (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:24:50 compute-0 podman[77430]: 2025-11-22 03:24:50.373372836 +0000 UTC m=+0.150887299 container start 5c8dd67cceb306d22a75159078b9e0c08b2a49100eff63a7d773455013b9961e (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:24:50 compute-0 podman[77430]: 2025-11-22 03:24:50.376865738 +0000 UTC m=+0.154380251 container attach 5c8dd67cceb306d22a75159078b9e0c08b2a49100eff63a7d773455013b9961e (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 03:24:50 compute-0 sudo[77323]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Nov 22 03:24:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-7700a41946be45f4333a20491de12a827ec729c8606a50612e686c1d08cb219f-merged.mount: Deactivated successfully.
Nov 22 03:24:50 compute-0 sudo[77465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:50 compute-0 sudo[77465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:50 compute-0 sudo[77465]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:24:50 compute-0 sudo[77490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:24:50 compute-0 sudo[77490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:50 compute-0 sudo[77490]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:50 compute-0 sudo[77515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:50 compute-0 sudo[77515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:50 compute-0 sudo[77515]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:50 compute-0 sudo[77540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 03:24:50 compute-0 sudo[77540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:50 compute-0 sudo[77540]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:24:50 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:50 compute-0 ceph-mgr[75294]: [cephadm INFO root] Saving service mgr spec with placement count:2
Nov 22 03:24:50 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Nov 22 03:24:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 03:24:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:51 compute-0 relaxed_pascal[77446]: Scheduled mgr update...
Nov 22 03:24:51 compute-0 sudo[77605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:51 compute-0 sudo[77605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:51 compute-0 sudo[77605]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:51 compute-0 systemd[1]: libpod-5c8dd67cceb306d22a75159078b9e0c08b2a49100eff63a7d773455013b9961e.scope: Deactivated successfully.
Nov 22 03:24:51 compute-0 podman[77430]: 2025-11-22 03:24:51.069845804 +0000 UTC m=+0.847360267 container died 5c8dd67cceb306d22a75159078b9e0c08b2a49100eff63a7d773455013b9961e (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 03:24:51 compute-0 sudo[77631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:24:51 compute-0 sudo[77631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:51 compute-0 sudo[77631]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dbdb792e1b21ecbed8425f7e1ed46c35240be1733d6c36b787fd05229db8e52-merged.mount: Deactivated successfully.
Nov 22 03:24:51 compute-0 podman[77430]: 2025-11-22 03:24:51.166616265 +0000 UTC m=+0.944130728 container remove 5c8dd67cceb306d22a75159078b9e0c08b2a49100eff63a7d773455013b9961e (image=quay.io/ceph/ceph:v18, name=relaxed_pascal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:51 compute-0 systemd[1]: libpod-conmon-5c8dd67cceb306d22a75159078b9e0c08b2a49100eff63a7d773455013b9961e.scope: Deactivated successfully.
Nov 22 03:24:51 compute-0 sudo[77666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:51 compute-0 sudo[77666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:51 compute-0 sudo[77666]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:51 compute-0 podman[77684]: 2025-11-22 03:24:51.250493028 +0000 UTC m=+0.050955732 container create ae32adf826dcf5e494b3e61557f78bca3592558ad4c48137ad2a85e867a5e61d (image=quay.io/ceph/ceph:v18, name=sad_kirch, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:24:51 compute-0 systemd[1]: Started libpod-conmon-ae32adf826dcf5e494b3e61557f78bca3592558ad4c48137ad2a85e867a5e61d.scope.
Nov 22 03:24:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:51 compute-0 sudo[77702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 03:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7597c04174599c125c9ab396ccbb7d004b998cc36252fe3e27add8e7c53bd2c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7597c04174599c125c9ab396ccbb7d004b998cc36252fe3e27add8e7c53bd2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7597c04174599c125c9ab396ccbb7d004b998cc36252fe3e27add8e7c53bd2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:51 compute-0 ceph-mgr[75294]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:24:51 compute-0 sudo[77702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:51 compute-0 podman[77684]: 2025-11-22 03:24:51.227959145 +0000 UTC m=+0.028421889 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:51 compute-0 podman[77684]: 2025-11-22 03:24:51.328073958 +0000 UTC m=+0.128536642 container init ae32adf826dcf5e494b3e61557f78bca3592558ad4c48137ad2a85e867a5e61d (image=quay.io/ceph/ceph:v18, name=sad_kirch, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 03:24:51 compute-0 podman[77684]: 2025-11-22 03:24:51.333389799 +0000 UTC m=+0.133852493 container start ae32adf826dcf5e494b3e61557f78bca3592558ad4c48137ad2a85e867a5e61d (image=quay.io/ceph/ceph:v18, name=sad_kirch, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:24:51 compute-0 podman[77684]: 2025-11-22 03:24:51.336386962 +0000 UTC m=+0.136849676 container attach ae32adf826dcf5e494b3e61557f78bca3592558ad4c48137ad2a85e867a5e61d (image=quay.io/ceph/ceph:v18, name=sad_kirch, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:24:51 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:51 compute-0 ceph-mon[75011]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:51 compute-0 ceph-mon[75011]: Saving service mgr spec with placement count:2
Nov 22 03:24:51 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:51 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:51 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:51 compute-0 ceph-mgr[75294]: [cephadm INFO root] Saving service crash spec with placement *
Nov 22 03:24:51 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Nov 22 03:24:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 22 03:24:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:51 compute-0 podman[77828]: 2025-11-22 03:24:51.893274718 +0000 UTC m=+0.076557005 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 03:24:51 compute-0 sad_kirch[77731]: Scheduled crash update...
Nov 22 03:24:51 compute-0 systemd[1]: libpod-ae32adf826dcf5e494b3e61557f78bca3592558ad4c48137ad2a85e867a5e61d.scope: Deactivated successfully.
Nov 22 03:24:51 compute-0 podman[77684]: 2025-11-22 03:24:51.912770899 +0000 UTC m=+0.713233603 container died ae32adf826dcf5e494b3e61557f78bca3592558ad4c48137ad2a85e867a5e61d (image=quay.io/ceph/ceph:v18, name=sad_kirch, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:24:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7597c04174599c125c9ab396ccbb7d004b998cc36252fe3e27add8e7c53bd2c-merged.mount: Deactivated successfully.
Nov 22 03:24:51 compute-0 podman[77684]: 2025-11-22 03:24:51.967685132 +0000 UTC m=+0.768147846 container remove ae32adf826dcf5e494b3e61557f78bca3592558ad4c48137ad2a85e867a5e61d (image=quay.io/ceph/ceph:v18, name=sad_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:51 compute-0 systemd[1]: libpod-conmon-ae32adf826dcf5e494b3e61557f78bca3592558ad4c48137ad2a85e867a5e61d.scope: Deactivated successfully.
Nov 22 03:24:52 compute-0 podman[77865]: 2025-11-22 03:24:52.040961759 +0000 UTC m=+0.045695404 container create ee9832f4e57ca4d9eb3a2178f42b5f534fdfeab882d4a9b3f03f9b1b3e431c6b (image=quay.io/ceph/ceph:v18, name=sharp_zhukovsky, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:52 compute-0 systemd[1]: Started libpod-conmon-ee9832f4e57ca4d9eb3a2178f42b5f534fdfeab882d4a9b3f03f9b1b3e431c6b.scope.
Nov 22 03:24:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed1355fa9dcab27b0386b052f7fb19a00fe4181060461bd9dee12b6f68b72faa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed1355fa9dcab27b0386b052f7fb19a00fe4181060461bd9dee12b6f68b72faa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed1355fa9dcab27b0386b052f7fb19a00fe4181060461bd9dee12b6f68b72faa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:52 compute-0 podman[77865]: 2025-11-22 03:24:52.017201874 +0000 UTC m=+0.021935529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:52 compute-0 podman[77865]: 2025-11-22 03:24:52.128271624 +0000 UTC m=+0.133005279 container init ee9832f4e57ca4d9eb3a2178f42b5f534fdfeab882d4a9b3f03f9b1b3e431c6b (image=quay.io/ceph/ceph:v18, name=sharp_zhukovsky, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:52 compute-0 podman[77865]: 2025-11-22 03:24:52.140268838 +0000 UTC m=+0.145002443 container start ee9832f4e57ca4d9eb3a2178f42b5f534fdfeab882d4a9b3f03f9b1b3e431c6b (image=quay.io/ceph/ceph:v18, name=sharp_zhukovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:52 compute-0 podman[77865]: 2025-11-22 03:24:52.146930201 +0000 UTC m=+0.151663886 container attach ee9832f4e57ca4d9eb3a2178f42b5f534fdfeab882d4a9b3f03f9b1b3e431c6b (image=quay.io/ceph/ceph:v18, name=sharp_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:24:52 compute-0 podman[77828]: 2025-11-22 03:24:52.244000329 +0000 UTC m=+0.427282606 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:24:52 compute-0 sudo[77702]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:24:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:52 compute-0 sudo[77919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:52 compute-0 sudo[77919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:52 compute-0 sudo[77919]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:52 compute-0 sudo[77963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:24:52 compute-0 sudo[77963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:52 compute-0 sudo[77963]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:52 compute-0 sudo[77988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:52 compute-0 sudo[77988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:52 compute-0 sudo[77988]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Nov 22 03:24:52 compute-0 sudo[78013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:24:52 compute-0 sudo[78013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2806699758' entity='client.admin' 
Nov 22 03:24:52 compute-0 systemd[1]: libpod-ee9832f4e57ca4d9eb3a2178f42b5f534fdfeab882d4a9b3f03f9b1b3e431c6b.scope: Deactivated successfully.
Nov 22 03:24:52 compute-0 podman[78040]: 2025-11-22 03:24:52.72805405 +0000 UTC m=+0.024128553 container died ee9832f4e57ca4d9eb3a2178f42b5f534fdfeab882d4a9b3f03f9b1b3e431c6b (image=quay.io/ceph/ceph:v18, name=sharp_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:24:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed1355fa9dcab27b0386b052f7fb19a00fe4181060461bd9dee12b6f68b72faa-merged.mount: Deactivated successfully.
Nov 22 03:24:52 compute-0 podman[78040]: 2025-11-22 03:24:52.774600592 +0000 UTC m=+0.070675075 container remove ee9832f4e57ca4d9eb3a2178f42b5f534fdfeab882d4a9b3f03f9b1b3e431c6b (image=quay.io/ceph/ceph:v18, name=sharp_zhukovsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:24:52 compute-0 systemd[1]: libpod-conmon-ee9832f4e57ca4d9eb3a2178f42b5f534fdfeab882d4a9b3f03f9b1b3e431c6b.scope: Deactivated successfully.
Nov 22 03:24:52 compute-0 podman[78055]: 2025-11-22 03:24:52.833841126 +0000 UTC m=+0.035015289 container create 5b36d1840172a82d33c079aa2af8286a248bb2ae4db5ed2231ed6f694552677a (image=quay.io/ceph/ceph:v18, name=intelligent_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:24:52 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78078 (sysctl)
Nov 22 03:24:52 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 22 03:24:52 compute-0 systemd[1]: Started libpod-conmon-5b36d1840172a82d33c079aa2af8286a248bb2ae4db5ed2231ed6f694552677a.scope.
Nov 22 03:24:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:52 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 22 03:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e51a5358fb42a87f635a82688074c76ac470a854121238b5eab48c68d3ccb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e51a5358fb42a87f635a82688074c76ac470a854121238b5eab48c68d3ccb3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15e51a5358fb42a87f635a82688074c76ac470a854121238b5eab48c68d3ccb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:52 compute-0 podman[78055]: 2025-11-22 03:24:52.887597407 +0000 UTC m=+0.088771619 container init 5b36d1840172a82d33c079aa2af8286a248bb2ae4db5ed2231ed6f694552677a (image=quay.io/ceph/ceph:v18, name=intelligent_tu, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:24:52 compute-0 ceph-mon[75011]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:52 compute-0 ceph-mon[75011]: Saving service crash spec with placement *
Nov 22 03:24:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2806699758' entity='client.admin' 
Nov 22 03:24:52 compute-0 podman[78055]: 2025-11-22 03:24:52.89649627 +0000 UTC m=+0.097670433 container start 5b36d1840172a82d33c079aa2af8286a248bb2ae4db5ed2231ed6f694552677a (image=quay.io/ceph/ceph:v18, name=intelligent_tu, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:52 compute-0 podman[78055]: 2025-11-22 03:24:52.902107483 +0000 UTC m=+0.103281666 container attach 5b36d1840172a82d33c079aa2af8286a248bb2ae4db5ed2231ed6f694552677a (image=quay.io/ceph/ceph:v18, name=intelligent_tu, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:24:52 compute-0 podman[78055]: 2025-11-22 03:24:52.818669957 +0000 UTC m=+0.019844150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:53 compute-0 sudo[78013]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:53 compute-0 sudo[78108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:53 compute-0 sudo[78108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:53 compute-0 sudo[78108]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:53 compute-0 ceph-mgr[75294]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:24:53 compute-0 sudo[78152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:24:53 compute-0 sudo[78152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:53 compute-0 sudo[78152]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:53 compute-0 sudo[78177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:53 compute-0 sudo[78177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:53 compute-0 sudo[78177]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:53 compute-0 sudo[78202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 22 03:24:53 compute-0 sudo[78202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:53 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Nov 22 03:24:53 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:53 compute-0 systemd[1]: libpod-5b36d1840172a82d33c079aa2af8286a248bb2ae4db5ed2231ed6f694552677a.scope: Deactivated successfully.
Nov 22 03:24:53 compute-0 podman[78055]: 2025-11-22 03:24:53.508940798 +0000 UTC m=+0.710114961 container died 5b36d1840172a82d33c079aa2af8286a248bb2ae4db5ed2231ed6f694552677a (image=quay.io/ceph/ceph:v18, name=intelligent_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 22 03:24:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-15e51a5358fb42a87f635a82688074c76ac470a854121238b5eab48c68d3ccb3-merged.mount: Deactivated successfully.
Nov 22 03:24:53 compute-0 podman[78055]: 2025-11-22 03:24:53.562462017 +0000 UTC m=+0.763636180 container remove 5b36d1840172a82d33c079aa2af8286a248bb2ae4db5ed2231ed6f694552677a (image=quay.io/ceph/ceph:v18, name=intelligent_tu, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:24:53 compute-0 systemd[1]: libpod-conmon-5b36d1840172a82d33c079aa2af8286a248bb2ae4db5ed2231ed6f694552677a.scope: Deactivated successfully.
Nov 22 03:24:53 compute-0 podman[78239]: 2025-11-22 03:24:53.624592391 +0000 UTC m=+0.040564055 container create 6a70e394162a2a54558605f03a8e10ba211154c6c8833c20360885f7d79423b1 (image=quay.io/ceph/ceph:v18, name=peaceful_hopper, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:24:53 compute-0 systemd[1]: Started libpod-conmon-6a70e394162a2a54558605f03a8e10ba211154c6c8833c20360885f7d79423b1.scope.
Nov 22 03:24:53 compute-0 sudo[78202]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6984be2bd61ddbe8c55a0a4285c73a9a662b074ead46cb699bc320de77b1c01c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6984be2bd61ddbe8c55a0a4285c73a9a662b074ead46cb699bc320de77b1c01c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6984be2bd61ddbe8c55a0a4285c73a9a662b074ead46cb699bc320de77b1c01c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:53 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:53 compute-0 podman[78239]: 2025-11-22 03:24:53.690066898 +0000 UTC m=+0.106038592 container init 6a70e394162a2a54558605f03a8e10ba211154c6c8833c20360885f7d79423b1 (image=quay.io/ceph/ceph:v18, name=peaceful_hopper, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:24:53 compute-0 podman[78239]: 2025-11-22 03:24:53.696056889 +0000 UTC m=+0.112028552 container start 6a70e394162a2a54558605f03a8e10ba211154c6c8833c20360885f7d79423b1 (image=quay.io/ceph/ceph:v18, name=peaceful_hopper, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:24:53 compute-0 podman[78239]: 2025-11-22 03:24:53.699202781 +0000 UTC m=+0.115174455 container attach 6a70e394162a2a54558605f03a8e10ba211154c6c8833c20360885f7d79423b1 (image=quay.io/ceph/ceph:v18, name=peaceful_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:24:53 compute-0 podman[78239]: 2025-11-22 03:24:53.60751316 +0000 UTC m=+0.023484844 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:53 compute-0 sudo[78274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:53 compute-0 sudo[78274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:53 compute-0 sudo[78274]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:53 compute-0 sudo[78301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:24:53 compute-0 sudo[78301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:53 compute-0 sudo[78301]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:53 compute-0 sudo[78326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:53 compute-0 sudo[78326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:53 compute-0 sudo[78326]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:53 compute-0 sudo[78351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- inventory --format=json-pretty --filter-for-batch
Nov 22 03:24:53 compute-0 sudo[78351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
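
The sudo entry above is cephadm's standard discovery step: the orchestrator ships a content-addressed copy of the cephadm binary into /var/lib/ceph/<fsid>/ and runs `ceph-volume inventory --format=json-pretty --filter-for-batch` through it to find candidate OSD devices (the JSON result appears further down). A minimal Python sketch of driving the same call is below; the binary path, image digest, fsid, and 895-second timeout are copied from the logged command, and the whole thing is illustrative rather than cephadm's internal code.

    import json
    import subprocess

    # Values copied from the logged command line; illustrative only.
    FSID = "7adcc38b-6484-5de6-b879-33a0309153df"
    CEPHADM = f"/var/lib/ceph/{FSID}/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d"
    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"

    def ceph_volume_inventory():
        # "--" separates cephadm's own options from the ceph-volume arguments.
        out = subprocess.run(
            ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE,
             "--timeout", "895", "ceph-volume", "--fsid", FSID, "--",
             "inventory", "--format=json-pretty", "--filter-for-batch"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout)
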
Nov 22 03:24:54 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 03:24:54 compute-0 podman[78435]: 2025-11-22 03:24:54.185263746 +0000 UTC m=+0.019849415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:24:54 compute-0 podman[78435]: 2025-11-22 03:24:54.336527304 +0000 UTC m=+0.171112952 container create 4c6ba88d7d26d2c238bd88758d8d2a05c200a1049edaa9d6c45e403acf0c47ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:24:54 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:54 compute-0 ceph-mgr[75294]: [cephadm INFO root] Added label _admin to host compute-0
Nov 22 03:24:54 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Nov 22 03:24:54 compute-0 peaceful_hopper[78271]: Added label _admin to host compute-0
Nov 22 03:24:54 compute-0 systemd[1]: libpod-6a70e394162a2a54558605f03a8e10ba211154c6c8833c20360885f7d79423b1.scope: Deactivated successfully.
Nov 22 03:24:54 compute-0 podman[78239]: 2025-11-22 03:24:54.406312617 +0000 UTC m=+0.822284301 container died 6a70e394162a2a54558605f03a8e10ba211154c6c8833c20360885f7d79423b1 (image=quay.io/ceph/ceph:v18, name=peaceful_hopper, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:24:54 compute-0 systemd[1]: Started libpod-conmon-4c6ba88d7d26d2c238bd88758d8d2a05c200a1049edaa9d6c45e403acf0c47ba.scope.
Nov 22 03:24:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:54 compute-0 ceph-mon[75011]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:54 compute-0 ceph-mon[75011]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:24:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
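
The audit trail above records the two orchestrator commands driving admin-host setup: `orch host label add compute-0 _admin`, and `orch client-keyring set client.admin label:_admin`, which tells cephadm to keep the client.admin keyring present on every host carrying the `_admin` label. A hedged sketch issuing the same pair through the ceph CLI (command words taken from the cmd= fields above; it assumes a reachable cluster and a valid admin keyring):

    import subprocess

    def ceph(*args):
        # Thin wrapper over the ceph CLI; assumes /etc/ceph is configured.
        subprocess.run(["ceph", *args], check=True)

    ceph("orch", "host", "label", "add", "compute-0", "_admin")
    # Distribute client.admin's keyring to every host labelled _admin.
    ceph("orch", "client-keyring", "set", "client.admin", "label:_admin")
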
Nov 22 03:24:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6984be2bd61ddbe8c55a0a4285c73a9a662b074ead46cb699bc320de77b1c01c-merged.mount: Deactivated successfully.
Nov 22 03:24:54 compute-0 podman[78435]: 2025-11-22 03:24:54.83630047 +0000 UTC m=+0.670886169 container init 4c6ba88d7d26d2c238bd88758d8d2a05c200a1049edaa9d6c45e403acf0c47ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:54 compute-0 podman[78435]: 2025-11-22 03:24:54.84378651 +0000 UTC m=+0.678372119 container start 4c6ba88d7d26d2c238bd88758d8d2a05c200a1049edaa9d6c45e403acf0c47ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 03:24:54 compute-0 flamboyant_swanson[78465]: 167 167
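
The one-line output `167 167` from this short-lived container is the uid/gid pair of the ceph user inside the quay.io/ceph/ceph:v18 image; cephadm records it so the files it writes on the host for the daemons can be chowned to match. A sketch of the same probe (this mirrors what the log shows, not necessarily cephadm's exact invocation):

    import subprocess

    # Ask the image which uid/gid owns /var/lib/ceph (expected: "167 167").
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/ceph/ceph:v18", "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    uid, gid = (int(x) for x in out.stdout.split())
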
Nov 22 03:24:54 compute-0 systemd[1]: libpod-4c6ba88d7d26d2c238bd88758d8d2a05c200a1049edaa9d6c45e403acf0c47ba.scope: Deactivated successfully.
Nov 22 03:24:54 compute-0 podman[78239]: 2025-11-22 03:24:54.897456468 +0000 UTC m=+1.313428132 container remove 6a70e394162a2a54558605f03a8e10ba211154c6c8833c20360885f7d79423b1 (image=quay.io/ceph/ceph:v18, name=peaceful_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:24:55 compute-0 podman[78435]: 2025-11-22 03:24:55.004056761 +0000 UTC m=+0.838642410 container attach 4c6ba88d7d26d2c238bd88758d8d2a05c200a1049edaa9d6c45e403acf0c47ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:55 compute-0 podman[78435]: 2025-11-22 03:24:55.005267658 +0000 UTC m=+0.839853297 container died 4c6ba88d7d26d2c238bd88758d8d2a05c200a1049edaa9d6c45e403acf0c47ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:24:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-735731e76c3363347048b4be0e5842ae570cbed2cb6c68e34cdd74315aa0ecc6-merged.mount: Deactivated successfully.
Nov 22 03:24:55 compute-0 ceph-mgr[75294]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Nov 22 03:24:55 compute-0 podman[78435]: 2025-11-22 03:24:55.363880854 +0000 UTC m=+1.198466513 container remove 4c6ba88d7d26d2c238bd88758d8d2a05c200a1049edaa9d6c45e403acf0c47ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swanson, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:24:55 compute-0 systemd[1]: libpod-conmon-4c6ba88d7d26d2c238bd88758d8d2a05c200a1049edaa9d6c45e403acf0c47ba.scope: Deactivated successfully.
Nov 22 03:24:55 compute-0 podman[78482]: 2025-11-22 03:24:55.417797802 +0000 UTC m=+0.490250813 container create a30ac7cdaf274db8bcb2d5d411dca96eae28fc54c2cf5e3a199338387499c46a (image=quay.io/ceph/ceph:v18, name=vigorous_bouman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:24:55 compute-0 systemd[1]: Started libpod-conmon-a30ac7cdaf274db8bcb2d5d411dca96eae28fc54c2cf5e3a199338387499c46a.scope.
Nov 22 03:24:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:24:55 compute-0 systemd[1]: libpod-conmon-6a70e394162a2a54558605f03a8e10ba211154c6c8833c20360885f7d79423b1.scope: Deactivated successfully.
Nov 22 03:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29e3d9ea8537ba8aecfd5951c0024c907d841148a644d37907be509b267e00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29e3d9ea8537ba8aecfd5951c0024c907d841148a644d37907be509b267e00/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf29e3d9ea8537ba8aecfd5951c0024c907d841148a644d37907be509b267e00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:55 compute-0 podman[78482]: 2025-11-22 03:24:55.384583146 +0000 UTC m=+0.457036217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:55 compute-0 podman[78482]: 2025-11-22 03:24:55.491603376 +0000 UTC m=+0.564056447 container init a30ac7cdaf274db8bcb2d5d411dca96eae28fc54c2cf5e3a199338387499c46a (image=quay.io/ceph/ceph:v18, name=vigorous_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:24:55 compute-0 podman[78482]: 2025-11-22 03:24:55.502047728 +0000 UTC m=+0.574500739 container start a30ac7cdaf274db8bcb2d5d411dca96eae28fc54c2cf5e3a199338387499c46a (image=quay.io/ceph/ceph:v18, name=vigorous_bouman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:24:55 compute-0 podman[78482]: 2025-11-22 03:24:55.507008039 +0000 UTC m=+0.579461060 container attach a30ac7cdaf274db8bcb2d5d411dca96eae28fc54c2cf5e3a199338387499c46a (image=quay.io/ceph/ceph:v18, name=vigorous_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:24:55 compute-0 ceph-mon[75011]: Added label _admin to host compute-0
Nov 22 03:24:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Nov 22 03:24:56 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/995357723' entity='client.admin' 
Nov 22 03:24:56 compute-0 systemd[1]: libpod-a30ac7cdaf274db8bcb2d5d411dca96eae28fc54c2cf5e3a199338387499c46a.scope: Deactivated successfully.
Nov 22 03:24:56 compute-0 podman[78482]: 2025-11-22 03:24:56.128950786 +0000 UTC m=+1.201403797 container died a30ac7cdaf274db8bcb2d5d411dca96eae28fc54c2cf5e3a199338387499c46a (image=quay.io/ceph/ceph:v18, name=vigorous_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf29e3d9ea8537ba8aecfd5951c0024c907d841148a644d37907be509b267e00-merged.mount: Deactivated successfully.
Nov 22 03:24:56 compute-0 podman[78482]: 2025-11-22 03:24:56.180243384 +0000 UTC m=+1.252696365 container remove a30ac7cdaf274db8bcb2d5d411dca96eae28fc54c2cf5e3a199338387499c46a (image=quay.io/ceph/ceph:v18, name=vigorous_bouman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Nov 22 03:24:56 compute-0 systemd[1]: libpod-conmon-a30ac7cdaf274db8bcb2d5d411dca96eae28fc54c2cf5e3a199338387499c46a.scope: Deactivated successfully.
Nov 22 03:24:56 compute-0 podman[78539]: 2025-11-22 03:24:56.282988022 +0000 UTC m=+0.067983944 container create 2e993e487c24011cb6e402c592468ab096f5c4ccf04bce4f418bae2fef8fbeda (image=quay.io/ceph/ceph:v18, name=trusting_hawking, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:24:56 compute-0 systemd[1]: Started libpod-conmon-2e993e487c24011cb6e402c592468ab096f5c4ccf04bce4f418bae2fef8fbeda.scope.
Nov 22 03:24:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e76e4ea5d58ee132b492306314bac754e25a2357996c2e83bbb8086deb7939/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e76e4ea5d58ee132b492306314bac754e25a2357996c2e83bbb8086deb7939/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12e76e4ea5d58ee132b492306314bac754e25a2357996c2e83bbb8086deb7939/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:56 compute-0 podman[78539]: 2025-11-22 03:24:56.259994373 +0000 UTC m=+0.044990325 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:56 compute-0 podman[78539]: 2025-11-22 03:24:56.354646324 +0000 UTC m=+0.139642296 container init 2e993e487c24011cb6e402c592468ab096f5c4ccf04bce4f418bae2fef8fbeda (image=quay.io/ceph/ceph:v18, name=trusting_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:56 compute-0 podman[78539]: 2025-11-22 03:24:56.364599133 +0000 UTC m=+0.149595085 container start 2e993e487c24011cb6e402c592468ab096f5c4ccf04bce4f418bae2fef8fbeda (image=quay.io/ceph/ceph:v18, name=trusting_hawking, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:24:56 compute-0 podman[78539]: 2025-11-22 03:24:56.369062455 +0000 UTC m=+0.154058407 container attach 2e993e487c24011cb6e402c592468ab096f5c4ccf04bce4f418bae2fef8fbeda (image=quay.io/ceph/ceph:v18, name=trusting_hawking, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:24:56 compute-0 sshd-session[78551]: Invalid user  from 64.62.156.23 port 63253
Nov 22 03:24:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Nov 22 03:24:57 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/354891519' entity='client.admin' 
Nov 22 03:24:57 compute-0 trusting_hawking[78557]: set mgr/dashboard/cluster/status
Nov 22 03:24:57 compute-0 systemd[1]: libpod-2e993e487c24011cb6e402c592468ab096f5c4ccf04bce4f418bae2fef8fbeda.scope: Deactivated successfully.
Nov 22 03:24:57 compute-0 podman[78539]: 2025-11-22 03:24:57.033350484 +0000 UTC m=+0.818346416 container died 2e993e487c24011cb6e402c592468ab096f5c4ccf04bce4f418bae2fef8fbeda (image=quay.io/ceph/ceph:v18, name=trusting_hawking, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:24:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-12e76e4ea5d58ee132b492306314bac754e25a2357996c2e83bbb8086deb7939-merged.mount: Deactivated successfully.
Nov 22 03:24:57 compute-0 podman[78539]: 2025-11-22 03:24:57.076714323 +0000 UTC m=+0.861710235 container remove 2e993e487c24011cb6e402c592468ab096f5c4ccf04bce4f418bae2fef8fbeda (image=quay.io/ceph/ceph:v18, name=trusting_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:24:57 compute-0 systemd[1]: libpod-conmon-2e993e487c24011cb6e402c592468ab096f5c4ccf04bce4f418bae2fef8fbeda.scope: Deactivated successfully.
Nov 22 03:24:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/995357723' entity='client.admin' 
Nov 22 03:24:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/354891519' entity='client.admin' 
Nov 22 03:24:57 compute-0 sudo[73989]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:57 compute-0 podman[78601]: 2025-11-22 03:24:57.307332959 +0000 UTC m=+0.078929532 container create acff221a1c63fda4ad3a17602b0829d758c07e0a0284fe57b4fd1f4fe24720f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:24:57 compute-0 ceph-mgr[75294]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Nov 22 03:24:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:24:57 compute-0 ceph-mon[75011]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
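
With no OSDs deployed yet, the monitor raises TOO_FEW_OSDS because the OSD count (0) is below osd_pool_default_size (1 on this single-node test deployment); the check clears once the first OSD reports in. A minimal polling sketch using the JSON health output (illustrative; a real deployment script would add a deadline):

    import json
    import subprocess
    import time

    def health_checks():
        out = subprocess.run(
            ["ceph", "health", "detail", "--format", "json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout).get("checks", {})

    # Block until the TOO_FEW_OSDS health check disappears.
    while "TOO_FEW_OSDS" in health_checks():
        time.sleep(10)
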
Nov 22 03:24:57 compute-0 systemd[1]: Started libpod-conmon-acff221a1c63fda4ad3a17602b0829d758c07e0a0284fe57b4fd1f4fe24720f1.scope.
Nov 22 03:24:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/011409c674d12d87bf3641ca537b255bc2619fb625337a81e183b7e129caeeb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/011409c674d12d87bf3641ca537b255bc2619fb625337a81e183b7e129caeeb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/011409c674d12d87bf3641ca537b255bc2619fb625337a81e183b7e129caeeb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/011409c674d12d87bf3641ca537b255bc2619fb625337a81e183b7e129caeeb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:57 compute-0 podman[78601]: 2025-11-22 03:24:57.288062558 +0000 UTC m=+0.059659181 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:24:57 compute-0 podman[78601]: 2025-11-22 03:24:57.39009595 +0000 UTC m=+0.161692523 container init acff221a1c63fda4ad3a17602b0829d758c07e0a0284fe57b4fd1f4fe24720f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:24:57 compute-0 podman[78601]: 2025-11-22 03:24:57.396932361 +0000 UTC m=+0.168528934 container start acff221a1c63fda4ad3a17602b0829d758c07e0a0284fe57b4fd1f4fe24720f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:24:57 compute-0 sudo[78642]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhdeesjaljobxllhkzpdwiepnofwrtlz ; /usr/bin/python3'
Nov 22 03:24:57 compute-0 podman[78601]: 2025-11-22 03:24:57.400510925 +0000 UTC m=+0.172107518 container attach acff221a1c63fda4ad3a17602b0829d758c07e0a0284fe57b4fd1f4fe24720f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:24:57 compute-0 sudo[78642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:24:57 compute-0 python3[78646]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
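
This Ansible task turns off mgr/cephadm/use_repo_digest, so cephadm keeps resolving the image by its v18 tag instead of rewriting it to a repo digest (the deployment still pins some invocations to the sha256 digest explicitly). Stripped of the podman wrapper the playbook uses, the equivalent direct call is simply:

    import subprocess

    # Same setting, issued via the ceph CLI instead of a podman-wrapped one.
    subprocess.run(
        ["ceph", "config", "set", "mgr", "mgr/cephadm/use_repo_digest", "false"],
        check=True,
    )
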
Nov 22 03:24:57 compute-0 podman[78647]: 2025-11-22 03:24:57.643453728 +0000 UTC m=+0.085082684 container create 8059a48bf43858b90fde85f05d77d75924b4d608662ae4a2e9c063f8006b762a (image=quay.io/ceph/ceph:v18, name=interesting_ride, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:24:57 compute-0 podman[78647]: 2025-11-22 03:24:57.577222584 +0000 UTC m=+0.018851570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:24:57 compute-0 systemd[1]: Started libpod-conmon-8059a48bf43858b90fde85f05d77d75924b4d608662ae4a2e9c063f8006b762a.scope.
Nov 22 03:24:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfc6d5f28b55475639937382fe9f3dc8d7d744aaf5b9e9aacc118279e899c43/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9cfc6d5f28b55475639937382fe9f3dc8d7d744aaf5b9e9aacc118279e899c43/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:24:57 compute-0 podman[78647]: 2025-11-22 03:24:57.996184257 +0000 UTC m=+0.437813223 container init 8059a48bf43858b90fde85f05d77d75924b4d608662ae4a2e9c063f8006b762a (image=quay.io/ceph/ceph:v18, name=interesting_ride, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:24:58 compute-0 podman[78647]: 2025-11-22 03:24:58.002768541 +0000 UTC m=+0.444397497 container start 8059a48bf43858b90fde85f05d77d75924b4d608662ae4a2e9c063f8006b762a (image=quay.io/ceph/ceph:v18, name=interesting_ride, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:24:58 compute-0 podman[78647]: 2025-11-22 03:24:58.113540774 +0000 UTC m=+0.555169740 container attach 8059a48bf43858b90fde85f05d77d75924b4d608662ae4a2e9c063f8006b762a (image=quay.io/ceph/ceph:v18, name=interesting_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:24:58 compute-0 ceph-mon[75011]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Nov 22 03:24:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Nov 22 03:24:58 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/683284965' entity='client.admin' 
Nov 22 03:24:58 compute-0 systemd[1]: libpod-8059a48bf43858b90fde85f05d77d75924b4d608662ae4a2e9c063f8006b762a.scope: Deactivated successfully.
Nov 22 03:24:58 compute-0 podman[78647]: 2025-11-22 03:24:58.653124421 +0000 UTC m=+1.094753377 container died 8059a48bf43858b90fde85f05d77d75924b4d608662ae4a2e9c063f8006b762a (image=quay.io/ceph/ceph:v18, name=interesting_ride, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:24:58 compute-0 elastic_joliot[78617]: [
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:     {
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:         "available": false,
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:         "ceph_device": false,
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:         "lsm_data": {},
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:         "lvs": [],
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:         "path": "/dev/sr0",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:         "rejected_reasons": [
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "Insufficient space (<5GB)",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "Has a FileSystem"
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:         ],
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:         "sys_api": {
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "actuators": null,
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "device_nodes": "sr0",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "devname": "sr0",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "human_readable_size": "482.00 KB",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "id_bus": "ata",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "model": "QEMU DVD-ROM",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "nr_requests": "2",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "parent": "/dev/sr0",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "partitions": {},
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "path": "/dev/sr0",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "removable": "1",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "rev": "2.5+",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "ro": "0",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "rotational": "1",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "sas_address": "",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "sas_device_handle": "",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "scheduler_mode": "mq-deadline",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "sectors": 0,
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "sectorsize": "2048",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "size": 493568.0,
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "support_discard": "2048",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "type": "disk",
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:             "vendor": "QEMU"
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:         }
Nov 22 03:24:58 compute-0 elastic_joliot[78617]:     }
Nov 22 03:24:58 compute-0 elastic_joliot[78617]: ]
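
The inventory JSON above lists the host's only block device, the QEMU virtual DVD drive, and rejects it for OSD use ("Insufficient space (<5GB)", "Has a FileSystem"); nothing on this node is usable for OSDs yet, which is consistent with the TOO_FEW_OSDS warning. A small sketch of extracting usable device paths from such output:

    import json

    def usable_paths(inventory_json: str) -> list[str]:
        # Keep only devices ceph-volume marks as available for OSDs.
        return [d["path"] for d in json.loads(inventory_json) if d["available"]]

    # For the output above this returns [], since /dev/sr0 was rejected.
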
Nov 22 03:24:58 compute-0 systemd[1]: libpod-acff221a1c63fda4ad3a17602b0829d758c07e0a0284fe57b4fd1f4fe24720f1.scope: Deactivated successfully.
Nov 22 03:24:58 compute-0 systemd[1]: libpod-acff221a1c63fda4ad3a17602b0829d758c07e0a0284fe57b4fd1f4fe24720f1.scope: Consumed 1.336s CPU time.
Nov 22 03:24:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9cfc6d5f28b55475639937382fe9f3dc8d7d744aaf5b9e9aacc118279e899c43-merged.mount: Deactivated successfully.
Nov 22 03:24:58 compute-0 podman[78601]: 2025-11-22 03:24:58.745699452 +0000 UTC m=+1.517296025 container died acff221a1c63fda4ad3a17602b0829d758c07e0a0284fe57b4fd1f4fe24720f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:24:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-011409c674d12d87bf3641ca537b255bc2619fb625337a81e183b7e129caeeb7-merged.mount: Deactivated successfully.
Nov 22 03:24:59 compute-0 podman[78601]: 2025-11-22 03:24:59.106804033 +0000 UTC m=+1.878400606 container remove acff221a1c63fda4ad3a17602b0829d758c07e0a0284fe57b4fd1f4fe24720f1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_joliot, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:24:59 compute-0 systemd[1]: libpod-conmon-acff221a1c63fda4ad3a17602b0829d758c07e0a0284fe57b4fd1f4fe24720f1.scope: Deactivated successfully.
Nov 22 03:24:59 compute-0 sudo[78351]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:24:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:24:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:59 compute-0 podman[78647]: 2025-11-22 03:24:59.267208489 +0000 UTC m=+1.708837485 container remove 8059a48bf43858b90fde85f05d77d75924b4d608662ae4a2e9c063f8006b762a (image=quay.io/ceph/ceph:v18, name=interesting_ride, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:24:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:24:59 compute-0 systemd[1]: libpod-conmon-8059a48bf43858b90fde85f05d77d75924b4d608662ae4a2e9c063f8006b762a.scope: Deactivated successfully.
Nov 22 03:24:59 compute-0 ceph-mon[75011]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:24:59 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/683284965' entity='client.admin' 
Nov 22 03:24:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:59 compute-0 sudo[78642]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:24:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:24:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:24:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 03:24:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:24:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:24:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:24:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:24:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:24:59 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Nov 22 03:24:59 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
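
The three mon commands just above are how cephadm refreshes client config on _admin hosts: `config generate-minimal-conf` produces a stripped ceph.conf (essentially the fsid and mon addresses), `auth get client.admin` fetches the keyring, and the serve loop then pushes both files to the host, starting with /etc/ceph/ceph.conf. A sketch of the first step:

    import subprocess

    # Emits a minimal ceph.conf on stdout; cephadm distributes this text
    # to hosts carrying the _admin label.
    minimal_conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(minimal_conf)
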
Nov 22 03:24:59 compute-0 sudo[80256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:59 compute-0 sudo[80256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:59 compute-0 sudo[80256]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:59 compute-0 sudo[80281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 22 03:24:59 compute-0 sudo[80281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:59 compute-0 sudo[80281]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:59 compute-0 sudo[80306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:59 compute-0 sudo[80306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:59 compute-0 sudo[80306]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:59 compute-0 sudo[80355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/etc/ceph
Nov 22 03:24:59 compute-0 sudo[80355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:59 compute-0 sudo[80355]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:59 compute-0 sudo[80408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:59 compute-0 sudo[80408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:59 compute-0 sudo[80408]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:59 compute-0 sudo[80456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/etc/ceph/ceph.conf.new
Nov 22 03:24:59 compute-0 sudo[80456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:59 compute-0 sudo[80456]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:59 compute-0 sudo[80481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:24:59 compute-0 sudo[80481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:59 compute-0 sudo[80481]: pam_unix(sudo:session): session closed for user root
Nov 22 03:24:59 compute-0 sudo[80506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:24:59 compute-0 sudo[80506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:24:59 compute-0 sudo[80506]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 sudo[80554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:00 compute-0 sudo[80554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80554]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 sudo[80603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/etc/ceph/ceph.conf.new
Nov 22 03:25:00 compute-0 sudo[80603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80603]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 sudo[80652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izbsolrhjovboqrgyzcycmcvlxwgjlbv ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763781899.4386582-36317-229099339739445/async_wrapper.py j57353704226 30 /home/zuul/.ansible/tmp/ansible-tmp-1763781899.4386582-36317-229099339739445/AnsiballZ_command.py _'
Nov 22 03:25:00 compute-0 sudo[80652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:00 compute-0 sudo[80679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:00 compute-0 sudo[80679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80679]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 ansible-async_wrapper.py[80658]: Invoked with j57353704226 30 /home/zuul/.ansible/tmp/ansible-tmp-1763781899.4386582-36317-229099339739445/AnsiballZ_command.py _
Nov 22 03:25:00 compute-0 ansible-async_wrapper.py[80729]: Starting module and watcher
Nov 22 03:25:00 compute-0 ansible-async_wrapper.py[80729]: Start watching 80730 (30)
Nov 22 03:25:00 compute-0 ansible-async_wrapper.py[80730]: Start module (80730)
Nov 22 03:25:00 compute-0 ansible-async_wrapper.py[80658]: Return async_wrapper task started.
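
These four entries are an Ansible task launched with async: async_wrapper.py starts the real module (AnsiballZ_command.py, pid 80730) together with a watcher that enforces the 30-second async limit, then returns immediately so the controller can poll job j57353704226 later. A rough analogue of the wrapper/watcher pattern:

    import subprocess

    # Start the module detached, then act as the watcher: wait up to the
    # async limit (30 s in the log) and kill the module if it overruns.
    module = subprocess.Popen(
        ["/usr/bin/python3", "-c", "print('module ran')"],  # stand-in module
        start_new_session=True,
    )
    try:
        module.wait(timeout=30)
    except subprocess.TimeoutExpired:
        module.kill()
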
Nov 22 03:25:00 compute-0 sudo[80704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/etc/ceph/ceph.conf.new
Nov 22 03:25:00 compute-0 sudo[80704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80652]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 sudo[80704]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 sshd-session[78551]: Connection closed by invalid user  64.62.156.23 port 63253 [preauth]
Nov 22 03:25:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:25:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:25:00 compute-0 sudo[80734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:00 compute-0 sudo[80734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80734]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 sudo[80759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/etc/ceph/ceph.conf.new
Nov 22 03:25:00 compute-0 sudo[80759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80759]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 python3[80731]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:00 compute-0 sudo[80784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:00 compute-0 sudo[80784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80784]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 podman[80788]: 2025-11-22 03:25:00.466136113 +0000 UTC m=+0.052589784 container create 1d73462c8ccb0ca7bab51f7a1b02814b42a5b3b069e547f0afef1e57e393c14a (image=quay.io/ceph/ceph:v18, name=stoic_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:25:00 compute-0 sudo[80819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Nov 22 03:25:00 compute-0 sudo[80819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80819]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.conf
Nov 22 03:25:00 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.conf
Nov 22 03:25:00 compute-0 systemd[1]: Started libpod-conmon-1d73462c8ccb0ca7bab51f7a1b02814b42a5b3b069e547f0afef1e57e393c14a.scope.
Nov 22 03:25:00 compute-0 podman[80788]: 2025-11-22 03:25:00.444848079 +0000 UTC m=+0.031301790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3a60a777f65bd10cc5cb08882b2550877c65ee254b65372a0b202cdd08bbba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3a60a777f65bd10cc5cb08882b2550877c65ee254b65372a0b202cdd08bbba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:00 compute-0 podman[80788]: 2025-11-22 03:25:00.5672335 +0000 UTC m=+0.153687191 container init 1d73462c8ccb0ca7bab51f7a1b02814b42a5b3b069e547f0afef1e57e393c14a (image=quay.io/ceph/ceph:v18, name=stoic_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:00 compute-0 sudo[80850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:00 compute-0 sudo[80850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80850]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 podman[80788]: 2025-11-22 03:25:00.579841933 +0000 UTC m=+0.166295594 container start 1d73462c8ccb0ca7bab51f7a1b02814b42a5b3b069e547f0afef1e57e393c14a (image=quay.io/ceph/ceph:v18, name=stoic_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:25:00 compute-0 podman[80788]: 2025-11-22 03:25:00.584106076 +0000 UTC m=+0.170559797 container attach 1d73462c8ccb0ca7bab51f7a1b02814b42a5b3b069e547f0afef1e57e393c14a (image=quay.io/ceph/ceph:v18, name=stoic_keller, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:25:00 compute-0 sudo[80879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config
Nov 22 03:25:00 compute-0 sudo[80879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80879]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 sudo[80904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:00 compute-0 sudo[80904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80904]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 sudo[80929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config
Nov 22 03:25:00 compute-0 sudo[80929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80929]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 sudo[80954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:00 compute-0 sudo[80954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80954]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 sudo[80979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.conf.new
Nov 22 03:25:00 compute-0 sudo[80979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[80979]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 sudo[81014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:00 compute-0 sudo[81014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[81014]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:00 compute-0 sudo[81048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:25:00 compute-0 sudo[81048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:00 compute-0 sudo[81048]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 sudo[81073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:01 compute-0 sudo[81073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81073]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 sudo[81098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.conf.new
Nov 22 03:25:01 compute-0 sudo[81098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81098]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:25:01 compute-0 stoic_keller[80853]: 
Nov 22 03:25:01 compute-0 stoic_keller[80853]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 03:25:01 compute-0 systemd[1]: libpod-1d73462c8ccb0ca7bab51f7a1b02814b42a5b3b069e547f0afef1e57e393c14a.scope: Deactivated successfully.
Nov 22 03:25:01 compute-0 podman[80788]: 2025-11-22 03:25:01.144033131 +0000 UTC m=+0.730486802 container died 1d73462c8ccb0ca7bab51f7a1b02814b42a5b3b069e547f0afef1e57e393c14a (image=quay.io/ceph/ceph:v18, name=stoic_keller, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f3a60a777f65bd10cc5cb08882b2550877c65ee254b65372a0b202cdd08bbba-merged.mount: Deactivated successfully.
Nov 22 03:25:01 compute-0 podman[80788]: 2025-11-22 03:25:01.191752295 +0000 UTC m=+0.778205966 container remove 1d73462c8ccb0ca7bab51f7a1b02814b42a5b3b069e547f0afef1e57e393c14a (image=quay.io/ceph/ceph:v18, name=stoic_keller, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:01 compute-0 systemd[1]: libpod-conmon-1d73462c8ccb0ca7bab51f7a1b02814b42a5b3b069e547f0afef1e57e393c14a.scope: Deactivated successfully.
Nov 22 03:25:01 compute-0 sudo[81149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:01 compute-0 sudo[81149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81149]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 ansible-async_wrapper.py[80730]: Module complete (80730)
Nov 22 03:25:01 compute-0 sudo[81184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.conf.new
Nov 22 03:25:01 compute-0 sudo[81184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81184]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 ceph-mon[75011]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:01 compute-0 ceph-mon[75011]: Updating compute-0:/etc/ceph/ceph.conf
Nov 22 03:25:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:01 compute-0 sudo[81209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:01 compute-0 sudo[81209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81209]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 sudo[81257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.conf.new
Nov 22 03:25:01 compute-0 sudo[81257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81257]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 sudo[81282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:01 compute-0 sudo[81282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81282]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 sudo[81307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.conf.new /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.conf
Nov 22 03:25:01 compute-0 sudo[81307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81307]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 22 03:25:01 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 22 03:25:01 compute-0 sudo[81355]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yidmktxneukzspelgaqjytydxfcghyil ; /usr/bin/python3'
Nov 22 03:25:01 compute-0 sudo[81355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:01 compute-0 sudo[81356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:01 compute-0 sudo[81356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81356]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 sudo[81383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Nov 22 03:25:01 compute-0 sudo[81383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81383]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 python3[81370]: ansible-ansible.legacy.async_status Invoked with jid=j57353704226.80658 mode=status _async_dir=/root/.ansible_async
Nov 22 03:25:01 compute-0 sudo[81355]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 sudo[81408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:01 compute-0 sudo[81408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81408]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 sudo[81433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/etc/ceph
Nov 22 03:25:01 compute-0 sudo[81433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81433]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 sudo[81481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:01 compute-0 sudo[81481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81481]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 sudo[81527]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqbuhqzissofkowlqcumxbolcaummprj ; /usr/bin/python3'
Nov 22 03:25:01 compute-0 sudo[81527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:01 compute-0 sudo[81531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/etc/ceph/ceph.client.admin.keyring.new
Nov 22 03:25:01 compute-0 sudo[81531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81531]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 sudo[81557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:01 compute-0 sudo[81557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81557]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 python3[81532]: ansible-ansible.legacy.async_status Invoked with jid=j57353704226.80658 mode=cleanup _async_dir=/root/.ansible_async
Nov 22 03:25:01 compute-0 sudo[81527]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 sudo[81582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:25:01 compute-0 sudo[81582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81582]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:01 compute-0 sudo[81607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:01 compute-0 sudo[81607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:01 compute-0 sudo[81607]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[81632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/etc/ceph/ceph.client.admin.keyring.new
Nov 22 03:25:02 compute-0 sudo[81632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[81632]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[81680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:02 compute-0 sudo[81680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[81680]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[81705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/etc/ceph/ceph.client.admin.keyring.new
Nov 22 03:25:02 compute-0 sudo[81705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[81705]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[81730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:02 compute-0 sudo[81730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[81730]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[81777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpmpzibkpsbmqvaomdxdjxmhmizqmbtk ; /usr/bin/python3'
Nov 22 03:25:02 compute-0 ceph-mon[75011]: Updating compute-0:/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.conf
Nov 22 03:25:02 compute-0 ceph-mon[75011]: from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:25:02 compute-0 sudo[81777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:02 compute-0 sudo[81779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/etc/ceph/ceph.client.admin.keyring.new
Nov 22 03:25:02 compute-0 sudo[81779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[81779]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[81806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:02 compute-0 sudo[81806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[81806]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 python3[81784]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 22 03:25:02 compute-0 sudo[81831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Nov 22 03:25:02 compute-0 sudo[81831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[81831]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.client.admin.keyring
Nov 22 03:25:02 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.client.admin.keyring
Nov 22 03:25:02 compute-0 sudo[81777]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[81858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:02 compute-0 sudo[81858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[81858]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[81883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config
Nov 22 03:25:02 compute-0 sudo[81883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[81883]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[81908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:02 compute-0 sudo[81908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[81908]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[81933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config
Nov 22 03:25:02 compute-0 sudo[81933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[81933]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[81958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:02 compute-0 sudo[81958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[81958]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[81986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.client.admin.keyring.new
Nov 22 03:25:02 compute-0 sudo[82029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cedsitlyrgvxzpasxmpxxovhrugcpwwt ; /usr/bin/python3'
Nov 22 03:25:02 compute-0 sudo[81986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[82029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:02 compute-0 sudo[81986]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[82034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:02 compute-0 sudo[82034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[82034]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 sudo[82059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:25:02 compute-0 sudo[82059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:02 compute-0 sudo[82059]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:02 compute-0 python3[82033]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:02 compute-0 sudo[82084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:02 compute-0 sudo[82084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:03 compute-0 sudo[82084]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:03 compute-0 podman[82089]: 2025-11-22 03:25:03.007027827 +0000 UTC m=+0.037132954 container create 262f70373b4062ad7238e062dab6315a8178dba1a1a73843b5cc1abff8bfac6d (image=quay.io/ceph/ceph:v18, name=great_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 03:25:03 compute-0 systemd[1]: Started libpod-conmon-262f70373b4062ad7238e062dab6315a8178dba1a1a73843b5cc1abff8bfac6d.scope.
Nov 22 03:25:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:03 compute-0 sudo[82122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.client.admin.keyring.new
Nov 22 03:25:03 compute-0 sudo[82122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b7632a3794c42ef77a7c1f53a655006ba77ab9fbb24ba8ae5f1e6dc48167f5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b7632a3794c42ef77a7c1f53a655006ba77ab9fbb24ba8ae5f1e6dc48167f5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16b7632a3794c42ef77a7c1f53a655006ba77ab9fbb24ba8ae5f1e6dc48167f5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:03 compute-0 sudo[82122]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:03 compute-0 podman[82089]: 2025-11-22 03:25:03.079382403 +0000 UTC m=+0.109487580 container init 262f70373b4062ad7238e062dab6315a8178dba1a1a73843b5cc1abff8bfac6d (image=quay.io/ceph/ceph:v18, name=great_dubinsky, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:25:03 compute-0 podman[82089]: 2025-11-22 03:25:02.990048877 +0000 UTC m=+0.020154034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:03 compute-0 podman[82089]: 2025-11-22 03:25:03.088677019 +0000 UTC m=+0.118782166 container start 262f70373b4062ad7238e062dab6315a8178dba1a1a73843b5cc1abff8bfac6d (image=quay.io/ceph/ceph:v18, name=great_dubinsky, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:03 compute-0 podman[82089]: 2025-11-22 03:25:03.093357313 +0000 UTC m=+0.123462440 container attach 262f70373b4062ad7238e062dab6315a8178dba1a1a73843b5cc1abff8bfac6d (image=quay.io/ceph/ceph:v18, name=great_dubinsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:03 compute-0 sudo[82176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:03 compute-0 sudo[82176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:03 compute-0 sudo[82176]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:03 compute-0 sudo[82201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.client.admin.keyring.new
Nov 22 03:25:03 compute-0 sudo[82201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:03 compute-0 sudo[82201]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:03 compute-0 sudo[82226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:03 compute-0 sudo[82226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:03 compute-0 sudo[82226]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:03 compute-0 ceph-mon[75011]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:03 compute-0 ceph-mon[75011]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Nov 22 03:25:03 compute-0 ceph-mon[75011]: Updating compute-0:/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.client.admin.keyring
Nov 22 03:25:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:03 compute-0 sudo[82251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.client.admin.keyring.new
Nov 22 03:25:03 compute-0 sudo[82251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:03 compute-0 sudo[82251]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:03 compute-0 sudo[82276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:03 compute-0 sudo[82276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:03 compute-0 sudo[82276]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:03 compute-0 sudo[82320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-7adcc38b-6484-5de6-b879-33a0309153df/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.client.admin.keyring.new /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/config/ceph.client.admin.keyring
Nov 22 03:25:03 compute-0 sudo[82320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:03 compute-0 sudo[82320]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:03 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:03 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:25:03 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:03 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev b06044b2-f9d4-4a7d-b20b-fd4805bafa5f (Updating crash deployment (+1 -> 1))
Nov 22 03:25:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Nov 22 03:25:03 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 22 03:25:03 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 22 03:25:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:25:03 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:03 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Nov 22 03:25:03 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Nov 22 03:25:03 compute-0 sudo[82345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:03 compute-0 sudo[82345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:03 compute-0 sudo[82345]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:03 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:25:03 compute-0 great_dubinsky[82147]: 
Nov 22 03:25:03 compute-0 great_dubinsky[82147]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 03:25:03 compute-0 sudo[82370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:03 compute-0 sudo[82370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:03 compute-0 sudo[82370]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:03 compute-0 systemd[1]: libpod-262f70373b4062ad7238e062dab6315a8178dba1a1a73843b5cc1abff8bfac6d.scope: Deactivated successfully.
Nov 22 03:25:03 compute-0 podman[82089]: 2025-11-22 03:25:03.652099277 +0000 UTC m=+0.682204404 container died 262f70373b4062ad7238e062dab6315a8178dba1a1a73843b5cc1abff8bfac6d (image=quay.io/ceph/ceph:v18, name=great_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 03:25:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-16b7632a3794c42ef77a7c1f53a655006ba77ab9fbb24ba8ae5f1e6dc48167f5-merged.mount: Deactivated successfully.
Nov 22 03:25:03 compute-0 podman[82089]: 2025-11-22 03:25:03.702610654 +0000 UTC m=+0.732715781 container remove 262f70373b4062ad7238e062dab6315a8178dba1a1a73843b5cc1abff8bfac6d (image=quay.io/ceph/ceph:v18, name=great_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:25:03 compute-0 sudo[82398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:03 compute-0 sudo[82398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:03 compute-0 systemd[1]: libpod-conmon-262f70373b4062ad7238e062dab6315a8178dba1a1a73843b5cc1abff8bfac6d.scope: Deactivated successfully.
Nov 22 03:25:03 compute-0 sudo[82398]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:03 compute-0 sudo[82029]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:03 compute-0 sudo[82434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:25:03 compute-0 sudo[82434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:04 compute-0 sudo[82511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwyzddoldpzdnnrizsgyuihwzxldizge ; /usr/bin/python3'
Nov 22 03:25:04 compute-0 sudo[82511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:04 compute-0 podman[82525]: 2025-11-22 03:25:04.078705212 +0000 UTC m=+0.039607720 container create d7462800b8dc0fb9d7b29d64be0e06f6c395c294b03e0ac9af4315efba5edad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_zhukovsky, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:04 compute-0 systemd[1]: Started libpod-conmon-d7462800b8dc0fb9d7b29d64be0e06f6c395c294b03e0ac9af4315efba5edad3.scope.
Nov 22 03:25:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:04 compute-0 python3[82519]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:04 compute-0 podman[82525]: 2025-11-22 03:25:04.062176335 +0000 UTC m=+0.023078853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:04 compute-0 podman[82525]: 2025-11-22 03:25:04.158449324 +0000 UTC m=+0.119351882 container init d7462800b8dc0fb9d7b29d64be0e06f6c395c294b03e0ac9af4315efba5edad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_zhukovsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:25:04 compute-0 podman[82525]: 2025-11-22 03:25:04.164137344 +0000 UTC m=+0.125039852 container start d7462800b8dc0fb9d7b29d64be0e06f6c395c294b03e0ac9af4315efba5edad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:25:04 compute-0 sleepy_zhukovsky[82541]: 167 167
Nov 22 03:25:04 compute-0 systemd[1]: libpod-d7462800b8dc0fb9d7b29d64be0e06f6c395c294b03e0ac9af4315efba5edad3.scope: Deactivated successfully.
Nov 22 03:25:04 compute-0 podman[82525]: 2025-11-22 03:25:04.16964882 +0000 UTC m=+0.130551348 container attach d7462800b8dc0fb9d7b29d64be0e06f6c395c294b03e0ac9af4315efba5edad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:25:04 compute-0 podman[82525]: 2025-11-22 03:25:04.169967409 +0000 UTC m=+0.130869917 container died d7462800b8dc0fb9d7b29d64be0e06f6c395c294b03e0ac9af4315efba5edad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_zhukovsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 03:25:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c4b0ff87f4653c1e2011bc019b78187445f9012bbe2c690262d5e23e1c3fe98-merged.mount: Deactivated successfully.
Nov 22 03:25:04 compute-0 podman[82544]: 2025-11-22 03:25:04.203041175 +0000 UTC m=+0.040598526 container create d3ae14bd76e2b3684a99083ec7efef220f9398d709014bcaa817a77d20cc1d8d (image=quay.io/ceph/ceph:v18, name=lucid_torvalds, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:04 compute-0 podman[82525]: 2025-11-22 03:25:04.209188247 +0000 UTC m=+0.170090785 container remove d7462800b8dc0fb9d7b29d64be0e06f6c395c294b03e0ac9af4315efba5edad3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_zhukovsky, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:25:04 compute-0 systemd[1]: libpod-conmon-d7462800b8dc0fb9d7b29d64be0e06f6c395c294b03e0ac9af4315efba5edad3.scope: Deactivated successfully.
Nov 22 03:25:04 compute-0 systemd[1]: Started libpod-conmon-d3ae14bd76e2b3684a99083ec7efef220f9398d709014bcaa817a77d20cc1d8d.scope.
Nov 22 03:25:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:04 compute-0 systemd[1]: Reloading.
Nov 22 03:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7a84f9bd153de1d0b7c2e5f16ef97fbc11f59652366c539d3d779f675e0d26/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7a84f9bd153de1d0b7c2e5f16ef97fbc11f59652366c539d3d779f675e0d26/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7a84f9bd153de1d0b7c2e5f16ef97fbc11f59652366c539d3d779f675e0d26/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:04 compute-0 podman[82544]: 2025-11-22 03:25:04.181966047 +0000 UTC m=+0.019523428 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:04 compute-0 podman[82544]: 2025-11-22 03:25:04.286118304 +0000 UTC m=+0.123675695 container init d3ae14bd76e2b3684a99083ec7efef220f9398d709014bcaa817a77d20cc1d8d (image=quay.io/ceph/ceph:v18, name=lucid_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:25:04 compute-0 podman[82544]: 2025-11-22 03:25:04.294235559 +0000 UTC m=+0.131792930 container start d3ae14bd76e2b3684a99083ec7efef220f9398d709014bcaa817a77d20cc1d8d (image=quay.io/ceph/ceph:v18, name=lucid_torvalds, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:04 compute-0 podman[82544]: 2025-11-22 03:25:04.298920093 +0000 UTC m=+0.136477564 container attach d3ae14bd76e2b3684a99083ec7efef220f9398d709014bcaa817a77d20cc1d8d (image=quay.io/ceph/ceph:v18, name=lucid_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 03:25:04 compute-0 systemd-sysv-generator[82612]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:25:04 compute-0 systemd-rc-local-generator[82607]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:04 compute-0 ceph-mon[75011]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:04 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:04 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:04 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:04 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Nov 22 03:25:04 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Nov 22 03:25:04 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:04 compute-0 ceph-mon[75011]: Deploying daemon crash.compute-0 on compute-0
Nov 22 03:25:04 compute-0 ceph-mon[75011]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:25:04 compute-0 systemd[1]: Reloading.
Nov 22 03:25:04 compute-0 systemd-sysv-generator[82664]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:25:04 compute-0 systemd-rc-local-generator[82655]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Nov 22 03:25:04 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 7adcc38b-6484-5de6-b879-33a0309153df...
Nov 22 03:25:04 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4046332047' entity='client.admin' 
Nov 22 03:25:04 compute-0 systemd[1]: libpod-d3ae14bd76e2b3684a99083ec7efef220f9398d709014bcaa817a77d20cc1d8d.scope: Deactivated successfully.
Nov 22 03:25:04 compute-0 conmon[82574]: conmon d3ae14bd76e2b3684a99 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3ae14bd76e2b3684a99083ec7efef220f9398d709014bcaa817a77d20cc1d8d.scope/container/memory.events
Nov 22 03:25:04 compute-0 podman[82544]: 2025-11-22 03:25:04.864117508 +0000 UTC m=+0.701674879 container died d3ae14bd76e2b3684a99083ec7efef220f9398d709014bcaa817a77d20cc1d8d (image=quay.io/ceph/ceph:v18, name=lucid_torvalds, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:25:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd7a84f9bd153de1d0b7c2e5f16ef97fbc11f59652366c539d3d779f675e0d26-merged.mount: Deactivated successfully.
Nov 22 03:25:04 compute-0 podman[82544]: 2025-11-22 03:25:04.950211937 +0000 UTC m=+0.787769298 container remove d3ae14bd76e2b3684a99083ec7efef220f9398d709014bcaa817a77d20cc1d8d (image=quay.io/ceph/ceph:v18, name=lucid_torvalds, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:25:04 compute-0 systemd[1]: libpod-conmon-d3ae14bd76e2b3684a99083ec7efef220f9398d709014bcaa817a77d20cc1d8d.scope: Deactivated successfully.
Nov 22 03:25:04 compute-0 sudo[82511]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:05 compute-0 podman[82736]: 2025-11-22 03:25:05.098850522 +0000 UTC m=+0.049092170 container create 0cd55069c528ea64ba5afc9864d3e1d090e57748b59182a0ca30587e723a837f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-crash-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:05 compute-0 sudo[82772]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etcfweordvxymiygslzzswexqqsxiplw ; /usr/bin/python3'
Nov 22 03:25:05 compute-0 sudo[82772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:05 compute-0 podman[82736]: 2025-11-22 03:25:05.077538648 +0000 UTC m=+0.027780316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de617c9305dd4d3ed1cfba8d1c8f2e42be8687e68899b142ac879bb0d07c309/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de617c9305dd4d3ed1cfba8d1c8f2e42be8687e68899b142ac879bb0d07c309/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de617c9305dd4d3ed1cfba8d1c8f2e42be8687e68899b142ac879bb0d07c309/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6de617c9305dd4d3ed1cfba8d1c8f2e42be8687e68899b142ac879bb0d07c309/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:05 compute-0 podman[82736]: 2025-11-22 03:25:05.194816314 +0000 UTC m=+0.145057992 container init 0cd55069c528ea64ba5afc9864d3e1d090e57748b59182a0ca30587e723a837f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-crash-compute-0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:05 compute-0 podman[82736]: 2025-11-22 03:25:05.203169225 +0000 UTC m=+0.153410883 container start 0cd55069c528ea64ba5afc9864d3e1d090e57748b59182a0ca30587e723a837f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-crash-compute-0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:25:05 compute-0 bash[82736]: 0cd55069c528ea64ba5afc9864d3e1d090e57748b59182a0ca30587e723a837f
Nov 22 03:25:05 compute-0 systemd[1]: Started Ceph crash.compute-0 for 7adcc38b-6484-5de6-b879-33a0309153df.
Nov 22 03:25:05 compute-0 ansible-async_wrapper.py[80729]: Done in kid B.
Nov 22 03:25:05 compute-0 sudo[82434]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 22 03:25:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:05 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev b06044b2-f9d4-4a7d-b20b-fd4805bafa5f (Updating crash deployment (+1 -> 1))
Nov 22 03:25:05 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event b06044b2-f9d4-4a7d-b20b-fd4805bafa5f (Updating crash deployment (+1 -> 1)) in 2 seconds
Nov 22 03:25:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Nov 22 03:25:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:05 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 38f2ff52-34a7-4d90-bc51-8439dc23a055 does not exist
Nov 22 03:25:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 03:25:05 compute-0 python3[82774]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:05 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev 5f4d2e8e-859e-4674-9cc2-6a02a73d7d7d (Updating mgr deployment (+1 -> 2))
Nov 22 03:25:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.aximwf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 22 03:25:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aximwf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 03:25:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aximwf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 22 03:25:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 03:25:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 03:25:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:25:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:05 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.aximwf on compute-0
Nov 22 03:25:05 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.aximwf on compute-0
Nov 22 03:25:05 compute-0 podman[82782]: 2025-11-22 03:25:05.380060268 +0000 UTC m=+0.046288057 container create 201564f171456f4beb4a0146c54bd3771f65e124a4fd3cb00367251de36a9a54 (image=quay.io/ceph/ceph:v18, name=confident_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:25:05 compute-0 sudo[82787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:05 compute-0 sudo[82787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:05 compute-0 sudo[82787]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:05 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-crash-compute-0[82777]: INFO:ceph-crash:pinging cluster to exercise our key
Nov 22 03:25:05 compute-0 systemd[1]: Started libpod-conmon-201564f171456f4beb4a0146c54bd3771f65e124a4fd3cb00367251de36a9a54.scope.
Nov 22 03:25:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2060ce004833102da4d327609062e7e845b4b7d9d9efc970b5f7da368e2157b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2060ce004833102da4d327609062e7e845b4b7d9d9efc970b5f7da368e2157b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2060ce004833102da4d327609062e7e845b4b7d9d9efc970b5f7da368e2157b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:05 compute-0 podman[82782]: 2025-11-22 03:25:05.360338876 +0000 UTC m=+0.026566735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:25:05 compute-0 podman[82782]: 2025-11-22 03:25:05.475136855 +0000 UTC m=+0.141364644 container init 201564f171456f4beb4a0146c54bd3771f65e124a4fd3cb00367251de36a9a54 (image=quay.io/ceph/ceph:v18, name=confident_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:05 compute-0 sudo[82825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:05 compute-0 sudo[82825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:05 compute-0 sudo[82825]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:05 compute-0 podman[82782]: 2025-11-22 03:25:05.486705171 +0000 UTC m=+0.152932940 container start 201564f171456f4beb4a0146c54bd3771f65e124a4fd3cb00367251de36a9a54 (image=quay.io/ceph/ceph:v18, name=confident_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:25:05 compute-0 podman[82782]: 2025-11-22 03:25:05.491094638 +0000 UTC m=+0.157322397 container attach 201564f171456f4beb4a0146c54bd3771f65e124a4fd3cb00367251de36a9a54 (image=quay.io/ceph/ceph:v18, name=confident_golick, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:25:05 compute-0 sudo[82852]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:05 compute-0 sudo[82852]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:05 compute-0 sudo[82852]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:05 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-crash-compute-0[82777]: 2025-11-22T03:25:05.606+0000 7fc07f816640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 22 03:25:05 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-crash-compute-0[82777]: 2025-11-22T03:25:05.606+0000 7fc07f816640 -1 AuthRegistry(0x7fc078067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 22 03:25:05 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-crash-compute-0[82777]: 2025-11-22T03:25:05.608+0000 7fc07f816640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Nov 22 03:25:05 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-crash-compute-0[82777]: 2025-11-22T03:25:05.608+0000 7fc07f816640 -1 AuthRegistry(0x7fc07f815000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Nov 22 03:25:05 compute-0 sudo[82877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:25:05 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-crash-compute-0[82777]: 2025-11-22T03:25:05.609+0000 7fc07d58b640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Nov 22 03:25:05 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-crash-compute-0[82777]: 2025-11-22T03:25:05.609+0000 7fc07f816640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Nov 22 03:25:05 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-crash-compute-0[82777]: [errno 13] RADOS permission denied (error connecting to the cluster)
Nov 22 03:25:05 compute-0 sudo[82877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:05 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-crash-compute-0[82777]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Nov 22 03:25:05 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4046332047' entity='client.admin' 
Nov 22 03:25:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aximwf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 03:25:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.aximwf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Nov 22 03:25:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 03:25:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:05 compute-0 podman[82972]: 2025-11-22 03:25:05.958605286 +0000 UTC m=+0.053347024 container create e3a79b8c2d7481c68a762a60be4b107fb0e901bedf9e5b42cecd5d18b1cf5896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:25:05 compute-0 systemd[1]: Started libpod-conmon-e3a79b8c2d7481c68a762a60be4b107fb0e901bedf9e5b42cecd5d18b1cf5896.scope.
Nov 22 03:25:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:06 compute-0 podman[82972]: 2025-11-22 03:25:05.927725538 +0000 UTC m=+0.022467326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:06 compute-0 podman[82972]: 2025-11-22 03:25:06.039783945 +0000 UTC m=+0.134525753 container init e3a79b8c2d7481c68a762a60be4b107fb0e901bedf9e5b42cecd5d18b1cf5896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:25:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Nov 22 03:25:06 compute-0 podman[82972]: 2025-11-22 03:25:06.046617656 +0000 UTC m=+0.141359394 container start e3a79b8c2d7481c68a762a60be4b107fb0e901bedf9e5b42cecd5d18b1cf5896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:06 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/523498724' entity='client.admin' 
Nov 22 03:25:06 compute-0 determined_hugle[82989]: 167 167
Nov 22 03:25:06 compute-0 systemd[1]: libpod-e3a79b8c2d7481c68a762a60be4b107fb0e901bedf9e5b42cecd5d18b1cf5896.scope: Deactivated successfully.
Nov 22 03:25:06 compute-0 podman[82972]: 2025-11-22 03:25:06.053530109 +0000 UTC m=+0.148271907 container attach e3a79b8c2d7481c68a762a60be4b107fb0e901bedf9e5b42cecd5d18b1cf5896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:25:06 compute-0 podman[82972]: 2025-11-22 03:25:06.053927569 +0000 UTC m=+0.148669317 container died e3a79b8c2d7481c68a762a60be4b107fb0e901bedf9e5b42cecd5d18b1cf5896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:25:06 compute-0 systemd[1]: libpod-201564f171456f4beb4a0146c54bd3771f65e124a4fd3cb00367251de36a9a54.scope: Deactivated successfully.
Nov 22 03:25:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b18270676ff78b805c88aa17b40408f08217b3d0fa5f9e76a2f5deaa88ae697c-merged.mount: Deactivated successfully.
Nov 22 03:25:06 compute-0 podman[82972]: 2025-11-22 03:25:06.103697338 +0000 UTC m=+0.198439066 container remove e3a79b8c2d7481c68a762a60be4b107fb0e901bedf9e5b42cecd5d18b1cf5896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hugle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:25:06 compute-0 podman[82782]: 2025-11-22 03:25:06.114037741 +0000 UTC m=+0.780265540 container died 201564f171456f4beb4a0146c54bd3771f65e124a4fd3cb00367251de36a9a54 (image=quay.io/ceph/ceph:v18, name=confident_golick, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 22 03:25:06 compute-0 systemd[1]: libpod-conmon-e3a79b8c2d7481c68a762a60be4b107fb0e901bedf9e5b42cecd5d18b1cf5896.scope: Deactivated successfully.
Nov 22 03:25:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2060ce004833102da4d327609062e7e845b4b7d9d9efc970b5f7da368e2157b-merged.mount: Deactivated successfully.
Nov 22 03:25:06 compute-0 systemd[1]: Reloading.
Nov 22 03:25:06 compute-0 ceph-mgr[75294]: [progress INFO root] Writing back 1 completed events
Nov 22 03:25:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:25:06 compute-0 podman[82782]: 2025-11-22 03:25:06.16758728 +0000 UTC m=+0.833815039 container remove 201564f171456f4beb4a0146c54bd3771f65e124a4fd3cb00367251de36a9a54 (image=quay.io/ceph/ceph:v18, name=confident_golick, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:06 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:25:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:25:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:25:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:25:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:25:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:25:06 compute-0 sudo[82772]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:06 compute-0 systemd-rc-local-generator[83046]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:06 compute-0 systemd-sysv-generator[83050]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:25:06 compute-0 systemd[1]: libpod-conmon-201564f171456f4beb4a0146c54bd3771f65e124a4fd3cb00367251de36a9a54.scope: Deactivated successfully.
Nov 22 03:25:06 compute-0 sudo[83081]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icwpadbawdvfvgxxoxnbhgomgfkovahk ; /usr/bin/python3'
Nov 22 03:25:06 compute-0 sudo[83081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:06 compute-0 systemd[1]: Reloading.
Nov 22 03:25:06 compute-0 systemd-rc-local-generator[83117]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:06 compute-0 systemd-sysv-generator[83120]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:25:06 compute-0 python3[83085]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:06 compute-0 podman[83124]: 2025-11-22 03:25:06.632407856 +0000 UTC m=+0.045466035 container create 4b5f84bb13a44c98d042ce8ba381e407a9b5c22ae26ca864d7d88cc70b392eb4 (image=quay.io/ceph/ceph:v18, name=beautiful_montalcini, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:06 compute-0 podman[83124]: 2025-11-22 03:25:06.612699245 +0000 UTC m=+0.025757464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:06 compute-0 systemd[1]: Started libpod-conmon-4b5f84bb13a44c98d042ce8ba381e407a9b5c22ae26ca864d7d88cc70b392eb4.scope.
Nov 22 03:25:06 compute-0 systemd[1]: Starting Ceph mgr.compute-0.aximwf for 7adcc38b-6484-5de6-b879-33a0309153df...
Nov 22 03:25:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe36ca9f6d5f95cd63608d4c35c3770a2257b79a93038d4cea936fea55063ebd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe36ca9f6d5f95cd63608d4c35c3770a2257b79a93038d4cea936fea55063ebd/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe36ca9f6d5f95cd63608d4c35c3770a2257b79a93038d4cea936fea55063ebd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:06 compute-0 podman[83124]: 2025-11-22 03:25:06.772596168 +0000 UTC m=+0.185654357 container init 4b5f84bb13a44c98d042ce8ba381e407a9b5c22ae26ca864d7d88cc70b392eb4 (image=quay.io/ceph/ceph:v18, name=beautiful_montalcini, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:06 compute-0 podman[83124]: 2025-11-22 03:25:06.780619391 +0000 UTC m=+0.193677570 container start 4b5f84bb13a44c98d042ce8ba381e407a9b5c22ae26ca864d7d88cc70b392eb4 (image=quay.io/ceph/ceph:v18, name=beautiful_montalcini, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:06 compute-0 podman[83124]: 2025-11-22 03:25:06.786162977 +0000 UTC m=+0.199221186 container attach 4b5f84bb13a44c98d042ce8ba381e407a9b5c22ae26ca864d7d88cc70b392eb4 (image=quay.io/ceph/ceph:v18, name=beautiful_montalcini, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:06 compute-0 podman[83194]: 2025-11-22 03:25:06.943495193 +0000 UTC m=+0.040422591 container create e96b150ed68980eb815f376f6adcc39457a31382f2e53be34d71ef25a1119f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c09448b08f1ea1e0420847aba97fda485bc01f9e9d4d5aa9afaf7f9bc014509/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c09448b08f1ea1e0420847aba97fda485bc01f9e9d4d5aa9afaf7f9bc014509/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c09448b08f1ea1e0420847aba97fda485bc01f9e9d4d5aa9afaf7f9bc014509/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c09448b08f1ea1e0420847aba97fda485bc01f9e9d4d5aa9afaf7f9bc014509/merged/var/lib/ceph/mgr/ceph-compute-0.aximwf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:07 compute-0 podman[83194]: 2025-11-22 03:25:06.92524881 +0000 UTC m=+0.022176228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:07 compute-0 podman[83194]: 2025-11-22 03:25:07.022461613 +0000 UTC m=+0.119389031 container init e96b150ed68980eb815f376f6adcc39457a31382f2e53be34d71ef25a1119f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:07 compute-0 podman[83194]: 2025-11-22 03:25:07.027178039 +0000 UTC m=+0.124105437 container start e96b150ed68980eb815f376f6adcc39457a31382f2e53be34d71ef25a1119f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:25:07 compute-0 bash[83194]: e96b150ed68980eb815f376f6adcc39457a31382f2e53be34d71ef25a1119f6a
Nov 22 03:25:07 compute-0 systemd[1]: Started Ceph mgr.compute-0.aximwf for 7adcc38b-6484-5de6-b879-33a0309153df.
Nov 22 03:25:07 compute-0 ceph-mon[75011]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:07 compute-0 ceph-mon[75011]: Deploying daemon mgr.compute-0.aximwf on compute-0
Nov 22 03:25:07 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/523498724' entity='client.admin' 
Nov 22 03:25:07 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:07 compute-0 ceph-mgr[83214]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:25:07 compute-0 ceph-mgr[83214]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Nov 22 03:25:07 compute-0 ceph-mgr[83214]: pidfile_write: ignore empty --pid-file
Nov 22 03:25:07 compute-0 sudo[82877]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 03:25:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:07 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev 5f4d2e8e-859e-4674-9cc2-6a02a73d7d7d (Updating mgr deployment (+1 -> 2))
Nov 22 03:25:07 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event 5f4d2e8e-859e-4674-9cc2-6a02a73d7d7d (Updating mgr deployment (+1 -> 2)) in 2 seconds
Nov 22 03:25:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 03:25:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:07 compute-0 sudo[83258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:07 compute-0 sudo[83258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:07 compute-0 sudo[83258]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:07 compute-0 ceph-mgr[83214]: mgr[py] Loading python module 'alerts'
Nov 22 03:25:07 compute-0 sudo[83283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:25:07 compute-0 sudo[83283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:07 compute-0 sudo[83283]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:07 compute-0 sudo[83308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:07 compute-0 sudo[83308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:07 compute-0 sudo[83308]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Nov 22 03:25:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/589224846' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 22 03:25:07 compute-0 sudo[83333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:07 compute-0 sudo[83333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:07 compute-0 sudo[83333]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:07 compute-0 sudo[83359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:07 compute-0 sudo[83359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:07 compute-0 sudo[83359]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:07 compute-0 sudo[83384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 03:25:07 compute-0 sudo[83384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:07 compute-0 ceph-mgr[83214]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 03:25:07 compute-0 ceph-mgr[83214]: mgr[py] Loading python module 'balancer'
Nov 22 03:25:07 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf[83210]: 2025-11-22T03:25:07.534+0000 7fbb95749140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Nov 22 03:25:07 compute-0 ceph-mgr[83214]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 03:25:07 compute-0 ceph-mgr[83214]: mgr[py] Loading python module 'cephadm'
Nov 22 03:25:07 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf[83210]: 2025-11-22T03:25:07.776+0000 7fbb95749140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Nov 22 03:25:08 compute-0 podman[83478]: 2025-11-22 03:25:08.094948569 +0000 UTC m=+0.084404155 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:25:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/589224846' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/589224846' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Nov 22 03:25:08 compute-0 beautiful_montalcini[83142]: set require_min_compat_client to mimic
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Nov 22 03:25:08 compute-0 systemd[1]: libpod-4b5f84bb13a44c98d042ce8ba381e407a9b5c22ae26ca864d7d88cc70b392eb4.scope: Deactivated successfully.
Nov 22 03:25:08 compute-0 podman[83497]: 2025-11-22 03:25:08.200936826 +0000 UTC m=+0.033898959 container died 4b5f84bb13a44c98d042ce8ba381e407a9b5c22ae26ca864d7d88cc70b392eb4 (image=quay.io/ceph/ceph:v18, name=beautiful_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe36ca9f6d5f95cd63608d4c35c3770a2257b79a93038d4cea936fea55063ebd-merged.mount: Deactivated successfully.
Nov 22 03:25:08 compute-0 podman[83497]: 2025-11-22 03:25:08.24795439 +0000 UTC m=+0.080916503 container remove 4b5f84bb13a44c98d042ce8ba381e407a9b5c22ae26ca864d7d88cc70b392eb4 (image=quay.io/ceph/ceph:v18, name=beautiful_montalcini, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:25:08 compute-0 systemd[1]: libpod-conmon-4b5f84bb13a44c98d042ce8ba381e407a9b5c22ae26ca864d7d88cc70b392eb4.scope: Deactivated successfully.
Nov 22 03:25:08 compute-0 podman[83478]: 2025-11-22 03:25:08.275680754 +0000 UTC m=+0.265136340 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:25:08 compute-0 sudo[83081]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:08 compute-0 sudo[83384]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:08 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev c132df21-fc43-4b45-90ef-2d21b81fe721 does not exist
Nov 22 03:25:08 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 8fc83db2-37cd-44ac-8e2c-2df704ac45fd does not exist
Nov 22 03:25:08 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 7daf6dd7-ff48-49f3-9871-596398f5fa28 does not exist
Nov 22 03:25:08 compute-0 sudo[83577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:08 compute-0 sudo[83577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:08 compute-0 sudo[83577]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:08 compute-0 sudo[83624]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niswpdzzsagoaukghxqiqncvpzpuogns ; /usr/bin/python3'
Nov 22 03:25:08 compute-0 sudo[83624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:08 compute-0 sudo[83627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:25:08 compute-0 sudo[83627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:08 compute-0 sudo[83627]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:08 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Nov 22 03:25:08 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 22 03:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:08 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Nov 22 03:25:08 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Nov 22 03:25:08 compute-0 python3[83628]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:08 compute-0 sudo[83653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:08 compute-0 sudo[83653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:08 compute-0 sudo[83653]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:08 compute-0 podman[83673]: 2025-11-22 03:25:08.927020449 +0000 UTC m=+0.053700462 container create 1b8be1a6203fd3e55f3f005ae21fb692b005cef69ecd36ab4aeadf4e44cfa717 (image=quay.io/ceph/ceph:v18, name=elated_mccarthy, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:08 compute-0 systemd[1]: Started libpod-conmon-1b8be1a6203fd3e55f3f005ae21fb692b005cef69ecd36ab4aeadf4e44cfa717.scope.
Nov 22 03:25:08 compute-0 sudo[83688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:08 compute-0 sudo[83688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:08 compute-0 sudo[83688]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd40505ecfb721070e8fda187e6ac9dc93dba685e5e39f21ab4c08a86e2424e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd40505ecfb721070e8fda187e6ac9dc93dba685e5e39f21ab4c08a86e2424e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edd40505ecfb721070e8fda187e6ac9dc93dba685e5e39f21ab4c08a86e2424e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:09 compute-0 podman[83673]: 2025-11-22 03:25:08.911619081 +0000 UTC m=+0.038299124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:09 compute-0 podman[83673]: 2025-11-22 03:25:09.036448887 +0000 UTC m=+0.163128950 container init 1b8be1a6203fd3e55f3f005ae21fb692b005cef69ecd36ab4aeadf4e44cfa717 (image=quay.io/ceph/ceph:v18, name=elated_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:25:09 compute-0 podman[83673]: 2025-11-22 03:25:09.043478213 +0000 UTC m=+0.170158236 container start 1b8be1a6203fd3e55f3f005ae21fb692b005cef69ecd36ab4aeadf4e44cfa717 (image=quay.io/ceph/ceph:v18, name=elated_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:09 compute-0 podman[83673]: 2025-11-22 03:25:09.062479886 +0000 UTC m=+0.189159939 container attach 1b8be1a6203fd3e55f3f005ae21fb692b005cef69ecd36ab4aeadf4e44cfa717 (image=quay.io/ceph/ceph:v18, name=elated_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:25:09 compute-0 sudo[83721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:09 compute-0 sudo[83721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:09 compute-0 sudo[83721]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:09 compute-0 ceph-mon[75011]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:09 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/589224846' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Nov 22 03:25:09 compute-0 ceph-mon[75011]: osdmap e3: 0 total, 0 up, 0 in
Nov 22 03:25:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:25:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Nov 22 03:25:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Nov 22 03:25:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:09 compute-0 sudo[83747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:25:09 compute-0 sudo[83747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:09 compute-0 podman[83817]: 2025-11-22 03:25:09.472524333 +0000 UTC m=+0.067392656 container create bcb47eaa9044c5656a7bc4b9b50c44eac8faffe59ea0a614eb7cdfdb79e85617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dirac, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:25:09 compute-0 systemd[1]: Started libpod-conmon-bcb47eaa9044c5656a7bc4b9b50c44eac8faffe59ea0a614eb7cdfdb79e85617.scope.
Nov 22 03:25:09 compute-0 podman[83817]: 2025-11-22 03:25:09.444727967 +0000 UTC m=+0.039596340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:09 compute-0 podman[83817]: 2025-11-22 03:25:09.554692179 +0000 UTC m=+0.149560542 container init bcb47eaa9044c5656a7bc4b9b50c44eac8faffe59ea0a614eb7cdfdb79e85617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:25:09 compute-0 podman[83817]: 2025-11-22 03:25:09.563371318 +0000 UTC m=+0.158239601 container start bcb47eaa9044c5656a7bc4b9b50c44eac8faffe59ea0a614eb7cdfdb79e85617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dirac, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:09 compute-0 podman[83817]: 2025-11-22 03:25:09.566987534 +0000 UTC m=+0.161855917 container attach bcb47eaa9044c5656a7bc4b9b50c44eac8faffe59ea0a614eb7cdfdb79e85617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dirac, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:25:09 compute-0 amazing_dirac[83834]: 167 167
Nov 22 03:25:09 compute-0 systemd[1]: libpod-bcb47eaa9044c5656a7bc4b9b50c44eac8faffe59ea0a614eb7cdfdb79e85617.scope: Deactivated successfully.
Nov 22 03:25:09 compute-0 podman[83817]: 2025-11-22 03:25:09.579514896 +0000 UTC m=+0.174383209 container died bcb47eaa9044c5656a7bc4b9b50c44eac8faffe59ea0a614eb7cdfdb79e85617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:25:09 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f63b9bbcfe35cada006616818c12b6125a5da6d17cf1ad3720a6a903261770f6-merged.mount: Deactivated successfully.
Nov 22 03:25:09 compute-0 podman[83817]: 2025-11-22 03:25:09.62914396 +0000 UTC m=+0.224012243 container remove bcb47eaa9044c5656a7bc4b9b50c44eac8faffe59ea0a614eb7cdfdb79e85617 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:09 compute-0 sudo[83841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:09 compute-0 sudo[83841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:09 compute-0 systemd[1]: libpod-conmon-bcb47eaa9044c5656a7bc4b9b50c44eac8faffe59ea0a614eb7cdfdb79e85617.scope: Deactivated successfully.
Nov 22 03:25:09 compute-0 sudo[83841]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:09 compute-0 sudo[83747]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:09 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:09 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:09 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.wbwfxq (unknown last config time)...
Nov 22 03:25:09 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.wbwfxq (unknown last config time)...
Nov 22 03:25:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.wbwfxq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Nov 22 03:25:09 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wbwfxq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 03:25:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 03:25:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 03:25:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:25:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:09 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.wbwfxq on compute-0
Nov 22 03:25:09 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.wbwfxq on compute-0
Nov 22 03:25:09 compute-0 sudo[83878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:09 compute-0 sudo[83878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:09 compute-0 sudo[83878]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:09 compute-0 ceph-mgr[83214]: mgr[py] Loading python module 'crash'
Nov 22 03:25:09 compute-0 sudo[83901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:09 compute-0 sudo[83901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:09 compute-0 sudo[83911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:09 compute-0 sudo[83901]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:09 compute-0 sudo[83911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:09 compute-0 sudo[83911]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:09 compute-0 sudo[83953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:09 compute-0 sudo[83953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:09 compute-0 sudo[83953]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:09 compute-0 sudo[83954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host --expect-hostname compute-0
Nov 22 03:25:09 compute-0 sudo[83954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:09 compute-0 sudo[84003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:09 compute-0 sudo[84003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:09 compute-0 sudo[84003]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:09 compute-0 sudo[84028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:25:09 compute-0 sudo[84028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:10 compute-0 ceph-mgr[83214]: mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 03:25:10 compute-0 ceph-mgr[83214]: mgr[py] Loading python module 'dashboard'
Nov 22 03:25:10 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf[83210]: 2025-11-22T03:25:10.045+0000 7fbb95749140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Nov 22 03:25:10 compute-0 sudo[83954]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 03:25:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 03:25:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 03:25:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Nov 22 03:25:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 ceph-mgr[75294]: [cephadm INFO root] Added host compute-0
Nov 22 03:25:10 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Added host compute-0
Nov 22 03:25:10 compute-0 ceph-mgr[75294]: [cephadm INFO root] Saving service mon spec with placement compute-0
Nov 22 03:25:10 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Nov 22 03:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 03:25:10 compute-0 ceph-mon[75011]: Reconfiguring mon.compute-0 (unknown last config time)...
Nov 22 03:25:10 compute-0 ceph-mon[75011]: Reconfiguring daemon mon.compute-0 on compute-0
Nov 22 03:25:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.wbwfxq", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Nov 22 03:25:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 03:25:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 ceph-mgr[75294]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Nov 22 03:25:10 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Nov 22 03:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 03:25:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 ceph-mgr[75294]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Nov 22 03:25:10 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Nov 22 03:25:10 compute-0 ceph-mgr[75294]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Nov 22 03:25:10 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Nov 22 03:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Nov 22 03:25:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 elated_mccarthy[83717]: Added host 'compute-0' with addr '192.168.122.100'
Nov 22 03:25:10 compute-0 elated_mccarthy[83717]: Scheduled mon update...
Nov 22 03:25:10 compute-0 elated_mccarthy[83717]: Scheduled mgr update...
Nov 22 03:25:10 compute-0 elated_mccarthy[83717]: Scheduled osd.default_drive_group update...
Nov 22 03:25:10 compute-0 systemd[1]: libpod-1b8be1a6203fd3e55f3f005ae21fb692b005cef69ecd36ab4aeadf4e44cfa717.scope: Deactivated successfully.
Nov 22 03:25:10 compute-0 podman[83673]: 2025-11-22 03:25:10.196498191 +0000 UTC m=+1.323178224 container died 1b8be1a6203fd3e55f3f005ae21fb692b005cef69ecd36ab4aeadf4e44cfa717 (image=quay.io/ceph/ceph:v18, name=elated_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-edd40505ecfb721070e8fda187e6ac9dc93dba685e5e39f21ab4c08a86e2424e-merged.mount: Deactivated successfully.
Nov 22 03:25:10 compute-0 podman[83673]: 2025-11-22 03:25:10.240563878 +0000 UTC m=+1.367243891 container remove 1b8be1a6203fd3e55f3f005ae21fb692b005cef69ecd36ab4aeadf4e44cfa717 (image=quay.io/ceph/ceph:v18, name=elated_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:10 compute-0 systemd[1]: libpod-conmon-1b8be1a6203fd3e55f3f005ae21fb692b005cef69ecd36ab4aeadf4e44cfa717.scope: Deactivated successfully.
Nov 22 03:25:10 compute-0 sudo[83624]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:10 compute-0 podman[84098]: 2025-11-22 03:25:10.269976087 +0000 UTC m=+0.044968452 container create c449d2657b60131cac57c8f1b4fd4d8e8a8ac57dfa4fb3274f108af4fb02534a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:10 compute-0 systemd[1]: Started libpod-conmon-c449d2657b60131cac57c8f1b4fd4d8e8a8ac57dfa4fb3274f108af4fb02534a.scope.
Nov 22 03:25:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:10 compute-0 podman[84098]: 2025-11-22 03:25:10.342161568 +0000 UTC m=+0.117153963 container init c449d2657b60131cac57c8f1b4fd4d8e8a8ac57dfa4fb3274f108af4fb02534a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curran, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:10 compute-0 podman[84098]: 2025-11-22 03:25:10.349230815 +0000 UTC m=+0.124223180 container start c449d2657b60131cac57c8f1b4fd4d8e8a8ac57dfa4fb3274f108af4fb02534a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:10 compute-0 podman[84098]: 2025-11-22 03:25:10.25501078 +0000 UTC m=+0.030003175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:10 compute-0 podman[84098]: 2025-11-22 03:25:10.353275052 +0000 UTC m=+0.128267477 container attach c449d2657b60131cac57c8f1b4fd4d8e8a8ac57dfa4fb3274f108af4fb02534a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curran, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 03:25:10 compute-0 objective_curran[84118]: 167 167
Nov 22 03:25:10 compute-0 systemd[1]: libpod-c449d2657b60131cac57c8f1b4fd4d8e8a8ac57dfa4fb3274f108af4fb02534a.scope: Deactivated successfully.
Nov 22 03:25:10 compute-0 podman[84098]: 2025-11-22 03:25:10.355628315 +0000 UTC m=+0.130620680 container died c449d2657b60131cac57c8f1b4fd4d8e8a8ac57dfa4fb3274f108af4fb02534a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curran, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfa8b587134275e8f7f0caa20253e7bba34c469f440ff507af3f796c80537a0a-merged.mount: Deactivated successfully.
Nov 22 03:25:10 compute-0 podman[84098]: 2025-11-22 03:25:10.391597017 +0000 UTC m=+0.166589382 container remove c449d2657b60131cac57c8f1b4fd4d8e8a8ac57dfa4fb3274f108af4fb02534a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_curran, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:25:10 compute-0 systemd[1]: libpod-conmon-c449d2657b60131cac57c8f1b4fd4d8e8a8ac57dfa4fb3274f108af4fb02534a.scope: Deactivated successfully.
Nov 22 03:25:10 compute-0 sudo[84028]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:25:10 compute-0 sudo[84162]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbfrgwzxeolojcjafdertgijghgziath ; /usr/bin/python3'
Nov 22 03:25:10 compute-0 sudo[84162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:10 compute-0 sudo[84160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:10 compute-0 sudo[84160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:10 compute-0 sudo[84160]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:10 compute-0 sudo[84188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:10 compute-0 sudo[84188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:10 compute-0 sudo[84188]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:10 compute-0 sudo[84213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:10 compute-0 sudo[84213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:10 compute-0 python3[84185]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:10 compute-0 sudo[84213]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:10 compute-0 sudo[84241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 03:25:10 compute-0 sudo[84241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:10 compute-0 podman[84239]: 2025-11-22 03:25:10.744757557 +0000 UTC m=+0.058564571 container create 1c4b9dc42b7119aeac982a6692a2e86477a60b71231f3d0243cd28a01d134276 (image=quay.io/ceph/ceph:v18, name=wizardly_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:25:10 compute-0 systemd[1]: Started libpod-conmon-1c4b9dc42b7119aeac982a6692a2e86477a60b71231f3d0243cd28a01d134276.scope.
Nov 22 03:25:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:10 compute-0 podman[84239]: 2025-11-22 03:25:10.729156655 +0000 UTC m=+0.042963689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e234af07bf6691b68c85f6d4bc0d350576f3b0fba2bc2924d80497e38c3cabed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e234af07bf6691b68c85f6d4bc0d350576f3b0fba2bc2924d80497e38c3cabed/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e234af07bf6691b68c85f6d4bc0d350576f3b0fba2bc2924d80497e38c3cabed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:10 compute-0 podman[84239]: 2025-11-22 03:25:10.842776953 +0000 UTC m=+0.156584067 container init 1c4b9dc42b7119aeac982a6692a2e86477a60b71231f3d0243cd28a01d134276 (image=quay.io/ceph/ceph:v18, name=wizardly_mendeleev, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:10 compute-0 podman[84239]: 2025-11-22 03:25:10.852949832 +0000 UTC m=+0.166756846 container start 1c4b9dc42b7119aeac982a6692a2e86477a60b71231f3d0243cd28a01d134276 (image=quay.io/ceph/ceph:v18, name=wizardly_mendeleev, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:10 compute-0 podman[84239]: 2025-11-22 03:25:10.860017289 +0000 UTC m=+0.173824363 container attach 1c4b9dc42b7119aeac982a6692a2e86477a60b71231f3d0243cd28a01d134276 (image=quay.io/ceph/ceph:v18, name=wizardly_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:11 compute-0 ceph-mon[75011]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:11 compute-0 ceph-mon[75011]: from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:25:11 compute-0 ceph-mon[75011]: Reconfiguring mgr.compute-0.wbwfxq (unknown last config time)...
Nov 22 03:25:11 compute-0 ceph-mon[75011]: Reconfiguring daemon mgr.compute-0.wbwfxq on compute-0
Nov 22 03:25:11 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:11 compute-0 ceph-mon[75011]: Added host compute-0
Nov 22 03:25:11 compute-0 ceph-mon[75011]: Saving service mon spec with placement compute-0
Nov 22 03:25:11 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:11 compute-0 ceph-mon[75011]: Saving service mgr spec with placement compute-0
Nov 22 03:25:11 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:11 compute-0 ceph-mon[75011]: Marking host: compute-0 for OSDSpec preview refresh.
Nov 22 03:25:11 compute-0 ceph-mon[75011]: Saving service osd.default_drive_group spec with placement compute-0
Nov 22 03:25:11 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:11 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:11 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:11 compute-0 ceph-mgr[75294]: [progress INFO root] Writing back 2 completed events
Nov 22 03:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:11 compute-0 podman[84354]: 2025-11-22 03:25:11.211230858 +0000 UTC m=+0.053365324 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:25:11 compute-0 podman[84354]: 2025-11-22 03:25:11.302884524 +0000 UTC m=+0.145018980 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:11 compute-0 ceph-mgr[83214]: mgr[py] Loading python module 'devicehealth'
Nov 22 03:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 03:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/363677906' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 03:25:11 compute-0 wizardly_mendeleev[84280]: 
Nov 22 03:25:11 compute-0 wizardly_mendeleev[84280]: {"fsid":"7adcc38b-6484-5de6-b879-33a0309153df","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":81,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-11-22T03:23:47.511353+0000","services":{}},"progress_events":{}}
Nov 22 03:25:11 compute-0 systemd[1]: libpod-1c4b9dc42b7119aeac982a6692a2e86477a60b71231f3d0243cd28a01d134276.scope: Deactivated successfully.
Nov 22 03:25:11 compute-0 podman[84239]: 2025-11-22 03:25:11.502856489 +0000 UTC m=+0.816663543 container died 1c4b9dc42b7119aeac982a6692a2e86477a60b71231f3d0243cd28a01d134276 (image=quay.io/ceph/ceph:v18, name=wizardly_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:25:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e234af07bf6691b68c85f6d4bc0d350576f3b0fba2bc2924d80497e38c3cabed-merged.mount: Deactivated successfully.
Nov 22 03:25:11 compute-0 podman[84239]: 2025-11-22 03:25:11.577861255 +0000 UTC m=+0.891668269 container remove 1c4b9dc42b7119aeac982a6692a2e86477a60b71231f3d0243cd28a01d134276 (image=quay.io/ceph/ceph:v18, name=wizardly_mendeleev, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:25:11 compute-0 systemd[1]: libpod-conmon-1c4b9dc42b7119aeac982a6692a2e86477a60b71231f3d0243cd28a01d134276.scope: Deactivated successfully.
Nov 22 03:25:11 compute-0 sudo[84162]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:11 compute-0 sudo[84241]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:11 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 47575de5-7903-46ad-af04-9d6df69c57e9 does not exist
Nov 22 03:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Nov 22 03:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:11 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev e645a9d9-3da1-475b-a469-16988b3e0666 (Updating mgr deployment (-1 -> 1))
Nov 22 03:25:11 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.aximwf from compute-0 -- ports [8765]
Nov 22 03:25:11 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.aximwf from compute-0 -- ports [8765]
Nov 22 03:25:11 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf[83210]: 2025-11-22T03:25:11.715+0000 7fbb95749140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Nov 22 03:25:11 compute-0 ceph-mgr[83214]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
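
The "missing NOTIFY_TYPES member" lines that repeat here for devicehealth, diskprediction_local and influx are load-time warnings, not failures; the loader keeps going, as the "Loading python module" line that follows each one shows. One way to confirm the modules came up anyway, assuming the admin keyring is available:

    ceph mgr module ls --format json | jq -r '.enabled_modules[]'
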
Nov 22 03:25:11 compute-0 ceph-mgr[83214]: mgr[py] Loading python module 'diskprediction_local'
Nov 22 03:25:11 compute-0 sudo[84478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:11 compute-0 sudo[84478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:11 compute-0 sudo[84478]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:11 compute-0 sudo[84503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:11 compute-0 sudo[84503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:11 compute-0 sudo[84503]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:11 compute-0 sudo[84528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:11 compute-0 sudo[84528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:11 compute-0 sudo[84528]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:12 compute-0 sudo[84553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 rm-daemon --fsid 7adcc38b-6484-5de6-b879-33a0309153df --name mgr.compute-0.aximwf --force --tcp-ports 8765
Nov 22 03:25:12 compute-0 sudo[84553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
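
The sudo command above is cephadm's daemon removal path: mgr.compute-0.aximwf appears redundant after the failover to mgr.compute-0.wbwfxq, so the orchestrator invokes the copied cephadm binary with `rm-daemon --force --tcp-ports 8765`. To see which mgr daemons the orchestrator tracks at any moment, one could run:

    ceph orch ps --daemon-type mgr
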
Nov 22 03:25:12 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/363677906' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 03:25:12 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:12 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:12 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:12 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:12 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:12 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:25:12 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:12 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:12 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf[83210]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Nov 22 03:25:12 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf[83210]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Nov 22 03:25:12 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf[83210]:   from numpy import show_config as show_numpy_config
Nov 22 03:25:12 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf[83210]: 2025-11-22T03:25:12.231+0000 7fbb95749140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 03:25:12 compute-0 ceph-mgr[83214]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Nov 22 03:25:12 compute-0 ceph-mgr[83214]: mgr[py] Loading python module 'influx'
Nov 22 03:25:12 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.aximwf for 7adcc38b-6484-5de6-b879-33a0309153df...
Nov 22 03:25:12 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf[83210]: 2025-11-22T03:25:12.462+0000 7fbb95749140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 03:25:12 compute-0 ceph-mgr[83214]: mgr[py] Module influx has missing NOTIFY_TYPES member
Nov 22 03:25:12 compute-0 ceph-mgr[83214]: mgr[py] Loading python module 'insights'
Nov 22 03:25:12 compute-0 podman[84646]: 2025-11-22 03:25:12.600795788 +0000 UTC m=+0.078954381 container died e96b150ed68980eb815f376f6adcc39457a31382f2e53be34d71ef25a1119f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c09448b08f1ea1e0420847aba97fda485bc01f9e9d4d5aa9afaf7f9bc014509-merged.mount: Deactivated successfully.
Nov 22 03:25:12 compute-0 podman[84646]: 2025-11-22 03:25:12.657399858 +0000 UTC m=+0.135558441 container remove e96b150ed68980eb815f376f6adcc39457a31382f2e53be34d71ef25a1119f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:25:12 compute-0 bash[84646]: ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-aximwf
Nov 22 03:25:12 compute-0 systemd[1]: ceph-7adcc38b-6484-5de6-b879-33a0309153df@mgr.compute-0.aximwf.service: Main process exited, code=exited, status=143/n/a
Nov 22 03:25:12 compute-0 systemd[1]: ceph-7adcc38b-6484-5de6-b879-33a0309153df@mgr.compute-0.aximwf.service: Failed with result 'exit-code'.
Nov 22 03:25:12 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.aximwf for 7adcc38b-6484-5de6-b879-33a0309153df.
Nov 22 03:25:12 compute-0 systemd[1]: ceph-7adcc38b-6484-5de6-b879-33a0309153df@mgr.compute-0.aximwf.service: Consumed 6.476s CPU time.
Nov 22 03:25:12 compute-0 systemd[1]: Reloading.
Nov 22 03:25:13 compute-0 systemd-rc-local-generator[84732]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:13 compute-0 systemd-sysv-generator[84736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
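
Both generator messages during this `Reloading.` pass are routine: rc.local is skipped because it lacks the executable bit, and the legacy SysV network script gets an auto-generated compatibility unit. If rc.local were actually meant to run on this host, the fix would simply be:

    chmod +x /etc/rc.d/rc.local
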
Nov 22 03:25:13 compute-0 ceph-mon[75011]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:13 compute-0 ceph-mon[75011]: Removing daemon mgr.compute-0.aximwf from compute-0 -- ports [8765]
Nov 22 03:25:13 compute-0 sudo[84553]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:13 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.aximwf
Nov 22 03:25:13 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.aximwf
Nov 22 03:25:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.aximwf"} v 0) v1
Nov 22 03:25:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.aximwf"}]: dispatch
Nov 22 03:25:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.aximwf"}]': finished
Nov 22 03:25:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 03:25:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:13 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev e645a9d9-3da1-475b-a469-16988b3e0666 (Updating mgr deployment (-1 -> 1))
Nov 22 03:25:13 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event e645a9d9-3da1-475b-a469-16988b3e0666 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Nov 22 03:25:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Nov 22 03:25:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:13 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev f96e7a91-2b31-4bbf-b47d-315b548ad33f does not exist
Nov 22 03:25:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:25:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:25:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:25:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:25:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:25:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:13 compute-0 sudo[84743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:13 compute-0 sudo[84743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:13 compute-0 sudo[84743]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:13 compute-0 sudo[84768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:13 compute-0 sudo[84768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:13 compute-0 sudo[84768]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:13 compute-0 sudo[84793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:13 compute-0 sudo[84793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:13 compute-0 sudo[84793]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:13 compute-0 sudo[84818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:25:13 compute-0 sudo[84818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
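
This is the OSD creation proper: cephadm hands three pre-built logical volumes to `ceph-volume lvm batch --no-auto ... --yes --no-systemd` inside a container. The `lvm` PV-online messages further down show those VGs are backed by loop devices (/dev/loop3-5), so a sketch of how one such device could have been prepared — backing-file path and size are assumptions, not taken from this log — looks like:

    truncate -s 10G /srv/ceph-osd0.img            # hypothetical backing file and size
    dev=$(losetup --find --show /srv/ceph-osd0.img)
    pvcreate "$dev"
    vgcreate ceph_vg0 "$dev"
    lvcreate -l 100%FREE -n ceph_lv0 ceph_vg0     # yields /dev/ceph_vg0/ceph_lv0 as passed above
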
Nov 22 03:25:13 compute-0 podman[84882]: 2025-11-22 03:25:13.90857736 +0000 UTC m=+0.050591374 container create 9413b02bcdcbd670f3cffa2531d650506fb24efe2539a5f3f7044f0556a59406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:13 compute-0 systemd[1]: Started libpod-conmon-9413b02bcdcbd670f3cffa2531d650506fb24efe2539a5f3f7044f0556a59406.scope.
Nov 22 03:25:13 compute-0 podman[84882]: 2025-11-22 03:25:13.883933773 +0000 UTC m=+0.025947867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:14 compute-0 podman[84882]: 2025-11-22 03:25:14.011656932 +0000 UTC m=+0.153671037 container init 9413b02bcdcbd670f3cffa2531d650506fb24efe2539a5f3f7044f0556a59406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:25:14 compute-0 podman[84882]: 2025-11-22 03:25:14.024484121 +0000 UTC m=+0.166498166 container start 9413b02bcdcbd670f3cffa2531d650506fb24efe2539a5f3f7044f0556a59406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:14 compute-0 podman[84882]: 2025-11-22 03:25:14.028362433 +0000 UTC m=+0.170376438 container attach 9413b02bcdcbd670f3cffa2531d650506fb24efe2539a5f3f7044f0556a59406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:25:14 compute-0 bold_sinoussi[84899]: 167 167
Nov 22 03:25:14 compute-0 systemd[1]: libpod-9413b02bcdcbd670f3cffa2531d650506fb24efe2539a5f3f7044f0556a59406.scope: Deactivated successfully.
Nov 22 03:25:14 compute-0 podman[84882]: 2025-11-22 03:25:14.032292353 +0000 UTC m=+0.174306398 container died 9413b02bcdcbd670f3cffa2531d650506fb24efe2539a5f3f7044f0556a59406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a3681b454ddbf1687fc9d356a2f53cc3583d5b5a5da8510615470cb3f71f797-merged.mount: Deactivated successfully.
Nov 22 03:25:14 compute-0 podman[84882]: 2025-11-22 03:25:14.074070041 +0000 UTC m=+0.216084056 container remove 9413b02bcdcbd670f3cffa2531d650506fb24efe2539a5f3f7044f0556a59406 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_sinoussi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:25:14 compute-0 systemd[1]: libpod-conmon-9413b02bcdcbd670f3cffa2531d650506fb24efe2539a5f3f7044f0556a59406.scope: Deactivated successfully.
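
The one-shot bold_sinoussi container above exists only to print "167 167", the uid and gid of the ceph user inside the image; cephadm probes this before laying down OSD directories so it can chown them correctly. A rough manual equivalent, assuming the probe stats /var/lib/ceph as recent cephadm versions do:

    podman run --rm quay.io/ceph/ceph:v18 stat -c '%u %g' /var/lib/ceph
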
Nov 22 03:25:14 compute-0 ceph-mon[75011]: Removing key for mgr.compute-0.aximwf
Nov 22 03:25:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.aximwf"}]: dispatch
Nov 22 03:25:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.aximwf"}]': finished
Nov 22 03:25:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:25:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:25:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:14 compute-0 podman[84923]: 2025-11-22 03:25:14.270186684 +0000 UTC m=+0.052792305 container create 6084f91eb47458a86f0c6b2c72fc7e0b8a1d3fe5efe4ea0b8d5c91d2a6a5fe57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_haibt, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:25:14 compute-0 systemd[1]: Started libpod-conmon-6084f91eb47458a86f0c6b2c72fc7e0b8a1d3fe5efe4ea0b8d5c91d2a6a5fe57.scope.
Nov 22 03:25:14 compute-0 podman[84923]: 2025-11-22 03:25:14.244953525 +0000 UTC m=+0.027559235 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90b7b87ad77e59e593ccf32ea5c686259c13701b0f9731fe9b95dcdd854bc083/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90b7b87ad77e59e593ccf32ea5c686259c13701b0f9731fe9b95dcdd854bc083/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90b7b87ad77e59e593ccf32ea5c686259c13701b0f9731fe9b95dcdd854bc083/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90b7b87ad77e59e593ccf32ea5c686259c13701b0f9731fe9b95dcdd854bc083/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90b7b87ad77e59e593ccf32ea5c686259c13701b0f9731fe9b95dcdd854bc083/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
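
The burst of xfs "timestamps until 2038" lines is the kernel noting, once per bind-mount into the new container, that the underlying XFS filesystem was created without the bigtime feature. It is informational only; whether a given filesystem has the feature can be checked with:

    xfs_info / | grep -o 'bigtime=[01]'    # substitute the mount point of interest
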
Nov 22 03:25:14 compute-0 podman[84923]: 2025-11-22 03:25:14.369932912 +0000 UTC m=+0.152538612 container init 6084f91eb47458a86f0c6b2c72fc7e0b8a1d3fe5efe4ea0b8d5c91d2a6a5fe57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:25:14 compute-0 podman[84923]: 2025-11-22 03:25:14.386149473 +0000 UTC m=+0.168755124 container start 6084f91eb47458a86f0c6b2c72fc7e0b8a1d3fe5efe4ea0b8d5c91d2a6a5fe57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_haibt, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:25:14 compute-0 podman[84923]: 2025-11-22 03:25:14.390363854 +0000 UTC m=+0.172969565 container attach 6084f91eb47458a86f0c6b2c72fc7e0b8a1d3fe5efe4ea0b8d5c91d2a6a5fe57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_haibt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:25:15 compute-0 ceph-mon[75011]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
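
The recurring `_set_new_cache_sizes` lines are the monitor periodically re-balancing its internal cache allocations against its memory target; they are housekeeping, not a problem indicator. The target in effect can be read back with:

    ceph config get mon mon_memory_target
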
Nov 22 03:25:15 compute-0 determined_haibt[84939]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:25:15 compute-0 determined_haibt[84939]: --> relative data size: 1.0
Nov 22 03:25:15 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 03:25:15 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 8bea6992-7a26-4e04-a61e-1d348ad79289
Nov 22 03:25:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289"} v 0) v1
Nov 22 03:25:16 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3459891137' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289"}]: dispatch
Nov 22 03:25:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Nov 22 03:25:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:25:16 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3459891137' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289"}]': finished
Nov 22 03:25:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Nov 22 03:25:16 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Nov 22 03:25:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:25:16 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:16 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:25:16 compute-0 ceph-mgr[75294]: [progress INFO root] Writing back 3 completed events
Nov 22 03:25:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:25:16 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:16 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 03:25:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3459891137' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289"}]: dispatch
Nov 22 03:25:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3459891137' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289"}]': finished
Nov 22 03:25:16 compute-0 ceph-mon[75011]: osdmap e4: 1 total, 0 up, 1 in
Nov 22 03:25:16 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:16 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:16 compute-0 lvm[85001]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 03:25:16 compute-0 lvm[85001]: VG ceph_vg0 finished
Nov 22 03:25:16 compute-0 determined_haibt[84939]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Nov 22 03:25:16 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Nov 22 03:25:16 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 03:25:16 compute-0 determined_haibt[84939]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 03:25:16 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Nov 22 03:25:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 22 03:25:16 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1941145039' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 03:25:16 compute-0 determined_haibt[84939]:  stderr: got monmap epoch 1
Nov 22 03:25:16 compute-0 determined_haibt[84939]: --> Creating keyring file for osd.0
Nov 22 03:25:16 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Nov 22 03:25:16 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Nov 22 03:25:16 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 8bea6992-7a26-4e04-a61e-1d348ad79289 --setuser ceph --setgroup ceph
Nov 22 03:25:17 compute-0 ceph-mon[75011]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:17 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1941145039' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 03:25:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:18 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 22 03:25:18 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 03:25:18 compute-0 ceph-mon[75011]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:18 compute-0 ceph-mon[75011]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Nov 22 03:25:18 compute-0 ceph-mon[75011]: Cluster is now healthy
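
Note that HEALTH_WARN cleared on the strength of `osd new` alone: TOO_FEW_OSDS compares the OSD count in the map (1 at osdmap e4, soon 3) against osd_pool_default_size=1, so the check passes even though the map still reports 0 up. At this moment:

    ceph health detail    # HEALTH_OK here, despite "1 total, 0 up, 1 in"
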
Nov 22 03:25:19 compute-0 determined_haibt[84939]:  stderr: 2025-11-22T03:25:16.869+0000 7f4b1373c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:25:19 compute-0 determined_haibt[84939]:  stderr: 2025-11-22T03:25:16.869+0000 7f4b1373c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:25:19 compute-0 determined_haibt[84939]:  stderr: 2025-11-22T03:25:16.869+0000 7f4b1373c740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:25:19 compute-0 determined_haibt[84939]:  stderr: 2025-11-22T03:25:16.869+0000 7f4b1373c740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Nov 22 03:25:19 compute-0 determined_haibt[84939]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
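
The four stderr lines above are expected on a brand-new LV: `ceph-osd --mkfs` first tries to read an existing BlueStore label and fsid, finds none to decode, logs the failures, and then writes a fresh label — hence "prepare successful" immediately after. Once mkfs has run, the label can be inspected with:

    ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0
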
Nov 22 03:25:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:19 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 03:25:19 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Nov 22 03:25:19 compute-0 determined_haibt[84939]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 03:25:19 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Nov 22 03:25:19 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 03:25:19 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 03:25:19 compute-0 determined_haibt[84939]: --> ceph-volume lvm activate successful for osd ID: 0
Nov 22 03:25:19 compute-0 determined_haibt[84939]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Nov 22 03:25:19 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 03:25:19 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 104ff426-5a1d-4d63-8f77-501ee5d58b1f
Nov 22 03:25:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f"} v 0) v1
Nov 22 03:25:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4181800900' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f"}]: dispatch
Nov 22 03:25:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Nov 22 03:25:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:25:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4181800900' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f"}]': finished
Nov 22 03:25:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Nov 22 03:25:19 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Nov 22 03:25:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:25:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:19 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:25:19 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
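
These "failed to return metadata" messages are a timing artifact rather than a deployment error: `osd new` has only reserved ids 0 and 1, and metadata is registered when each ceph-osd daemon actually boots, which has not happened yet. After the OSDs start, the same query succeeds:

    ceph osd metadata 0    # returns a JSON blob once osd.0 has booted
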
Nov 22 03:25:20 compute-0 lvm[85935]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 03:25:20 compute-0 lvm[85935]: VG ceph_vg1 finished
Nov 22 03:25:20 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 03:25:20 compute-0 determined_haibt[84939]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Nov 22 03:25:20 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Nov 22 03:25:20 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 03:25:20 compute-0 determined_haibt[84939]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 03:25:20 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Nov 22 03:25:20 compute-0 ceph-mon[75011]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4181800900' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f"}]: dispatch
Nov 22 03:25:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4181800900' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f"}]': finished
Nov 22 03:25:20 compute-0 ceph-mon[75011]: osdmap e5: 2 total, 0 up, 2 in
Nov 22 03:25:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:25:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 22 03:25:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1062604062' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 03:25:20 compute-0 determined_haibt[84939]:  stderr: got monmap epoch 1
Nov 22 03:25:20 compute-0 determined_haibt[84939]: --> Creating keyring file for osd.1
Nov 22 03:25:20 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Nov 22 03:25:20 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Nov 22 03:25:20 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 104ff426-5a1d-4d63-8f77-501ee5d58b1f --setuser ceph --setgroup ceph
Nov 22 03:25:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:21 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1062604062' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 03:25:22 compute-0 ceph-mon[75011]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:23 compute-0 determined_haibt[84939]:  stderr: 2025-11-22T03:25:20.688+0000 7f1b44fe8740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:25:23 compute-0 determined_haibt[84939]:  stderr: 2025-11-22T03:25:20.688+0000 7f1b44fe8740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:25:23 compute-0 determined_haibt[84939]:  stderr: 2025-11-22T03:25:20.688+0000 7f1b44fe8740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:25:23 compute-0 determined_haibt[84939]:  stderr: 2025-11-22T03:25:20.688+0000 7f1b44fe8740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Nov 22 03:25:23 compute-0 determined_haibt[84939]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 03:25:23 compute-0 determined_haibt[84939]: --> ceph-volume lvm activate successful for osd ID: 1
Nov 22 03:25:23 compute-0 determined_haibt[84939]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Nov 22 03:25:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new da204276-98db-4558-b1d5-f5821c78e391
Nov 22 03:25:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "da204276-98db-4558-b1d5-f5821c78e391"} v 0) v1
Nov 22 03:25:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1586829571' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "da204276-98db-4558-b1d5-f5821c78e391"}]: dispatch
Nov 22 03:25:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Nov 22 03:25:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:25:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1586829571' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "da204276-98db-4558-b1d5-f5821c78e391"}]': finished
Nov 22 03:25:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Nov 22 03:25:23 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Nov 22 03:25:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:25:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:23 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:25:23 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:23 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:23 compute-0 lvm[86867]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 03:25:23 compute-0 lvm[86867]: VG ceph_vg2 finished
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph-authtool --gen-print-key
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 03:25:23 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Nov 22 03:25:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Nov 22 03:25:24 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2000918747' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 03:25:24 compute-0 determined_haibt[84939]:  stderr: got monmap epoch 1
Nov 22 03:25:24 compute-0 determined_haibt[84939]: --> Creating keyring file for osd.2
Nov 22 03:25:24 compute-0 ceph-mon[75011]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1586829571' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "da204276-98db-4558-b1d5-f5821c78e391"}]: dispatch
Nov 22 03:25:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1586829571' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "da204276-98db-4558-b1d5-f5821c78e391"}]': finished
Nov 22 03:25:24 compute-0 ceph-mon[75011]: osdmap e6: 3 total, 0 up, 3 in
Nov 22 03:25:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2000918747' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Nov 22 03:25:24 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Nov 22 03:25:24 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Nov 22 03:25:24 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid da204276-98db-4558-b1d5-f5821c78e391 --setuser ceph --setgroup ceph
Nov 22 03:25:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:25:26 compute-0 ceph-mon[75011]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:26 compute-0 determined_haibt[84939]:  stderr: 2025-11-22T03:25:24.482+0000 7f6d5f7ed740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:25:26 compute-0 determined_haibt[84939]:  stderr: 2025-11-22T03:25:24.482+0000 7f6d5f7ed740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:25:26 compute-0 determined_haibt[84939]:  stderr: 2025-11-22T03:25:24.482+0000 7f6d5f7ed740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Nov 22 03:25:26 compute-0 determined_haibt[84939]:  stderr: 2025-11-22T03:25:24.482+0000 7f6d5f7ed740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Nov 22 03:25:26 compute-0 determined_haibt[84939]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Nov 22 03:25:27 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 03:25:27 compute-0 determined_haibt[84939]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Nov 22 03:25:27 compute-0 determined_haibt[84939]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 03:25:27 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Nov 22 03:25:27 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 03:25:27 compute-0 determined_haibt[84939]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 03:25:27 compute-0 determined_haibt[84939]: --> ceph-volume lvm activate successful for osd ID: 2
Nov 22 03:25:27 compute-0 determined_haibt[84939]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Nov 22 03:25:27 compute-0 systemd[1]: libpod-6084f91eb47458a86f0c6b2c72fc7e0b8a1d3fe5efe4ea0b8d5c91d2a6a5fe57.scope: Deactivated successfully.
Nov 22 03:25:27 compute-0 systemd[1]: libpod-6084f91eb47458a86f0c6b2c72fc7e0b8a1d3fe5efe4ea0b8d5c91d2a6a5fe57.scope: Consumed 6.628s CPU time.
Nov 22 03:25:27 compute-0 podman[84923]: 2025-11-22 03:25:27.100108089 +0000 UTC m=+12.882713730 container died 6084f91eb47458a86f0c6b2c72fc7e0b8a1d3fe5efe4ea0b8d5c91d2a6a5fe57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_haibt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:25:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-90b7b87ad77e59e593ccf32ea5c686259c13701b0f9731fe9b95dcdd854bc083-merged.mount: Deactivated successfully.
Nov 22 03:25:27 compute-0 podman[84923]: 2025-11-22 03:25:27.185595857 +0000 UTC m=+12.968201518 container remove 6084f91eb47458a86f0c6b2c72fc7e0b8a1d3fe5efe4ea0b8d5c91d2a6a5fe57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_haibt, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:27 compute-0 systemd[1]: libpod-conmon-6084f91eb47458a86f0c6b2c72fc7e0b8a1d3fe5efe4ea0b8d5c91d2a6a5fe57.scope: Deactivated successfully.
Nov 22 03:25:27 compute-0 sudo[84818]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:27 compute-0 sudo[87784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:27 compute-0 sudo[87784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:27 compute-0 sudo[87784]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:27 compute-0 sudo[87809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:27 compute-0 sudo[87809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:27 compute-0 sudo[87809]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:27 compute-0 sudo[87834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:27 compute-0 sudo[87834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:27 compute-0 sudo[87834]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:27 compute-0 sudo[87859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:25:27 compute-0 sudo[87859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:27 compute-0 podman[87923]: 2025-11-22 03:25:27.939761396 +0000 UTC m=+0.061749273 container create b518f35f248b5d23aa90cc77b39a8a86e3ca540e6e56ff8cc5184c7a0c57d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:27 compute-0 systemd[1]: Started libpod-conmon-b518f35f248b5d23aa90cc77b39a8a86e3ca540e6e56ff8cc5184c7a0c57d94f.scope.
Nov 22 03:25:28 compute-0 podman[87923]: 2025-11-22 03:25:27.914979105 +0000 UTC m=+0.036967072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:28 compute-0 podman[87923]: 2025-11-22 03:25:28.036502387 +0000 UTC m=+0.158490344 container init b518f35f248b5d23aa90cc77b39a8a86e3ca540e6e56ff8cc5184c7a0c57d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 03:25:28 compute-0 podman[87923]: 2025-11-22 03:25:28.044519817 +0000 UTC m=+0.166507694 container start b518f35f248b5d23aa90cc77b39a8a86e3ca540e6e56ff8cc5184c7a0c57d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:25:28 compute-0 podman[87923]: 2025-11-22 03:25:28.047574896 +0000 UTC m=+0.169562793 container attach b518f35f248b5d23aa90cc77b39a8a86e3ca540e6e56ff8cc5184c7a0c57d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:25:28 compute-0 agitated_shamir[87940]: 167 167
Nov 22 03:25:28 compute-0 systemd[1]: libpod-b518f35f248b5d23aa90cc77b39a8a86e3ca540e6e56ff8cc5184c7a0c57d94f.scope: Deactivated successfully.
Nov 22 03:25:28 compute-0 podman[87945]: 2025-11-22 03:25:28.10072394 +0000 UTC m=+0.033926258 container died b518f35f248b5d23aa90cc77b39a8a86e3ca540e6e56ff8cc5184c7a0c57d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:25:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6511ca3a9a63d7c068fac2ed358583852086e17c90eba422f7cfbebf5a086c22-merged.mount: Deactivated successfully.
Nov 22 03:25:28 compute-0 podman[87945]: 2025-11-22 03:25:28.146555966 +0000 UTC m=+0.079758224 container remove b518f35f248b5d23aa90cc77b39a8a86e3ca540e6e56ff8cc5184c7a0c57d94f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_shamir, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:25:28 compute-0 systemd[1]: libpod-conmon-b518f35f248b5d23aa90cc77b39a8a86e3ca540e6e56ff8cc5184c7a0c57d94f.scope: Deactivated successfully.
Nov 22 03:25:28 compute-0 podman[87965]: 2025-11-22 03:25:28.321864664 +0000 UTC m=+0.047140617 container create 3bf34821c054c300875019f0e5345747a504b185640eab13e964d32f29b928cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:25:28 compute-0 systemd[1]: Started libpod-conmon-3bf34821c054c300875019f0e5345747a504b185640eab13e964d32f29b928cb.scope.
Nov 22 03:25:28 compute-0 podman[87965]: 2025-11-22 03:25:28.302715864 +0000 UTC m=+0.027991837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a813cf9f9ba8e986707773af069730b7a21041dc6675c5744eb846171c4ca24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a813cf9f9ba8e986707773af069730b7a21041dc6675c5744eb846171c4ca24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a813cf9f9ba8e986707773af069730b7a21041dc6675c5744eb846171c4ca24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a813cf9f9ba8e986707773af069730b7a21041dc6675c5744eb846171c4ca24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
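The kernel's "supports timestamps until 2038" lines, which recur for every overlay bind mount below, are a benign warning that these xfs filesystems were created without the bigtime feature, so inode timestamps cap at 0x7fffffff seconds (2038-01-19). They have no effect on the deployment. A quick way to check a given mount, assuming xfsprogs is installed and recent enough to report the field (a sketch; the path is illustrative):

    # Report whether an xfs mount carries the "bigtime" feature.
    import subprocess

    info = subprocess.run(
        ["xfs_info", "/var/lib/containers"],
        capture_output=True, text=True, check=True,
    ).stdout
    print("bigtime enabled:", "bigtime=1" in info)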
Nov 22 03:25:28 compute-0 ceph-mon[75011]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:28 compute-0 podman[87965]: 2025-11-22 03:25:28.432359254 +0000 UTC m=+0.157635287 container init 3bf34821c054c300875019f0e5345747a504b185640eab13e964d32f29b928cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jang, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:28 compute-0 podman[87965]: 2025-11-22 03:25:28.4414951 +0000 UTC m=+0.166771043 container start 3bf34821c054c300875019f0e5345747a504b185640eab13e964d32f29b928cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jang, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:28 compute-0 podman[87965]: 2025-11-22 03:25:28.446169649 +0000 UTC m=+0.171445611 container attach 3bf34821c054c300875019f0e5345747a504b185640eab13e964d32f29b928cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:29 compute-0 loving_jang[87981]: {
Nov 22 03:25:29 compute-0 loving_jang[87981]:     "0": [
Nov 22 03:25:29 compute-0 loving_jang[87981]:         {
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "devices": [
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "/dev/loop3"
Nov 22 03:25:29 compute-0 loving_jang[87981]:             ],
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_name": "ceph_lv0",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_size": "21470642176",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "name": "ceph_lv0",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "tags": {
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.cluster_name": "ceph",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.crush_device_class": "",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.encrypted": "0",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.osd_id": "0",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.type": "block",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.vdo": "0"
Nov 22 03:25:29 compute-0 loving_jang[87981]:             },
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "type": "block",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "vg_name": "ceph_vg0"
Nov 22 03:25:29 compute-0 loving_jang[87981]:         }
Nov 22 03:25:29 compute-0 loving_jang[87981]:     ],
Nov 22 03:25:29 compute-0 loving_jang[87981]:     "1": [
Nov 22 03:25:29 compute-0 loving_jang[87981]:         {
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "devices": [
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "/dev/loop4"
Nov 22 03:25:29 compute-0 loving_jang[87981]:             ],
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_name": "ceph_lv1",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_size": "21470642176",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "name": "ceph_lv1",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "tags": {
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.cluster_name": "ceph",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.crush_device_class": "",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.encrypted": "0",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.osd_id": "1",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.type": "block",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.vdo": "0"
Nov 22 03:25:29 compute-0 loving_jang[87981]:             },
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "type": "block",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "vg_name": "ceph_vg1"
Nov 22 03:25:29 compute-0 loving_jang[87981]:         }
Nov 22 03:25:29 compute-0 loving_jang[87981]:     ],
Nov 22 03:25:29 compute-0 loving_jang[87981]:     "2": [
Nov 22 03:25:29 compute-0 loving_jang[87981]:         {
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "devices": [
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "/dev/loop5"
Nov 22 03:25:29 compute-0 loving_jang[87981]:             ],
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_name": "ceph_lv2",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_size": "21470642176",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "name": "ceph_lv2",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "tags": {
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.cluster_name": "ceph",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.crush_device_class": "",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.encrypted": "0",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.osd_id": "2",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.type": "block",
Nov 22 03:25:29 compute-0 loving_jang[87981]:                 "ceph.vdo": "0"
Nov 22 03:25:29 compute-0 loving_jang[87981]:             },
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "type": "block",
Nov 22 03:25:29 compute-0 loving_jang[87981]:             "vg_name": "ceph_vg2"
Nov 22 03:25:29 compute-0 loving_jang[87981]:         }
Nov 22 03:25:29 compute-0 loving_jang[87981]:     ]
Nov 22 03:25:29 compute-0 loving_jang[87981]: }
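The JSON above is the output of the "ceph-volume lvm list --format json" call logged at 03:25:27: one entry per OSD id, each carrying the backing LV and its ceph.* tags. A small sketch for reducing it to an id / device / osd_fsid table, assuming the JSON has been saved to lvm_list.json:

    # Summarize "ceph-volume lvm list --format json" output.
    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        lv = lvs[0]  # this deployment has a single block LV per OSD
        print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
    # 0 /dev/ceph_vg0/ceph_lv0 8bea6992-7a26-4e04-a61e-1d348ad79289
    # 1 /dev/ceph_vg1/ceph_lv1 104ff426-5a1d-4d63-8f77-501ee5d58b1f
    # 2 /dev/ceph_vg2/ceph_lv2 da204276-98db-4558-b1d5-f5821c78e391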
Nov 22 03:25:29 compute-0 systemd[1]: libpod-3bf34821c054c300875019f0e5345747a504b185640eab13e964d32f29b928cb.scope: Deactivated successfully.
Nov 22 03:25:29 compute-0 conmon[87981]: conmon 3bf34821c054c3008750 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3bf34821c054c300875019f0e5345747a504b185640eab13e964d32f29b928cb.scope/container/memory.events
Nov 22 03:25:29 compute-0 podman[87965]: 2025-11-22 03:25:29.215516395 +0000 UTC m=+0.940792348 container died 3bf34821c054c300875019f0e5345747a504b185640eab13e964d32f29b928cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:25:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a813cf9f9ba8e986707773af069730b7a21041dc6675c5744eb846171c4ca24-merged.mount: Deactivated successfully.
Nov 22 03:25:29 compute-0 podman[87965]: 2025-11-22 03:25:29.285387231 +0000 UTC m=+1.010663184 container remove 3bf34821c054c300875019f0e5345747a504b185640eab13e964d32f29b928cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 03:25:29 compute-0 systemd[1]: libpod-conmon-3bf34821c054c300875019f0e5345747a504b185640eab13e964d32f29b928cb.scope: Deactivated successfully.
Nov 22 03:25:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:29 compute-0 sudo[87859]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Nov 22 03:25:29 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 22 03:25:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:25:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:29 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Nov 22 03:25:29 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Nov 22 03:25:29 compute-0 sudo[88002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:29 compute-0 sudo[88002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:29 compute-0 sudo[88002]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Nov 22 03:25:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:29 compute-0 sudo[88027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:29 compute-0 sudo[88027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:29 compute-0 sudo[88027]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:29 compute-0 sudo[88052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:29 compute-0 sudo[88052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:29 compute-0 sudo[88052]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:29 compute-0 sudo[88077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:25:29 compute-0 sudo[88077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
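The mgr drives this node through a content-addressed copy of the cephadm binary staged under /var/lib/ceph/<fsid>/, invoked via sudo for each orchestration step ("_orch deploy" here). The same copy can be run by hand for inspection, for example to list the daemons deployed on this host (a sketch using the paths from this log; "ls" is a standard cephadm subcommand):

    # Invoke the staged cephadm copy directly to list daemons on this host.
    import subprocess

    fsid = "7adcc38b-6484-5de6-b879-33a0309153df"
    binary = (f"/var/lib/ceph/{fsid}/cephadm."
              "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    subprocess.run(["sudo", "python3", binary, "ls"], check=True)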
Nov 22 03:25:30 compute-0 podman[88142]: 2025-11-22 03:25:30.02649653 +0000 UTC m=+0.043179700 container create 12c833256ac22f3bf06275206543a92929048e287a4ff7f0fa36833388e55aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_satoshi, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:30 compute-0 systemd[1]: Started libpod-conmon-12c833256ac22f3bf06275206543a92929048e287a4ff7f0fa36833388e55aa2.scope.
Nov 22 03:25:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:30 compute-0 podman[88142]: 2025-11-22 03:25:30.0090009 +0000 UTC m=+0.025684060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:30 compute-0 podman[88142]: 2025-11-22 03:25:30.117809897 +0000 UTC m=+0.134493056 container init 12c833256ac22f3bf06275206543a92929048e287a4ff7f0fa36833388e55aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_satoshi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:25:30 compute-0 podman[88142]: 2025-11-22 03:25:30.124025897 +0000 UTC m=+0.140709036 container start 12c833256ac22f3bf06275206543a92929048e287a4ff7f0fa36833388e55aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_satoshi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:30 compute-0 podman[88142]: 2025-11-22 03:25:30.128964512 +0000 UTC m=+0.145647652 container attach 12c833256ac22f3bf06275206543a92929048e287a4ff7f0fa36833388e55aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_satoshi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:30 compute-0 loving_satoshi[88159]: 167 167
Nov 22 03:25:30 compute-0 systemd[1]: libpod-12c833256ac22f3bf06275206543a92929048e287a4ff7f0fa36833388e55aa2.scope: Deactivated successfully.
Nov 22 03:25:30 compute-0 podman[88142]: 2025-11-22 03:25:30.132268955 +0000 UTC m=+0.148952125 container died 12c833256ac22f3bf06275206543a92929048e287a4ff7f0fa36833388e55aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:25:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-68006b7945b05a034da8dbc60f23a6f51291a466d6bc2fcb6727fbec0475d605-merged.mount: Deactivated successfully.
Nov 22 03:25:30 compute-0 podman[88142]: 2025-11-22 03:25:30.178714906 +0000 UTC m=+0.195398045 container remove 12c833256ac22f3bf06275206543a92929048e287a4ff7f0fa36833388e55aa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:30 compute-0 systemd[1]: libpod-conmon-12c833256ac22f3bf06275206543a92929048e287a4ff7f0fa36833388e55aa2.scope: Deactivated successfully.
Nov 22 03:25:30 compute-0 ceph-mon[75011]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:30 compute-0 ceph-mon[75011]: Deploying daemon osd.0 on compute-0
Nov 22 03:25:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:25:30 compute-0 podman[88190]: 2025-11-22 03:25:30.516931987 +0000 UTC m=+0.058914364 container create 773dcbc4484d6b9aabae7a612f73965bd750bddfba82baa889b0a38b0d75d630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:25:30 compute-0 systemd[1]: Started libpod-conmon-773dcbc4484d6b9aabae7a612f73965bd750bddfba82baa889b0a38b0d75d630.scope.
Nov 22 03:25:30 compute-0 podman[88190]: 2025-11-22 03:25:30.495998568 +0000 UTC m=+0.037980955 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f150161e8bb0f590eafa799ce70e4333d2a71673f66223c816e5c7c80615a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f150161e8bb0f590eafa799ce70e4333d2a71673f66223c816e5c7c80615a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f150161e8bb0f590eafa799ce70e4333d2a71673f66223c816e5c7c80615a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f150161e8bb0f590eafa799ce70e4333d2a71673f66223c816e5c7c80615a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20f150161e8bb0f590eafa799ce70e4333d2a71673f66223c816e5c7c80615a1/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:30 compute-0 podman[88190]: 2025-11-22 03:25:30.61902575 +0000 UTC m=+0.161008197 container init 773dcbc4484d6b9aabae7a612f73965bd750bddfba82baa889b0a38b0d75d630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate-test, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:25:30 compute-0 podman[88190]: 2025-11-22 03:25:30.635330115 +0000 UTC m=+0.177312512 container start 773dcbc4484d6b9aabae7a612f73965bd750bddfba82baa889b0a38b0d75d630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:25:30 compute-0 podman[88190]: 2025-11-22 03:25:30.640089223 +0000 UTC m=+0.182071690 container attach 773dcbc4484d6b9aabae7a612f73965bd750bddfba82baa889b0a38b0d75d630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate-test, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:25:31 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate-test[88206]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 22 03:25:31 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate-test[88206]:                             [--no-systemd] [--no-tmpfs]
Nov 22 03:25:31 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate-test[88206]: ceph-volume activate: error: unrecognized arguments: --bad-option
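This usage error is deliberate, not a deployment failure: the container is named osd-0-activate-test, and cephadm appears to be probing ceph-volume with an intentionally invalid flag to learn whether the unified "activate" subcommand exists before choosing an activation path. "unrecognized arguments" means the subcommand is present; an older ceph-volume would instead reject "activate" itself as an invalid choice. A hypothetical reconstruction of the probe (not cephadm's exact code):

    # Capability probe: does this ceph-volume ship the unified "activate" subcommand?
    import subprocess

    res = subprocess.run(
        ["ceph-volume", "activate", "--bad-option"],
        capture_output=True, text=True,
    )
    print("unified activate available:", "unrecognized arguments" in res.stderr)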
Nov 22 03:25:31 compute-0 systemd[1]: libpod-773dcbc4484d6b9aabae7a612f73965bd750bddfba82baa889b0a38b0d75d630.scope: Deactivated successfully.
Nov 22 03:25:31 compute-0 podman[88190]: 2025-11-22 03:25:31.306611108 +0000 UTC m=+0.848593524 container died 773dcbc4484d6b9aabae7a612f73965bd750bddfba82baa889b0a38b0d75d630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate-test, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:25:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-20f150161e8bb0f590eafa799ce70e4333d2a71673f66223c816e5c7c80615a1-merged.mount: Deactivated successfully.
Nov 22 03:25:31 compute-0 podman[88190]: 2025-11-22 03:25:31.38066427 +0000 UTC m=+0.922646636 container remove 773dcbc4484d6b9aabae7a612f73965bd750bddfba82baa889b0a38b0d75d630 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 03:25:31 compute-0 systemd[1]: libpod-conmon-773dcbc4484d6b9aabae7a612f73965bd750bddfba82baa889b0a38b0d75d630.scope: Deactivated successfully.
Nov 22 03:25:31 compute-0 systemd[1]: Reloading.
Nov 22 03:25:31 compute-0 systemd-rc-local-generator[88270]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:31 compute-0 systemd-sysv-generator[88273]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:25:32 compute-0 systemd[1]: Reloading.
Nov 22 03:25:32 compute-0 systemd-sysv-generator[88312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:25:32 compute-0 systemd-rc-local-generator[88306]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:32 compute-0 systemd[1]: Starting Ceph osd.0 for 7adcc38b-6484-5de6-b879-33a0309153df...
Nov 22 03:25:32 compute-0 ceph-mon[75011]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:32 compute-0 podman[88367]: 2025-11-22 03:25:32.62540393 +0000 UTC m=+0.059533668 container create 33d8ad3654b2e5a8dbb858d611adc330300d2f1d3425a00ba3c1836a716770ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:32 compute-0 podman[88367]: 2025-11-22 03:25:32.594207506 +0000 UTC m=+0.028337344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c723c45e3ddc3332ba72b7a679092a7dcecef58a1b454a6de93b11c8c90af40c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c723c45e3ddc3332ba72b7a679092a7dcecef58a1b454a6de93b11c8c90af40c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c723c45e3ddc3332ba72b7a679092a7dcecef58a1b454a6de93b11c8c90af40c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c723c45e3ddc3332ba72b7a679092a7dcecef58a1b454a6de93b11c8c90af40c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c723c45e3ddc3332ba72b7a679092a7dcecef58a1b454a6de93b11c8c90af40c/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:32 compute-0 podman[88367]: 2025-11-22 03:25:32.728800766 +0000 UTC m=+0.162930604 container init 33d8ad3654b2e5a8dbb858d611adc330300d2f1d3425a00ba3c1836a716770ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:32 compute-0 podman[88367]: 2025-11-22 03:25:32.740366953 +0000 UTC m=+0.174496690 container start 33d8ad3654b2e5a8dbb858d611adc330300d2f1d3425a00ba3c1836a716770ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:32 compute-0 podman[88367]: 2025-11-22 03:25:32.744500055 +0000 UTC m=+0.178629882 container attach 33d8ad3654b2e5a8dbb858d611adc330300d2f1d3425a00ba3c1836a716770ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:33 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate[88382]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 03:25:33 compute-0 bash[88367]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 03:25:33 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate[88382]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 03:25:33 compute-0 bash[88367]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 03:25:33 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate[88382]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 03:25:33 compute-0 bash[88367]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Nov 22 03:25:33 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate[88382]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 03:25:33 compute-0 bash[88367]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Nov 22 03:25:33 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate[88382]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 03:25:33 compute-0 bash[88367]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Nov 22 03:25:33 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate[88382]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 03:25:33 compute-0 bash[88367]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Nov 22 03:25:33 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate[88382]: --> ceph-volume raw activate successful for osd ID: 0
Nov 22 03:25:33 compute-0 bash[88367]: --> ceph-volume raw activate successful for osd ID: 0
Nov 22 03:25:33 compute-0 systemd[1]: libpod-33d8ad3654b2e5a8dbb858d611adc330300d2f1d3425a00ba3c1836a716770ad.scope: Deactivated successfully.
Nov 22 03:25:33 compute-0 podman[88367]: 2025-11-22 03:25:33.886867577 +0000 UTC m=+1.320997294 container died 33d8ad3654b2e5a8dbb858d611adc330300d2f1d3425a00ba3c1836a716770ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:33 compute-0 systemd[1]: libpod-33d8ad3654b2e5a8dbb858d611adc330300d2f1d3425a00ba3c1836a716770ad.scope: Consumed 1.167s CPU time.
Nov 22 03:25:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c723c45e3ddc3332ba72b7a679092a7dcecef58a1b454a6de93b11c8c90af40c-merged.mount: Deactivated successfully.
Nov 22 03:25:33 compute-0 podman[88367]: 2025-11-22 03:25:33.955207815 +0000 UTC m=+1.389337543 container remove 33d8ad3654b2e5a8dbb858d611adc330300d2f1d3425a00ba3c1836a716770ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0-activate, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:25:34 compute-0 podman[88556]: 2025-11-22 03:25:34.18648043 +0000 UTC m=+0.062018458 container create 36937a248e0f4da72fb3de3f2123cc8c73357ccde4dc7988b59a8e3542102020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3e1228eccedfaaa99c32bb505361d94867ea8bb39f17e7b67b1c19485902c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:34 compute-0 podman[88556]: 2025-11-22 03:25:34.15519389 +0000 UTC m=+0.030732027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3e1228eccedfaaa99c32bb505361d94867ea8bb39f17e7b67b1c19485902c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3e1228eccedfaaa99c32bb505361d94867ea8bb39f17e7b67b1c19485902c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3e1228eccedfaaa99c32bb505361d94867ea8bb39f17e7b67b1c19485902c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c3e1228eccedfaaa99c32bb505361d94867ea8bb39f17e7b67b1c19485902c2/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:34 compute-0 podman[88556]: 2025-11-22 03:25:34.265883211 +0000 UTC m=+0.141421309 container init 36937a248e0f4da72fb3de3f2123cc8c73357ccde4dc7988b59a8e3542102020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:34 compute-0 podman[88556]: 2025-11-22 03:25:34.271229498 +0000 UTC m=+0.146767556 container start 36937a248e0f4da72fb3de3f2123cc8c73357ccde4dc7988b59a8e3542102020 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:25:34 compute-0 bash[88556]: 36937a248e0f4da72fb3de3f2123cc8c73357ccde4dc7988b59a8e3542102020
Nov 22 03:25:34 compute-0 systemd[1]: Started Ceph osd.0 for 7adcc38b-6484-5de6-b879-33a0309153df.
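"Started Ceph osd.0 for 7adcc38b-..." is the Description of the cephadm-generated systemd unit; cephadm templates one unit per cluster and instantiates it per daemon. A small sketch of how that unit name is assembled (the helper function is illustrative; the ceph-<fsid>@<daemon>.service pattern is the standard cephadm convention):

    def cephadm_unit_name(fsid: str, daemon: str) -> str:
        # One templated unit per cluster fsid, instantiated per daemon:
        # ceph-<fsid>@<daemon-type>.<daemon-id>.service
        return f"ceph-{fsid}@{daemon}.service"

    print(cephadm_unit_name("7adcc38b-6484-5de6-b879-33a0309153df", "osd.0"))
    # -> ceph-7adcc38b-6484-5de6-b879-33a0309153df@osd.0.service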
Nov 22 03:25:34 compute-0 ceph-osd[88575]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:25:34 compute-0 ceph-osd[88575]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 22 03:25:34 compute-0 ceph-osd[88575]: pidfile_write: ignore empty --pid-file
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036d42b800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036d42b800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:25:34 compute-0 sudo[88077]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036d42b800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036d42b800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036e26d800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036e26d800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036e26d800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036e26d800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036e26d800 /var/lib/ceph/osd/ceph-0/block) close
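The repeated bdev open/close cycles above are BlueStore probing the block device before the real mount, and the numbers in the open line are internally consistent. A quick check of the reported values, copied directly from the log:

    size_bytes = 21470642176          # "open size 21470642176 (0x4ffc00000, 20 GiB)"
    assert size_bytes == 0x4ffc00000  # the hex and decimal forms agree
    print(size_bytes / 2**30)         # ~19.996, which the log rounds to "20 GiB"

    # st_blksize 512 is what the kernel reports for the device, but BlueStore
    # keeps its own 4 KiB I/O granularity, hence the message
    # "using bdev_block_size 4096 anyway".
    print(size_bytes % 4096 == 0)     # True: a whole number of 4 KiB blocks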
Nov 22 03:25:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:34 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:34 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Nov 22 03:25:34 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 22 03:25:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:25:34 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:34 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Nov 22 03:25:34 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Nov 22 03:25:34 compute-0 sudo[88588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:34 compute-0 sudo[88588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:34 compute-0 sudo[88588]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:34 compute-0 ceph-mon[75011]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:34 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:34 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:34 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Nov 22 03:25:34 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
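The mon_command payloads audited above are the JSON wire forms of ordinary CLI calls that the mgr issues while preparing the osd.1 deployment. Their command-line equivalents, shown here via subprocess purely for illustration:

    import subprocess

    # {"prefix": "auth get", "entity": "osd.1"} is the wire form of:
    subprocess.run(["ceph", "auth", "get", "osd.1"], check=True)

    # {"prefix": "config generate-minimal-conf"} is the wire form of:
    minimal_conf = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(minimal_conf)  # the minimal ceph.conf cephadm ships to the new daemon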
Nov 22 03:25:34 compute-0 sudo[88613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:34 compute-0 sudo[88613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:34 compute-0 sudo[88613]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:34 compute-0 sudo[88638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:34 compute-0 sudo[88638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:34 compute-0 sudo[88638]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036d42b800 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 03:25:34 compute-0 sudo[88663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:25:34 compute-0 sudo[88663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:34 compute-0 ceph-osd[88575]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Nov 22 03:25:34 compute-0 ceph-osd[88575]: load: jerasure load: lrc 
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:25:34 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 03:25:35 compute-0 podman[88737]: 2025-11-22 03:25:35.086167866 +0000 UTC m=+0.051576277 container create c61ba243eec78c9fc3f309bed5cfa9015ca2a2e51cc4494206d056de7c993626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:25:35 compute-0 systemd[1]: Started libpod-conmon-c61ba243eec78c9fc3f309bed5cfa9015ca2a2e51cc4494206d056de7c993626.scope.
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 03:25:35 compute-0 podman[88737]: 2025-11-22 03:25:35.063668852 +0000 UTC m=+0.029077432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:35 compute-0 podman[88737]: 2025-11-22 03:25:35.205835725 +0000 UTC m=+0.171244116 container init c61ba243eec78c9fc3f309bed5cfa9015ca2a2e51cc4494206d056de7c993626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:25:35 compute-0 podman[88737]: 2025-11-22 03:25:35.218454644 +0000 UTC m=+0.183863055 container start c61ba243eec78c9fc3f309bed5cfa9015ca2a2e51cc4494206d056de7c993626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 03:25:35 compute-0 podman[88737]: 2025-11-22 03:25:35.222971932 +0000 UTC m=+0.188380303 container attach c61ba243eec78c9fc3f309bed5cfa9015ca2a2e51cc4494206d056de7c993626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:25:35 compute-0 romantic_lamport[88758]: 167 167
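The throwaway romantic_lamport container exists only to print "167 167", the uid and gid of the ceph user inside the image, which cephadm uses to chown host paths before starting the daemon. A hedged sketch of such a probe; the exact command cephadm runs inside the container is not visible in this log, and stat on /var/lib/ceph is just one plausible form:

    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"

    # One-shot container that prints the owner of /var/lib/ceph, then exits.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

    uid, gid = map(int, out.split())
    print(uid, gid)  # expected 167 167 (ceph:ceph), matching the line above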
Nov 22 03:25:35 compute-0 systemd[1]: libpod-c61ba243eec78c9fc3f309bed5cfa9015ca2a2e51cc4494206d056de7c993626.scope: Deactivated successfully.
Nov 22 03:25:35 compute-0 podman[88737]: 2025-11-22 03:25:35.229236029 +0000 UTC m=+0.194644399 container died c61ba243eec78c9fc3f309bed5cfa9015ca2a2e51cc4494206d056de7c993626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:25:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-08a5f24cb04b4a6ce5ebf0141307f92aafbe1613c0bd2e50df70a4aea85d9f4c-merged.mount: Deactivated successfully.
Nov 22 03:25:35 compute-0 podman[88737]: 2025-11-22 03:25:35.270272681 +0000 UTC m=+0.235681052 container remove c61ba243eec78c9fc3f309bed5cfa9015ca2a2e51cc4494206d056de7c993626 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:25:35 compute-0 systemd[1]: libpod-conmon-c61ba243eec78c9fc3f309bed5cfa9015ca2a2e51cc4494206d056de7c993626.scope: Deactivated successfully.
Nov 22 03:25:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:35 compute-0 ceph-osd[88575]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 22 03:25:35 compute-0 ceph-osd[88575]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2eec00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2ef400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2ef400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2ef400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2ef400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluefs mount
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluefs mount shared_bdev_used = 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
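The _set_cache_sizes lines above split BlueStore's 1 GiB cache by ratio (meta 0.45, kv 0.45, kv_onode 0.04, data 0.06, summing to 1.0), and the kv share reappears verbatim as the RocksDB block cache capacity in the options dump that follows. Checking the arithmetic against the logged values:

    cache_size = 1073741824  # 1 GiB, from "_set_cache_sizes cache_size 1073741824"
    ratios = {"meta": 0.45, "kv": 0.45, "kv_onode": 0.04, "data": 0.06}
    assert abs(sum(ratios.values()) - 1.0) < 1e-9

    print(int(cache_size * ratios["kv"]))  # 483183820: the BinnedLRUCache
                                           # "capacity" in the dump below
    print(157286400 / 2**20)               # 150.0: mClock's per-shard bandwidth
                                           # is 150 MiB/s on this rotational OSD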
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Git sha 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: DB SUMMARY
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: DB Session ID:  OORT3Z2LGJB03XPV3HGP
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                                     Options.env: 0x56036e2bfc70
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                                Options.info_log: 0x56036d4b28a0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.write_buffer_manager: 0x56036e3c8460
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.row_cache: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                              Options.wal_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.wal_compression: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.max_background_jobs: 4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Compression algorithms supported:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kZSTD supported: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kXpressCompression supported: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kBZip2Compression supported: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kLZ4Compression supported: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kZlibCompression supported: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kSnappyCompression supported: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
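The [default] options block above repeats, with small variations, for every column family BlueStore created; m-0 and m-1 follow below. When comparing settings across families it can help to fold the journal text back into dictionaries. A small parser over lines shaped like "Options.key: value" (the function name and regexes are illustrative, written for exactly this journal format):

    import re

    OPT = re.compile(r"Options\.([\w.\[\]]+)\s*:\s*(.*)$")

    def parse_options(journal_lines):
        """Collect rocksdb 'Options for column family [...]' blocks into dicts."""
        families, current = {}, None
        for line in journal_lines:
            m = re.search(r"Options for column family \[([^\]]+)\]", line)
            if m:
                current = families.setdefault(m.group(1), {})
                continue
            m = OPT.search(line)
            if m and current is not None:
                current[m.group(1)] = m.group(2).strip()
        return families

    lines = [
        "rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:",
        "rocksdb:        Options.write_buffer_size: 16777216",
        "rocksdb:  Options.max_write_buffer_number: 64",
    ]
    print(parse_options(lines))
    # {'default': {'write_buffer_size': '16777216', 'max_write_buffer_number': '64'}}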
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b22c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
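The column-family dump above pins down the write path: a 16 MiB write_buffer_size with max_write_buffer_number 64 and min_write_buffer_number_to_merge 6, plus static leveled sizing (level_compaction_dynamic_level_bytes is 0) from a 1 GiB max_bytes_for_level_base and multiplier 8. A minimal Python sketch of the sizes these numbers imply, assuming RocksDB's standard static formula of base * multiplier^(level-1) for per-level targets:

# Sketch (annotation, not journal output): sizes implied by the options above.
write_buffer_size = 16_777_216        # Options.write_buffer_size (16 MiB)
max_write_buffer_number = 64          # Options.max_write_buffer_number
min_merge = 6                         # Options.min_write_buffer_number_to_merge

# Worst-case memtable memory for one column family, and the data batched
# before a flush is scheduled.
print(f"memtable cap: {write_buffer_size * max_write_buffer_number / 2**20:.0f} MiB")
print(f"flush batch:  {write_buffer_size * min_merge / 2**20:.0f} MiB")

# Per-level target sizes under static leveled compaction.
base, mult, levels = 1_073_741_824, 8, 7
for lv in range(1, levels):
    print(f"L{lv} target: {base * mult ** (lv - 1) / 2**30:.0f} GiB")

With these values a single column family can buffer up to 1 GiB of memtables and flushes in roughly 96 MiB batches, which matches the small-device tuning visible in the rest of the dump.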
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b2240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b2240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b2240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
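One detail worth pulling out of the three dumps above: O-0, O-1 and O-2 all print the same block_cache pointer (0x56036d49f090), so they share a single BinnedLRUCache, while the column family dumped first uses a separate cache object (0x56036d49f1f0) with a different capacity. Plain arithmetic on the printed byte counts, nothing assumed beyond the dump:

# Sketch (annotation): the two distinct block_cache capacities in the dumps.
for name, cap in [("shared O-* cache (0x56036d49f090)", 536_870_912),
                  ("first cache (0x56036d49f1f0)", 483_183_820)]:
    print(f"{name}: {cap / 2**20:.0f} MiB")   # 512 MiB and ~461 MiB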
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ca74a6dd-323c-4bca-b533-9ddd2ce44731
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781935469098, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781935469544, "job": 1, "event": "recovery_finished"}
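The two EVENT_LOG_v1 lines bracketing the WAL replay carry a JSON payload after a fixed prefix, so recovery events can be extracted from the journal mechanically. A minimal sketch using only the sample line above:

# Sketch (annotation): parse a RocksDB EVENT_LOG_v1 journal line.
import json

line = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1763781935469098, "job": 1, '
        '"event": "recovery_started", "wal_files": [31]}')
payload = json.loads(line.split("EVENT_LOG_v1 ", 1)[1])
print(payload["event"], payload["wal_files"])   # recovery_started [31]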
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
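The _open_db line above prints the effective option string BlueStore hands to RocksDB as comma-separated key=value pairs; splitting it makes it easy to diff against the per-column-family dumps earlier in the log (write_buffer_size and max_bytes_for_level_base match). A small sketch over the string exactly as printed:

# Sketch (annotation): split the bluestore rocksdb option string into a dict.
opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")
opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
print(opts["write_buffer_size"], opts["max_bytes_for_level_base"])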
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: freelist init
Nov 22 03:25:35 compute-0 ceph-osd[88575]: freelist _read_cfg
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
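The _init_alloc line reports capacity and free space as hex byte counts; converting them confirms the "20 GiB in 2 extents" summary and shows only three 4 KiB blocks are in use at this point, consistent with the near-zero fragmentation figure. Plain arithmetic on the printed values:

# Sketch (annotation): decode the hex sizes from the _init_alloc line.
capacity, free, block = 0x4ffc00000, 0x4ffbfd000, 0x1000
print(f"capacity: {capacity / 2**30:.3f} GiB")                   # ~19.996 GiB
print(f"free:     {free / 2**30:.3f} GiB")
print(f"used:     {(capacity - free) // block} x 4 KiB blocks")  # 3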
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluefs umount
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2ef400 /var/lib/ceph/osd/ceph-0/block) close
Nov 22 03:25:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:25:35 compute-0 ceph-mon[75011]: Deploying daemon osd.1 on compute-0
Nov 22 03:25:35 compute-0 podman[88984]: 2025-11-22 03:25:35.605828806 +0000 UTC m=+0.071049304 container create 0e542966797d5c6bc14bbca872a5a36d3f9933938e16c58f927191ac2ed321fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 03:25:35 compute-0 systemd[1]: Started libpod-conmon-0e542966797d5c6bc14bbca872a5a36d3f9933938e16c58f927191ac2ed321fc.scope.
Nov 22 03:25:35 compute-0 podman[88984]: 2025-11-22 03:25:35.578768657 +0000 UTC m=+0.043989205 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0bae68c6b5b8442ebfa0fcdeb7c796b5ffeb9534488dbf0ec9471c1015fde84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0bae68c6b5b8442ebfa0fcdeb7c796b5ffeb9534488dbf0ec9471c1015fde84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0bae68c6b5b8442ebfa0fcdeb7c796b5ffeb9534488dbf0ec9471c1015fde84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0bae68c6b5b8442ebfa0fcdeb7c796b5ffeb9534488dbf0ec9471c1015fde84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0bae68c6b5b8442ebfa0fcdeb7c796b5ffeb9534488dbf0ec9471c1015fde84/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2ef400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2ef400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2ef400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bdev(0x56036e2ef400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluefs mount
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluefs mount shared_bdev_used = 4718592
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 03:25:35 compute-0 podman[88984]: 2025-11-22 03:25:35.70446788 +0000 UTC m=+0.169688438 container init 0e542966797d5c6bc14bbca872a5a36d3f9933938e16c58f927191ac2ed321fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Git sha 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: DB SUMMARY
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: DB Session ID:  OORT3Z2LGJB03XPV3HGO
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                                     Options.env: 0x56036e470460
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                                Options.info_log: 0x56036d4b2600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.write_buffer_manager: 0x56036e3c8460
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.row_cache: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                              Options.wal_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.wal_compression: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.max_background_jobs: 4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Compression algorithms supported:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kZSTD supported: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kXpressCompression supported: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kBZip2Compression supported: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kLZ4Compression supported: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kZlibCompression supported: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         kSnappyCompression supported: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 podman[88984]: 2025-11-22 03:25:35.719913257 +0000 UTC m=+0.185133755 container start 0e542966797d5c6bc14bbca872a5a36d3f9933938e16c58f927191ac2ed321fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate-test, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 podman[88984]: 2025-11-22 03:25:35.726893358 +0000 UTC m=+0.192113866 container attach 0e542966797d5c6bc14bbca872a5a36d3f9933938e16c58f927191ac2ed321fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate-test, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b2a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f1f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b2380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b2380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56036d4b2380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56036d49f090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ca74a6dd-323c-4bca-b533-9ddd2ce44731
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781935734137, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781935737911, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781935, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ca74a6dd-323c-4bca-b533-9ddd2ce44731", "db_session_id": "OORT3Z2LGJB03XPV3HGO", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781935741960, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781935, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ca74a6dd-323c-4bca-b533-9ddd2ce44731", "db_session_id": "OORT3Z2LGJB03XPV3HGO", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781935744697, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781935, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ca74a6dd-323c-4bca-b533-9ddd2ce44731", "db_session_id": "OORT3Z2LGJB03XPV3HGO", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781935746553, "job": 1, "event": "recovery_finished"}
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56036d60c000
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: DB pointer 0x56036e3b1a00
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Nov 22 03:25:35 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:25:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 03:25:35 compute-0 ceph-osd[88575]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 22 03:25:35 compute-0 ceph-osd[88575]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 22 03:25:35 compute-0 ceph-osd[88575]: _get_class not permitted to load lua
Nov 22 03:25:35 compute-0 ceph-osd[88575]: _get_class not permitted to load sdk
Nov 22 03:25:35 compute-0 ceph-osd[88575]: _get_class not permitted to load test_remote_reads
Nov 22 03:25:35 compute-0 ceph-osd[88575]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 22 03:25:35 compute-0 ceph-osd[88575]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 22 03:25:35 compute-0 ceph-osd[88575]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 22 03:25:35 compute-0 ceph-osd[88575]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 22 03:25:35 compute-0 ceph-osd[88575]: osd.0 0 load_pgs
Nov 22 03:25:35 compute-0 ceph-osd[88575]: osd.0 0 load_pgs opened 0 pgs
Nov 22 03:25:35 compute-0 ceph-osd[88575]: osd.0 0 log_to_monitors true
Nov 22 03:25:35 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0[88571]: 2025-11-22T03:25:35.777+0000 7fc4599ad740 -1 osd.0 0 log_to_monitors true
Nov 22 03:25:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Nov 22 03:25:35 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1640049735,v1:192.168.122.100:6803/1640049735]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:25:36
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: [balancer INFO root] No pools available
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:25:36 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate-test[89001]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 22 03:25:36 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate-test[89001]:                             [--no-systemd] [--no-tmpfs]
Nov 22 03:25:36 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate-test[89001]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 22 03:25:36 compute-0 systemd[1]: libpod-0e542966797d5c6bc14bbca872a5a36d3f9933938e16c58f927191ac2ed321fc.scope: Deactivated successfully.
Nov 22 03:25:36 compute-0 podman[88984]: 2025-11-22 03:25:36.344928036 +0000 UTC m=+0.810148593 container died 0e542966797d5c6bc14bbca872a5a36d3f9933938e16c58f927191ac2ed321fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0bae68c6b5b8442ebfa0fcdeb7c796b5ffeb9534488dbf0ec9471c1015fde84-merged.mount: Deactivated successfully.
Nov 22 03:25:36 compute-0 podman[88984]: 2025-11-22 03:25:36.419459353 +0000 UTC m=+0.884679831 container remove 0e542966797d5c6bc14bbca872a5a36d3f9933938e16c58f927191ac2ed321fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 03:25:36 compute-0 systemd[1]: libpod-conmon-0e542966797d5c6bc14bbca872a5a36d3f9933938e16c58f927191ac2ed321fc.scope: Deactivated successfully.
Nov 22 03:25:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Nov 22 03:25:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:25:36 compute-0 ceph-mon[75011]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:36 compute-0 ceph-mon[75011]: from='osd.0 [v2:192.168.122.100:6802/1640049735,v1:192.168.122.100:6803/1640049735]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Nov 22 03:25:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1640049735,v1:192.168.122.100:6803/1640049735]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 22 03:25:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Nov 22 03:25:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Nov 22 03:25:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 22 03:25:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1640049735,v1:192.168.122.100:6803/1640049735]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 03:25:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 22 03:25:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:25:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:36 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:36 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 22 03:25:36 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 22 03:25:36 compute-0 systemd[1]: Reloading.
Nov 22 03:25:36 compute-0 systemd-rc-local-generator[89280]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:36 compute-0 systemd-sysv-generator[89283]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:25:37 compute-0 systemd[1]: Reloading.
Nov 22 03:25:37 compute-0 systemd-rc-local-generator[89321]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:37 compute-0 systemd-sysv-generator[89324]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:25:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:37 compute-0 systemd[1]: Starting Ceph osd.1 for 7adcc38b-6484-5de6-b879-33a0309153df...
Nov 22 03:25:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Nov 22 03:25:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:25:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1640049735,v1:192.168.122.100:6803/1640049735]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 03:25:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Nov 22 03:25:37 compute-0 ceph-osd[88575]: osd.0 0 done with init, starting boot process
Nov 22 03:25:37 compute-0 ceph-osd[88575]: osd.0 0 start_boot
Nov 22 03:25:37 compute-0 ceph-osd[88575]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 22 03:25:37 compute-0 ceph-osd[88575]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 22 03:25:37 compute-0 ceph-osd[88575]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 22 03:25:37 compute-0 ceph-osd[88575]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 22 03:25:37 compute-0 ceph-osd[88575]: osd.0 0  bench count 12288000 bsize 4 KiB
Nov 22 03:25:37 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Nov 22 03:25:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:25:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:37 compute-0 ceph-mon[75011]: from='osd.0 [v2:192.168.122.100:6802/1640049735,v1:192.168.122.100:6803/1640049735]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Nov 22 03:25:37 compute-0 ceph-mon[75011]: osdmap e7: 3 total, 0 up, 3 in
Nov 22 03:25:37 compute-0 ceph-mon[75011]: from='osd.0 [v2:192.168.122.100:6802/1640049735,v1:192.168.122.100:6803/1640049735]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 03:25:37 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:25:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:37 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:37 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:37 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1640049735; not ready for session (expect reconnect)
Nov 22 03:25:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:25:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:37 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:25:37 compute-0 podman[89376]: 2025-11-22 03:25:37.637065277 +0000 UTC m=+0.054524009 container create 32c1c3132779479d7d66777a20ce094167f23c98b05e8744e18e61f546eed7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:25:37 compute-0 podman[89376]: 2025-11-22 03:25:37.610092302 +0000 UTC m=+0.027551074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746931e005a6e237a4248799692eeb2b7b86cbc2c1bfd130dfd6b00fe0bf0891/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746931e005a6e237a4248799692eeb2b7b86cbc2c1bfd130dfd6b00fe0bf0891/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746931e005a6e237a4248799692eeb2b7b86cbc2c1bfd130dfd6b00fe0bf0891/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746931e005a6e237a4248799692eeb2b7b86cbc2c1bfd130dfd6b00fe0bf0891/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/746931e005a6e237a4248799692eeb2b7b86cbc2c1bfd130dfd6b00fe0bf0891/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:37 compute-0 podman[89376]: 2025-11-22 03:25:37.75507833 +0000 UTC m=+0.172537072 container init 32c1c3132779479d7d66777a20ce094167f23c98b05e8744e18e61f546eed7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 03:25:37 compute-0 podman[89376]: 2025-11-22 03:25:37.775534872 +0000 UTC m=+0.192993604 container start 32c1c3132779479d7d66777a20ce094167f23c98b05e8744e18e61f546eed7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:37 compute-0 podman[89376]: 2025-11-22 03:25:37.782018382 +0000 UTC m=+0.199477113 container attach 32c1c3132779479d7d66777a20ce094167f23c98b05e8744e18e61f546eed7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:38 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1640049735; not ready for session (expect reconnect)
Nov 22 03:25:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:25:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:38 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:25:38 compute-0 ceph-mon[75011]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:38 compute-0 ceph-mon[75011]: from='osd.0 [v2:192.168.122.100:6802/1640049735,v1:192.168.122.100:6803/1640049735]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 03:25:38 compute-0 ceph-mon[75011]: osdmap e8: 3 total, 0 up, 3 in
Nov 22 03:25:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:38 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate[89393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 03:25:38 compute-0 bash[89376]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 03:25:38 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate[89393]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 03:25:38 compute-0 bash[89376]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 03:25:38 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate[89393]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 03:25:38 compute-0 bash[89376]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Nov 22 03:25:38 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate[89393]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 03:25:38 compute-0 bash[89376]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Nov 22 03:25:38 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate[89393]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 03:25:38 compute-0 bash[89376]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Nov 22 03:25:38 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate[89393]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 03:25:38 compute-0 bash[89376]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Nov 22 03:25:38 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate[89393]: --> ceph-volume raw activate successful for osd ID: 1
Nov 22 03:25:38 compute-0 bash[89376]: --> ceph-volume raw activate successful for osd ID: 1
Nov 22 03:25:38 compute-0 systemd[1]: libpod-32c1c3132779479d7d66777a20ce094167f23c98b05e8744e18e61f546eed7af.scope: Deactivated successfully.
Nov 22 03:25:38 compute-0 podman[89376]: 2025-11-22 03:25:38.875141739 +0000 UTC m=+1.292600471 container died 32c1c3132779479d7d66777a20ce094167f23c98b05e8744e18e61f546eed7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:38 compute-0 systemd[1]: libpod-32c1c3132779479d7d66777a20ce094167f23c98b05e8744e18e61f546eed7af.scope: Consumed 1.110s CPU time.
Nov 22 03:25:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-746931e005a6e237a4248799692eeb2b7b86cbc2c1bfd130dfd6b00fe0bf0891-merged.mount: Deactivated successfully.
Nov 22 03:25:38 compute-0 podman[89376]: 2025-11-22 03:25:38.96593081 +0000 UTC m=+1.383389512 container remove 32c1c3132779479d7d66777a20ce094167f23c98b05e8744e18e61f546eed7af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1-activate, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:25:39 compute-0 podman[89565]: 2025-11-22 03:25:39.222713893 +0000 UTC m=+0.061513068 container create bddca431f46f3f62c764b44b77f7b5a0ef98eb7a8bc34a7559f3f75a2814288a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:25:39 compute-0 podman[89565]: 2025-11-22 03:25:39.187924087 +0000 UTC m=+0.026723312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136c39bedebdb6d83b718cc59a0231a3d9fe407504d3906a4a750a78e5590b35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136c39bedebdb6d83b718cc59a0231a3d9fe407504d3906a4a750a78e5590b35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136c39bedebdb6d83b718cc59a0231a3d9fe407504d3906a4a750a78e5590b35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136c39bedebdb6d83b718cc59a0231a3d9fe407504d3906a4a750a78e5590b35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/136c39bedebdb6d83b718cc59a0231a3d9fe407504d3906a4a750a78e5590b35/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:39 compute-0 podman[89565]: 2025-11-22 03:25:39.34012652 +0000 UTC m=+0.178925675 container init bddca431f46f3f62c764b44b77f7b5a0ef98eb7a8bc34a7559f3f75a2814288a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:25:39 compute-0 podman[89565]: 2025-11-22 03:25:39.346285444 +0000 UTC m=+0.185084579 container start bddca431f46f3f62c764b44b77f7b5a0ef98eb7a8bc34a7559f3f75a2814288a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:25:39 compute-0 bash[89565]: bddca431f46f3f62c764b44b77f7b5a0ef98eb7a8bc34a7559f3f75a2814288a
Nov 22 03:25:39 compute-0 systemd[1]: Started Ceph osd.1 for 7adcc38b-6484-5de6-b879-33a0309153df.
Nov 22 03:25:39 compute-0 sudo[88663]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:39 compute-0 ceph-osd[89585]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:25:39 compute-0 ceph-osd[89585]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 22 03:25:39 compute-0 ceph-osd[89585]: pidfile_write: ignore empty --pid-file
Nov 22 03:25:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094ddfd800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094ddfd800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094ddfd800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094ddfd800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094ec35800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094ec35800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094ec35800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094ec35800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094ec35800 /var/lib/ceph/osd/ceph-1/block) close
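[editor's note] The _set_cache_sizes line above records how BlueStore carves up its 1 GiB cache (meta 0.45, kv 0.45, kv_onode 0.04, data 0.06). The arithmetic below reproduces that split from the logged numbers; the kv share also matches the BinnedLRUCache "capacity : 483183820" printed in the RocksDB options dump later in this log:

    # Reproduce the BlueStore cache split logged by _set_cache_sizes.
    cache_size = 1073741824  # 1 GiB, from the log line above
    ratios = {"meta": 0.45, "kv": 0.45, "kv_onode": 0.04, "data": 0.06}
    for name, ratio in ratios.items():
        print(name, int(cache_size * ratio))
    # kv -> 483183820 bytes, which is exactly the RocksDB block cache
    # "capacity : 483183820" shown further down in this log.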
Nov 22 03:25:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Nov 22 03:25:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 22 03:25:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:25:39 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:39 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Nov 22 03:25:39 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Nov 22 03:25:39 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1640049735; not ready for session (expect reconnect)
Nov 22 03:25:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:25:39 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:39 compute-0 sudo[89598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:39 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:25:39 compute-0 sudo[89598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:39 compute-0 sudo[89598]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:39 compute-0 ceph-mon[75011]: purged_snaps scrub starts
Nov 22 03:25:39 compute-0 ceph-mon[75011]: purged_snaps scrub ok
Nov 22 03:25:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Nov 22 03:25:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
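[editor's note] The audit lines above show the mgr driving the monitor with JSON-encoded mon_command payloads such as {"prefix": "osd metadata", "id": 0}. The same interface is reachable from any client through python-rados; a minimal sketch, assuming python3-rados is installed and a readable client.admin keyring (paths and entity are illustrative):

    # Sketch: issue one of the mon_commands seen above via python-rados.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed path
    cluster.connect()
    cmd = json.dumps({"prefix": "osd metadata", "id": 0})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outs or outbuf.decode())
    cluster.shutdown()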
Nov 22 03:25:39 compute-0 sudo[89623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:39 compute-0 sudo[89623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:39 compute-0 sudo[89623]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:39 compute-0 sudo[89648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:39 compute-0 sudo[89648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:39 compute-0 sudo[89648]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094ddfd800 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 03:25:39 compute-0 sudo[89673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:25:39 compute-0 sudo[89673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
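[editor's note] The sudo line above shows the mgr invoking a versioned copy of the cephadm binary from /var/lib/ceph/<fsid>/. The long suffix on the filename looks like a content digest, but that interpretation is an assumption, not something this log confirms; a sketch for checking it on the node:

    # Sketch: test whether the suffix on the deployed cephadm file is the
    # file's SHA-256 digest. The digest interpretation is an ASSUMPTION.
    import hashlib
    import pathlib

    p = pathlib.Path(
        "/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/"
        "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d"
    )
    digest = hashlib.sha256(p.read_bytes()).hexdigest()
    print(digest == p.name.split(".", 1)[1])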
Nov 22 03:25:39 compute-0 ceph-osd[89585]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Nov 22 03:25:39 compute-0 ceph-osd[89585]: load: jerasure load: lrc 
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:25:39 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 03:25:40 compute-0 podman[89743]: 2025-11-22 03:25:40.108796739 +0000 UTC m=+0.050336594 container create 97f86a5e48e558780be59420fa514d6f6f28c67b7d8dcf6096d300c56f516c0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_greider, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:25:40 compute-0 systemd[1]: Started libpod-conmon-97f86a5e48e558780be59420fa514d6f6f28c67b7d8dcf6096d300c56f516c0d.scope.
Nov 22 03:25:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:40 compute-0 podman[89743]: 2025-11-22 03:25:40.081686998 +0000 UTC m=+0.023226882 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:40 compute-0 podman[89743]: 2025-11-22 03:25:40.191661072 +0000 UTC m=+0.133200957 container init 97f86a5e48e558780be59420fa514d6f6f28c67b7d8dcf6096d300c56f516c0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_greider, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:25:40 compute-0 podman[89743]: 2025-11-22 03:25:40.197219649 +0000 UTC m=+0.138759514 container start 97f86a5e48e558780be59420fa514d6f6f28c67b7d8dcf6096d300c56f516c0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:40 compute-0 podman[89743]: 2025-11-22 03:25:40.201153141 +0000 UTC m=+0.142693315 container attach 97f86a5e48e558780be59420fa514d6f6f28c67b7d8dcf6096d300c56f516c0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:25:40 compute-0 friendly_greider[89759]: 167 167
Nov 22 03:25:40 compute-0 systemd[1]: libpod-97f86a5e48e558780be59420fa514d6f6f28c67b7d8dcf6096d300c56f516c0d.scope: Deactivated successfully.
Nov 22 03:25:40 compute-0 podman[89743]: 2025-11-22 03:25:40.2047309 +0000 UTC m=+0.146270765 container died 97f86a5e48e558780be59420fa514d6f6f28c67b7d8dcf6096d300c56f516c0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 03:25:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-44520b868eff99cbd834c16ecbcd434827c9fbaf8c116bbae03b8139498983b1-merged.mount: Deactivated successfully.
Nov 22 03:25:40 compute-0 podman[89743]: 2025-11-22 03:25:40.294199649 +0000 UTC m=+0.235739524 container remove 97f86a5e48e558780be59420fa514d6f6f28c67b7d8dcf6096d300c56f516c0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_greider, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:25:40 compute-0 systemd[1]: libpod-conmon-97f86a5e48e558780be59420fa514d6f6f28c67b7d8dcf6096d300c56f516c0d.scope: Deactivated successfully.
Nov 22 03:25:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:25:40 compute-0 ceph-osd[89585]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 22 03:25:40 compute-0 ceph-osd[89585]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
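[editor's note] The mClock parameters above are self-consistent: 157286400 bytes/second is exactly 150 MiB/s, and dividing it by the 499321.90 bytes/io cost gives roughly 315 IOPS, which matches the Reef default osd_mclock_max_capacity_iops_hdd for the rotational device detected earlier. A quick check of that arithmetic:

    # Sanity-check the mClock scheduler numbers logged above.
    bw_per_shard = 157286400.0   # bytes/s, from the log
    cost_per_io  = 499321.90     # bytes/io, from the log
    print(bw_per_shard / 2**20)          # 150.0 MiB/s
    print(bw_per_shard / cost_per_io)    # ~315 IOPS; matches the Reef
                                         # default osd_mclock_max_capacity_iops_hdd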
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6cc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6d400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6d400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluefs mount
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluefs mount shared_bdev_used = 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
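[editor's note] The db_paths sizes above (20397110067 bytes for both db and db.slow) are consistent with RocksDB being offered about 95% of the 20 GiB shared block device opened earlier; the exact BlueStore sizing rule is an assumption here, but the arithmetic lines up:

    # The db_paths value above is ~95% of the shared block device.
    block_bytes = 21470642176        # bdev "open size" from the log
    print(int(block_bytes * 0.95))   # 20397110067, as logged for db and db.slow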
Nov 22 03:25:40 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1640049735; not ready for session (expect reconnect)
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Git sha 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: DB SUMMARY
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: DB Session ID:  OXSWLULCGAS6TC5GFNMG
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                                     Options.env: 0x56094ec87d50
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                                Options.info_log: 0x56094de88ba0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.write_buffer_manager: 0x56094ed98460
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.row_cache: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                              Options.wal_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.wal_compression: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.max_background_jobs: 4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Compression algorithms supported:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kZSTD supported: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kXpressCompression supported: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kBZip2Compression supported: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kLZ4Compression supported: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kZlibCompression supported: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kSnappyCompression supported: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de89220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de89220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de89220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de89220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de89220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de89220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de89220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de89200)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:25:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:40 compute-0 ceph-osd[88575]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 31.113 iops: 7964.982 elapsed_sec: 0.377
Nov 22 03:25:40 compute-0 ceph-osd[88575]: log_channel(cluster) log [WRN] : OSD bench result of 7964.982200 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 03:25:40 compute-0 ceph-osd[88575]: osd.0 0 waiting for initial osdmap
Nov 22 03:25:40 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0[88571]: 2025-11-22T03:25:40.526+0000 7fc45592d640 -1 osd.0 0 waiting for initial osdmap
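Note on the OSD bench warning above: the measured 7964.98 IOPS falls outside the 50-500 IOPS sanity window Ceph applies to rotational devices, so osd.0 keeps its default mClock capacity of 315 IOPS. As the message itself recommends, the capacity should be established with an external benchmark and then pinned. A minimal sketch, assuming the OSD's backing device is /dev/vdb (hypothetical here; on this host osd.0 actually sits on a loop device) and that the measured figure is trusted:

    # 4 KiB random-write benchmark straight against the raw device
    # (destructive; only run before the device holds OSD data)
    fio --name=osd-bench --filename=/dev/vdb --direct=1 --rw=randwrite --bs=4k --iodepth=16 --runtime=60 --time_based
    # pin the measured capacity for this rotational OSD
    ceph config set osd.0 osd_mclock_max_capacity_iops_hdd 7965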
Nov 22 03:25:40 compute-0 ceph-osd[88575]: osd.0 8 crush map has features 288514050185494528, adjusting msgr requires for clients
Nov 22 03:25:40 compute-0 ceph-osd[88575]: osd.0 8 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Nov 22 03:25:40 compute-0 ceph-osd[88575]: osd.0 8 crush map has features 3314932999778484224, adjusting msgr requires for osds
Nov 22 03:25:40 compute-0 ceph-osd[88575]: osd.0 8 check_osdmap_features require_osd_release unknown -> reef
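Note: the require_osd_release transition recorded here (unknown -> reef) gates which feature bits OSDs may use; once the cluster settles it can be confirmed from the osdmap, for example:

    ceph osd dump | grep require_osd_release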
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ba944b0d-60f0-49b7-a223-68ddf0b55cb7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781940542321, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781940542572, "job": 1, "event": "recovery_finished"}
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
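Note: the option string in the _open_db line above is the rendered value of BlueStore's bluestore_rocksdb_options setting. It can be inspected through the config subsystem; a sketch, assuming an admin keyring is available on this node (a changed value only takes effect after an OSD restart):

    # value stored in the cluster configuration database
    ceph config get osd.1 bluestore_rocksdb_options
    # effective value on the running daemon, via its admin socket
    ceph daemon osd.1 config get bluestore_rocksdb_options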
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: freelist init
Nov 22 03:25:40 compute-0 ceph-osd[89585]: freelist _read_cfg
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
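Note: the allocator figures above decode consistently with the rest of this log: capacity 0x4ffc00000 = 21,470,642,176 bytes (about 20 GiB, matching the bdev open size reported below), block size 0x1000 = 4096 bytes, and free 0x4ffbfd000 leaves 0x3000 = 12 KiB in use, which agrees with the reported near-zero fragmentation.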
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 03:25:40 compute-0 podman[89794]: 2025-11-22 03:25:40.544100929 +0000 UTC m=+0.038167766 container create 8b32e3b8fb0917bff4485c8ce14f435eb64323d9e50280c32654f94eadfbc49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate-test, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluefs umount
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6d400 /var/lib/ceph/osd/ceph-1/block) close
Nov 22 03:25:40 compute-0 ceph-osd[88575]: osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 03:25:40 compute-0 ceph-osd[88575]: osd.0 8 set_numa_affinity not setting numa affinity
Nov 22 03:25:40 compute-0 ceph-osd[88575]: osd.0 8 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Nov 22 03:25:40 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-0[88571]: 2025-11-22T03:25:40.543+0000 7fc450f55640 -1 osd.0 8 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 03:25:40 compute-0 systemd[1]: Started libpod-conmon-8b32e3b8fb0917bff4485c8ce14f435eb64323d9e50280c32654f94eadfbc49b.scope.
Nov 22 03:25:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Nov 22 03:25:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:25:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e9 e9: 3 total, 1 up, 3 in
Nov 22 03:25:40 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1640049735,v1:192.168.122.100:6803/1640049735] boot
Nov 22 03:25:40 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 1 up, 3 in
Nov 22 03:25:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Nov 22 03:25:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:40 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:40 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:40 compute-0 ceph-osd[88575]: osd.0 9 state: booting -> active
Nov 22 03:25:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:40 compute-0 ceph-mon[75011]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:40 compute-0 ceph-mon[75011]: Deploying daemon osd.2 on compute-0
Nov 22 03:25:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
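Note: the mgr "failed to return metadata" messages and the "1 up, 3 in" osdmap above are expected at this stage; osd.1 and osd.2 have not booted yet, and osd.2 is only now being deployed (the podman activate-test container below is part of that flow). Progress can be followed from the orchestrator; a sketch, assuming cephadm and an admin keyring on this node:

    # per-daemon view as cephadm sees it
    ceph orch ps --daemon-type osd
    # map summary; should converge to "3 osds: 3 up, 3 in"
    ceph osd stat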
Nov 22 03:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8582b6ba414a1a1a5fe099109f206a7d7017c5ba85c47228497fcf561e4d562c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8582b6ba414a1a1a5fe099109f206a7d7017c5ba85c47228497fcf561e4d562c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8582b6ba414a1a1a5fe099109f206a7d7017c5ba85c47228497fcf561e4d562c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8582b6ba414a1a1a5fe099109f206a7d7017c5ba85c47228497fcf561e4d562c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8582b6ba414a1a1a5fe099109f206a7d7017c5ba85c47228497fcf561e4d562c/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:40 compute-0 podman[89794]: 2025-11-22 03:25:40.622770375 +0000 UTC m=+0.116837232 container init 8b32e3b8fb0917bff4485c8ce14f435eb64323d9e50280c32654f94eadfbc49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate-test, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:40 compute-0 podman[89794]: 2025-11-22 03:25:40.527931447 +0000 UTC m=+0.021998313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:40 compute-0 podman[89794]: 2025-11-22 03:25:40.633241889 +0000 UTC m=+0.127308726 container start 8b32e3b8fb0917bff4485c8ce14f435eb64323d9e50280c32654f94eadfbc49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate-test, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:25:40 compute-0 podman[89794]: 2025-11-22 03:25:40.642633274 +0000 UTC m=+0.136700111 container attach 8b32e3b8fb0917bff4485c8ce14f435eb64323d9e50280c32654f94eadfbc49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate-test, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6d400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6d400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6d400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bdev(0x56094de6d400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluefs mount
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluefs mount shared_bdev_used = 4718592
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Git sha 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: DB SUMMARY
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: DB Session ID:  OXSWLULCGAS6TC5GFNMH
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                                     Options.env: 0x56094ee48b60
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                                Options.info_log: 0x56094de88900
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.write_buffer_manager: 0x56094ed98460
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.row_cache: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                              Options.wal_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.wal_compression: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.max_background_jobs: 4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Compression algorithms supported:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kZSTD supported: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kXpressCompression supported: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kBZip2Compression supported: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kLZ4Compression supported: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kZlibCompression supported: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         kSnappyCompression supported: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de88d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de88d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de88d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de88d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
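The table_factory dump above is a BlockBasedTable configuration that every column family in this OSD shares (same flush-block factory pointer, same BinnedLRUCache). As a reading aid, here is a minimal C++ sketch of roughly the same configuration built through the public RocksDB API. All numeric values are copied from the log; the bloom bits-per-key (10) is an assumption, since the dump only reports "filter_policy: bloomfilter", and plain NewLRUCache stands in for Ceph's internal BinnedLRUCache.

    // Minimal sketch (not Ceph's code): the BlockBasedTable options logged
    // above, rebuilt through the public RocksDB C++ API. Values are copied
    // from the dump; the bloom bits-per-key (10) is an assumption, and
    // NewLRUCache stands in for the Ceph-internal BinnedLRUCache.
    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::Options MakeLoggedTableOptions() {
      rocksdb::BlockBasedTableOptions t;
      t.cache_index_and_filter_blocks = true;    // cache_index_and_filter_blocks: 1
      t.pin_top_level_index_and_filter = true;   // pin_top_level_index_and_filter: 1
      t.block_size = 4096;                       // block_size: 4096
      t.metadata_block_size = 4096;              // metadata_block_size: 4096
      t.format_version = 5;                      // format_version: 5
      t.whole_key_filtering = true;              // whole_key_filtering: 1
      // capacity: 483183820 bytes (~461 MiB), num_shard_bits: 4 => 16 shards
      t.block_cache = rocksdb::NewLRUCache(483183820, 4);
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // assumed 10 bits/key
      rocksdb::Options opts;
      opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return opts;
    }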
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de88d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
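Two numbers in the [p-0] dump above are easiest to read together: write_buffer_size (16777216 = 16 MiB per memtable) and max_write_buffer_number (64) bound the in-memory write path at 64 x 16 MiB = 1 GiB per column family in the worst case, while min_write_buffer_number_to_merge: 6 means a flush only fires once about 6 x 16 MiB = 96 MiB of memtables have accumulated. A hedged C++ sketch of the same knobs:

    // Sketch of the memtable knobs reported for [p-0] (values from the log).
    #include <rocksdb/options.h>

    rocksdb::ColumnFamilyOptions MakeLoggedMemtableOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;           // 16777216: one 16 MiB memtable
      cf.max_write_buffer_number = 64;           // worst case 64 * 16 MiB = 1 GiB buffered
      cf.min_write_buffer_number_to_merge = 6;   // flush merges >= 6 * 16 MiB ~= 96 MiB
      cf.compression = rocksdb::kLZ4Compression; // Options.compression: LZ4
      return cf;
    }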
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de88d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
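The compaction geometry repeated in each dump is also worth unpacking once: with level_compaction_dynamic_level_bytes: 0 the level targets are static, so max_bytes_for_level_base: 1073741824 and max_bytes_for_level_multiplier: 8 give L1 = 1 GiB, L2 = 8 GiB, L3 = 64 GiB, and so on up to num_levels: 7, each level built from ~64 MiB files (target_file_size_base: 67108864, multiplier 1). L0-to-L1 compaction triggers at 8 files; writes are slowed at 20 and stopped at 36 L0 files. A small self-contained sketch that prints those targets:

    // Sketch: static level size targets implied by the logged values
    // (level_compaction_dynamic_level_bytes: 0 means targets are fixed).
    #include <cstdint>
    #include <cstdio>

    int main() {
      std::uint64_t target = 1073741824ULL;       // max_bytes_for_level_base: 1 GiB
      for (int level = 1; level < 7; ++level) {   // num_levels: 7 -> L1..L6
        std::printf("L%d target: %llu bytes (~%llu files of 64 MiB)\n", level,
                    static_cast<unsigned long long>(target),
                    static_cast<unsigned long long>(target / 67108864ULL));
        target *= 8;                              // max_bytes_for_level_multiplier: 8
      }
      return 0;
    }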
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de88d60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70dd0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
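From here the dump switches from the p-* column families to the O-* ones: BlueStore shards its RocksDB key space across column families (p-0..p-2 above, O-0 and siblings below), and each shard is opened with its own ColumnFamilyOptions, which is why the same options block repeats once per family. Below is a minimal sketch of how a multi-column-family RocksDB like this is opened through the public API; the path and the exact family list are illustrative placeholders, not values from this host.

    // Sketch: opening a database with per-shard column families. The path
    // and CF list are illustrative; only the names p-0..p-2 / O-0 echo the log.
    #include <rocksdb/db.h>
    #include <cassert>
    #include <vector>

    int main() {
      rocksdb::DBOptions db_opts;
      db_opts.create_if_missing = true;
      db_opts.create_missing_column_families = true;
      rocksdb::ColumnFamilyOptions cf_opts;   // per-family tuning as dumped above
      std::vector<rocksdb::ColumnFamilyDescriptor> cfs = {
          {rocksdb::kDefaultColumnFamilyName, cf_opts},
          {"p-0", cf_opts}, {"p-1", cf_opts}, {"p-2", cf_opts},
          {"O-0", cf_opts},  // in the log this family carries its own block cache
      };
      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s =
          rocksdb::DB::Open(db_opts, "/tmp/example-db", cfs, &handles, &db);
      assert(s.ok());
      for (auto* h : handles) delete h;
      delete db;
      return 0;
    }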
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de89320)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de89320)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56094de89320)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x56094de70430
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ba944b0d-60f0-49b7-a223-68ddf0b55cb7
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781940814245, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781940823508, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781940, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba944b0d-60f0-49b7-a223-68ddf0b55cb7", "db_session_id": "OXSWLULCGAS6TC5GFNMH", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781940826666, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781940, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba944b0d-60f0-49b7-a223-68ddf0b55cb7", "db_session_id": "OXSWLULCGAS6TC5GFNMH", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781940835866, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781940, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba944b0d-60f0-49b7-a223-68ddf0b55cb7", "db_session_id": "OXSWLULCGAS6TC5GFNMH", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781940839849, "job": 1, "event": "recovery_finished"}
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56094ee54000
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: DB pointer 0x56094deb3a00
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Nov 22 03:25:40 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:25:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70430#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 03:25:40 compute-0 ceph-osd[89585]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 22 03:25:40 compute-0 ceph-osd[89585]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 22 03:25:40 compute-0 ceph-osd[89585]: _get_class not permitted to load lua
Nov 22 03:25:40 compute-0 ceph-osd[89585]: _get_class not permitted to load sdk
Nov 22 03:25:40 compute-0 ceph-osd[89585]: _get_class not permitted to load test_remote_reads
Nov 22 03:25:40 compute-0 ceph-osd[89585]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 22 03:25:40 compute-0 ceph-osd[89585]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 22 03:25:40 compute-0 ceph-osd[89585]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 22 03:25:40 compute-0 ceph-osd[89585]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 22 03:25:40 compute-0 ceph-osd[89585]: osd.1 0 load_pgs
Nov 22 03:25:40 compute-0 ceph-osd[89585]: osd.1 0 load_pgs opened 0 pgs
Nov 22 03:25:40 compute-0 ceph-osd[89585]: osd.1 0 log_to_monitors true
Nov 22 03:25:40 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1[89581]: 2025-11-22T03:25:40.871+0000 7fc074e50740 -1 osd.1 0 log_to_monitors true
Nov 22 03:25:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Nov 22 03:25:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1185012923,v1:192.168.122.100:6807/1185012923]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 22 03:25:41 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate-test[90005]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Nov 22 03:25:41 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate-test[90005]:                             [--no-systemd] [--no-tmpfs]
Nov 22 03:25:41 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate-test[90005]: ceph-volume activate: error: unrecognized arguments: --bad-option
Nov 22 03:25:41 compute-0 systemd[1]: libpod-8b32e3b8fb0917bff4485c8ce14f435eb64323d9e50280c32654f94eadfbc49b.scope: Deactivated successfully.
Nov 22 03:25:41 compute-0 podman[89794]: 2025-11-22 03:25:41.275793941 +0000 UTC m=+0.769860778 container died 8b32e3b8fb0917bff4485c8ce14f435eb64323d9e50280c32654f94eadfbc49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8582b6ba414a1a1a5fe099109f206a7d7017c5ba85c47228497fcf561e4d562c-merged.mount: Deactivated successfully.
Nov 22 03:25:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:41 compute-0 podman[89794]: 2025-11-22 03:25:41.360462817 +0000 UTC m=+0.854529664 container remove 8b32e3b8fb0917bff4485c8ce14f435eb64323d9e50280c32654f94eadfbc49b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate-test, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:41 compute-0 systemd[1]: libpod-conmon-8b32e3b8fb0917bff4485c8ce14f435eb64323d9e50280c32654f94eadfbc49b.scope: Deactivated successfully.
Nov 22 03:25:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Nov 22 03:25:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:25:41 compute-0 ceph-mon[75011]: OSD bench result of 7964.982200 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 03:25:41 compute-0 ceph-mon[75011]: osd.0 [v2:192.168.122.100:6802/1640049735,v1:192.168.122.100:6803/1640049735] boot
Nov 22 03:25:41 compute-0 ceph-mon[75011]: osdmap e9: 3 total, 1 up, 3 in
Nov 22 03:25:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Nov 22 03:25:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:41 compute-0 ceph-mon[75011]: from='osd.1 [v2:192.168.122.100:6806/1185012923,v1:192.168.122.100:6807/1185012923]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Nov 22 03:25:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1185012923,v1:192.168.122.100:6807/1185012923]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 22 03:25:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Nov 22 03:25:41 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Nov 22 03:25:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 22 03:25:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1185012923,v1:192.168.122.100:6807/1185012923]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 03:25:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e10 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 22 03:25:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:41 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:41 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:41 compute-0 sudo[90274]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fivjooxwujnqdwgceojyjyjfmazjzemk ; /usr/bin/python3'
Nov 22 03:25:41 compute-0 sudo[90274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:41 compute-0 systemd[1]: Reloading.
Nov 22 03:25:41 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 22 03:25:41 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 22 03:25:41 compute-0 systemd-rc-local-generator[90312]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:41 compute-0 systemd-sysv-generator[90315]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:25:41 compute-0 python3[90282]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:41 compute-0 podman[90320]: 2025-11-22 03:25:41.96373118 +0000 UTC m=+0.072165695 container create 5b166ec95f3b794730b9d98343d16a0e8bc3d4b306e6ff2e52f5b5951194fcd0 (image=quay.io/ceph/ceph:v18, name=musing_haslett, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:25:42 compute-0 podman[90320]: 2025-11-22 03:25:41.935655263 +0000 UTC m=+0.044089827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:42 compute-0 systemd[1]: Started libpod-conmon-5b166ec95f3b794730b9d98343d16a0e8bc3d4b306e6ff2e52f5b5951194fcd0.scope.
Nov 22 03:25:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6f093d2426c0280932f5bb374baf91179663c30907c3e6906ff59a7773e209/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6f093d2426c0280932f5bb374baf91179663c30907c3e6906ff59a7773e209/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca6f093d2426c0280932f5bb374baf91179663c30907c3e6906ff59a7773e209/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:42 compute-0 systemd[1]: Reloading.
Nov 22 03:25:42 compute-0 podman[90320]: 2025-11-22 03:25:42.093208033 +0000 UTC m=+0.201642517 container init 5b166ec95f3b794730b9d98343d16a0e8bc3d4b306e6ff2e52f5b5951194fcd0 (image=quay.io/ceph/ceph:v18, name=musing_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:25:42 compute-0 podman[90320]: 2025-11-22 03:25:42.100989267 +0000 UTC m=+0.209423782 container start 5b166ec95f3b794730b9d98343d16a0e8bc3d4b306e6ff2e52f5b5951194fcd0 (image=quay.io/ceph/ceph:v18, name=musing_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:42 compute-0 podman[90320]: 2025-11-22 03:25:42.10601451 +0000 UTC m=+0.214448995 container attach 5b166ec95f3b794730b9d98343d16a0e8bc3d4b306e6ff2e52f5b5951194fcd0 (image=quay.io/ceph/ceph:v18, name=musing_haslett, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:25:42 compute-0 systemd-rc-local-generator[90370]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:25:42 compute-0 systemd-sysv-generator[90374]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:25:42 compute-0 ceph-mgr[75294]: [devicehealth INFO root] creating mgr pool
Nov 22 03:25:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Nov 22 03:25:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 22 03:25:42 compute-0 systemd[1]: Starting Ceph osd.2 for 7adcc38b-6484-5de6-b879-33a0309153df...
Nov 22 03:25:42 compute-0 podman[90451]: 2025-11-22 03:25:42.622023415 +0000 UTC m=+0.072393390 container create f8feac6c59c53937748c631cf8b86785f3ef69c7604a05ca357bef1bb596e1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:25:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Nov 22 03:25:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Nov 22 03:25:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1185012923,v1:192.168.122.100:6807/1185012923]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 03:25:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 22 03:25:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Nov 22 03:25:42 compute-0 ceph-osd[89585]: osd.1 0 done with init, starting boot process
Nov 22 03:25:42 compute-0 ceph-osd[89585]: osd.1 0 start_boot
Nov 22 03:25:42 compute-0 ceph-osd[89585]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 22 03:25:42 compute-0 ceph-osd[89585]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 22 03:25:42 compute-0 ceph-osd[89585]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 22 03:25:42 compute-0 ceph-osd[89585]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 22 03:25:42 compute-0 ceph-osd[89585]: osd.1 0  bench count 12288000 bsize 4 KiB
Nov 22 03:25:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Nov 22 03:25:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 22 03:25:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 22 03:25:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Nov 22 03:25:42 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Nov 22 03:25:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:42 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:42 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:42 compute-0 ceph-mon[75011]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Nov 22 03:25:42 compute-0 ceph-mon[75011]: from='osd.1 [v2:192.168.122.100:6806/1185012923,v1:192.168.122.100:6807/1185012923]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Nov 22 03:25:42 compute-0 ceph-mon[75011]: osdmap e10: 3 total, 1 up, 3 in
Nov 22 03:25:42 compute-0 ceph-mon[75011]: from='osd.1 [v2:192.168.122.100:6806/1185012923,v1:192.168.122.100:6807/1185012923]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 03:25:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Nov 22 03:25:42 compute-0 ceph-osd[88575]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 22 03:25:42 compute-0 ceph-osd[88575]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Nov 22 03:25:42 compute-0 ceph-osd[88575]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 22 03:25:42 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1185012923; not ready for session (expect reconnect)
Nov 22 03:25:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Nov 22 03:25:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 22 03:25:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:42 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 03:25:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1395008375' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 03:25:42 compute-0 musing_haslett[90339]: 
Nov 22 03:25:42 compute-0 musing_haslett[90339]: {"fsid":"7adcc38b-6484-5de6-b879-33a0309153df","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":112,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":11,"num_osds":3,"num_up_osds":1,"osd_up_since":1763781940,"num_in_osds":3,"osd_in_since":1763781923,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-22T03:25:37.325184+0000","services":{}},"progress_events":{}}
Nov 22 03:25:42 compute-0 podman[90451]: 2025-11-22 03:25:42.59251031 +0000 UTC m=+0.042880304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:42 compute-0 podman[90320]: 2025-11-22 03:25:42.699594427 +0000 UTC m=+0.808028922 container died 5b166ec95f3b794730b9d98343d16a0e8bc3d4b306e6ff2e52f5b5951194fcd0 (image=quay.io/ceph/ceph:v18, name=musing_haslett, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:42 compute-0 systemd[1]: libpod-5b166ec95f3b794730b9d98343d16a0e8bc3d4b306e6ff2e52f5b5951194fcd0.scope: Deactivated successfully.
Nov 22 03:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6878f3c0cbb397ef4fcdb36f7e6f769c72ab6963c448b7af84fe844a56bb448b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6878f3c0cbb397ef4fcdb36f7e6f769c72ab6963c448b7af84fe844a56bb448b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6878f3c0cbb397ef4fcdb36f7e6f769c72ab6963c448b7af84fe844a56bb448b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6878f3c0cbb397ef4fcdb36f7e6f769c72ab6963c448b7af84fe844a56bb448b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6878f3c0cbb397ef4fcdb36f7e6f769c72ab6963c448b7af84fe844a56bb448b/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:42 compute-0 podman[90451]: 2025-11-22 03:25:42.755293529 +0000 UTC m=+0.205663464 container init f8feac6c59c53937748c631cf8b86785f3ef69c7604a05ca357bef1bb596e1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 03:25:42 compute-0 podman[90451]: 2025-11-22 03:25:42.766172992 +0000 UTC m=+0.216542917 container start f8feac6c59c53937748c631cf8b86785f3ef69c7604a05ca357bef1bb596e1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:42 compute-0 podman[90451]: 2025-11-22 03:25:42.785695363 +0000 UTC m=+0.236065298 container attach f8feac6c59c53937748c631cf8b86785f3ef69c7604a05ca357bef1bb596e1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca6f093d2426c0280932f5bb374baf91179663c30907c3e6906ff59a7773e209-merged.mount: Deactivated successfully.
Nov 22 03:25:42 compute-0 podman[90320]: 2025-11-22 03:25:42.889229954 +0000 UTC m=+0.997664439 container remove 5b166ec95f3b794730b9d98343d16a0e8bc3d4b306e6ff2e52f5b5951194fcd0 (image=quay.io/ceph/ceph:v18, name=musing_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:25:42 compute-0 systemd[1]: libpod-conmon-5b166ec95f3b794730b9d98343d16a0e8bc3d4b306e6ff2e52f5b5951194fcd0.scope: Deactivated successfully.
Nov 22 03:25:42 compute-0 sudo[90274]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:43 compute-0 sudo[90510]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-porijfxhxhghuuloygztqarriofwpjwg ; /usr/bin/python3'
Nov 22 03:25:43 compute-0 sudo[90510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 22 03:25:43 compute-0 python3[90512]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:43 compute-0 podman[90518]: 2025-11-22 03:25:43.507150142 +0000 UTC m=+0.057052088 container create e244d5c9ea4d6530993c5811dd380713da79d746a4533fcb197518558c084eca (image=quay.io/ceph/ceph:v18, name=exciting_dijkstra, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:43 compute-0 podman[90518]: 2025-11-22 03:25:43.473699933 +0000 UTC m=+0.023601909 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:43 compute-0 systemd[1]: Started libpod-conmon-e244d5c9ea4d6530993c5811dd380713da79d746a4533fcb197518558c084eca.scope.
Nov 22 03:25:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f1250303489e86b8170010d84a84d13a71b1a61a40c1cfd4f6b28afcceb5df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f1250303489e86b8170010d84a84d13a71b1a61a40c1cfd4f6b28afcceb5df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:43 compute-0 podman[90518]: 2025-11-22 03:25:43.638088674 +0000 UTC m=+0.187990629 container init e244d5c9ea4d6530993c5811dd380713da79d746a4533fcb197518558c084eca (image=quay.io/ceph/ceph:v18, name=exciting_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:43 compute-0 podman[90518]: 2025-11-22 03:25:43.64863926 +0000 UTC m=+0.198541196 container start e244d5c9ea4d6530993c5811dd380713da79d746a4533fcb197518558c084eca (image=quay.io/ceph/ceph:v18, name=exciting_dijkstra, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:43 compute-0 ceph-mon[75011]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:25:43 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1185012923; not ready for session (expect reconnect)
Nov 22 03:25:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Nov 22 03:25:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:43 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:43 compute-0 podman[90518]: 2025-11-22 03:25:43.676833041 +0000 UTC m=+0.226735017 container attach e244d5c9ea4d6530993c5811dd380713da79d746a4533fcb197518558c084eca (image=quay.io/ceph/ceph:v18, name=exciting_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:43 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:43 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 22 03:25:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Nov 22 03:25:43 compute-0 ceph-mon[75011]: from='osd.1 [v2:192.168.122.100:6806/1185012923,v1:192.168.122.100:6807/1185012923]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 03:25:43 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Nov 22 03:25:43 compute-0 ceph-mon[75011]: osdmap e11: 3 total, 1 up, 3 in
Nov 22 03:25:43 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:43 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:43 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Nov 22 03:25:43 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:43 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1395008375' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 03:25:43 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Nov 22 03:25:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:43 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:43 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:43 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:43 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:43 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate[90467]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 03:25:43 compute-0 bash[90451]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 03:25:43 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate[90467]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 03:25:43 compute-0 bash[90451]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 03:25:43 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate[90467]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 03:25:43 compute-0 bash[90451]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Nov 22 03:25:43 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate[90467]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 03:25:43 compute-0 bash[90451]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Nov 22 03:25:43 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate[90467]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 03:25:43 compute-0 bash[90451]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Nov 22 03:25:43 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate[90467]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 03:25:43 compute-0 bash[90451]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Nov 22 03:25:43 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate[90467]: --> ceph-volume raw activate successful for osd ID: 2
Nov 22 03:25:43 compute-0 bash[90451]: --> ceph-volume raw activate successful for osd ID: 2
Nov 22 03:25:43 compute-0 systemd[1]: libpod-f8feac6c59c53937748c631cf8b86785f3ef69c7604a05ca357bef1bb596e1d7.scope: Deactivated successfully.
Nov 22 03:25:43 compute-0 systemd[1]: libpod-f8feac6c59c53937748c631cf8b86785f3ef69c7604a05ca357bef1bb596e1d7.scope: Consumed 1.036s CPU time.
Nov 22 03:25:43 compute-0 podman[90451]: 2025-11-22 03:25:43.79601273 +0000 UTC m=+1.246382665 container died f8feac6c59c53937748c631cf8b86785f3ef69c7604a05ca357bef1bb596e1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:25:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6878f3c0cbb397ef4fcdb36f7e6f769c72ab6963c448b7af84fe844a56bb448b-merged.mount: Deactivated successfully.
Nov 22 03:25:44 compute-0 podman[90451]: 2025-11-22 03:25:44.012527431 +0000 UTC m=+1.462897366 container remove f8feac6c59c53937748c631cf8b86785f3ef69c7604a05ca357bef1bb596e1d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:25:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 03:25:44 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3247887026' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:25:44 compute-0 podman[90730]: 2025-11-22 03:25:44.226720806 +0000 UTC m=+0.059908166 container create 3a066733ca2be9e5036c220a98f652724baa9c02aa1db70d7b49920cd9046392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:44 compute-0 podman[90730]: 2025-11-22 03:25:44.194353968 +0000 UTC m=+0.027541348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c214e0e4cf9f3509c5233a5dcd6d990f004f9e27a2c58a1e3069f447e357a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c214e0e4cf9f3509c5233a5dcd6d990f004f9e27a2c58a1e3069f447e357a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c214e0e4cf9f3509c5233a5dcd6d990f004f9e27a2c58a1e3069f447e357a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c214e0e4cf9f3509c5233a5dcd6d990f004f9e27a2c58a1e3069f447e357a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c214e0e4cf9f3509c5233a5dcd6d990f004f9e27a2c58a1e3069f447e357a8/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:44 compute-0 podman[90730]: 2025-11-22 03:25:44.358365944 +0000 UTC m=+0.191553314 container init 3a066733ca2be9e5036c220a98f652724baa9c02aa1db70d7b49920cd9046392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:25:44 compute-0 podman[90730]: 2025-11-22 03:25:44.366148537 +0000 UTC m=+0.199335887 container start 3a066733ca2be9e5036c220a98f652724baa9c02aa1db70d7b49920cd9046392 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:44 compute-0 bash[90730]: 3a066733ca2be9e5036c220a98f652724baa9c02aa1db70d7b49920cd9046392
Nov 22 03:25:44 compute-0 systemd[1]: Started Ceph osd.2 for 7adcc38b-6484-5de6-b879-33a0309153df.
Nov 22 03:25:44 compute-0 ceph-osd[90752]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:25:44 compute-0 ceph-osd[90752]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Nov 22 03:25:44 compute-0 ceph-osd[90752]: pidfile_write: ignore empty --pid-file
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936b02f800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936b02f800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936b02f800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936b02f800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936be67800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936be67800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936be67800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936be67800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936be67800 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 03:25:44 compute-0 sudo[89673]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:44 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:44 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:44 compute-0 sudo[90765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:44 compute-0 sudo[90765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:44 compute-0 sudo[90765]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:44 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1185012923; not ready for session (expect reconnect)
Nov 22 03:25:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:44 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:44 compute-0 sudo[90790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:44 compute-0 sudo[90790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:44 compute-0 sudo[90790]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936b02f800 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 03:25:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Nov 22 03:25:44 compute-0 sudo[90815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:44 compute-0 sudo[90815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:44 compute-0 sudo[90815]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:44 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3247887026' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:25:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Nov 22 03:25:44 compute-0 exciting_dijkstra[90546]: pool 'vms' created
Nov 22 03:25:44 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Nov 22 03:25:44 compute-0 systemd[1]: libpod-e244d5c9ea4d6530993c5811dd380713da79d746a4533fcb197518558c084eca.scope: Deactivated successfully.
Nov 22 03:25:44 compute-0 podman[90518]: 2025-11-22 03:25:44.768777324 +0000 UTC m=+1.318679270 container died e244d5c9ea4d6530993c5811dd380713da79d746a4533fcb197518558c084eca (image=quay.io/ceph/ceph:v18, name=exciting_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:25:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:44 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:44 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:44 compute-0 ceph-mon[75011]: purged_snaps scrub starts
Nov 22 03:25:44 compute-0 ceph-mon[75011]: purged_snaps scrub ok
Nov 22 03:25:44 compute-0 ceph-mon[75011]: pgmap v35: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 22 03:25:44 compute-0 ceph-mon[75011]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:25:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Nov 22 03:25:44 compute-0 ceph-mon[75011]: osdmap e12: 3 total, 1 up, 3 in
Nov 22 03:25:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:44 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3247887026' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:25:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:44 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 13 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:25:44 compute-0 sudo[90842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:25:44 compute-0 sudo[90842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0f1250303489e86b8170010d84a84d13a71b1a61a40c1cfd4f6b28afcceb5df-merged.mount: Deactivated successfully.
Nov 22 03:25:44 compute-0 podman[90518]: 2025-11-22 03:25:44.934818385 +0000 UTC m=+1.484720341 container remove e244d5c9ea4d6530993c5811dd380713da79d746a4533fcb197518558c084eca (image=quay.io/ceph/ceph:v18, name=exciting_dijkstra, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 03:25:44 compute-0 ceph-osd[90752]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Nov 22 03:25:44 compute-0 systemd[1]: libpod-conmon-e244d5c9ea4d6530993c5811dd380713da79d746a4533fcb197518558c084eca.scope: Deactivated successfully.
Nov 22 03:25:44 compute-0 ceph-osd[90752]: load: jerasure load: lrc 
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:25:44 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 03:25:44 compute-0 sudo[90510]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:45 compute-0 sudo[90947]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrwzdybknalixqohezzyjvlpofmtqbel ; /usr/bin/python3'
Nov 22 03:25:45 compute-0 sudo[90947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:45 compute-0 podman[90940]: 2025-11-22 03:25:45.189981062 +0000 UTC m=+0.064828415 container create f8d33eb46f50e6a64604e59e66eadd7980433759a679f5e0dac5896efd982e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lamarr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 03:25:45 compute-0 systemd[1]: Started libpod-conmon-f8d33eb46f50e6a64604e59e66eadd7980433759a679f5e0dac5896efd982e47.scope.
Nov 22 03:25:45 compute-0 podman[90940]: 2025-11-22 03:25:45.16843644 +0000 UTC m=+0.043283804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:45 compute-0 python3[90955]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:45 compute-0 podman[90940]: 2025-11-22 03:25:45.314839443 +0000 UTC m=+0.189686887 container init f8d33eb46f50e6a64604e59e66eadd7980433759a679f5e0dac5896efd982e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lamarr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:45 compute-0 podman[90940]: 2025-11-22 03:25:45.324515539 +0000 UTC m=+0.199362923 container start f8d33eb46f50e6a64604e59e66eadd7980433759a679f5e0dac5896efd982e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lamarr, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v38: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 22 03:25:45 compute-0 frosty_lamarr[90968]: 167 167
Nov 22 03:25:45 compute-0 systemd[1]: libpod-f8d33eb46f50e6a64604e59e66eadd7980433759a679f5e0dac5896efd982e47.scope: Deactivated successfully.
Nov 22 03:25:45 compute-0 podman[90940]: 2025-11-22 03:25:45.35072928 +0000 UTC m=+0.225576744 container attach f8d33eb46f50e6a64604e59e66eadd7980433759a679f5e0dac5896efd982e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:45 compute-0 podman[90940]: 2025-11-22 03:25:45.353553892 +0000 UTC m=+0.228401256 container died f8d33eb46f50e6a64604e59e66eadd7980433759a679f5e0dac5896efd982e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1abd24adca817dd3417449fced1f3ad34c811511da6ab9c69c5753ad5551f068-merged.mount: Deactivated successfully.
Nov 22 03:25:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:25:45 compute-0 podman[90940]: 2025-11-22 03:25:45.481343055 +0000 UTC m=+0.356190449 container remove f8d33eb46f50e6a64604e59e66eadd7980433759a679f5e0dac5896efd982e47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_lamarr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:45 compute-0 systemd[1]: libpod-conmon-f8d33eb46f50e6a64604e59e66eadd7980433759a679f5e0dac5896efd982e47.scope: Deactivated successfully.
Nov 22 03:25:45 compute-0 podman[90971]: 2025-11-22 03:25:45.410537401 +0000 UTC m=+0.080833046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:45 compute-0 ceph-osd[90752]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Nov 22 03:25:45 compute-0 ceph-osd[90752]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee8c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee9400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee9400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee9400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee9400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluefs mount
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluefs mount shared_bdev_used = 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Git sha 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: DB SUMMARY
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: DB Session ID:  A53MB5T46XIJPNINZGLV
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                                     Options.env: 0x55936beb9c70
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                                Options.info_log: 0x55936b0b68a0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.write_buffer_manager: 0x55936bfcc460
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.row_cache: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                              Options.wal_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.wal_compression: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.max_background_jobs: 4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Compression algorithms supported:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kZSTD supported: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kXpressCompression supported: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kBZip2Compression supported: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kLZ4Compression supported: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kZlibCompression supported: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kSnappyCompression supported: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b62c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b62c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b62c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b62c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 podman[90971]: 2025-11-22 03:25:45.541016074 +0000 UTC m=+0.211311729 container create dd0c4e0048fe99a0b19cf3f6d809a95420941ad803aab785de0754bfe5c6a9c9 (image=quay.io/ceph/ceph:v18, name=magical_euler, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b62c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
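The dump that ends here is one of several that BlueStore's embedded RocksDB writes when the OSD opens its database: every column family logs its effective options at [db/column_family.cc:630]. The LSM level sizing implied by the numbers above is easy to miss in the wall of output; the following is a minimal sketch in C++ (plain arithmetic over the logged values, not Ceph or RocksDB code):

    #include <cstdint>
    #include <cstdio>

    int main() {
      // Values copied from the option dump above.
      const uint64_t max_bytes_for_level_base = 1073741824ULL;  // 1 GiB target for L1
      const double   max_bytes_for_level_multiplier = 8.0;
      const int      num_levels = 7;

      // With level_compaction_dynamic_level_bytes: 0, each level's target is
      // simply base * multiplier^(level-1): 1 GiB, 8 GiB, 64 GiB, ...
      double target = static_cast<double>(max_bytes_for_level_base);
      for (int level = 1; level < num_levels; ++level) {
        std::printf("L%d target: %.0f GiB\n", level, target / (1ULL << 30));
        target *= max_bytes_for_level_multiplier;
      }

      // Memtable budget: a flush merges min_write_buffer_number_to_merge (6)
      // memtables of write_buffer_size (16 MiB) each, i.e. ~96 MiB per L0 file.
      std::printf("flush size: %u MiB\n", 6u * 16u);
      return 0;
    }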
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b62c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
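For readability, the [p-1] options above map onto the upstream RocksDB C++ API roughly as follows. This is an illustrative sketch, not the OSD's code: Ceph assembles these options from its bluestore_rocksdb_options string and substitutes its own BinnedLRUCache for the stock LRU cache, and the bloom filter's bits-per-key is not shown in the dump (10 below is an assumed placeholder).

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    rocksdb::ColumnFamilyOptions MakeP1LikeOptions() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16 << 20;                 // 16777216
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.num_levels = 7;
      cf.compression = rocksdb::kLZ4Compression;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 64 << 20;             // 67108864
      cf.max_bytes_for_level_base = 1ULL << 30;        // 1073741824
      cf.max_bytes_for_level_multiplier = 8.0;
      cf.compaction_style = rocksdb::kCompactionStyleLevel;
      cf.compaction_pri = rocksdb::kMinOverlappingRatio;
      cf.ttl = 2592000;                                // 30 days
      cf.force_consistency_checks = true;

      rocksdb::BlockBasedTableOptions table;
      table.block_size = 4096;
      table.cache_index_and_filter_blocks = true;
      table.pin_top_level_index_and_filter = true;
      table.whole_key_filtering = true;
      table.format_version = 5;
      // The dump shows a ~461 MiB BinnedLRUCache; the stock LRU cache is used here.
      table.block_cache = rocksdb::NewLRUCache(483183820);
      // bits_per_key is not logged; 10 is an assumed placeholder.
      table.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table));
      return cf;
    }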
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
    [options identical, line for line, to the column family [p-1] dump above; verbatim duplicate omitted]
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b6240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a3090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
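Note that the [O-0] dump (and the [O-1]/[O-2] dumps that follow) reports the same block_cache pointer, 0x55936b0a3090, with a 512 MiB capacity, while the p-* families share a different, ~461 MiB cache. This is BlueStore's column-family sharding (the bluestore_rocksdb_cfs setting): a hot key prefix such as O (onodes) is split across several column families that share one cache object. A minimal standalone sketch of that pattern with stock RocksDB follows; the path, cache size, and shard count are illustrative assumptions, not Ceph code.

    #include <rocksdb/cache.h>
    #include <rocksdb/db.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>
    #include <vector>

    int main() {
      // One cache object shared by every shard of the "O" prefix.
      auto shared_cache = rocksdb::NewLRUCache(512 << 20);

      rocksdb::BlockBasedTableOptions table;
      table.block_cache = shared_cache;

      rocksdb::ColumnFamilyOptions cf;
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table));

      std::vector<rocksdb::ColumnFamilyDescriptor> families = {
        {rocksdb::kDefaultColumnFamilyName, cf},
        {"O-0", cf}, {"O-1", cf}, {"O-2", cf},
      };

      rocksdb::Options opts;
      opts.create_if_missing = true;
      opts.create_missing_column_families = true;

      std::vector<rocksdb::ColumnFamilyHandle*> handles;
      rocksdb::DB* db = nullptr;
      rocksdb::Status s =
          rocksdb::DB::Open(opts, "/tmp/sharded-example", families, &handles, &db);
      if (s.ok()) {
        // Each handle now reads and writes through the same 512 MiB cache.
        for (auto* h : handles) db->DestroyColumnFamilyHandle(h);
        delete db;
      }
      return s.ok() ? 0 : 1;
    }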
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
    [options identical, line for line, to the column family [O-0] dump above; verbatim duplicate omitted]
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b6240)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a3090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f0944072-255f-4d92-82d2-c661ab9a233e
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781945546595, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781945546812, "job": 1, "event": "recovery_finished"}
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
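The comma-separated options string in the _open_db line above is the RocksDB tuning string the OSD was started with (normally the value of the ceph option bluestore_rocksdb_options). A minimal Python sketch for splitting such a "key=value,key=value" string into a dict so individual settings are easier to inspect; the string literal is copied from the log line and the helper name is illustrative, not part of any Ceph API:

    # Sketch: split a BlueStore-style RocksDB options string into a dict.
    # Assumes no nested "{...}" groups, which holds for the string logged above.
    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    def parse_rocksdb_opts(s: str) -> dict:
        # Each element is "key=value"; values stay strings ("2MB" is not numeric).
        return dict(item.split("=", 1) for item in s.split(","))

    print(parse_rocksdb_opts(opts_str)["write_buffer_size"])   # "16777216"

The parsed values line up with the per-column-family dumps earlier in the log (write_buffer_size 16777216, max_write_buffer_number 64, level0_file_num_compaction_trigger 8, and so on).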
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: freelist init
Nov 22 03:25:45 compute-0 ceph-osd[90752]: freelist _read_cfg
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
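The _init_alloc line reports the allocator's view in hex, and the figures are internally consistent with the bdev open line that follows later ("open size 21470642176 (0x4ffc00000, 20 GiB)"). A quick arithmetic check in Python, with all constants copied from the log:

    capacity = 0x4ffc00000   # allocator capacity from _init_alloc
    free     = 0x4ffbfd000   # free bytes from _init_alloc
    block    = 0x1000        # 4 KiB block size / min_alloc_size

    print(capacity)                     # 21470642176, matching the bdev "open size"
    print(round(capacity / 2**30, 2))   # ~20.0 GiB
    print((capacity - free) // block)   # 3 blocks (12 KiB) currently allocated

The tiny allocated remainder is also what makes the reported fragmentation (1.9e-07) essentially zero on this freshly prepared 20 GiB OSD.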
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluefs umount
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee9400 /var/lib/ceph/osd/ceph-2/block) close
Nov 22 03:25:45 compute-0 systemd[1]: Started libpod-conmon-dd0c4e0048fe99a0b19cf3f6d809a95420941ad803aab785de0754bfe5c6a9c9.scope.
Nov 22 03:25:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/390e58468a4567b812c24789fb7b484d915f5186adf82f8fdd3cdac084b7907e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/390e58468a4567b812c24789fb7b484d915f5186adf82f8fdd3cdac084b7907e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:45 compute-0 podman[90971]: 2025-11-22 03:25:45.643235312 +0000 UTC m=+0.313530946 container init dd0c4e0048fe99a0b19cf3f6d809a95420941ad803aab785de0754bfe5c6a9c9 (image=quay.io/ceph/ceph:v18, name=magical_euler, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:25:45 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1185012923; not ready for session (expect reconnect)
Nov 22 03:25:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:45 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
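The mon/mgr exchange above shows the mgr asking the mon for osd.1's metadata while that OSD is mid-restart, and getting ENOENT back; this is a transient startup condition, not a sign of corruption. The same query can be reproduced from Python through the librados mon_command interface (a sketch, assuming a readable /etc/ceph/ceph.conf and client keyring on the node; while the OSD is down the call returns -2, exactly as the mgr logged):

    import json
    import rados  # python3-rados binding

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    # Same JSON command the mon logged dispatching on behalf of the mgr.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "osd metadata", "id": 1}), b"")
    print(ret, outs if ret else outbuf.decode())
    cluster.shutdown()

Once osd.1 finishes booting and re-registers, the same call returns 0 with the metadata JSON in outbuf.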
Nov 22 03:25:45 compute-0 podman[90971]: 2025-11-22 03:25:45.653273475 +0000 UTC m=+0.323569139 container start dd0c4e0048fe99a0b19cf3f6d809a95420941ad803aab785de0754bfe5c6a9c9 (image=quay.io/ceph/ceph:v18, name=magical_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:25:45 compute-0 podman[90971]: 2025-11-22 03:25:45.678693904 +0000 UTC m=+0.348989539 container attach dd0c4e0048fe99a0b19cf3f6d809a95420941ad803aab785de0754bfe5c6a9c9 (image=quay.io/ceph/ceph:v18, name=magical_euler, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:45 compute-0 podman[91205]: 2025-11-22 03:25:45.727069502 +0000 UTC m=+0.077278541 container create 50cc8500660c4174bbd5e65d2f319c4935b7d6d59e1d357a4590d9fa6164079b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_villani, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:45 compute-0 podman[91205]: 2025-11-22 03:25:45.685384745 +0000 UTC m=+0.035593793 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee9400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee9400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee9400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bdev(0x55936bee9400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluefs mount
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluefs mount shared_bdev_used = 4718592
Nov 22 03:25:45 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: RocksDB version: 7.9.2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Git sha 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Compile date 2025-05-06 23:30:25
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: DB SUMMARY
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: DB Session ID:  A53MB5T46XIJPNINZGLU
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: CURRENT file:  CURRENT
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: IDENTITY file:  IDENTITY
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                         Options.error_if_exists: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.create_if_missing: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                         Options.paranoid_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.flush_verify_memtable_count: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                                     Options.env: 0x55936c074b60
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                                      Options.fs: LegacyFileSystem
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                                Options.info_log: 0x55936b0b6620
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_file_opening_threads: 16
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                              Options.statistics: (nil)
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.use_fsync: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.max_log_file_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.log_file_time_to_roll: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.keep_log_file_num: 1000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.recycle_log_file_num: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                         Options.allow_fallocate: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.allow_mmap_reads: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.allow_mmap_writes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.use_direct_reads: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.create_missing_column_families: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                              Options.db_log_dir: 
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                                 Options.wal_dir: db.wal
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.table_cache_numshardbits: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                         Options.WAL_ttl_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.WAL_size_limit_MB: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.manifest_preallocation_size: 4194304
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                     Options.is_fd_close_on_exec: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.advise_random_on_open: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.db_write_buffer_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.write_buffer_manager: 0x55936bfcc6e0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.access_hint_on_compaction_start: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                      Options.use_adaptive_mutex: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                            Options.rate_limiter: (nil)
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.wal_recovery_mode: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.enable_thread_tracking: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.enable_pipelined_write: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.unordered_write: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.write_thread_max_yield_usec: 100
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.row_cache: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                              Options.wal_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.avoid_flush_during_recovery: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.allow_ingest_behind: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.two_write_queues: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.manual_wal_flush: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.wal_compression: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.atomic_flush: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.persist_stats_to_disk: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.write_dbid_to_manifest: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.log_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.best_efforts_recovery: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.allow_data_in_errors: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.db_host_id: __hostname__
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.enforce_single_del_contracts: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.max_background_jobs: 4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.max_background_compactions: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.max_subcompactions: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.writable_file_max_buffer_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.delayed_write_rate : 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.max_total_wal_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.stats_dump_period_sec: 600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.stats_persist_period_sec: 600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.max_open_files: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.bytes_per_sync: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                      Options.wal_bytes_per_sync: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.strict_bytes_per_sync: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.compaction_readahead_size: 2097152
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.max_background_flushes: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Compression algorithms supported:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kZSTD supported: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kXpressCompression supported: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kBZip2Compression supported: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kZSTDNotFinalCompression supported: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kLZ4Compression supported: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kZlibCompression supported: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kLZ4HCCompression supported: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         kSnappyCompression supported: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Fast CRC32 supported: Supported on x86
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: DMutex implementation: pthread_mutex_t
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b6a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b6a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b6a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b6a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b6a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b6a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 systemd[1]: Started libpod-conmon-50cc8500660c4174bbd5e65d2f319c4935b7d6d59e1d357a4590d9fa6164079b.scope.
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b6a20)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a31f0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b6380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a3090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b6380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a3090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:           Options.merge_operator: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.compaction_filter_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.sst_partitioner_factory: None
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.memtable_factory: SkipListFactory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.table_factory: BlockBasedTable
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55936b0b6380)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55936b0a3090
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.write_buffer_size: 16777216
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.max_write_buffer_number: 64
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.compression: LZ4
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression: Disabled
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.num_levels: 7
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:            Options.compression_opts.window_bits: -14
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.level: 32767
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.compression_opts.strategy: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.parallel_threads: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                  Options.compression_opts.enabled: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:              Options.level0_stop_writes_trigger: 36
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.target_file_size_base: 67108864
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:             Options.target_file_size_multiplier: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.arena_block_size: 1048576
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.disable_auto_compactions: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.inplace_update_support: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                 Options.inplace_update_num_locks: 10000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:               Options.memtable_whole_key_filtering: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:   Options.memtable_huge_page_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.bloom_locality: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                    Options.max_successive_merges: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.optimize_filters_for_hits: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.paranoid_file_checks: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.force_consistency_checks: 1
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.report_bg_io_stats: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                               Options.ttl: 2592000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.periodic_compaction_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:    Options.preserve_internal_time_seconds: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                       Options.enable_blob_files: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                           Options.min_blob_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                          Options.blob_file_size: 268435456
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                   Options.blob_compression_type: NoCompression
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.enable_blob_garbage_collection: false
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:          Options.blob_compaction_readahead_size: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb:                Options.blob_file_starting_level: 0
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Nov 22 03:25:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e14 e14: 3 total, 1 up, 3 in
Nov 22 03:25:45 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3247887026' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:25:45 compute-0 ceph-mon[75011]: osdmap e13: 3 total, 1 up, 3 in
Nov 22 03:25:45 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:45 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:45 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: f0944072-255f-4d92-82d2-c661ab9a233e
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781945845390, "job": 1, "event": "recovery_started", "wal_files": [31]}
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Nov 22 03:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a8597a9a6797dabb7105146a0f975bb210201e58fc96866dc0cebdfef4430ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a8597a9a6797dabb7105146a0f975bb210201e58fc96866dc0cebdfef4430ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a8597a9a6797dabb7105146a0f975bb210201e58fc96866dc0cebdfef4430ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a8597a9a6797dabb7105146a0f975bb210201e58fc96866dc0cebdfef4430ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:45 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 1 up, 3 in
Nov 22 03:25:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:45 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:45 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781945863957, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781945, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f0944072-255f-4d92-82d2-c661ab9a233e", "db_session_id": "A53MB5T46XIJPNINZGLU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:25:45 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 14 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:25:45 compute-0 podman[91205]: 2025-11-22 03:25:45.871679276 +0000 UTC m=+0.221888355 container init 50cc8500660c4174bbd5e65d2f319c4935b7d6d59e1d357a4590d9fa6164079b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:25:45 compute-0 podman[91205]: 2025-11-22 03:25:45.879803863 +0000 UTC m=+0.230012902 container start 50cc8500660c4174bbd5e65d2f319c4935b7d6d59e1d357a4590d9fa6164079b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_villani, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781945889204, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781945, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f0944072-255f-4d92-82d2-c661ab9a233e", "db_session_id": "A53MB5T46XIJPNINZGLU", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:25:45 compute-0 podman[91205]: 2025-11-22 03:25:45.90990648 +0000 UTC m=+0.260115559 container attach 50cc8500660c4174bbd5e65d2f319c4935b7d6d59e1d357a4590d9fa6164079b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781945916170, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781945, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "f0944072-255f-4d92-82d2-c661ab9a233e", "db_session_id": "A53MB5T46XIJPNINZGLU", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763781945918254, "job": 1, "event": "recovery_finished"}
Nov 22 03:25:45 compute-0 ceph-osd[90752]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Nov 22 03:25:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55936b210000
Nov 22 03:25:46 compute-0 ceph-osd[90752]: rocksdb: DB pointer 0x55936bfaba00
Nov 22 03:25:46 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Nov 22 03:25:46 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Nov 22 03:25:46 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Nov 22 03:25:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:25:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a3090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a3090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a3090#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 03:25:46 compute-0 ceph-osd[90752]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Nov 22 03:25:46 compute-0 ceph-osd[90752]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Nov 22 03:25:46 compute-0 ceph-osd[90752]: _get_class not permitted to load lua
Nov 22 03:25:46 compute-0 ceph-osd[90752]: _get_class not permitted to load sdk
Nov 22 03:25:46 compute-0 ceph-osd[90752]: _get_class not permitted to load test_remote_reads
Nov 22 03:25:46 compute-0 ceph-osd[90752]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Nov 22 03:25:46 compute-0 ceph-osd[90752]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Nov 22 03:25:46 compute-0 ceph-osd[90752]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Nov 22 03:25:46 compute-0 ceph-osd[90752]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Nov 22 03:25:46 compute-0 ceph-osd[90752]: osd.2 0 load_pgs
Nov 22 03:25:46 compute-0 ceph-osd[90752]: osd.2 0 load_pgs opened 0 pgs
Nov 22 03:25:46 compute-0 ceph-osd[90752]: osd.2 0 log_to_monitors true
Nov 22 03:25:46 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2[90748]: 2025-11-22T03:25:46.033+0000 7f7e8e623740 -1 osd.2 0 log_to_monitors true
Nov 22 03:25:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Nov 22 03:25:46 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4004892304,v1:192.168.122.100:6811/4004892304]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 22 03:25:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 03:25:46 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2127919432' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:25:46 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1185012923; not ready for session (expect reconnect)
Nov 22 03:25:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:46 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:46 compute-0 frosty_villani[91224]: {
Nov 22 03:25:46 compute-0 frosty_villani[91224]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "osd_id": 1,
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "type": "bluestore"
Nov 22 03:25:46 compute-0 frosty_villani[91224]:     },
Nov 22 03:25:46 compute-0 frosty_villani[91224]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "osd_id": 0,
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "type": "bluestore"
Nov 22 03:25:46 compute-0 frosty_villani[91224]:     },
Nov 22 03:25:46 compute-0 frosty_villani[91224]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "osd_id": 2,
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:25:46 compute-0 frosty_villani[91224]:         "type": "bluestore"
Nov 22 03:25:46 compute-0 frosty_villani[91224]:     }
Nov 22 03:25:46 compute-0 frosty_villani[91224]: }
Nov 22 03:25:46 compute-0 systemd[1]: libpod-50cc8500660c4174bbd5e65d2f319c4935b7d6d59e1d357a4590d9fa6164079b.scope: Deactivated successfully.
Nov 22 03:25:46 compute-0 podman[91205]: 2025-11-22 03:25:46.748017418 +0000 UTC m=+1.098226487 container died 50cc8500660c4174bbd5e65d2f319c4935b7d6d59e1d357a4590d9fa6164079b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_villani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Nov 22 03:25:46 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4004892304,v1:192.168.122.100:6811/4004892304]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 22 03:25:46 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2127919432' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:25:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e15 e15: 3 total, 1 up, 3 in
Nov 22 03:25:46 compute-0 magical_euler[91197]: pool 'volumes' created
Nov 22 03:25:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a8597a9a6797dabb7105146a0f975bb210201e58fc96866dc0cebdfef4430ec-merged.mount: Deactivated successfully.
Nov 22 03:25:46 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 1 up, 3 in
Nov 22 03:25:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Nov 22 03:25:46 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4004892304,v1:192.168.122.100:6811/4004892304]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 03:25:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e15 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Nov 22 03:25:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:46 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:46 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:46 compute-0 systemd[1]: libpod-dd0c4e0048fe99a0b19cf3f6d809a95420941ad803aab785de0754bfe5c6a9c9.scope: Deactivated successfully.
Nov 22 03:25:46 compute-0 ceph-mon[75011]: pgmap v38: 2 pgs: 2 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Nov 22 03:25:46 compute-0 ceph-mon[75011]: osdmap e14: 3 total, 1 up, 3 in
Nov 22 03:25:46 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:46 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:46 compute-0 ceph-mon[75011]: from='osd.2 [v2:192.168.122.100:6810/4004892304,v1:192.168.122.100:6811/4004892304]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Nov 22 03:25:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2127919432' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:25:46 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:46 compute-0 podman[90971]: 2025-11-22 03:25:46.997995258 +0000 UTC m=+1.668290893 container died dd0c4e0048fe99a0b19cf3f6d809a95420941ad803aab785de0754bfe5c6a9c9 (image=quay.io/ceph/ceph:v18, name=magical_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:25:47 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Nov 22 03:25:47 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Nov 22 03:25:47 compute-0 podman[91205]: 2025-11-22 03:25:47.072868081 +0000 UTC m=+1.423077110 container remove 50cc8500660c4174bbd5e65d2f319c4935b7d6d59e1d357a4590d9fa6164079b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_villani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:47 compute-0 systemd[1]: libpod-conmon-50cc8500660c4174bbd5e65d2f319c4935b7d6d59e1d357a4590d9fa6164079b.scope: Deactivated successfully.
Nov 22 03:25:47 compute-0 sudo[90842]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-390e58468a4567b812c24789fb7b484d915f5186adf82f8fdd3cdac084b7907e-merged.mount: Deactivated successfully.
Nov 22 03:25:47 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:47 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:47 compute-0 podman[91506]: 2025-11-22 03:25:47.219660667 +0000 UTC m=+0.253718128 container remove dd0c4e0048fe99a0b19cf3f6d809a95420941ad803aab785de0754bfe5c6a9c9 (image=quay.io/ceph/ceph:v18, name=magical_euler, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:25:47 compute-0 systemd[1]: libpod-conmon-dd0c4e0048fe99a0b19cf3f6d809a95420941ad803aab785de0754bfe5c6a9c9.scope: Deactivated successfully.
Nov 22 03:25:47 compute-0 sudo[91518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:47 compute-0 sudo[91518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:47 compute-0 sudo[91518]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:47 compute-0 sudo[90947]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:47 compute-0 sudo[91543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:25:47 compute-0 sudo[91543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:47 compute-0 sudo[91543]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:47 compute-0 ceph-osd[89585]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 13.020 iops: 3333.146 elapsed_sec: 0.900
Nov 22 03:25:47 compute-0 ceph-osd[89585]: log_channel(cluster) log [WRN] : OSD bench result of 3333.146007 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 03:25:47 compute-0 ceph-osd[89585]: osd.1 0 waiting for initial osdmap
Nov 22 03:25:47 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1[89581]: 2025-11-22T03:25:47.315+0000 7fc0715e7640 -1 osd.1 0 waiting for initial osdmap
Nov 22 03:25:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v41: 3 pgs: 1 active+clean, 2 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 22 03:25:47 compute-0 ceph-osd[89585]: osd.1 15 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 22 03:25:47 compute-0 ceph-osd[89585]: osd.1 15 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 22 03:25:47 compute-0 ceph-osd[89585]: osd.1 15 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 22 03:25:47 compute-0 ceph-osd[89585]: osd.1 15 check_osdmap_features require_osd_release unknown -> reef
Nov 22 03:25:47 compute-0 sudo[91568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:47 compute-0 sudo[91568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:47 compute-0 sudo[91614]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhovqmymoyshemesmzjwwtvwkqaafcdu ; /usr/bin/python3'
Nov 22 03:25:47 compute-0 sudo[91568]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:47 compute-0 ceph-osd[89585]: osd.1 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 03:25:47 compute-0 ceph-osd[89585]: osd.1 15 set_numa_affinity not setting numa affinity
Nov 22 03:25:47 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-1[89581]: 2025-11-22T03:25:47.359+0000 7fc06c3f8640 -1 osd.1 15 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 03:25:47 compute-0 sudo[91614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:47 compute-0 ceph-osd[89585]: osd.1 15 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Nov 22 03:25:47 compute-0 sudo[91620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:47 compute-0 sudo[91620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:47 compute-0 sudo[91620]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:47 compute-0 sudo[91645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:47 compute-0 sudo[91645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:47 compute-0 sudo[91645]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:47 compute-0 python3[91619]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:47 compute-0 sudo[91670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 03:25:47 compute-0 sudo[91670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:47 compute-0 podman[91674]: 2025-11-22 03:25:47.542499165 +0000 UTC m=+0.039881802 container create c4bc70f3801e3b3b6523966d8cd3853aedb117e2ed57ff893d64ced6e5491fbe (image=quay.io/ceph/ceph:v18, name=wizardly_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:25:47 compute-0 systemd[1]: Started libpod-conmon-c4bc70f3801e3b3b6523966d8cd3853aedb117e2ed57ff893d64ced6e5491fbe.scope.
Nov 22 03:25:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bc7e1dbc0a29643080c95c82ac8071acf53c86e0fdd8743c7b85bc4f12a3f7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1bc7e1dbc0a29643080c95c82ac8071acf53c86e0fdd8743c7b85bc4f12a3f7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:47 compute-0 podman[91674]: 2025-11-22 03:25:47.525389726 +0000 UTC m=+0.022772383 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:47 compute-0 podman[91674]: 2025-11-22 03:25:47.626890105 +0000 UTC m=+0.124272771 container init c4bc70f3801e3b3b6523966d8cd3853aedb117e2ed57ff893d64ced6e5491fbe (image=quay.io/ceph/ceph:v18, name=wizardly_kowalevski, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:47 compute-0 podman[91674]: 2025-11-22 03:25:47.63275962 +0000 UTC m=+0.130142257 container start c4bc70f3801e3b3b6523966d8cd3853aedb117e2ed57ff893d64ced6e5491fbe (image=quay.io/ceph/ceph:v18, name=wizardly_kowalevski, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:25:47 compute-0 podman[91674]: 2025-11-22 03:25:47.637236603 +0000 UTC m=+0.134619270 container attach c4bc70f3801e3b3b6523966d8cd3853aedb117e2ed57ff893d64ced6e5491fbe (image=quay.io/ceph/ceph:v18, name=wizardly_kowalevski, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 03:25:47 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1185012923; not ready for session (expect reconnect)
Nov 22 03:25:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:47 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Nov 22 03:25:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Nov 22 03:25:48 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/4004892304,v1:192.168.122.100:6811/4004892304]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 03:25:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Nov 22 03:25:48 compute-0 ceph-osd[90752]: osd.2 0 done with init, starting boot process
Nov 22 03:25:48 compute-0 ceph-osd[90752]: osd.2 0 start_boot
Nov 22 03:25:48 compute-0 ceph-osd[90752]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Nov 22 03:25:48 compute-0 ceph-osd[90752]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Nov 22 03:25:48 compute-0 ceph-osd[90752]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Nov 22 03:25:48 compute-0 ceph-osd[90752]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Nov 22 03:25:48 compute-0 ceph-osd[90752]: osd.2 0  bench count 12288000 bsize 4 KiB
Nov 22 03:25:48 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1185012923,v1:192.168.122.100:6807/1185012923] boot
Nov 22 03:25:48 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Nov 22 03:25:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Nov 22 03:25:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:48 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:48 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 16 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=16 pruub=13.808191299s) [] r=-1 lpr=16 pi=[13,16)/1 crt=0'0 mlcod 0'0 active pruub 26.099430084s@ mbc={}] start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:25:48 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 16 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=16 pruub=13.808191299s) [] r=-1 lpr=16 pi=[13,16)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.099430084s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:25:48 compute-0 ceph-mon[75011]: from='osd.2 [v2:192.168.122.100:6810/4004892304,v1:192.168.122.100:6811/4004892304]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Nov 22 03:25:48 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2127919432' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:25:48 compute-0 ceph-mon[75011]: osdmap e15: 3 total, 1 up, 3 in
Nov 22 03:25:48 compute-0 ceph-mon[75011]: from='osd.2 [v2:192.168.122.100:6810/4004892304,v1:192.168.122.100:6811/4004892304]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Nov 22 03:25:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:48 compute-0 ceph-mon[75011]: OSD bench result of 3333.146007 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 03:25:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:48 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4004892304; not ready for session (expect reconnect)
Nov 22 03:25:48 compute-0 ceph-osd[89585]: osd.1 16 state: booting -> active
Nov 22 03:25:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 pi=[11,16)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:25:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 16 pg[3.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 pi=[15,16)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:25:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:48 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:48 compute-0 podman[91796]: 2025-11-22 03:25:48.183296328 +0000 UTC m=+0.123522458 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 03:25:48 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1769003972' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:25:48 compute-0 podman[91796]: 2025-11-22 03:25:48.304766086 +0000 UTC m=+0.244992196 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:48 compute-0 sudo[91670]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:48 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Nov 22 03:25:49 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4004892304; not ready for session (expect reconnect)
Nov 22 03:25:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:49 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:49 compute-0 sudo[91929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1769003972' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:25:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Nov 22 03:25:49 compute-0 wizardly_kowalevski[91710]: pool 'backups' created
Nov 22 03:25:49 compute-0 sudo[91929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:49 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Nov 22 03:25:49 compute-0 sudo[91929]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:49 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:49 compute-0 systemd[1]: libpod-c4bc70f3801e3b3b6523966d8cd3853aedb117e2ed57ff893d64ced6e5491fbe.scope: Deactivated successfully.
Nov 22 03:25:49 compute-0 podman[91674]: 2025-11-22 03:25:49.141020153 +0000 UTC m=+1.638402850 container died c4bc70f3801e3b3b6523966d8cd3853aedb117e2ed57ff893d64ced6e5491fbe (image=quay.io/ceph/ceph:v18, name=wizardly_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:25:49 compute-0 ceph-mon[75011]: purged_snaps scrub starts
Nov 22 03:25:49 compute-0 ceph-mon[75011]: purged_snaps scrub ok
Nov 22 03:25:49 compute-0 ceph-mon[75011]: pgmap v41: 3 pgs: 1 active+clean, 2 unknown; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail
Nov 22 03:25:49 compute-0 ceph-mon[75011]: from='osd.2 [v2:192.168.122.100:6810/4004892304,v1:192.168.122.100:6811/4004892304]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Nov 22 03:25:49 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:25:49 compute-0 ceph-mon[75011]: osd.1 [v2:192.168.122.100:6806/1185012923,v1:192.168.122.100:6807/1185012923] boot
Nov 22 03:25:49 compute-0 ceph-mon[75011]: osdmap e16: 3 total, 2 up, 3 in
Nov 22 03:25:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Nov 22 03:25:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:49 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1769003972' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:25:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:49 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 17 pg[1.0( empty local-lis/les=16/17 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 pi=[11,16)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:25:49 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=16/17 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=16) [1] r=0 lpr=16 pi=[15,16)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:25:49 compute-0 sudo[91955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:49 compute-0 sudo[91955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:49 compute-0 sudo[91955]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1bc7e1dbc0a29643080c95c82ac8071acf53c86e0fdd8743c7b85bc4f12a3f7-merged.mount: Deactivated successfully.
Nov 22 03:25:49 compute-0 sudo[91992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:49 compute-0 sudo[91992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:49 compute-0 sudo[91992]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:49 compute-0 podman[91674]: 2025-11-22 03:25:49.290700054 +0000 UTC m=+1.788082681 container remove c4bc70f3801e3b3b6523966d8cd3853aedb117e2ed57ff893d64ced6e5491fbe (image=quay.io/ceph/ceph:v18, name=wizardly_kowalevski, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 22 03:25:49 compute-0 systemd[1]: libpod-conmon-c4bc70f3801e3b3b6523966d8cd3853aedb117e2ed57ff893d64ced6e5491fbe.scope: Deactivated successfully.
Nov 22 03:25:49 compute-0 sudo[91614]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v44: 4 pgs: 1 unknown, 2 creating+peering, 1 active+clean; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 22 03:25:49 compute-0 sudo[92017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- inventory --format=json-pretty --filter-for-batch
Nov 22 03:25:49 compute-0 sudo[92017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:49 compute-0 sudo[92065]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uekamgxuseomxziqwjrcijlzflcmkhhp ; /usr/bin/python3'
Nov 22 03:25:49 compute-0 sudo[92065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:49 compute-0 python3[92067]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:49 compute-0 podman[92101]: 2025-11-22 03:25:49.674624165 +0000 UTC m=+0.075769423 container create e6b8483eb2e1e435dfc475677f1020f5b940a0314447a8e4f5a7bf86e5b8c07d (image=quay.io/ceph/ceph:v18, name=youthful_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:25:49 compute-0 systemd[1]: Started libpod-conmon-e6b8483eb2e1e435dfc475677f1020f5b940a0314447a8e4f5a7bf86e5b8c07d.scope.
Nov 22 03:25:49 compute-0 podman[92101]: 2025-11-22 03:25:49.62587287 +0000 UTC m=+0.027018188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38842ae9a62641981c75e8680c4788389cb91e003d8a46840c7e148a07d0e38/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e38842ae9a62641981c75e8680c4788389cb91e003d8a46840c7e148a07d0e38/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:49 compute-0 podman[92119]: 2025-11-22 03:25:49.76782477 +0000 UTC m=+0.132203805 container create b699fa8cedd9ffdab7d4678e1202d9f43773f5b366267debb234cfc8d7d24926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cannon, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:25:49 compute-0 podman[92101]: 2025-11-22 03:25:49.781177911 +0000 UTC m=+0.182323149 container init e6b8483eb2e1e435dfc475677f1020f5b940a0314447a8e4f5a7bf86e5b8c07d (image=quay.io/ceph/ceph:v18, name=youthful_hypatia, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:49 compute-0 podman[92101]: 2025-11-22 03:25:49.79380971 +0000 UTC m=+0.194954978 container start e6b8483eb2e1e435dfc475677f1020f5b940a0314447a8e4f5a7bf86e5b8c07d (image=quay.io/ceph/ceph:v18, name=youthful_hypatia, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:49 compute-0 podman[92119]: 2025-11-22 03:25:49.717676673 +0000 UTC m=+0.082055768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:49 compute-0 podman[92101]: 2025-11-22 03:25:49.827619193 +0000 UTC m=+0.228764441 container attach e6b8483eb2e1e435dfc475677f1020f5b940a0314447a8e4f5a7bf86e5b8c07d (image=quay.io/ceph/ceph:v18, name=youthful_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:25:49 compute-0 systemd[1]: Started libpod-conmon-b699fa8cedd9ffdab7d4678e1202d9f43773f5b366267debb234cfc8d7d24926.scope.
Nov 22 03:25:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:49 compute-0 podman[92119]: 2025-11-22 03:25:49.911195939 +0000 UTC m=+0.275574984 container init b699fa8cedd9ffdab7d4678e1202d9f43773f5b366267debb234cfc8d7d24926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cannon, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:49 compute-0 podman[92119]: 2025-11-22 03:25:49.923091151 +0000 UTC m=+0.287470196 container start b699fa8cedd9ffdab7d4678e1202d9f43773f5b366267debb234cfc8d7d24926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cannon, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:25:49 compute-0 sharp_cannon[92143]: 167 167
Nov 22 03:25:49 compute-0 systemd[1]: libpod-b699fa8cedd9ffdab7d4678e1202d9f43773f5b366267debb234cfc8d7d24926.scope: Deactivated successfully.
Nov 22 03:25:49 compute-0 podman[92119]: 2025-11-22 03:25:49.961980189 +0000 UTC m=+0.326359293 container attach b699fa8cedd9ffdab7d4678e1202d9f43773f5b366267debb234cfc8d7d24926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 03:25:49 compute-0 podman[92119]: 2025-11-22 03:25:49.962571607 +0000 UTC m=+0.326950662 container died b699fa8cedd9ffdab7d4678e1202d9f43773f5b366267debb234cfc8d7d24926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-30ce3659fc5637bf1f64a373009d88ee4fe9b52bbe16801592c5bb3983864605-merged.mount: Deactivated successfully.
Nov 22 03:25:50 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4004892304; not ready for session (expect reconnect)
Nov 22 03:25:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:50 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:50 compute-0 podman[92119]: 2025-11-22 03:25:50.102978742 +0000 UTC m=+0.467357747 container remove b699fa8cedd9ffdab7d4678e1202d9f43773f5b366267debb234cfc8d7d24926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:25:50 compute-0 systemd[1]: libpod-conmon-b699fa8cedd9ffdab7d4678e1202d9f43773f5b366267debb234cfc8d7d24926.scope: Deactivated successfully.
Nov 22 03:25:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Nov 22 03:25:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:25:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1769003972' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:25:50 compute-0 ceph-mon[75011]: osdmap e17: 3 total, 2 up, 3 in
Nov 22 03:25:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Nov 22 03:25:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Nov 22 03:25:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:50 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 18 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:25:50 compute-0 podman[92187]: 2025-11-22 03:25:50.281623807 +0000 UTC m=+0.054835745 container create 1077994252ed3384cc97ff3d1f720810f837406c23eee22210e21df8216be1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 03:25:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3773033766' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:25:50 compute-0 podman[92187]: 2025-11-22 03:25:50.24983824 +0000 UTC m=+0.023050228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:50 compute-0 systemd[1]: Started libpod-conmon-1077994252ed3384cc97ff3d1f720810f837406c23eee22210e21df8216be1a1.scope.
Nov 22 03:25:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76be21b35c746437d5e51fc7fdfbedeb5ef798d499e04e5d26006730329c41d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76be21b35c746437d5e51fc7fdfbedeb5ef798d499e04e5d26006730329c41d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76be21b35c746437d5e51fc7fdfbedeb5ef798d499e04e5d26006730329c41d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76be21b35c746437d5e51fc7fdfbedeb5ef798d499e04e5d26006730329c41d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:50 compute-0 podman[92187]: 2025-11-22 03:25:50.411858305 +0000 UTC m=+0.185070233 container init 1077994252ed3384cc97ff3d1f720810f837406c23eee22210e21df8216be1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:50 compute-0 podman[92187]: 2025-11-22 03:25:50.42186998 +0000 UTC m=+0.195081908 container start 1077994252ed3384cc97ff3d1f720810f837406c23eee22210e21df8216be1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:50 compute-0 podman[92187]: 2025-11-22 03:25:50.427518855 +0000 UTC m=+0.200730783 container attach 1077994252ed3384cc97ff3d1f720810f837406c23eee22210e21df8216be1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pasteur, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:25:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:25:51 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4004892304; not ready for session (expect reconnect)
Nov 22 03:25:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:51 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:51 compute-0 ceph-mon[75011]: pgmap v44: 4 pgs: 1 unknown, 2 creating+peering, 1 active+clean; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 22 03:25:51 compute-0 ceph-mon[75011]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:25:51 compute-0 ceph-mon[75011]: osdmap e18: 3 total, 2 up, 3 in
Nov 22 03:25:51 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:51 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3773033766' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:25:51 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Nov 22 03:25:51 compute-0 ceph-mgr[75294]: [devicehealth INFO root] creating main.db for devicehealth
Nov 22 03:25:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3773033766' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:25:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Nov 22 03:25:51 compute-0 youthful_hypatia[92135]: pool 'images' created
Nov 22 03:25:51 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Nov 22 03:25:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:51 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:51 compute-0 systemd[1]: libpod-e6b8483eb2e1e435dfc475677f1020f5b940a0314447a8e4f5a7bf86e5b8c07d.scope: Deactivated successfully.
Nov 22 03:25:51 compute-0 podman[92101]: 2025-11-22 03:25:51.274096384 +0000 UTC m=+1.675241652 container died e6b8483eb2e1e435dfc475677f1020f5b940a0314447a8e4f5a7bf86e5b8c07d (image=quay.io/ceph/ceph:v18, name=youthful_hypatia, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:25:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v47: 5 pgs: 2 unknown, 2 creating+peering, 1 active+clean; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 22 03:25:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e38842ae9a62641981c75e8680c4788389cb91e003d8a46840c7e148a07d0e38-merged.mount: Deactivated successfully.
Nov 22 03:25:51 compute-0 podman[92101]: 2025-11-22 03:25:51.419841995 +0000 UTC m=+1.820987263 container remove e6b8483eb2e1e435dfc475677f1020f5b940a0314447a8e4f5a7bf86e5b8c07d (image=quay.io/ceph/ceph:v18, name=youthful_hypatia, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:51 compute-0 ceph-mgr[75294]: [devicehealth INFO root] Check health
Nov 22 03:25:51 compute-0 systemd[1]: libpod-conmon-e6b8483eb2e1e435dfc475677f1020f5b940a0314447a8e4f5a7bf86e5b8c07d.scope: Deactivated successfully.
Nov 22 03:25:51 compute-0 ceph-mgr[75294]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Nov 22 03:25:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 22 03:25:51 compute-0 sudo[92065]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:51 compute-0 sudo[92260]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Nov 22 03:25:51 compute-0 sudo[92260]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 03:25:51 compute-0 sudo[92260]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Nov 22 03:25:51 compute-0 sudo[92260]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
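
[annotation] The 'smart' admin-socket dispatch/finished pair above is backed by the smartctl call that sudo logs (/usr/sbin/smartctl -x --json=o /dev/vda); the mgr devicehealth module parses that JSON per daemon, and the earlier "Fail to parse JSON result from daemon osd.2 ()" shows the empty-string case while osd.2 was still booting. A minimal sketch of that collect-and-parse step, assuming smartctl is installed; the function name is illustrative, not Ceph's:

    import json
    import subprocess

    def collect_smart(device):
        # Run smartctl the way the sudo line above shows, returning parsed
        # JSON, or None when output is empty/invalid (the osd.2 case here).
        try:
            proc = subprocess.run(
                ["sudo", "/usr/sbin/smartctl", "-x", "--json=o", device],
                capture_output=True, text=True, timeout=30,
            )
        except (OSError, subprocess.TimeoutExpired):
            return None
        if not proc.stdout.strip():
            return None
        try:
            return json.loads(proc.stdout)
        except json.JSONDecodeError:
            return None

    print(collect_smart("/dev/vda"))
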
Nov 22 03:25:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Nov 22 03:25:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 03:25:51 compute-0 sudo[92644]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtolrhwkbmyyxjbeipfrgyywhlrkwwgm ; /usr/bin/python3'
Nov 22 03:25:51 compute-0 sudo[92644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:51 compute-0 python3[92725]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
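
[annotation] This Ansible task shells out to the containerized ceph CLI rather than a host binary: podman runs with host networking/IPC, bind-mounts /etc/ceph (the :z suffix relabels for SELinux), and overrides the image entrypoint to ceph. A hedged sketch of the same invocation from Python, mirroring the flags in the logged command (the extra assimilate_ceph.conf volume is omitted for brevity):

    import subprocess

    def ceph_in_container(*args):
        # Wrap the containerized ceph CLI the way the Ansible task above does.
        cmd = [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
            "--fsid", "7adcc38b-6484-5de6-b879-33a0309153df",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            *args,
        ]
        return subprocess.run(cmd, capture_output=True, text=True,
                              check=True).stdout

    # Same pool create the task runs; on success the container prints
    # "pool 'cephfs.cephfs.meta' created", as logged below.
    print(ceph_in_container("osd", "pool", "create", "cephfs.cephfs.meta",
                            "replicated_rule", "--autoscale-mode", "on"))
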
Nov 22 03:25:51 compute-0 podman[93220]: 2025-11-22 03:25:51.918594357 +0000 UTC m=+0.079403330 container create 92f50e7d18955853631ce85e414c5bfb9d26a97e3958c19009374a34f2e41d16 (image=quay.io/ceph/ceph:v18, name=friendly_robinson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:25:51 compute-0 systemd[1]: Started libpod-conmon-92f50e7d18955853631ce85e414c5bfb9d26a97e3958c19009374a34f2e41d16.scope.
Nov 22 03:25:51 compute-0 podman[93220]: 2025-11-22 03:25:51.884350526 +0000 UTC m=+0.045159549 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8490ca27166f7e97ed8cc3d8ea8e8ffeb27a92efcbf24fd3fbb8a734fde1b8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa8490ca27166f7e97ed8cc3d8ea8e8ffeb27a92efcbf24fd3fbb8a734fde1b8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:52 compute-0 podman[93220]: 2025-11-22 03:25:52.012663497 +0000 UTC m=+0.173472450 container init 92f50e7d18955853631ce85e414c5bfb9d26a97e3958c19009374a34f2e41d16 (image=quay.io/ceph/ceph:v18, name=friendly_robinson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:25:52 compute-0 podman[93220]: 2025-11-22 03:25:52.026043126 +0000 UTC m=+0.186852079 container start 92f50e7d18955853631ce85e414c5bfb9d26a97e3958c19009374a34f2e41d16 (image=quay.io/ceph/ceph:v18, name=friendly_robinson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:52 compute-0 podman[93220]: 2025-11-22 03:25:52.039356919 +0000 UTC m=+0.200165852 container attach 92f50e7d18955853631ce85e414c5bfb9d26a97e3958c19009374a34f2e41d16 (image=quay.io/ceph/ceph:v18, name=friendly_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:25:52 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4004892304; not ready for session (expect reconnect)
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:52 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]: [
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:     {
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:         "available": false,
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:         "ceph_device": false,
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:         "lsm_data": {},
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:         "lvs": [],
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:         "path": "/dev/sr0",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:         "rejected_reasons": [
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "Has a FileSystem",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "Insufficient space (<5GB)"
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:         ],
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:         "sys_api": {
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "actuators": null,
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "device_nodes": "sr0",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "devname": "sr0",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "human_readable_size": "482.00 KB",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "id_bus": "ata",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "model": "QEMU DVD-ROM",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "nr_requests": "2",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "parent": "/dev/sr0",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "partitions": {},
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "path": "/dev/sr0",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "removable": "1",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "rev": "2.5+",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "ro": "0",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "rotational": "1",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "sas_address": "",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "sas_device_handle": "",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "scheduler_mode": "mq-deadline",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "sectors": 0,
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "sectorsize": "2048",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "size": 493568.0,
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "support_discard": "2048",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "type": "disk",
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:             "vendor": "QEMU"
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:         }
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]:     }
Nov 22 03:25:52 compute-0 quizzical_pasteur[92206]: ]
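
[annotation] The JSON array printed by the quizzical_pasteur container above is a ceph-volume inventory style report; cephadm caches it under the config-key mgr/cephadm/host.compute-0.devices.0 set a few lines later. A minimal sketch of reading such a report and splitting usable from rejected devices, assuming the JSON has been saved to a file (the filename is hypothetical):

    import json

    with open("inventory.json") as f:   # hypothetical file holding the JSON above
        devices = json.load(f)

    for dev in devices:
        if dev["available"]:
            print("usable:", dev["path"])
        else:
            # /dev/sr0 above is rejected for:
            # "Has a FileSystem", "Insufficient space (<5GB)"
            print("rejected:", dev["path"], "->",
                  ", ".join(dev["rejected_reasons"]))
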
Nov 22 03:25:52 compute-0 systemd[1]: libpod-1077994252ed3384cc97ff3d1f720810f837406c23eee22210e21df8216be1a1.scope: Deactivated successfully.
Nov 22 03:25:52 compute-0 systemd[1]: libpod-1077994252ed3384cc97ff3d1f720810f837406c23eee22210e21df8216be1a1.scope: Consumed 1.683s CPU time.
Nov 22 03:25:52 compute-0 conmon[92206]: conmon 1077994252ed3384cc97 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1077994252ed3384cc97ff3d1f720810f837406c23eee22210e21df8216be1a1.scope/container/memory.events
Nov 22 03:25:52 compute-0 podman[92187]: 2025-11-22 03:25:52.115898125 +0000 UTC m=+1.889110043 container died 1077994252ed3384cc97ff3d1f720810f837406c23eee22210e21df8216be1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pasteur, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:25:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-a76be21b35c746437d5e51fc7fdfbedeb5ef798d499e04e5d26006730329c41d-merged.mount: Deactivated successfully.
Nov 22 03:25:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3773033766' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:25:52 compute-0 ceph-mon[75011]: osdmap e19: 3 total, 2 up, 3 in
Nov 22 03:25:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:52 compute-0 ceph-mon[75011]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Nov 22 03:25:52 compute-0 ceph-mon[75011]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Nov 22 03:25:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Nov 22 03:25:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:52 compute-0 podman[92187]: 2025-11-22 03:25:52.521960463 +0000 UTC m=+2.295172411 container remove 1077994252ed3384cc97ff3d1f720810f837406c23eee22210e21df8216be1a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.wbwfxq(active, since 76s)
Nov 22 03:25:52 compute-0 sudo[92017]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:25:52 compute-0 systemd[1]: libpod-conmon-1077994252ed3384cc97ff3d1f720810f837406c23eee22210e21df8216be1a1.scope: Deactivated successfully.
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 22 03:25:52 compute-0 ceph-mgr[75294]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43687k
Nov 22 03:25:52 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43687k
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Nov 22 03:25:52 compute-0 ceph-mgr[75294]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44736375: error parsing value: Value '44736375' is below minimum 939524096
Nov 22 03:25:52 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44736375: error parsing value: Value '44736375' is below minimum 939524096
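
[annotation] The INF/WRN pair above is cephadm's memory autotuner colliding with the mon's hard floor: the per-OSD share computed for this small VM (44736375 bytes, i.e. ~43687 KiB, hence "43687k") is far below osd_memory_target's minimum of 939524096 bytes (896 MiB), so the config set is refused and the per-OSD overrides were removed instead. A rough sketch of the floor check implied by the error text, with the input taken from the log (the real cephadm logic first subtracts other daemons' footprints from host memory):

    OSD_MEMORY_TARGET_MIN = 939_524_096   # 896 MiB, from the mon's error message

    def try_set_osd_memory_target(per_osd_bytes):
        # Reject values below the minimum, as the mon did above.
        if per_osd_bytes < OSD_MEMORY_TARGET_MIN:
            raise ValueError(
                f"Value '{per_osd_bytes}' is below minimum {OSD_MEMORY_TARGET_MIN}"
            )
        # otherwise cephadm would issue:
        #   ceph config set osd/host:compute-0 osd_memory_target <value>

    try_set_osd_memory_target(44_736_375)  # reproduces the refusal logged here
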
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:52 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 2b14e86f-6734-4e7c-9d1e-8fbef781a238 does not exist
Nov 22 03:25:52 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 71dd2fc2-81a3-4191-99f6-46f870ffd68d does not exist
Nov 22 03:25:52 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 43845ce4-9898-4674-aee8-2559b6fae1c2 does not exist
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 03:25:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2501620309' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:25:52 compute-0 sudo[94142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:52 compute-0 sudo[94142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:52 compute-0 sudo[94142]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:52 compute-0 sudo[94170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:52 compute-0 sudo[94170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:52 compute-0 sudo[94170]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:52 compute-0 sudo[94195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:52 compute-0 sudo[94195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:52 compute-0 sudo[94195]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:53 compute-0 ceph-mgr[75294]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/4004892304; not ready for session (expect reconnect)
Nov 22 03:25:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:53 compute-0 ceph-mgr[75294]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Nov 22 03:25:53 compute-0 sudo[94220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:25:53 compute-0 sudo[94220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:53 compute-0 ceph-osd[90752]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 9.973 iops: 2553.206 elapsed_sec: 1.175
Nov 22 03:25:53 compute-0 ceph-osd[90752]: log_channel(cluster) log [WRN] : OSD bench result of 2553.206257 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
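
[annotation] osd.2's startup self-benchmark above measures ~2553 IOPS, which fails the plausibility window of 50 to 500 IOPS that the mclock scheduler applies here, so the configured capacity stays at the 315 IOPS default and the log recommends measuring with an external tool (e.g. fio) and overriding osd_mclock_max_capacity_iops_[hdd|ssd] instead. A sketch of that acceptance check, with the window and default taken from the warning itself:

    IOPS_LOW, IOPS_HIGH = 50.0, 500.0   # threshold range from the warning
    DEFAULT_IOPS = 315.0                # current capacity, left unchanged above

    def accept_bench_result(measured_iops):
        # Return the IOPS capacity mclock should use after an OSD bench run.
        if IOPS_LOW <= measured_iops <= IOPS_HIGH:
            return measured_iops        # plausible: adopt the measured value
        return DEFAULT_IOPS             # implausible (2553.2 here): keep default

    print(accept_bench_result(2553.206257))   # -> 315.0, capacity unchanged
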
Nov 22 03:25:53 compute-0 ceph-osd[90752]: osd.2 0 waiting for initial osdmap
Nov 22 03:25:53 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2[90748]: 2025-11-22T03:25:53.120+0000 7f7e8a5a3640 -1 osd.2 0 waiting for initial osdmap
Nov 22 03:25:53 compute-0 ceph-osd[90752]: osd.2 19 crush map has features 288514051259236352, adjusting msgr requires for clients
Nov 22 03:25:53 compute-0 ceph-osd[90752]: osd.2 19 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Nov 22 03:25:53 compute-0 ceph-osd[90752]: osd.2 19 crush map has features 3314933000852226048, adjusting msgr requires for osds
Nov 22 03:25:53 compute-0 ceph-osd[90752]: osd.2 19 check_osdmap_features require_osd_release unknown -> reef
Nov 22 03:25:53 compute-0 ceph-osd[90752]: osd.2 19 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 03:25:53 compute-0 ceph-osd[90752]: osd.2 19 set_numa_affinity not setting numa affinity
Nov 22 03:25:53 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-osd-2[90748]: 2025-11-22T03:25:53.172+0000 7f7e85bcb640 -1 osd.2 19 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Nov 22 03:25:53 compute-0 ceph-osd[90752]: osd.2 19 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Nov 22 03:25:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v48: 5 pgs: 1 unknown, 2 creating+peering, 2 active+clean; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 22 03:25:53 compute-0 ceph-mon[75011]: pgmap v47: 5 pgs: 2 unknown, 2 creating+peering, 1 active+clean; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 22 03:25:53 compute-0 ceph-mon[75011]: mgrmap e9: compute-0.wbwfxq(active, since 76s)
Nov 22 03:25:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:25:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Nov 22 03:25:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Nov 22 03:25:53 compute-0 ceph-mon[75011]: Adjusting osd_memory_target on compute-0 to 43687k
Nov 22 03:25:53 compute-0 ceph-mon[75011]: Unable to set osd_memory_target on compute-0 to 44736375: error parsing value: Value '44736375' is below minimum 939524096
Nov 22 03:25:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:25:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:25:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:25:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:25:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:25:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2501620309' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:25:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:53 compute-0 podman[94288]: 2025-11-22 03:25:53.501551892 +0000 UTC m=+0.070983251 container create 5ca749f467909bfe119decd7ec8ab9d4e2b5850d0985a174cca4eeee3112f48c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:53 compute-0 systemd[1]: Started libpod-conmon-5ca749f467909bfe119decd7ec8ab9d4e2b5850d0985a174cca4eeee3112f48c.scope.
Nov 22 03:25:53 compute-0 podman[94288]: 2025-11-22 03:25:53.470451748 +0000 UTC m=+0.039883157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:53 compute-0 podman[94288]: 2025-11-22 03:25:53.623734827 +0000 UTC m=+0.193166256 container init 5ca749f467909bfe119decd7ec8ab9d4e2b5850d0985a174cca4eeee3112f48c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:25:53 compute-0 podman[94288]: 2025-11-22 03:25:53.637751608 +0000 UTC m=+0.207182977 container start 5ca749f467909bfe119decd7ec8ab9d4e2b5850d0985a174cca4eeee3112f48c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:53 compute-0 gallant_babbage[94304]: 167 167
Nov 22 03:25:53 compute-0 systemd[1]: libpod-5ca749f467909bfe119decd7ec8ab9d4e2b5850d0985a174cca4eeee3112f48c.scope: Deactivated successfully.
Nov 22 03:25:53 compute-0 podman[94288]: 2025-11-22 03:25:53.646404057 +0000 UTC m=+0.215835426 container attach 5ca749f467909bfe119decd7ec8ab9d4e2b5850d0985a174cca4eeee3112f48c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:53 compute-0 podman[94288]: 2025-11-22 03:25:53.648035701 +0000 UTC m=+0.217467090 container died 5ca749f467909bfe119decd7ec8ab9d4e2b5850d0985a174cca4eeee3112f48c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_babbage, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Nov 22 03:25:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d299a44c9175a9cbeb22bbcdd3aa5846b1b6af52cb09d69d825fc76e0d6f3717-merged.mount: Deactivated successfully.
Nov 22 03:25:53 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2501620309' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:25:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Nov 22 03:25:53 compute-0 friendly_robinson[93651]: pool 'cephfs.cephfs.meta' created
Nov 22 03:25:53 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/4004892304,v1:192.168.122.100:6811/4004892304] boot
Nov 22 03:25:53 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Nov 22 03:25:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Nov 22 03:25:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:53 compute-0 ceph-osd[90752]: osd.2 20 state: booting -> active
Nov 22 03:25:53 compute-0 systemd[1]: libpod-92f50e7d18955853631ce85e414c5bfb9d26a97e3958c19009374a34f2e41d16.scope: Deactivated successfully.
Nov 22 03:25:53 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 20 pg[5.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 pi=[19,20)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:25:53 compute-0 podman[94288]: 2025-11-22 03:25:53.741484324 +0000 UTC m=+0.310915673 container remove 5ca749f467909bfe119decd7ec8ab9d4e2b5850d0985a174cca4eeee3112f48c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:53 compute-0 podman[93220]: 2025-11-22 03:25:53.74204974 +0000 UTC m=+1.902858713 container died 92f50e7d18955853631ce85e414c5bfb9d26a97e3958c19009374a34f2e41d16 (image=quay.io/ceph/ceph:v18, name=friendly_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:25:53 compute-0 systemd[1]: libpod-conmon-5ca749f467909bfe119decd7ec8ab9d4e2b5850d0985a174cca4eeee3112f48c.scope: Deactivated successfully.
Nov 22 03:25:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa8490ca27166f7e97ed8cc3d8ea8e8ffeb27a92efcbf24fd3fbb8a734fde1b8-merged.mount: Deactivated successfully.
Nov 22 03:25:53 compute-0 podman[93220]: 2025-11-22 03:25:53.842279413 +0000 UTC m=+2.003088356 container remove 92f50e7d18955853631ce85e414c5bfb9d26a97e3958c19009374a34f2e41d16 (image=quay.io/ceph/ceph:v18, name=friendly_robinson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:53 compute-0 systemd[1]: libpod-conmon-92f50e7d18955853631ce85e414c5bfb9d26a97e3958c19009374a34f2e41d16.scope: Deactivated successfully.
Nov 22 03:25:53 compute-0 sudo[92644]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:53 compute-0 podman[94342]: 2025-11-22 03:25:53.922145218 +0000 UTC m=+0.045448314 container create cb0dbb7eb1a5845e12cb62cc2fb70683774bb5bf9640625ac9595eb863d3b55e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcnulty, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 03:25:53 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 20 pg[6.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:25:53 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 20 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=20 pruub=7.923672199s) [2] r=-1 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.099430084s@ mbc={}] start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:25:53 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 20 pg[2.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=20 pruub=7.923636913s) [2] r=-1 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 26.099430084s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:25:53 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [2] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:25:53 compute-0 systemd[1]: Started libpod-conmon-cb0dbb7eb1a5845e12cb62cc2fb70683774bb5bf9640625ac9595eb863d3b55e.scope.
Nov 22 03:25:53 compute-0 sudo[94382]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqzmdxmafbtgtbujlwuvmsetsvjwbnho ; /usr/bin/python3'
Nov 22 03:25:53 compute-0 sudo[94382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:53 compute-0 podman[94342]: 2025-11-22 03:25:53.898828731 +0000 UTC m=+0.022131887 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a378ffd2b6f0a996bcf405b08b33bf788065e57cbc813c044697b2c3a4bcc296/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a378ffd2b6f0a996bcf405b08b33bf788065e57cbc813c044697b2c3a4bcc296/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a378ffd2b6f0a996bcf405b08b33bf788065e57cbc813c044697b2c3a4bcc296/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a378ffd2b6f0a996bcf405b08b33bf788065e57cbc813c044697b2c3a4bcc296/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a378ffd2b6f0a996bcf405b08b33bf788065e57cbc813c044697b2c3a4bcc296/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:54 compute-0 podman[94342]: 2025-11-22 03:25:54.043142103 +0000 UTC m=+0.166445289 container init cb0dbb7eb1a5845e12cb62cc2fb70683774bb5bf9640625ac9595eb863d3b55e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcnulty, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:25:54 compute-0 podman[94342]: 2025-11-22 03:25:54.052479269 +0000 UTC m=+0.175782405 container start cb0dbb7eb1a5845e12cb62cc2fb70683774bb5bf9640625ac9595eb863d3b55e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcnulty, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:25:54 compute-0 podman[94342]: 2025-11-22 03:25:54.056524336 +0000 UTC m=+0.179827472 container attach cb0dbb7eb1a5845e12cb62cc2fb70683774bb5bf9640625ac9595eb863d3b55e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcnulty, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:54 compute-0 python3[94386]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:54 compute-0 podman[94390]: 2025-11-22 03:25:54.199938274 +0000 UTC m=+0.062234529 container create a7c7004f1fa6c54f44cdb66fbc2a61925f0a9b028cc26d0068caf5edd35de04d (image=quay.io/ceph/ceph:v18, name=great_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:25:54 compute-0 systemd[1]: Started libpod-conmon-a7c7004f1fa6c54f44cdb66fbc2a61925f0a9b028cc26d0068caf5edd35de04d.scope.
Nov 22 03:25:54 compute-0 podman[94390]: 2025-11-22 03:25:54.172736004 +0000 UTC m=+0.035032249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb37f255201498657b8220a7dcba079dbe9411dd6a051be7a3c62bc61ab9be4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aeb37f255201498657b8220a7dcba079dbe9411dd6a051be7a3c62bc61ab9be4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:54 compute-0 podman[94390]: 2025-11-22 03:25:54.315779081 +0000 UTC m=+0.178075336 container init a7c7004f1fa6c54f44cdb66fbc2a61925f0a9b028cc26d0068caf5edd35de04d (image=quay.io/ceph/ceph:v18, name=great_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:25:54 compute-0 podman[94390]: 2025-11-22 03:25:54.323304691 +0000 UTC m=+0.185600906 container start a7c7004f1fa6c54f44cdb66fbc2a61925f0a9b028cc26d0068caf5edd35de04d (image=quay.io/ceph/ceph:v18, name=great_thompson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:54 compute-0 podman[94390]: 2025-11-22 03:25:54.339415937 +0000 UTC m=+0.201712182 container attach a7c7004f1fa6c54f44cdb66fbc2a61925f0a9b028cc26d0068caf5edd35de04d (image=quay.io/ceph/ceph:v18, name=great_thompson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 03:25:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Nov 22 03:25:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Nov 22 03:25:54 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Nov 22 03:25:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Nov 22 03:25:54 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3831872402' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:25:54 compute-0 ceph-mon[75011]: OSD bench result of 2553.206257 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Nov 22 03:25:54 compute-0 ceph-mon[75011]: pgmap v48: 5 pgs: 1 unknown, 2 creating+peering, 2 active+clean; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Nov 22 03:25:54 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2501620309' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:25:54 compute-0 ceph-mon[75011]: osd.2 [v2:192.168.122.100:6810/4004892304,v1:192.168.122.100:6811/4004892304] boot
Nov 22 03:25:54 compute-0 ceph-mon[75011]: osdmap e20: 3 total, 3 up, 3 in
Nov 22 03:25:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Nov 22 03:25:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=20/21 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=20) [2] r=0 lpr=20 pi=[19,20)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:25:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 21 pg[6.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [0] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:25:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 21 pg[2.0( empty local-lis/les=20/21 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=20) [2] r=0 lpr=20 pi=[13,20)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:25:55 compute-0 affectionate_mcnulty[94384]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:25:55 compute-0 affectionate_mcnulty[94384]: --> relative data size: 1.0
Nov 22 03:25:55 compute-0 affectionate_mcnulty[94384]: --> All data devices are unavailable
Nov 22 03:25:55 compute-0 systemd[1]: libpod-cb0dbb7eb1a5845e12cb62cc2fb70683774bb5bf9640625ac9595eb863d3b55e.scope: Deactivated successfully.
Nov 22 03:25:55 compute-0 systemd[1]: libpod-cb0dbb7eb1a5845e12cb62cc2fb70683774bb5bf9640625ac9595eb863d3b55e.scope: Consumed 1.127s CPU time.
Nov 22 03:25:55 compute-0 conmon[94384]: conmon cb0dbb7eb1a5845e12cb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cb0dbb7eb1a5845e12cb62cc2fb70683774bb5bf9640625ac9595eb863d3b55e.scope/container/memory.events
Nov 22 03:25:55 compute-0 podman[94456]: 2025-11-22 03:25:55.327688545 +0000 UTC m=+0.032512421 container died cb0dbb7eb1a5845e12cb62cc2fb70683774bb5bf9640625ac9595eb863d3b55e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v51: 6 pgs: 1 creating+peering, 1 peering, 1 unknown, 3 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:25:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a378ffd2b6f0a996bcf405b08b33bf788065e57cbc813c044697b2c3a4bcc296-merged.mount: Deactivated successfully.
Nov 22 03:25:55 compute-0 podman[94456]: 2025-11-22 03:25:55.415145612 +0000 UTC m=+0.119969398 container remove cb0dbb7eb1a5845e12cb62cc2fb70683774bb5bf9640625ac9595eb863d3b55e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:25:55 compute-0 systemd[1]: libpod-conmon-cb0dbb7eb1a5845e12cb62cc2fb70683774bb5bf9640625ac9595eb863d3b55e.scope: Deactivated successfully.
Nov 22 03:25:55 compute-0 sudo[94220]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:25:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e21 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:25:55 compute-0 sudo[94472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:55 compute-0 sudo[94472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:55 compute-0 sudo[94472]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:55 compute-0 sudo[94497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:55 compute-0 sudo[94497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:55 compute-0 sudo[94497]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:55 compute-0 sudo[94522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:55 compute-0 sudo[94522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:55 compute-0 sudo[94522]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Nov 22 03:25:55 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3831872402' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:25:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Nov 22 03:25:55 compute-0 great_thompson[94406]: pool 'cephfs.cephfs.data' created
Nov 22 03:25:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Nov 22 03:25:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 22 pg[7.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:25:55 compute-0 sudo[94547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:25:55 compute-0 systemd[1]: libpod-a7c7004f1fa6c54f44cdb66fbc2a61925f0a9b028cc26d0068caf5edd35de04d.scope: Deactivated successfully.
Nov 22 03:25:55 compute-0 sudo[94547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:55 compute-0 podman[94390]: 2025-11-22 03:25:55.852115252 +0000 UTC m=+1.714411527 container died a7c7004f1fa6c54f44cdb66fbc2a61925f0a9b028cc26d0068caf5edd35de04d (image=quay.io/ceph/ceph:v18, name=great_thompson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:25:55 compute-0 ceph-mon[75011]: osdmap e21: 3 total, 3 up, 3 in
Nov 22 03:25:55 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3831872402' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Nov 22 03:25:55 compute-0 ceph-mon[75011]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:25:55 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3831872402' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Nov 22 03:25:55 compute-0 ceph-mon[75011]: osdmap e22: 3 total, 3 up, 3 in
Nov 22 03:25:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-aeb37f255201498657b8220a7dcba079dbe9411dd6a051be7a3c62bc61ab9be4-merged.mount: Deactivated successfully.
Nov 22 03:25:55 compute-0 podman[94390]: 2025-11-22 03:25:55.935474099 +0000 UTC m=+1.797770344 container remove a7c7004f1fa6c54f44cdb66fbc2a61925f0a9b028cc26d0068caf5edd35de04d (image=quay.io/ceph/ceph:v18, name=great_thompson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:25:55 compute-0 systemd[1]: libpod-conmon-a7c7004f1fa6c54f44cdb66fbc2a61925f0a9b028cc26d0068caf5edd35de04d.scope: Deactivated successfully.
Nov 22 03:25:55 compute-0 sudo[94382]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:56 compute-0 sudo[94634]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lipylevxzqpfftxkupnferrunocahfty ; /usr/bin/python3'
Nov 22 03:25:56 compute-0 sudo[94634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:56 compute-0 python3[94638]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:56 compute-0 podman[94653]: 2025-11-22 03:25:56.342932728 +0000 UTC m=+0.087353924 container create 68a3a399b2b98d1632b5962c985f0996f950ac6746c1041fe21b41cc11cdf2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_thompson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:56 compute-0 podman[94653]: 2025-11-22 03:25:56.296964011 +0000 UTC m=+0.041385287 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:56 compute-0 systemd[1]: Started libpod-conmon-68a3a399b2b98d1632b5962c985f0996f950ac6746c1041fe21b41cc11cdf2a2.scope.
Nov 22 03:25:56 compute-0 podman[94666]: 2025-11-22 03:25:56.444912379 +0000 UTC m=+0.113589619 container create 24fb5291f3fcf7aeee10bb23425d71cb0b892321440a44ecfa8576485fa145c1 (image=quay.io/ceph/ceph:v18, name=elastic_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:56 compute-0 podman[94666]: 2025-11-22 03:25:56.396211069 +0000 UTC m=+0.064888379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:56 compute-0 systemd[1]: Started libpod-conmon-24fb5291f3fcf7aeee10bb23425d71cb0b892321440a44ecfa8576485fa145c1.scope.
Nov 22 03:25:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee2c6c6eea414caeeaf54c674a017808e6065250ff4feef8f98f4977a895a17/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ee2c6c6eea414caeeaf54c674a017808e6065250ff4feef8f98f4977a895a17/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:56 compute-0 podman[94653]: 2025-11-22 03:25:56.58471 +0000 UTC m=+0.329131296 container init 68a3a399b2b98d1632b5962c985f0996f950ac6746c1041fe21b41cc11cdf2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:56 compute-0 podman[94653]: 2025-11-22 03:25:56.595060734 +0000 UTC m=+0.339481970 container start 68a3a399b2b98d1632b5962c985f0996f950ac6746c1041fe21b41cc11cdf2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:56 compute-0 sleepy_thompson[94683]: 167 167
Nov 22 03:25:56 compute-0 systemd[1]: libpod-68a3a399b2b98d1632b5962c985f0996f950ac6746c1041fe21b41cc11cdf2a2.scope: Deactivated successfully.
Nov 22 03:25:56 compute-0 podman[94666]: 2025-11-22 03:25:56.60622856 +0000 UTC m=+0.274905840 container init 24fb5291f3fcf7aeee10bb23425d71cb0b892321440a44ecfa8576485fa145c1 (image=quay.io/ceph/ceph:v18, name=elastic_joliot, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:56 compute-0 podman[94666]: 2025-11-22 03:25:56.618021293 +0000 UTC m=+0.286698503 container start 24fb5291f3fcf7aeee10bb23425d71cb0b892321440a44ecfa8576485fa145c1 (image=quay.io/ceph/ceph:v18, name=elastic_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:25:56 compute-0 podman[94653]: 2025-11-22 03:25:56.624166775 +0000 UTC m=+0.368588021 container attach 68a3a399b2b98d1632b5962c985f0996f950ac6746c1041fe21b41cc11cdf2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:56 compute-0 podman[94653]: 2025-11-22 03:25:56.624838753 +0000 UTC m=+0.369259989 container died 68a3a399b2b98d1632b5962c985f0996f950ac6746c1041fe21b41cc11cdf2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_thompson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:56 compute-0 podman[94666]: 2025-11-22 03:25:56.695123244 +0000 UTC m=+0.363800514 container attach 24fb5291f3fcf7aeee10bb23425d71cb0b892321440a44ecfa8576485fa145c1 (image=quay.io/ceph/ceph:v18, name=elastic_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:25:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0db6dc1385d7d855e2b50e70f1708fe61749be486ee43eb94ce03e2fa709de3d-merged.mount: Deactivated successfully.
Nov 22 03:25:56 compute-0 podman[94653]: 2025-11-22 03:25:56.771101286 +0000 UTC m=+0.515522522 container remove 68a3a399b2b98d1632b5962c985f0996f950ac6746c1041fe21b41cc11cdf2a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_thompson, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:25:56 compute-0 systemd[1]: libpod-conmon-68a3a399b2b98d1632b5962c985f0996f950ac6746c1041fe21b41cc11cdf2a2.scope: Deactivated successfully.
Nov 22 03:25:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Nov 22 03:25:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Nov 22 03:25:56 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Nov 22 03:25:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [1] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:25:56 compute-0 ceph-mon[75011]: pgmap v51: 6 pgs: 1 creating+peering, 1 peering, 1 unknown, 3 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:25:56 compute-0 ceph-mon[75011]: osdmap e23: 3 total, 3 up, 3 in
Nov 22 03:25:57 compute-0 podman[94713]: 2025-11-22 03:25:57.01030795 +0000 UTC m=+0.073027185 container create 399e6b5143a025a94286455740de376265ae684418e5030039577fdd51d4ae75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mendeleev, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:25:57 compute-0 systemd[1]: Started libpod-conmon-399e6b5143a025a94286455740de376265ae684418e5030039577fdd51d4ae75.scope.
Nov 22 03:25:57 compute-0 podman[94713]: 2025-11-22 03:25:56.983976372 +0000 UTC m=+0.046695607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e05742119ac00dc741dee6be5913f15262d745708d5be72303b14d5dea197a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e05742119ac00dc741dee6be5913f15262d745708d5be72303b14d5dea197a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e05742119ac00dc741dee6be5913f15262d745708d5be72303b14d5dea197a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2e05742119ac00dc741dee6be5913f15262d745708d5be72303b14d5dea197a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:57 compute-0 podman[94713]: 2025-11-22 03:25:57.198848493 +0000 UTC m=+0.261567728 container init 399e6b5143a025a94286455740de376265ae684418e5030039577fdd51d4ae75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mendeleev, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:25:57 compute-0 podman[94713]: 2025-11-22 03:25:57.211632671 +0000 UTC m=+0.274351876 container start 399e6b5143a025a94286455740de376265ae684418e5030039577fdd51d4ae75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Nov 22 03:25:57 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1188532868' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 22 03:25:57 compute-0 podman[94713]: 2025-11-22 03:25:57.25277584 +0000 UTC m=+0.315495045 container attach 399e6b5143a025a94286455740de376265ae684418e5030039577fdd51d4ae75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 03:25:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v54: 7 pgs: 1 creating+peering, 1 peering, 1 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:25:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Nov 22 03:25:58 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1188532868' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 22 03:25:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Nov 22 03:25:58 compute-0 elastic_joliot[94688]: enabled application 'rbd' on pool 'vms'
Nov 22 03:25:58 compute-0 systemd[1]: libpod-24fb5291f3fcf7aeee10bb23425d71cb0b892321440a44ecfa8576485fa145c1.scope: Deactivated successfully.
Nov 22 03:25:58 compute-0 podman[94666]: 2025-11-22 03:25:58.064417562 +0000 UTC m=+1.733094772 container died 24fb5291f3fcf7aeee10bb23425d71cb0b892321440a44ecfa8576485fa145c1 (image=quay.io/ceph/ceph:v18, name=elastic_joliot, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:25:58 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]: {
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:     "0": [
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:         {
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "devices": [
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "/dev/loop3"
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             ],
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_name": "ceph_lv0",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_size": "21470642176",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "name": "ceph_lv0",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "tags": {
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.cluster_name": "ceph",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.crush_device_class": "",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.encrypted": "0",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.osd_id": "0",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.type": "block",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.vdo": "0"
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             },
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "type": "block",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "vg_name": "ceph_vg0"
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:         }
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:     ],
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:     "1": [
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:         {
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "devices": [
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "/dev/loop4"
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             ],
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_name": "ceph_lv1",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_size": "21470642176",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "name": "ceph_lv1",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "tags": {
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.cluster_name": "ceph",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.crush_device_class": "",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.encrypted": "0",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.osd_id": "1",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.type": "block",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.vdo": "0"
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             },
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "type": "block",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "vg_name": "ceph_vg1"
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:         }
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:     ],
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:     "2": [
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:         {
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "devices": [
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "/dev/loop5"
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             ],
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_name": "ceph_lv2",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_size": "21470642176",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "name": "ceph_lv2",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "tags": {
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.cluster_name": "ceph",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.crush_device_class": "",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.encrypted": "0",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.osd_id": "2",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.type": "block",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:                 "ceph.vdo": "0"
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             },
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "type": "block",
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:             "vg_name": "ceph_vg2"
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:         }
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]:     ]
Nov 22 03:25:58 compute-0 reverent_mendeleev[94748]: }
Nov 22 03:25:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1188532868' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Nov 22 03:25:58 compute-0 systemd[1]: libpod-399e6b5143a025a94286455740de376265ae684418e5030039577fdd51d4ae75.scope: Deactivated successfully.
Nov 22 03:25:58 compute-0 podman[94713]: 2025-11-22 03:25:58.20825554 +0000 UTC m=+1.270974785 container died 399e6b5143a025a94286455740de376265ae684418e5030039577fdd51d4ae75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mendeleev, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:25:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ee2c6c6eea414caeeaf54c674a017808e6065250ff4feef8f98f4977a895a17-merged.mount: Deactivated successfully.
Nov 22 03:25:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2e05742119ac00dc741dee6be5913f15262d745708d5be72303b14d5dea197a-merged.mount: Deactivated successfully.
Nov 22 03:25:58 compute-0 podman[94666]: 2025-11-22 03:25:58.432000934 +0000 UTC m=+2.100678154 container remove 24fb5291f3fcf7aeee10bb23425d71cb0b892321440a44ecfa8576485fa145c1 (image=quay.io/ceph/ceph:v18, name=elastic_joliot, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:25:58 compute-0 systemd[1]: libpod-conmon-24fb5291f3fcf7aeee10bb23425d71cb0b892321440a44ecfa8576485fa145c1.scope: Deactivated successfully.
Nov 22 03:25:58 compute-0 sudo[94634]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:58 compute-0 podman[94713]: 2025-11-22 03:25:58.461957228 +0000 UTC m=+1.524676463 container remove 399e6b5143a025a94286455740de376265ae684418e5030039577fdd51d4ae75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mendeleev, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:25:58 compute-0 systemd[1]: libpod-conmon-399e6b5143a025a94286455740de376265ae684418e5030039577fdd51d4ae75.scope: Deactivated successfully.
Nov 22 03:25:58 compute-0 sudo[94547]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:58 compute-0 sudo[94787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:58 compute-0 sudo[94787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:58 compute-0 sudo[94787]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:58 compute-0 sudo[94839]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqwhqoeghcjjcwqyswxzopjlpkjfrmyq ; /usr/bin/python3'
Nov 22 03:25:58 compute-0 sudo[94839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:25:58 compute-0 sudo[94833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:25:58 compute-0 sudo[94833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:58 compute-0 sudo[94833]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:58 compute-0 sudo[94863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:25:58 compute-0 sudo[94863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:58 compute-0 sudo[94863]: pam_unix(sudo:session): session closed for user root
Nov 22 03:25:58 compute-0 python3[94855]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:25:58 compute-0 sudo[94888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:25:58 compute-0 sudo[94888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:25:58 compute-0 podman[94889]: 2025-11-22 03:25:58.907755722 +0000 UTC m=+0.099530946 container create b5c83bd60a0406abc14ad63e73b3606bb12a85e93da3f706274344663a8417cc (image=quay.io/ceph/ceph:v18, name=awesome_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:25:58 compute-0 podman[94889]: 2025-11-22 03:25:58.855015895 +0000 UTC m=+0.046791189 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:25:58 compute-0 systemd[1]: Started libpod-conmon-b5c83bd60a0406abc14ad63e73b3606bb12a85e93da3f706274344663a8417cc.scope.
Nov 22 03:25:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b5336ef147ea7c2c4b8dea5cf93e271a677e61502518f717ea576fb9926f927/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b5336ef147ea7c2c4b8dea5cf93e271a677e61502518f717ea576fb9926f927/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:59 compute-0 podman[94889]: 2025-11-22 03:25:59.036868541 +0000 UTC m=+0.228643775 container init b5c83bd60a0406abc14ad63e73b3606bb12a85e93da3f706274344663a8417cc (image=quay.io/ceph/ceph:v18, name=awesome_boyd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:59 compute-0 podman[94889]: 2025-11-22 03:25:59.048399157 +0000 UTC m=+0.240174371 container start b5c83bd60a0406abc14ad63e73b3606bb12a85e93da3f706274344663a8417cc (image=quay.io/ceph/ceph:v18, name=awesome_boyd, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Nov 22 03:25:59 compute-0 podman[94889]: 2025-11-22 03:25:59.055535276 +0000 UTC m=+0.247310490 container attach b5c83bd60a0406abc14ad63e73b3606bb12a85e93da3f706274344663a8417cc (image=quay.io/ceph/ceph:v18, name=awesome_boyd, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:25:59 compute-0 ceph-mon[75011]: pgmap v54: 7 pgs: 1 creating+peering, 1 peering, 1 unknown, 4 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:25:59 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1188532868' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Nov 22 03:25:59 compute-0 ceph-mon[75011]: osdmap e24: 3 total, 3 up, 3 in
Nov 22 03:25:59 compute-0 podman[94971]: 2025-11-22 03:25:59.257104903 +0000 UTC m=+0.068350941 container create 26382cbffa7ffb5fdd54ed38fa22e1b9cd6fb6918db8c9c4631ee6539fddbce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:59 compute-0 systemd[1]: Started libpod-conmon-26382cbffa7ffb5fdd54ed38fa22e1b9cd6fb6918db8c9c4631ee6539fddbce3.scope.
Nov 22 03:25:59 compute-0 podman[94971]: 2025-11-22 03:25:59.220482733 +0000 UTC m=+0.031728751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v56: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:25:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:59 compute-0 podman[94971]: 2025-11-22 03:25:59.362917324 +0000 UTC m=+0.174163342 container init 26382cbffa7ffb5fdd54ed38fa22e1b9cd6fb6918db8c9c4631ee6539fddbce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:59 compute-0 podman[94971]: 2025-11-22 03:25:59.373838134 +0000 UTC m=+0.185084142 container start 26382cbffa7ffb5fdd54ed38fa22e1b9cd6fb6918db8c9c4631ee6539fddbce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:59 compute-0 keen_kilby[94987]: 167 167
Nov 22 03:25:59 compute-0 systemd[1]: libpod-26382cbffa7ffb5fdd54ed38fa22e1b9cd6fb6918db8c9c4631ee6539fddbce3.scope: Deactivated successfully.
Nov 22 03:25:59 compute-0 podman[94971]: 2025-11-22 03:25:59.381620059 +0000 UTC m=+0.192866107 container attach 26382cbffa7ffb5fdd54ed38fa22e1b9cd6fb6918db8c9c4631ee6539fddbce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:25:59 compute-0 podman[94971]: 2025-11-22 03:25:59.386567131 +0000 UTC m=+0.197813169 container died 26382cbffa7ffb5fdd54ed38fa22e1b9cd6fb6918db8c9c4631ee6539fddbce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:25:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7995bf7242dc71d04e133907efcd90ebe933adf34328c205df4e457af08d4b9-merged.mount: Deactivated successfully.
Nov 22 03:25:59 compute-0 podman[94971]: 2025-11-22 03:25:59.450543825 +0000 UTC m=+0.261789863 container remove 26382cbffa7ffb5fdd54ed38fa22e1b9cd6fb6918db8c9c4631ee6539fddbce3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kilby, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:25:59 compute-0 systemd[1]: libpod-conmon-26382cbffa7ffb5fdd54ed38fa22e1b9cd6fb6918db8c9c4631ee6539fddbce3.scope: Deactivated successfully.
Nov 22 03:25:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Nov 22 03:25:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3979552934' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 22 03:25:59 compute-0 podman[95029]: 2025-11-22 03:25:59.63314942 +0000 UTC m=+0.048222488 container create fe1c9b88cf833d3e5e5b738f4de54e2fdce76e57f5ee011d46459625294bf7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:25:59 compute-0 systemd[1]: Started libpod-conmon-fe1c9b88cf833d3e5e5b738f4de54e2fdce76e57f5ee011d46459625294bf7b0.scope.
Nov 22 03:25:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6454651149abaa6c9cfef761cc83e5b6a6b73fcec1ad0210d9dc2ce26c24486a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:59 compute-0 podman[95029]: 2025-11-22 03:25:59.605342293 +0000 UTC m=+0.020415391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6454651149abaa6c9cfef761cc83e5b6a6b73fcec1ad0210d9dc2ce26c24486a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6454651149abaa6c9cfef761cc83e5b6a6b73fcec1ad0210d9dc2ce26c24486a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6454651149abaa6c9cfef761cc83e5b6a6b73fcec1ad0210d9dc2ce26c24486a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:25:59 compute-0 podman[95029]: 2025-11-22 03:25:59.725181657 +0000 UTC m=+0.140254795 container init fe1c9b88cf833d3e5e5b738f4de54e2fdce76e57f5ee011d46459625294bf7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:25:59 compute-0 podman[95029]: 2025-11-22 03:25:59.73208132 +0000 UTC m=+0.147154438 container start fe1c9b88cf833d3e5e5b738f4de54e2fdce76e57f5ee011d46459625294bf7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:25:59 compute-0 podman[95029]: 2025-11-22 03:25:59.739097676 +0000 UTC m=+0.154170854 container attach fe1c9b88cf833d3e5e5b738f4de54e2fdce76e57f5ee011d46459625294bf7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williams, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:26:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Nov 22 03:26:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3979552934' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Nov 22 03:26:00 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3979552934' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 22 03:26:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Nov 22 03:26:00 compute-0 awesome_boyd[94928]: enabled application 'rbd' on pool 'volumes'
Nov 22 03:26:00 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Nov 22 03:26:00 compute-0 systemd[1]: libpod-b5c83bd60a0406abc14ad63e73b3606bb12a85e93da3f706274344663a8417cc.scope: Deactivated successfully.
Nov 22 03:26:00 compute-0 podman[94889]: 2025-11-22 03:26:00.332081237 +0000 UTC m=+1.523856491 container died b5c83bd60a0406abc14ad63e73b3606bb12a85e93da3f706274344663a8417cc (image=quay.io/ceph/ceph:v18, name=awesome_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b5336ef147ea7c2c4b8dea5cf93e271a677e61502518f717ea576fb9926f927-merged.mount: Deactivated successfully.
Nov 22 03:26:00 compute-0 podman[94889]: 2025-11-22 03:26:00.439160353 +0000 UTC m=+1.630935587 container remove b5c83bd60a0406abc14ad63e73b3606bb12a85e93da3f706274344663a8417cc (image=quay.io/ceph/ceph:v18, name=awesome_boyd, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:26:00 compute-0 systemd[1]: libpod-conmon-b5c83bd60a0406abc14ad63e73b3606bb12a85e93da3f706274344663a8417cc.scope: Deactivated successfully.
Nov 22 03:26:00 compute-0 sudo[94839]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:00 compute-0 ceph-mon[75011]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:26:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:26:00 compute-0 sudo[95103]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxkhmcafhejhhurpxnvchjppdjxracjv ; /usr/bin/python3'
Nov 22 03:26:00 compute-0 sudo[95103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:00 compute-0 vigorous_williams[95046]: {
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "osd_id": 1,
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "type": "bluestore"
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:     },
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "osd_id": 0,
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "type": "bluestore"
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:     },
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "osd_id": 2,
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:         "type": "bluestore"
Nov 22 03:26:00 compute-0 vigorous_williams[95046]:     }
Nov 22 03:26:00 compute-0 vigorous_williams[95046]: }
Nov 22 03:26:00 compute-0 systemd[1]: libpod-fe1c9b88cf833d3e5e5b738f4de54e2fdce76e57f5ee011d46459625294bf7b0.scope: Deactivated successfully.
Nov 22 03:26:00 compute-0 python3[95106]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:00 compute-0 systemd[1]: libpod-fe1c9b88cf833d3e5e5b738f4de54e2fdce76e57f5ee011d46459625294bf7b0.scope: Consumed 1.057s CPU time.
Nov 22 03:26:00 compute-0 conmon[95046]: conmon fe1c9b88cf833d3e5e5b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe1c9b88cf833d3e5e5b738f4de54e2fdce76e57f5ee011d46459625294bf7b0.scope/container/memory.events
Nov 22 03:26:00 compute-0 podman[95029]: 2025-11-22 03:26:00.793086834 +0000 UTC m=+1.208159952 container died fe1c9b88cf833d3e5e5b738f4de54e2fdce76e57f5ee011d46459625294bf7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williams, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:26:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6454651149abaa6c9cfef761cc83e5b6a6b73fcec1ad0210d9dc2ce26c24486a-merged.mount: Deactivated successfully.
Nov 22 03:26:00 compute-0 podman[95029]: 2025-11-22 03:26:00.905253984 +0000 UTC m=+1.320327102 container remove fe1c9b88cf833d3e5e5b738f4de54e2fdce76e57f5ee011d46459625294bf7b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:26:00 compute-0 systemd[1]: libpod-conmon-fe1c9b88cf833d3e5e5b738f4de54e2fdce76e57f5ee011d46459625294bf7b0.scope: Deactivated successfully.
Nov 22 03:26:00 compute-0 podman[95118]: 2025-11-22 03:26:00.937507958 +0000 UTC m=+0.122504335 container create 9829cd1cb9ceb34b307e7f143d3d82edf0eb09390071342acc59ccef4ec1adfd (image=quay.io/ceph/ceph:v18, name=focused_kare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:00 compute-0 sudo[94888]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:26:00 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:26:00 compute-0 systemd[1]: Started libpod-conmon-9829cd1cb9ceb34b307e7f143d3d82edf0eb09390071342acc59ccef4ec1adfd.scope.
Nov 22 03:26:00 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:00 compute-0 podman[95118]: 2025-11-22 03:26:00.89077901 +0000 UTC m=+0.075775467 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877cde23c5ba35309687e770b928014efe404a5afbb2b027fff1aa0e933ca530/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/877cde23c5ba35309687e770b928014efe404a5afbb2b027fff1aa0e933ca530/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:01 compute-0 podman[95118]: 2025-11-22 03:26:01.028268812 +0000 UTC m=+0.213265189 container init 9829cd1cb9ceb34b307e7f143d3d82edf0eb09390071342acc59ccef4ec1adfd (image=quay.io/ceph/ceph:v18, name=focused_kare, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:01 compute-0 podman[95118]: 2025-11-22 03:26:01.035722909 +0000 UTC m=+0.220719286 container start 9829cd1cb9ceb34b307e7f143d3d82edf0eb09390071342acc59ccef4ec1adfd (image=quay.io/ceph/ceph:v18, name=focused_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:26:01 compute-0 sudo[95146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:01 compute-0 sudo[95146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:01 compute-0 podman[95118]: 2025-11-22 03:26:01.045547849 +0000 UTC m=+0.230544266 container attach 9829cd1cb9ceb34b307e7f143d3d82edf0eb09390071342acc59ccef4ec1adfd (image=quay.io/ceph/ceph:v18, name=focused_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 03:26:01 compute-0 sudo[95146]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:01 compute-0 sudo[95173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:26:01 compute-0 sudo[95173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:01 compute-0 sudo[95173]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:01 compute-0 sudo[95198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:01 compute-0 sudo[95198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:01 compute-0 sudo[95198]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:01 compute-0 sudo[95223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:01 compute-0 sudo[95223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:01 compute-0 sudo[95223]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:01 compute-0 sudo[95248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:01 compute-0 sudo[95248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:01 compute-0 sudo[95248]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:01 compute-0 ceph-mon[75011]: pgmap v56: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3979552934' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Nov 22 03:26:01 compute-0 ceph-mon[75011]: osdmap e25: 3 total, 3 up, 3 in
Nov 22 03:26:01 compute-0 ceph-mon[75011]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:26:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:01 compute-0 sudo[95273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 03:26:01 compute-0 sudo[95273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Nov 22 03:26:01 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4120694759' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 22 03:26:02 compute-0 podman[95390]: 2025-11-22 03:26:02.081221823 +0000 UTC m=+0.165905605 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:26:02 compute-0 podman[95390]: 2025-11-22 03:26:02.239846523 +0000 UTC m=+0.324530235 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:26:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Nov 22 03:26:02 compute-0 ceph-mon[75011]: pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:02 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4120694759' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Nov 22 03:26:02 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4120694759' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 22 03:26:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Nov 22 03:26:02 compute-0 focused_kare[95144]: enabled application 'rbd' on pool 'backups'
Nov 22 03:26:02 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Nov 22 03:26:02 compute-0 systemd[1]: libpod-9829cd1cb9ceb34b307e7f143d3d82edf0eb09390071342acc59ccef4ec1adfd.scope: Deactivated successfully.
Nov 22 03:26:02 compute-0 podman[95118]: 2025-11-22 03:26:02.514028783 +0000 UTC m=+1.699025160 container died 9829cd1cb9ceb34b307e7f143d3d82edf0eb09390071342acc59ccef4ec1adfd (image=quay.io/ceph/ceph:v18, name=focused_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 22 03:26:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-877cde23c5ba35309687e770b928014efe404a5afbb2b027fff1aa0e933ca530-merged.mount: Deactivated successfully.
Nov 22 03:26:02 compute-0 podman[95118]: 2025-11-22 03:26:02.680535112 +0000 UTC m=+1.865531519 container remove 9829cd1cb9ceb34b307e7f143d3d82edf0eb09390071342acc59ccef4ec1adfd (image=quay.io/ceph/ceph:v18, name=focused_kare, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:26:02 compute-0 systemd[1]: libpod-conmon-9829cd1cb9ceb34b307e7f143d3d82edf0eb09390071342acc59ccef4ec1adfd.scope: Deactivated successfully.
Nov 22 03:26:02 compute-0 sudo[95103]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:02 compute-0 sudo[95515]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnjhyguwpzkmdsjaeuhffbyeynskvkju ; /usr/bin/python3'
Nov 22 03:26:02 compute-0 sudo[95515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:02 compute-0 python3[95519]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:03 compute-0 sudo[95273]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:26:03 compute-0 podman[95548]: 2025-11-22 03:26:03.081052567 +0000 UTC m=+0.067041896 container create 5ed769833d67244aaaebeb5837df590143aeda074777d6e1b2a1729472435260 (image=quay.io/ceph/ceph:v18, name=serene_bardeen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:03 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:26:03 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:03 compute-0 systemd[1]: Started libpod-conmon-5ed769833d67244aaaebeb5837df590143aeda074777d6e1b2a1729472435260.scope.
Nov 22 03:26:03 compute-0 podman[95548]: 2025-11-22 03:26:03.046334568 +0000 UTC m=+0.032323917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37873e77b61b2fe872530f504dd4654c082ab320e51aa0c114052583bc9dfcd9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37873e77b61b2fe872530f504dd4654c082ab320e51aa0c114052583bc9dfcd9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:03 compute-0 sudo[95565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:03 compute-0 sudo[95565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:03 compute-0 sudo[95565]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:03 compute-0 podman[95548]: 2025-11-22 03:26:03.220987583 +0000 UTC m=+0.206976982 container init 5ed769833d67244aaaebeb5837df590143aeda074777d6e1b2a1729472435260 (image=quay.io/ceph/ceph:v18, name=serene_bardeen, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 03:26:03 compute-0 podman[95548]: 2025-11-22 03:26:03.232102537 +0000 UTC m=+0.218091856 container start 5ed769833d67244aaaebeb5837df590143aeda074777d6e1b2a1729472435260 (image=quay.io/ceph/ceph:v18, name=serene_bardeen, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:03 compute-0 podman[95548]: 2025-11-22 03:26:03.237605152 +0000 UTC m=+0.223594501 container attach 5ed769833d67244aaaebeb5837df590143aeda074777d6e1b2a1729472435260 (image=quay.io/ceph/ceph:v18, name=serene_bardeen, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:03 compute-0 sudo[95593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:03 compute-0 sudo[95593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:03 compute-0 sudo[95593]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:03 compute-0 sudo[95619]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:03 compute-0 sudo[95619]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:03 compute-0 sudo[95619]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:03 compute-0 sudo[95644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:26:03 compute-0 sudo[95644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:03 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4120694759' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Nov 22 03:26:03 compute-0 ceph-mon[75011]: osdmap e26: 3 total, 3 up, 3 in
Nov 22 03:26:03 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:03 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Nov 22 03:26:03 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2881781553' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 22 03:26:04 compute-0 sudo[95644]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:26:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:26:04 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:26:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:26:04 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:04 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 9df40970-c520-45b7-8259-07edf6e8f038 does not exist
Nov 22 03:26:04 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 7295c658-1feb-4278-9362-adef07e71221 does not exist
Nov 22 03:26:04 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev fcae570e-2244-4f8d-b2cf-22cca4d7a0df does not exist
Nov 22 03:26:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:26:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:26:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:26:04 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:26:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:26:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:04 compute-0 sudo[95720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:04 compute-0 sudo[95720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:04 compute-0 sudo[95720]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:04 compute-0 sudo[95745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:04 compute-0 sudo[95745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:04 compute-0 sudo[95745]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:04 compute-0 sudo[95770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:04 compute-0 sudo[95770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:04 compute-0 sudo[95770]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:04 compute-0 sudo[95795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:26:04 compute-0 sudo[95795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Nov 22 03:26:04 compute-0 ceph-mon[75011]: pgmap v60: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:04 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2881781553' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Nov 22 03:26:04 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:04 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:26:04 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:04 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:26:04 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:26:04 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:04 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2881781553' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 22 03:26:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Nov 22 03:26:04 compute-0 serene_bardeen[95571]: enabled application 'rbd' on pool 'images'
Nov 22 03:26:04 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Nov 22 03:26:04 compute-0 systemd[1]: libpod-5ed769833d67244aaaebeb5837df590143aeda074777d6e1b2a1729472435260.scope: Deactivated successfully.
Nov 22 03:26:04 compute-0 podman[95548]: 2025-11-22 03:26:04.538731785 +0000 UTC m=+1.524721124 container died 5ed769833d67244aaaebeb5837df590143aeda074777d6e1b2a1729472435260 (image=quay.io/ceph/ceph:v18, name=serene_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-37873e77b61b2fe872530f504dd4654c082ab320e51aa0c114052583bc9dfcd9-merged.mount: Deactivated successfully.
Nov 22 03:26:04 compute-0 podman[95548]: 2025-11-22 03:26:04.616711239 +0000 UTC m=+1.602700578 container remove 5ed769833d67244aaaebeb5837df590143aeda074777d6e1b2a1729472435260 (image=quay.io/ceph/ceph:v18, name=serene_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:04 compute-0 systemd[1]: libpod-conmon-5ed769833d67244aaaebeb5837df590143aeda074777d6e1b2a1729472435260.scope: Deactivated successfully.
Nov 22 03:26:04 compute-0 sudo[95515]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:04 compute-0 sudo[95899]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szwqchajcpcgbhssdddqmasiephkxipf ; /usr/bin/python3'
Nov 22 03:26:04 compute-0 sudo[95899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:04 compute-0 podman[95897]: 2025-11-22 03:26:04.855037151 +0000 UTC m=+0.068744992 container create ad70abd9542bbb8fab0b20d5b6b16dd6d557c00132926915098e90ead81d5784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:04 compute-0 systemd[1]: Started libpod-conmon-ad70abd9542bbb8fab0b20d5b6b16dd6d557c00132926915098e90ead81d5784.scope.
Nov 22 03:26:04 compute-0 podman[95897]: 2025-11-22 03:26:04.827689846 +0000 UTC m=+0.041397697 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:04 compute-0 python3[95906]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:04 compute-0 podman[95897]: 2025-11-22 03:26:04.94528273 +0000 UTC m=+0.158990601 container init ad70abd9542bbb8fab0b20d5b6b16dd6d557c00132926915098e90ead81d5784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermi, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:04 compute-0 podman[95897]: 2025-11-22 03:26:04.949932393 +0000 UTC m=+0.163640224 container start ad70abd9542bbb8fab0b20d5b6b16dd6d557c00132926915098e90ead81d5784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:04 compute-0 podman[95897]: 2025-11-22 03:26:04.954955856 +0000 UTC m=+0.168663697 container attach ad70abd9542bbb8fab0b20d5b6b16dd6d557c00132926915098e90ead81d5784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:04 compute-0 laughing_fermi[95916]: 167 167
Nov 22 03:26:04 compute-0 systemd[1]: libpod-ad70abd9542bbb8fab0b20d5b6b16dd6d557c00132926915098e90ead81d5784.scope: Deactivated successfully.
Nov 22 03:26:04 compute-0 podman[95897]: 2025-11-22 03:26:04.959452925 +0000 UTC m=+0.173160766 container died ad70abd9542bbb8fab0b20d5b6b16dd6d557c00132926915098e90ead81d5784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc9f4d7cddf80bca573e98a122833f30657cd77f53376ec87417c634d98daec0-merged.mount: Deactivated successfully.
Nov 22 03:26:05 compute-0 podman[95897]: 2025-11-22 03:26:05.011090012 +0000 UTC m=+0.224797853 container remove ad70abd9542bbb8fab0b20d5b6b16dd6d557c00132926915098e90ead81d5784 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:26:05 compute-0 systemd[1]: libpod-conmon-ad70abd9542bbb8fab0b20d5b6b16dd6d557c00132926915098e90ead81d5784.scope: Deactivated successfully.
Nov 22 03:26:05 compute-0 podman[95919]: 2025-11-22 03:26:05.063036028 +0000 UTC m=+0.104146719 container create 989fdcf57c327bdbabe37f60766e3ccb7e773e1b8fd36755a0281c6cdc42562a (image=quay.io/ceph/ceph:v18, name=pedantic_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:05 compute-0 systemd[1]: Started libpod-conmon-989fdcf57c327bdbabe37f60766e3ccb7e773e1b8fd36755a0281c6cdc42562a.scope.
Nov 22 03:26:05 compute-0 podman[95919]: 2025-11-22 03:26:05.034753779 +0000 UTC m=+0.075864510 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6738e32fc67d21908f591e78cce0dba4e2cc1921133a6f4cc3c17081afe4d59f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6738e32fc67d21908f591e78cce0dba4e2cc1921133a6f4cc3c17081afe4d59f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:05 compute-0 podman[95919]: 2025-11-22 03:26:05.165038709 +0000 UTC m=+0.206149460 container init 989fdcf57c327bdbabe37f60766e3ccb7e773e1b8fd36755a0281c6cdc42562a (image=quay.io/ceph/ceph:v18, name=pedantic_hugle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:05 compute-0 podman[95919]: 2025-11-22 03:26:05.172217589 +0000 UTC m=+0.213328290 container start 989fdcf57c327bdbabe37f60766e3ccb7e773e1b8fd36755a0281c6cdc42562a (image=quay.io/ceph/ceph:v18, name=pedantic_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:26:05 compute-0 podman[95919]: 2025-11-22 03:26:05.178920876 +0000 UTC m=+0.220031577 container attach 989fdcf57c327bdbabe37f60766e3ccb7e773e1b8fd36755a0281c6cdc42562a (image=quay.io/ceph/ceph:v18, name=pedantic_hugle, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:26:05 compute-0 podman[95959]: 2025-11-22 03:26:05.225756086 +0000 UTC m=+0.048712070 container create f7339d617af80553ff52dfb77a93d5828a5473fd5b9f93e35c4cbb7c5a0ad119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_buck, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 03:26:05 compute-0 systemd[1]: Started libpod-conmon-f7339d617af80553ff52dfb77a93d5828a5473fd5b9f93e35c4cbb7c5a0ad119.scope.
Nov 22 03:26:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfa50fbc12f96f0b2e864b20a62f211fb6b329c3ecc7e5ad6ea7d16db3db752/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfa50fbc12f96f0b2e864b20a62f211fb6b329c3ecc7e5ad6ea7d16db3db752/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfa50fbc12f96f0b2e864b20a62f211fb6b329c3ecc7e5ad6ea7d16db3db752/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfa50fbc12f96f0b2e864b20a62f211fb6b329c3ecc7e5ad6ea7d16db3db752/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cfa50fbc12f96f0b2e864b20a62f211fb6b329c3ecc7e5ad6ea7d16db3db752/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:05 compute-0 podman[95959]: 2025-11-22 03:26:05.205795128 +0000 UTC m=+0.028751092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:05 compute-0 podman[95959]: 2025-11-22 03:26:05.306225827 +0000 UTC m=+0.129181811 container init f7339d617af80553ff52dfb77a93d5828a5473fd5b9f93e35c4cbb7c5a0ad119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_buck, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:26:05 compute-0 podman[95959]: 2025-11-22 03:26:05.31990819 +0000 UTC m=+0.142864144 container start f7339d617af80553ff52dfb77a93d5828a5473fd5b9f93e35c4cbb7c5a0ad119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:05 compute-0 podman[95959]: 2025-11-22 03:26:05.323455583 +0000 UTC m=+0.146411577 container attach f7339d617af80553ff52dfb77a93d5828a5473fd5b9f93e35c4cbb7c5a0ad119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 03:26:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:05 compute-0 ceph-mon[75011]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:26:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:26:05 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2881781553' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Nov 22 03:26:05 compute-0 ceph-mon[75011]: osdmap e27: 3 total, 3 up, 3 in
Nov 22 03:26:05 compute-0 ceph-mon[75011]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Nov 22 03:26:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Nov 22 03:26:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2759772155' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 22 03:26:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:26:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:26:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:26:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:26:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:26:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:26:06 compute-0 infallible_buck[95977]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:26:06 compute-0 infallible_buck[95977]: --> relative data size: 1.0
Nov 22 03:26:06 compute-0 infallible_buck[95977]: --> All data devices are unavailable
Nov 22 03:26:06 compute-0 systemd[1]: libpod-f7339d617af80553ff52dfb77a93d5828a5473fd5b9f93e35c4cbb7c5a0ad119.scope: Deactivated successfully.
Nov 22 03:26:06 compute-0 systemd[1]: libpod-f7339d617af80553ff52dfb77a93d5828a5473fd5b9f93e35c4cbb7c5a0ad119.scope: Consumed 1.026s CPU time.
Nov 22 03:26:06 compute-0 podman[96026]: 2025-11-22 03:26:06.448453603 +0000 UTC m=+0.039959500 container died f7339d617af80553ff52dfb77a93d5828a5473fd5b9f93e35c4cbb7c5a0ad119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:26:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cfa50fbc12f96f0b2e864b20a62f211fb6b329c3ecc7e5ad6ea7d16db3db752-merged.mount: Deactivated successfully.
Nov 22 03:26:06 compute-0 podman[96026]: 2025-11-22 03:26:06.520140341 +0000 UTC m=+0.111646228 container remove f7339d617af80553ff52dfb77a93d5828a5473fd5b9f93e35c4cbb7c5a0ad119 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 22 03:26:06 compute-0 systemd[1]: libpod-conmon-f7339d617af80553ff52dfb77a93d5828a5473fd5b9f93e35c4cbb7c5a0ad119.scope: Deactivated successfully.
Nov 22 03:26:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Nov 22 03:26:06 compute-0 ceph-mon[75011]: pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2759772155' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Nov 22 03:26:06 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2759772155' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 22 03:26:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Nov 22 03:26:06 compute-0 pedantic_hugle[95951]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Nov 22 03:26:06 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Nov 22 03:26:06 compute-0 systemd[1]: libpod-989fdcf57c327bdbabe37f60766e3ccb7e773e1b8fd36755a0281c6cdc42562a.scope: Deactivated successfully.
Nov 22 03:26:06 compute-0 podman[95919]: 2025-11-22 03:26:06.566252832 +0000 UTC m=+1.607363523 container died 989fdcf57c327bdbabe37f60766e3ccb7e773e1b8fd36755a0281c6cdc42562a (image=quay.io/ceph/ceph:v18, name=pedantic_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:26:06 compute-0 sudo[95795]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6738e32fc67d21908f591e78cce0dba4e2cc1921133a6f4cc3c17081afe4d59f-merged.mount: Deactivated successfully.
Nov 22 03:26:06 compute-0 podman[95919]: 2025-11-22 03:26:06.621459823 +0000 UTC m=+1.662570514 container remove 989fdcf57c327bdbabe37f60766e3ccb7e773e1b8fd36755a0281c6cdc42562a (image=quay.io/ceph/ceph:v18, name=pedantic_hugle, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:06 compute-0 systemd[1]: libpod-conmon-989fdcf57c327bdbabe37f60766e3ccb7e773e1b8fd36755a0281c6cdc42562a.scope: Deactivated successfully.
Nov 22 03:26:06 compute-0 sudo[95899]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:06 compute-0 sudo[96049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:06 compute-0 sudo[96049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:06 compute-0 sudo[96049]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:06 compute-0 sudo[96079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:06 compute-0 sudo[96079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:06 compute-0 sudo[96079]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:06 compute-0 sudo[96127]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vplvceewcdmwwjowuceigmczlxojzjlp ; /usr/bin/python3'
Nov 22 03:26:06 compute-0 sudo[96127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:06 compute-0 sudo[96128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:06 compute-0 sudo[96128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:06 compute-0 sudo[96128]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:06 compute-0 sudo[96155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:26:06 compute-0 sudo[96155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:06 compute-0 python3[96133]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:07 compute-0 podman[96180]: 2025-11-22 03:26:07.06963758 +0000 UTC m=+0.075166401 container create 669db329e6bd8b6aee03262d8905f7ab1fe965525a85ef3e56ae195fcd60111c (image=quay.io/ceph/ceph:v18, name=mystifying_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:26:07 compute-0 systemd[1]: Started libpod-conmon-669db329e6bd8b6aee03262d8905f7ab1fe965525a85ef3e56ae195fcd60111c.scope.
Nov 22 03:26:07 compute-0 podman[96180]: 2025-11-22 03:26:07.042109082 +0000 UTC m=+0.047637953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcc5af6f4caea073ed236f76a6a05cbfc908033d232fdb1ddc18e011093006e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddcc5af6f4caea073ed236f76a6a05cbfc908033d232fdb1ddc18e011093006e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:07 compute-0 podman[96180]: 2025-11-22 03:26:07.167555203 +0000 UTC m=+0.173084074 container init 669db329e6bd8b6aee03262d8905f7ab1fe965525a85ef3e56ae195fcd60111c (image=quay.io/ceph/ceph:v18, name=mystifying_matsumoto, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:26:07 compute-0 podman[96180]: 2025-11-22 03:26:07.174806975 +0000 UTC m=+0.180335796 container start 669db329e6bd8b6aee03262d8905f7ab1fe965525a85ef3e56ae195fcd60111c (image=quay.io/ceph/ceph:v18, name=mystifying_matsumoto, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 03:26:07 compute-0 podman[96180]: 2025-11-22 03:26:07.179247293 +0000 UTC m=+0.184776154 container attach 669db329e6bd8b6aee03262d8905f7ab1fe965525a85ef3e56ae195fcd60111c (image=quay.io/ceph/ceph:v18, name=mystifying_matsumoto, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:26:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:07 compute-0 podman[96240]: 2025-11-22 03:26:07.414111931 +0000 UTC m=+0.058626242 container create 751ad44b7c0f73d8ccde2a2b1ae67d5d2fd9dc6ea29015362f69e2cc0ab29e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chatterjee, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:07 compute-0 systemd[1]: Started libpod-conmon-751ad44b7c0f73d8ccde2a2b1ae67d5d2fd9dc6ea29015362f69e2cc0ab29e7b.scope.
Nov 22 03:26:07 compute-0 podman[96240]: 2025-11-22 03:26:07.383981154 +0000 UTC m=+0.028495525 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:07 compute-0 podman[96240]: 2025-11-22 03:26:07.513533554 +0000 UTC m=+0.158047855 container init 751ad44b7c0f73d8ccde2a2b1ae67d5d2fd9dc6ea29015362f69e2cc0ab29e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:26:07 compute-0 podman[96240]: 2025-11-22 03:26:07.520489718 +0000 UTC m=+0.165003999 container start 751ad44b7c0f73d8ccde2a2b1ae67d5d2fd9dc6ea29015362f69e2cc0ab29e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:07 compute-0 podman[96240]: 2025-11-22 03:26:07.525102041 +0000 UTC m=+0.169616362 container attach 751ad44b7c0f73d8ccde2a2b1ae67d5d2fd9dc6ea29015362f69e2cc0ab29e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:07 compute-0 clever_chatterjee[96257]: 167 167
Nov 22 03:26:07 compute-0 systemd[1]: libpod-751ad44b7c0f73d8ccde2a2b1ae67d5d2fd9dc6ea29015362f69e2cc0ab29e7b.scope: Deactivated successfully.
Nov 22 03:26:07 compute-0 conmon[96257]: conmon 751ad44b7c0f73d8ccde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-751ad44b7c0f73d8ccde2a2b1ae67d5d2fd9dc6ea29015362f69e2cc0ab29e7b.scope/container/memory.events
Nov 22 03:26:07 compute-0 podman[96240]: 2025-11-22 03:26:07.528586383 +0000 UTC m=+0.173100694 container died 751ad44b7c0f73d8ccde2a2b1ae67d5d2fd9dc6ea29015362f69e2cc0ab29e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Nov 22 03:26:07 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2759772155' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Nov 22 03:26:07 compute-0 ceph-mon[75011]: osdmap e28: 3 total, 3 up, 3 in
Nov 22 03:26:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d690cbf9505fece44be17e088f10ccf9c527d89c66fcc5a0237f033c28e52c50-merged.mount: Deactivated successfully.
Nov 22 03:26:07 compute-0 podman[96240]: 2025-11-22 03:26:07.585856819 +0000 UTC m=+0.230371130 container remove 751ad44b7c0f73d8ccde2a2b1ae67d5d2fd9dc6ea29015362f69e2cc0ab29e7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:26:07 compute-0 systemd[1]: libpod-conmon-751ad44b7c0f73d8ccde2a2b1ae67d5d2fd9dc6ea29015362f69e2cc0ab29e7b.scope: Deactivated successfully.
Nov 22 03:26:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Nov 22 03:26:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1358913815' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 22 03:26:07 compute-0 podman[96299]: 2025-11-22 03:26:07.798856309 +0000 UTC m=+0.062286390 container create 7f47d36f06e77ee62159a63c305a079dec0865775a08c6017c1fd3e57c5ff59d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:07 compute-0 systemd[1]: Started libpod-conmon-7f47d36f06e77ee62159a63c305a079dec0865775a08c6017c1fd3e57c5ff59d.scope.
Nov 22 03:26:07 compute-0 podman[96299]: 2025-11-22 03:26:07.773684463 +0000 UTC m=+0.037114594 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f59686654389297ae6725b2fe2e5443068bf52f108bb997f8cec2a5684ea6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f59686654389297ae6725b2fe2e5443068bf52f108bb997f8cec2a5684ea6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f59686654389297ae6725b2fe2e5443068bf52f108bb997f8cec2a5684ea6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21f59686654389297ae6725b2fe2e5443068bf52f108bb997f8cec2a5684ea6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:07 compute-0 podman[96299]: 2025-11-22 03:26:07.898856328 +0000 UTC m=+0.162286459 container init 7f47d36f06e77ee62159a63c305a079dec0865775a08c6017c1fd3e57c5ff59d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:26:07 compute-0 podman[96299]: 2025-11-22 03:26:07.913582947 +0000 UTC m=+0.177013018 container start 7f47d36f06e77ee62159a63c305a079dec0865775a08c6017c1fd3e57c5ff59d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:07 compute-0 podman[96299]: 2025-11-22 03:26:07.919106974 +0000 UTC m=+0.182537115 container attach 7f47d36f06e77ee62159a63c305a079dec0865775a08c6017c1fd3e57c5ff59d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Nov 22 03:26:08 compute-0 ceph-mon[75011]: pgmap v64: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1358913815' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Nov 22 03:26:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1358913815' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 22 03:26:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Nov 22 03:26:08 compute-0 mystifying_matsumoto[96202]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Nov 22 03:26:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Nov 22 03:26:08 compute-0 systemd[1]: libpod-669db329e6bd8b6aee03262d8905f7ab1fe965525a85ef3e56ae195fcd60111c.scope: Deactivated successfully.
Nov 22 03:26:08 compute-0 podman[96180]: 2025-11-22 03:26:08.616976073 +0000 UTC m=+1.622504904 container died 669db329e6bd8b6aee03262d8905f7ab1fe965525a85ef3e56ae195fcd60111c (image=quay.io/ceph/ceph:v18, name=mystifying_matsumoto, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddcc5af6f4caea073ed236f76a6a05cbfc908033d232fdb1ddc18e011093006e-merged.mount: Deactivated successfully.
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]: {
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:     "0": [
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:         {
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "devices": [
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "/dev/loop3"
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             ],
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_name": "ceph_lv0",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_size": "21470642176",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "name": "ceph_lv0",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "tags": {
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.crush_device_class": "",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.encrypted": "0",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.osd_id": "0",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.type": "block",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.vdo": "0"
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             },
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "type": "block",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "vg_name": "ceph_vg0"
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:         }
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:     ],
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:     "1": [
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:         {
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "devices": [
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "/dev/loop4"
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             ],
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_name": "ceph_lv1",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_size": "21470642176",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "name": "ceph_lv1",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "tags": {
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.crush_device_class": "",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.encrypted": "0",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.osd_id": "1",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.type": "block",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.vdo": "0"
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             },
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "type": "block",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "vg_name": "ceph_vg1"
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:         }
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:     ],
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:     "2": [
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:         {
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "devices": [
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "/dev/loop5"
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             ],
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_name": "ceph_lv2",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_size": "21470642176",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "name": "ceph_lv2",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "tags": {
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.crush_device_class": "",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.encrypted": "0",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.osd_id": "2",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.type": "block",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:                 "ceph.vdo": "0"
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             },
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "type": "block",
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:             "vg_name": "ceph_vg2"
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:         }
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]:     ]
Nov 22 03:26:08 compute-0 wizardly_goldwasser[96317]: }
Nov 22 03:26:08 compute-0 podman[96180]: 2025-11-22 03:26:08.675403509 +0000 UTC m=+1.680932300 container remove 669db329e6bd8b6aee03262d8905f7ab1fe965525a85ef3e56ae195fcd60111c (image=quay.io/ceph/ceph:v18, name=mystifying_matsumoto, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:26:08 compute-0 systemd[1]: libpod-conmon-669db329e6bd8b6aee03262d8905f7ab1fe965525a85ef3e56ae195fcd60111c.scope: Deactivated successfully.
Nov 22 03:26:08 compute-0 systemd[1]: libpod-7f47d36f06e77ee62159a63c305a079dec0865775a08c6017c1fd3e57c5ff59d.scope: Deactivated successfully.
Nov 22 03:26:08 compute-0 podman[96299]: 2025-11-22 03:26:08.697562276 +0000 UTC m=+0.960992377 container died 7f47d36f06e77ee62159a63c305a079dec0865775a08c6017c1fd3e57c5ff59d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:08 compute-0 sudo[96127]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-21f59686654389297ae6725b2fe2e5443068bf52f108bb997f8cec2a5684ea6b-merged.mount: Deactivated successfully.
Nov 22 03:26:08 compute-0 podman[96299]: 2025-11-22 03:26:08.791180185 +0000 UTC m=+1.054610226 container remove 7f47d36f06e77ee62159a63c305a079dec0865775a08c6017c1fd3e57c5ff59d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:26:08 compute-0 systemd[1]: libpod-conmon-7f47d36f06e77ee62159a63c305a079dec0865775a08c6017c1fd3e57c5ff59d.scope: Deactivated successfully.
Nov 22 03:26:08 compute-0 sudo[96155]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:08 compute-0 sudo[96351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:08 compute-0 sudo[96351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:08 compute-0 sudo[96351]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:09 compute-0 sudo[96376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:09 compute-0 sudo[96376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:09 compute-0 sudo[96376]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:09 compute-0 sudo[96401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:09 compute-0 sudo[96401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:09 compute-0 sudo[96401]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:09 compute-0 sudo[96426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:26:09 compute-0 sudo[96426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:09 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 22 03:26:09 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 03:26:09 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1358913815' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Nov 22 03:26:09 compute-0 ceph-mon[75011]: osdmap e29: 3 total, 3 up, 3 in
Nov 22 03:26:09 compute-0 podman[96567]: 2025-11-22 03:26:09.664307915 +0000 UTC m=+0.062261200 container create fb1e3e514dd8704d3a44eab1f9c20108829020bc3c3be2a80169b734ec76a9c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_feynman, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 03:26:09 compute-0 python3[96553]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:26:09 compute-0 systemd[1]: Started libpod-conmon-fb1e3e514dd8704d3a44eab1f9c20108829020bc3c3be2a80169b734ec76a9c7.scope.
Nov 22 03:26:09 compute-0 podman[96567]: 2025-11-22 03:26:09.641355537 +0000 UTC m=+0.039308862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:09 compute-0 podman[96567]: 2025-11-22 03:26:09.765830273 +0000 UTC m=+0.163783558 container init fb1e3e514dd8704d3a44eab1f9c20108829020bc3c3be2a80169b734ec76a9c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_feynman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:26:09 compute-0 podman[96567]: 2025-11-22 03:26:09.772930311 +0000 UTC m=+0.170883576 container start fb1e3e514dd8704d3a44eab1f9c20108829020bc3c3be2a80169b734ec76a9c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_feynman, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 03:26:09 compute-0 podman[96567]: 2025-11-22 03:26:09.776335612 +0000 UTC m=+0.174288887 container attach fb1e3e514dd8704d3a44eab1f9c20108829020bc3c3be2a80169b734ec76a9c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_feynman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:09 compute-0 elated_feynman[96589]: 167 167
Nov 22 03:26:09 compute-0 systemd[1]: libpod-fb1e3e514dd8704d3a44eab1f9c20108829020bc3c3be2a80169b734ec76a9c7.scope: Deactivated successfully.
Nov 22 03:26:09 compute-0 conmon[96589]: conmon fb1e3e514dd8704d3a44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb1e3e514dd8704d3a44eab1f9c20108829020bc3c3be2a80169b734ec76a9c7.scope/container/memory.events
Nov 22 03:26:09 compute-0 podman[96567]: 2025-11-22 03:26:09.782960926 +0000 UTC m=+0.180914191 container died fb1e3e514dd8704d3a44eab1f9c20108829020bc3c3be2a80169b734ec76a9c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_feynman, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-25bfc7a6853b3c71c0d3340dbed829959a35dbceacda8b0239107458a1686c9c-merged.mount: Deactivated successfully.
Nov 22 03:26:09 compute-0 podman[96567]: 2025-11-22 03:26:09.822226776 +0000 UTC m=+0.220180021 container remove fb1e3e514dd8704d3a44eab1f9c20108829020bc3c3be2a80169b734ec76a9c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:09 compute-0 systemd[1]: libpod-conmon-fb1e3e514dd8704d3a44eab1f9c20108829020bc3c3be2a80169b734ec76a9c7.scope: Deactivated successfully.
Nov 22 03:26:10 compute-0 podman[96678]: 2025-11-22 03:26:10.011437286 +0000 UTC m=+0.056920908 container create f8458e8b044be86a085b4264bc7c978386497c3687ec8504508c13de81bc9ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kare, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:10 compute-0 python3[96672]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763781969.233384-36432-129883574191641/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:10 compute-0 systemd[1]: Started libpod-conmon-f8458e8b044be86a085b4264bc7c978386497c3687ec8504508c13de81bc9ac0.scope.
Nov 22 03:26:10 compute-0 podman[96678]: 2025-11-22 03:26:09.981832862 +0000 UTC m=+0.027316604 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2747a88c145a5d3760131f56c108e6d7477596514be4286426b851ff259517b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2747a88c145a5d3760131f56c108e6d7477596514be4286426b851ff259517b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2747a88c145a5d3760131f56c108e6d7477596514be4286426b851ff259517b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2747a88c145a5d3760131f56c108e6d7477596514be4286426b851ff259517b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:10 compute-0 podman[96678]: 2025-11-22 03:26:10.108922228 +0000 UTC m=+0.154405890 container init f8458e8b044be86a085b4264bc7c978386497c3687ec8504508c13de81bc9ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kare, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:26:10 compute-0 podman[96678]: 2025-11-22 03:26:10.121226033 +0000 UTC m=+0.166709695 container start f8458e8b044be86a085b4264bc7c978386497c3687ec8504508c13de81bc9ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 03:26:10 compute-0 podman[96678]: 2025-11-22 03:26:10.126141433 +0000 UTC m=+0.171625095 container attach f8458e8b044be86a085b4264bc7c978386497c3687ec8504508c13de81bc9ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:26:10 compute-0 ceph-mon[75011]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:10 compute-0 ceph-mon[75011]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Nov 22 03:26:10 compute-0 ceph-mon[75011]: Cluster is now healthy
Nov 22 03:26:10 compute-0 sudo[96799]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcmvljkjpmtokilozficzraqmzevdriz ; /usr/bin/python3'
Nov 22 03:26:10 compute-0 sudo[96799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:10 compute-0 python3[96801]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:26:10 compute-0 sudo[96799]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:11 compute-0 sudo[96894]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrdohdsljipejswvdzkrkkvuctyfnwaa ; /usr/bin/python3'
Nov 22 03:26:11 compute-0 sudo[96894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:11 compute-0 distracted_kare[96696]: {
Nov 22 03:26:11 compute-0 distracted_kare[96696]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "osd_id": 1,
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "type": "bluestore"
Nov 22 03:26:11 compute-0 distracted_kare[96696]:     },
Nov 22 03:26:11 compute-0 distracted_kare[96696]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "osd_id": 0,
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "type": "bluestore"
Nov 22 03:26:11 compute-0 distracted_kare[96696]:     },
Nov 22 03:26:11 compute-0 distracted_kare[96696]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "osd_id": 2,
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:26:11 compute-0 distracted_kare[96696]:         "type": "bluestore"
Nov 22 03:26:11 compute-0 distracted_kare[96696]:     }
Nov 22 03:26:11 compute-0 distracted_kare[96696]: }
Nov 22 03:26:11 compute-0 systemd[1]: libpod-f8458e8b044be86a085b4264bc7c978386497c3687ec8504508c13de81bc9ac0.scope: Deactivated successfully.
Nov 22 03:26:11 compute-0 systemd[1]: libpod-f8458e8b044be86a085b4264bc7c978386497c3687ec8504508c13de81bc9ac0.scope: Consumed 1.112s CPU time.
Nov 22 03:26:11 compute-0 podman[96678]: 2025-11-22 03:26:11.229248032 +0000 UTC m=+1.274731694 container died f8458e8b044be86a085b4264bc7c978386497c3687ec8504508c13de81bc9ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kare, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2747a88c145a5d3760131f56c108e6d7477596514be4286426b851ff259517b-merged.mount: Deactivated successfully.
Nov 22 03:26:11 compute-0 podman[96678]: 2025-11-22 03:26:11.316579825 +0000 UTC m=+1.362063457 container remove f8458e8b044be86a085b4264bc7c978386497c3687ec8504508c13de81bc9ac0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_kare, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:11 compute-0 python3[96899]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763781970.28016-36446-20778150395012/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=83cbf6564d7e601d2b25f6c95bdfab5256143a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:11 compute-0 systemd[1]: libpod-conmon-f8458e8b044be86a085b4264bc7c978386497c3687ec8504508c13de81bc9ac0.scope: Deactivated successfully.
Nov 22 03:26:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:11 compute-0 sudo[96894]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:11 compute-0 sudo[96426]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:26:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:26:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:11 compute-0 sudo[96921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:11 compute-0 sudo[96921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:11 compute-0 sudo[96921]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:11 compute-0 sudo[96969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:26:11 compute-0 sudo[96969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:11 compute-0 sudo[96969]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:11 compute-0 sudo[97017]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvmcqczfrbkzawctnovbfvbhydninxcf ; /usr/bin/python3'
Nov 22 03:26:11 compute-0 sudo[97017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:11 compute-0 python3[97019]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:11 compute-0 podman[97020]: 2025-11-22 03:26:11.884890703 +0000 UTC m=+0.063642396 container create 1bfad948be68033fff24b4145899545d830f7a6630e214af88af8ade7484ce0d (image=quay.io/ceph/ceph:v18, name=gifted_jang, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:11 compute-0 systemd[1]: Started libpod-conmon-1bfad948be68033fff24b4145899545d830f7a6630e214af88af8ade7484ce0d.scope.
Nov 22 03:26:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:11 compute-0 podman[97020]: 2025-11-22 03:26:11.864528694 +0000 UTC m=+0.043280367 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8769ddd09e075bb7f489f8edbffdc1217a550152833da7296682c43317f89c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8769ddd09e075bb7f489f8edbffdc1217a550152833da7296682c43317f89c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c8769ddd09e075bb7f489f8edbffdc1217a550152833da7296682c43317f89c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:11 compute-0 podman[97020]: 2025-11-22 03:26:11.981409359 +0000 UTC m=+0.160161062 container init 1bfad948be68033fff24b4145899545d830f7a6630e214af88af8ade7484ce0d (image=quay.io/ceph/ceph:v18, name=gifted_jang, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:26:11 compute-0 podman[97020]: 2025-11-22 03:26:11.988972959 +0000 UTC m=+0.167724642 container start 1bfad948be68033fff24b4145899545d830f7a6630e214af88af8ade7484ce0d (image=quay.io/ceph/ceph:v18, name=gifted_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 03:26:11 compute-0 podman[97020]: 2025-11-22 03:26:11.99277809 +0000 UTC m=+0.171529783 container attach 1bfad948be68033fff24b4145899545d830f7a6630e214af88af8ade7484ce0d (image=quay.io/ceph/ceph:v18, name=gifted_jang, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:12 compute-0 ceph-mon[75011]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:12 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:12 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Nov 22 03:26:12 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/759956748' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 03:26:12 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/759956748' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 03:26:12 compute-0 gifted_jang[97035]: 
Nov 22 03:26:12 compute-0 gifted_jang[97035]: [global]
Nov 22 03:26:12 compute-0 gifted_jang[97035]:         fsid = 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:26:12 compute-0 gifted_jang[97035]:         mon_host = 192.168.122.100
Nov 22 03:26:12 compute-0 systemd[1]: libpod-1bfad948be68033fff24b4145899545d830f7a6630e214af88af8ade7484ce0d.scope: Deactivated successfully.
Nov 22 03:26:12 compute-0 podman[97020]: 2025-11-22 03:26:12.594821672 +0000 UTC m=+0.773573315 container died 1bfad948be68033fff24b4145899545d830f7a6630e214af88af8ade7484ce0d (image=quay.io/ceph/ceph:v18, name=gifted_jang, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c8769ddd09e075bb7f489f8edbffdc1217a550152833da7296682c43317f89c-merged.mount: Deactivated successfully.
Nov 22 03:26:12 compute-0 podman[97020]: 2025-11-22 03:26:12.648466272 +0000 UTC m=+0.827217925 container remove 1bfad948be68033fff24b4145899545d830f7a6630e214af88af8ade7484ce0d (image=quay.io/ceph/ceph:v18, name=gifted_jang, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:12 compute-0 sudo[97060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:12 compute-0 systemd[1]: libpod-conmon-1bfad948be68033fff24b4145899545d830f7a6630e214af88af8ade7484ce0d.scope: Deactivated successfully.
Nov 22 03:26:12 compute-0 sudo[97060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:12 compute-0 sudo[97060]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:12 compute-0 sudo[97017]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:12 compute-0 sudo[97096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:12 compute-0 sudo[97096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:12 compute-0 sudo[97096]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:12 compute-0 sudo[97121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:12 compute-0 sudo[97121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:12 compute-0 sudo[97121]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:12 compute-0 sudo[97189]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybltjvucwvilelabsdjadyhtlzldumhx ; /usr/bin/python3'
Nov 22 03:26:12 compute-0 sudo[97189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:12 compute-0 sudo[97150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 03:26:12 compute-0 sudo[97150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:13 compute-0 python3[97194]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:13 compute-0 podman[97206]: 2025-11-22 03:26:13.147460245 +0000 UTC m=+0.060950755 container create 6c14b62ad7c65fddb59658d9da2eb80633fc2b2b46a2ea6d915266e38e670e0b (image=quay.io/ceph/ceph:v18, name=lucid_dubinsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:26:13 compute-0 systemd[1]: Started libpod-conmon-6c14b62ad7c65fddb59658d9da2eb80633fc2b2b46a2ea6d915266e38e670e0b.scope.
Nov 22 03:26:13 compute-0 podman[97206]: 2025-11-22 03:26:13.12914029 +0000 UTC m=+0.042630890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5bf60af3335eec5b6f3a22be280cb37b8c04fe1e45167031be3f92d16f7d008/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5bf60af3335eec5b6f3a22be280cb37b8c04fe1e45167031be3f92d16f7d008/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5bf60af3335eec5b6f3a22be280cb37b8c04fe1e45167031be3f92d16f7d008/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:13 compute-0 podman[97206]: 2025-11-22 03:26:13.263335093 +0000 UTC m=+0.176825613 container init 6c14b62ad7c65fddb59658d9da2eb80633fc2b2b46a2ea6d915266e38e670e0b (image=quay.io/ceph/ceph:v18, name=lucid_dubinsky, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 03:26:13 compute-0 podman[97206]: 2025-11-22 03:26:13.27567881 +0000 UTC m=+0.189169330 container start 6c14b62ad7c65fddb59658d9da2eb80633fc2b2b46a2ea6d915266e38e670e0b (image=quay.io/ceph/ceph:v18, name=lucid_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:26:13 compute-0 podman[97206]: 2025-11-22 03:26:13.279070769 +0000 UTC m=+0.192561289 container attach 6c14b62ad7c65fddb59658d9da2eb80633fc2b2b46a2ea6d915266e38e670e0b (image=quay.io/ceph/ceph:v18, name=lucid_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:13 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/759956748' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Nov 22 03:26:13 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/759956748' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Nov 22 03:26:13 compute-0 podman[97291]: 2025-11-22 03:26:13.598687883 +0000 UTC m=+0.081599292 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:26:13 compute-0 podman[97291]: 2025-11-22 03:26:13.69225076 +0000 UTC m=+0.175162189 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:26:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Nov 22 03:26:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/902317587' entity='client.admin' 
Nov 22 03:26:13 compute-0 lucid_dubinsky[97254]: set ssl_option
Nov 22 03:26:13 compute-0 systemd[1]: libpod-6c14b62ad7c65fddb59658d9da2eb80633fc2b2b46a2ea6d915266e38e670e0b.scope: Deactivated successfully.
Nov 22 03:26:13 compute-0 podman[97206]: 2025-11-22 03:26:13.970716033 +0000 UTC m=+0.884206574 container died 6c14b62ad7c65fddb59658d9da2eb80633fc2b2b46a2ea6d915266e38e670e0b (image=quay.io/ceph/ceph:v18, name=lucid_dubinsky, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5bf60af3335eec5b6f3a22be280cb37b8c04fe1e45167031be3f92d16f7d008-merged.mount: Deactivated successfully.
Nov 22 03:26:14 compute-0 podman[97206]: 2025-11-22 03:26:14.03252891 +0000 UTC m=+0.946019430 container remove 6c14b62ad7c65fddb59658d9da2eb80633fc2b2b46a2ea6d915266e38e670e0b (image=quay.io/ceph/ceph:v18, name=lucid_dubinsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:14 compute-0 systemd[1]: libpod-conmon-6c14b62ad7c65fddb59658d9da2eb80633fc2b2b46a2ea6d915266e38e670e0b.scope: Deactivated successfully.
Nov 22 03:26:14 compute-0 sudo[97189]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:14 compute-0 sudo[97469]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouxzhekydfqfflivlkcmpxevwtqzcqtd ; /usr/bin/python3'
Nov 22 03:26:14 compute-0 sudo[97469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:14 compute-0 sudo[97150]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:26:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:26:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:26:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:26:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:26:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:26:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev b4e3d63e-ed6b-4e03-8219-bb16b6af3aa6 does not exist
Nov 22 03:26:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ca1045cf-be01-4130-8aea-786397d25ac4 does not exist
Nov 22 03:26:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 1e8d824a-2c76-4d47-9d04-e0cd336cefbb does not exist
Nov 22 03:26:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:26:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:26:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:26:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:26:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:26:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:14 compute-0 python3[97474]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:14 compute-0 sudo[97477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:14 compute-0 sudo[97477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:14 compute-0 sudo[97477]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:14 compute-0 podman[97498]: 2025-11-22 03:26:14.418178592 +0000 UTC m=+0.041929231 container create 838da84259b3306b3008659d686cd4b9452b8694ad450983f4fdce47eb322fd5 (image=quay.io/ceph/ceph:v18, name=bold_lalande, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:14 compute-0 systemd[1]: Started libpod-conmon-838da84259b3306b3008659d686cd4b9452b8694ad450983f4fdce47eb322fd5.scope.
Nov 22 03:26:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:14 compute-0 sudo[97515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:14 compute-0 podman[97498]: 2025-11-22 03:26:14.39924127 +0000 UTC m=+0.022992009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80b9b63b4c45eeb38e92ddd17cffb872dde2f9aca205243d1b07e85be8a6b2a7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80b9b63b4c45eeb38e92ddd17cffb872dde2f9aca205243d1b07e85be8a6b2a7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80b9b63b4c45eeb38e92ddd17cffb872dde2f9aca205243d1b07e85be8a6b2a7/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:14 compute-0 sudo[97515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:14 compute-0 sudo[97515]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:14 compute-0 podman[97498]: 2025-11-22 03:26:14.516591088 +0000 UTC m=+0.140341757 container init 838da84259b3306b3008659d686cd4b9452b8694ad450983f4fdce47eb322fd5 (image=quay.io/ceph/ceph:v18, name=bold_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:26:14 compute-0 podman[97498]: 2025-11-22 03:26:14.52799464 +0000 UTC m=+0.151745289 container start 838da84259b3306b3008659d686cd4b9452b8694ad450983f4fdce47eb322fd5 (image=quay.io/ceph/ceph:v18, name=bold_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:26:14 compute-0 podman[97498]: 2025-11-22 03:26:14.53139525 +0000 UTC m=+0.155145899 container attach 838da84259b3306b3008659d686cd4b9452b8694ad450983f4fdce47eb322fd5 (image=quay.io/ceph/ceph:v18, name=bold_lalande, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:26:14 compute-0 sudo[97546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:14 compute-0 sudo[97546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:14 compute-0 sudo[97546]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:14 compute-0 sudo[97572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:26:14 compute-0 sudo[97572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:14 compute-0 ceph-mon[75011]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/902317587' entity='client.admin' 
Nov 22 03:26:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:26:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:26:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:26:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:15 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:26:15 compute-0 ceph-mgr[75294]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Nov 22 03:26:15 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 22 03:26:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 03:26:15 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:15 compute-0 bold_lalande[97536]: Scheduled rgw.rgw update...
Nov 22 03:26:15 compute-0 podman[97656]: 2025-11-22 03:26:15.094648924 +0000 UTC m=+0.067680662 container create 7e6b6111f5d61b03a056c1ad8f8e0cd824e0b2ace253a050f0893e01d3601e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mendeleev, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:26:15 compute-0 systemd[1]: libpod-838da84259b3306b3008659d686cd4b9452b8694ad450983f4fdce47eb322fd5.scope: Deactivated successfully.
Nov 22 03:26:15 compute-0 podman[97498]: 2025-11-22 03:26:15.106280392 +0000 UTC m=+0.730031061 container died 838da84259b3306b3008659d686cd4b9452b8694ad450983f4fdce47eb322fd5 (image=quay.io/ceph/ceph:v18, name=bold_lalande, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 03:26:15 compute-0 systemd[1]: Started libpod-conmon-7e6b6111f5d61b03a056c1ad8f8e0cd824e0b2ace253a050f0893e01d3601e90.scope.
Nov 22 03:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-80b9b63b4c45eeb38e92ddd17cffb872dde2f9aca205243d1b07e85be8a6b2a7-merged.mount: Deactivated successfully.
Nov 22 03:26:15 compute-0 podman[97656]: 2025-11-22 03:26:15.065637866 +0000 UTC m=+0.038669624 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:15 compute-0 podman[97498]: 2025-11-22 03:26:15.170536873 +0000 UTC m=+0.794287512 container remove 838da84259b3306b3008659d686cd4b9452b8694ad450983f4fdce47eb322fd5 (image=quay.io/ceph/ceph:v18, name=bold_lalande, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:26:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:15 compute-0 systemd[1]: libpod-conmon-838da84259b3306b3008659d686cd4b9452b8694ad450983f4fdce47eb322fd5.scope: Deactivated successfully.
Nov 22 03:26:15 compute-0 podman[97656]: 2025-11-22 03:26:15.195243418 +0000 UTC m=+0.168275186 container init 7e6b6111f5d61b03a056c1ad8f8e0cd824e0b2ace253a050f0893e01d3601e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:26:15 compute-0 sudo[97469]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:15 compute-0 podman[97656]: 2025-11-22 03:26:15.205313635 +0000 UTC m=+0.178345363 container start 7e6b6111f5d61b03a056c1ad8f8e0cd824e0b2ace253a050f0893e01d3601e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:26:15 compute-0 podman[97656]: 2025-11-22 03:26:15.209654749 +0000 UTC m=+0.182686527 container attach 7e6b6111f5d61b03a056c1ad8f8e0cd824e0b2ace253a050f0893e01d3601e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:26:15 compute-0 epic_mendeleev[97687]: 167 167
Nov 22 03:26:15 compute-0 systemd[1]: libpod-7e6b6111f5d61b03a056c1ad8f8e0cd824e0b2ace253a050f0893e01d3601e90.scope: Deactivated successfully.
Nov 22 03:26:15 compute-0 podman[97656]: 2025-11-22 03:26:15.212780552 +0000 UTC m=+0.185812240 container died 7e6b6111f5d61b03a056c1ad8f8e0cd824e0b2ace253a050f0893e01d3601e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mendeleev, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5feedd3f783c46871390272c11458fb2965a4f0a0ad97934cf2a97707440ad58-merged.mount: Deactivated successfully.
Nov 22 03:26:15 compute-0 podman[97656]: 2025-11-22 03:26:15.252247627 +0000 UTC m=+0.225279305 container remove 7e6b6111f5d61b03a056c1ad8f8e0cd824e0b2ace253a050f0893e01d3601e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mendeleev, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:15 compute-0 systemd[1]: libpod-conmon-7e6b6111f5d61b03a056c1ad8f8e0cd824e0b2ace253a050f0893e01d3601e90.scope: Deactivated successfully.
Nov 22 03:26:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:26:15 compute-0 podman[97711]: 2025-11-22 03:26:15.492339145 +0000 UTC m=+0.062046764 container create 407c1f7fc8a6997a5eae89d491315fcfa8fbd0747e61ff5bb9dd9351b703596b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_antonelli, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:26:15 compute-0 systemd[1]: Started libpod-conmon-407c1f7fc8a6997a5eae89d491315fcfa8fbd0747e61ff5bb9dd9351b703596b.scope.
Nov 22 03:26:15 compute-0 podman[97711]: 2025-11-22 03:26:15.469574922 +0000 UTC m=+0.039282501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2935f81cd18b3f53a030174e9efb75f09302f239dda0ca2530617422854e4ceb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2935f81cd18b3f53a030174e9efb75f09302f239dda0ca2530617422854e4ceb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2935f81cd18b3f53a030174e9efb75f09302f239dda0ca2530617422854e4ceb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2935f81cd18b3f53a030174e9efb75f09302f239dda0ca2530617422854e4ceb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2935f81cd18b3f53a030174e9efb75f09302f239dda0ca2530617422854e4ceb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:15 compute-0 podman[97711]: 2025-11-22 03:26:15.610495043 +0000 UTC m=+0.180202652 container init 407c1f7fc8a6997a5eae89d491315fcfa8fbd0747e61ff5bb9dd9351b703596b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_antonelli, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:26:15 compute-0 podman[97711]: 2025-11-22 03:26:15.627165294 +0000 UTC m=+0.196872893 container start 407c1f7fc8a6997a5eae89d491315fcfa8fbd0747e61ff5bb9dd9351b703596b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_antonelli, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:26:15 compute-0 podman[97711]: 2025-11-22 03:26:15.630853242 +0000 UTC m=+0.200560841 container attach 407c1f7fc8a6997a5eae89d491315fcfa8fbd0747e61ff5bb9dd9351b703596b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_antonelli, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:26:16 compute-0 ceph-mon[75011]: from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:26:16 compute-0 ceph-mon[75011]: Saving service rgw.rgw spec with placement compute-0
Nov 22 03:26:16 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:16 compute-0 python3[97808]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:26:16 compute-0 python3[97893]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763781975.8332715-36487-85892071274812/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:16 compute-0 wonderful_antonelli[97728]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:26:16 compute-0 wonderful_antonelli[97728]: --> relative data size: 1.0
Nov 22 03:26:16 compute-0 wonderful_antonelli[97728]: --> All data devices are unavailable
Nov 22 03:26:16 compute-0 systemd[1]: libpod-407c1f7fc8a6997a5eae89d491315fcfa8fbd0747e61ff5bb9dd9351b703596b.scope: Deactivated successfully.
Nov 22 03:26:16 compute-0 podman[97711]: 2025-11-22 03:26:16.776005715 +0000 UTC m=+1.345713304 container died 407c1f7fc8a6997a5eae89d491315fcfa8fbd0747e61ff5bb9dd9351b703596b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_antonelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:16 compute-0 systemd[1]: libpod-407c1f7fc8a6997a5eae89d491315fcfa8fbd0747e61ff5bb9dd9351b703596b.scope: Consumed 1.098s CPU time.
Nov 22 03:26:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2935f81cd18b3f53a030174e9efb75f09302f239dda0ca2530617422854e4ceb-merged.mount: Deactivated successfully.
Nov 22 03:26:16 compute-0 podman[97711]: 2025-11-22 03:26:16.849390108 +0000 UTC m=+1.419097697 container remove 407c1f7fc8a6997a5eae89d491315fcfa8fbd0747e61ff5bb9dd9351b703596b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:26:16 compute-0 systemd[1]: libpod-conmon-407c1f7fc8a6997a5eae89d491315fcfa8fbd0747e61ff5bb9dd9351b703596b.scope: Deactivated successfully.
Nov 22 03:26:16 compute-0 sudo[97572]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:16 compute-0 sudo[97939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:16 compute-0 sudo[97939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:16 compute-0 sudo[97939]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:17 compute-0 sudo[97964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:17 compute-0 sudo[97964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:17 compute-0 sudo[97964]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:17 compute-0 ceph-mon[75011]: pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:17 compute-0 sudo[98012]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqasrmdtlenapcnodakfxhqvlicxrujk ; /usr/bin/python3'
Nov 22 03:26:17 compute-0 sudo[98012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:17 compute-0 sudo[98013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:17 compute-0 sudo[98013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:17 compute-0 sudo[98013]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:17 compute-0 sudo[98040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:26:17 compute-0 sudo[98040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:17 compute-0 python3[98022]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:17 compute-0 podman[98065]: 2025-11-22 03:26:17.392047277 +0000 UTC m=+0.078333745 container create 28c40fd70bda53a259e3150af19b97a9361f806e718119611cfbe3ea2a27f7d3 (image=quay.io/ceph/ceph:v18, name=youthful_bardeen, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:26:17 compute-0 systemd[1]: Started libpod-conmon-28c40fd70bda53a259e3150af19b97a9361f806e718119611cfbe3ea2a27f7d3.scope.
Nov 22 03:26:17 compute-0 podman[98065]: 2025-11-22 03:26:17.359412712 +0000 UTC m=+0.045699230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:17 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37866aa0f97533eb1b0479cf4236a44f356db7a5311d1b99c503e32da88f2493/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37866aa0f97533eb1b0479cf4236a44f356db7a5311d1b99c503e32da88f2493/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37866aa0f97533eb1b0479cf4236a44f356db7a5311d1b99c503e32da88f2493/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:17 compute-0 podman[98065]: 2025-11-22 03:26:17.511768947 +0000 UTC m=+0.198055395 container init 28c40fd70bda53a259e3150af19b97a9361f806e718119611cfbe3ea2a27f7d3 (image=quay.io/ceph/ceph:v18, name=youthful_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:26:17 compute-0 podman[98065]: 2025-11-22 03:26:17.525362487 +0000 UTC m=+0.211648955 container start 28c40fd70bda53a259e3150af19b97a9361f806e718119611cfbe3ea2a27f7d3 (image=quay.io/ceph/ceph:v18, name=youthful_bardeen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:26:17 compute-0 podman[98065]: 2025-11-22 03:26:17.531323925 +0000 UTC m=+0.217610453 container attach 28c40fd70bda53a259e3150af19b97a9361f806e718119611cfbe3ea2a27f7d3 (image=quay.io/ceph/ceph:v18, name=youthful_bardeen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:17 compute-0 podman[98125]: 2025-11-22 03:26:17.737871694 +0000 UTC m=+0.056126027 container create 244504db9746060720f7addf02135001bbfde212b24ab1affce41aa2c35870ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:17 compute-0 systemd[1]: Started libpod-conmon-244504db9746060720f7addf02135001bbfde212b24ab1affce41aa2c35870ed.scope.
Nov 22 03:26:17 compute-0 podman[98125]: 2025-11-22 03:26:17.713665332 +0000 UTC m=+0.031919715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:17 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:17 compute-0 podman[98125]: 2025-11-22 03:26:17.831743441 +0000 UTC m=+0.149997854 container init 244504db9746060720f7addf02135001bbfde212b24ab1affce41aa2c35870ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_driscoll, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:26:17 compute-0 podman[98125]: 2025-11-22 03:26:17.84360456 +0000 UTC m=+0.161858893 container start 244504db9746060720f7addf02135001bbfde212b24ab1affce41aa2c35870ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_driscoll, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:17 compute-0 agitated_driscoll[98141]: 167 167
Nov 22 03:26:17 compute-0 systemd[1]: libpod-244504db9746060720f7addf02135001bbfde212b24ab1affce41aa2c35870ed.scope: Deactivated successfully.
Nov 22 03:26:17 compute-0 podman[98125]: 2025-11-22 03:26:17.850234532 +0000 UTC m=+0.168488894 container attach 244504db9746060720f7addf02135001bbfde212b24ab1affce41aa2c35870ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_driscoll, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:26:17 compute-0 podman[98125]: 2025-11-22 03:26:17.850613568 +0000 UTC m=+0.168867921 container died 244504db9746060720f7addf02135001bbfde212b24ab1affce41aa2c35870ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_driscoll, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f5f98e29002c25e17a9b5d1028782fc8977fc42ff2ac52727583c5591a71a26-merged.mount: Deactivated successfully.
Nov 22 03:26:17 compute-0 podman[98125]: 2025-11-22 03:26:17.901018903 +0000 UTC m=+0.219273226 container remove 244504db9746060720f7addf02135001bbfde212b24ab1affce41aa2c35870ed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_driscoll, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:17 compute-0 systemd[1]: libpod-conmon-244504db9746060720f7addf02135001bbfde212b24ab1affce41aa2c35870ed.scope: Deactivated successfully.
Nov 22 03:26:18 compute-0 podman[98183]: 2025-11-22 03:26:18.088842803 +0000 UTC m=+0.047110931 container create 2f2df6eb2efcce98ffa5e2c0bf0aac4a4e6fc87766dc7d8eb2c9b3795d222cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_taussig, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 03:26:18 compute-0 systemd[1]: Started libpod-conmon-2f2df6eb2efcce98ffa5e2c0bf0aac4a4e6fc87766dc7d8eb2c9b3795d222cac.scope.
Nov 22 03:26:18 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:26:18 compute-0 ceph-mgr[75294]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 22 03:26:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Nov 22 03:26:18 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 22 03:26:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Nov 22 03:26:18 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 22 03:26:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Nov 22 03:26:18 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 22 03:26:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Nov 22 03:26:18 compute-0 podman[98183]: 2025-11-22 03:26:18.067471316 +0000 UTC m=+0.025739474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:18 compute-0 ceph-mon[75011]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 22 03:26:18 compute-0 ceph-mon[75011]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 22 03:26:18 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0[75007]: 2025-11-22T03:26:18.161+0000 7f03719be640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 22 03:26:18 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 22 03:26:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).mds e2 new map
Nov 22 03:26:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-22T03:26:18.161683+0000
                                           modified        2025-11-22T03:26:18.161719+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                            
                                            
Nov 22 03:26:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Nov 22 03:26:18 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Nov 22 03:26:18 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Nov 22 03:26:18 compute-0 ceph-mgr[75294]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 22 03:26:18 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 22 03:26:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 03:26:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6538fdc54d771834b41b9815613348cce078b0881bac3f58698322d932f50754/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6538fdc54d771834b41b9815613348cce078b0881bac3f58698322d932f50754/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6538fdc54d771834b41b9815613348cce078b0881bac3f58698322d932f50754/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6538fdc54d771834b41b9815613348cce078b0881bac3f58698322d932f50754/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:18 compute-0 podman[98183]: 2025-11-22 03:26:18.201348726 +0000 UTC m=+0.159616894 container init 2f2df6eb2efcce98ffa5e2c0bf0aac4a4e6fc87766dc7d8eb2c9b3795d222cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_taussig, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:26:18 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:18 compute-0 ceph-mgr[75294]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Nov 22 03:26:18 compute-0 podman[98183]: 2025-11-22 03:26:18.214042574 +0000 UTC m=+0.172310702 container start 2f2df6eb2efcce98ffa5e2c0bf0aac4a4e6fc87766dc7d8eb2c9b3795d222cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_taussig, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:26:18 compute-0 podman[98183]: 2025-11-22 03:26:18.217630325 +0000 UTC m=+0.175898543 container attach 2f2df6eb2efcce98ffa5e2c0bf0aac4a4e6fc87766dc7d8eb2c9b3795d222cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_taussig, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:18 compute-0 systemd[1]: libpod-28c40fd70bda53a259e3150af19b97a9361f806e718119611cfbe3ea2a27f7d3.scope: Deactivated successfully.
Nov 22 03:26:18 compute-0 podman[98065]: 2025-11-22 03:26:18.220086475 +0000 UTC m=+0.906372913 container died 28c40fd70bda53a259e3150af19b97a9361f806e718119611cfbe3ea2a27f7d3 (image=quay.io/ceph/ceph:v18, name=youthful_bardeen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 03:26:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-37866aa0f97533eb1b0479cf4236a44f356db7a5311d1b99c503e32da88f2493-merged.mount: Deactivated successfully.
Nov 22 03:26:18 compute-0 podman[98065]: 2025-11-22 03:26:18.276755427 +0000 UTC m=+0.963041864 container remove 28c40fd70bda53a259e3150af19b97a9361f806e718119611cfbe3ea2a27f7d3 (image=quay.io/ceph/ceph:v18, name=youthful_bardeen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:18 compute-0 systemd[1]: libpod-conmon-28c40fd70bda53a259e3150af19b97a9361f806e718119611cfbe3ea2a27f7d3.scope: Deactivated successfully.
Nov 22 03:26:18 compute-0 sudo[98012]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:18 compute-0 sudo[98243]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnombeqzfkmksfdwstzgjkmvtsddbchp ; /usr/bin/python3'
Nov 22 03:26:18 compute-0 sudo[98243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:18 compute-0 python3[98245]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:18 compute-0 podman[98246]: 2025-11-22 03:26:18.724701172 +0000 UTC m=+0.054423910 container create 72d147f0d1520ddac24321133f7e6844434ed7782260b5693bf07c1da10f674e (image=quay.io/ceph/ceph:v18, name=festive_mirzakhani, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:18 compute-0 systemd[1]: Started libpod-conmon-72d147f0d1520ddac24321133f7e6844434ed7782260b5693bf07c1da10f674e.scope.
Nov 22 03:26:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6602a87bcc129a18052cf8a002255070cf36d9bec380322468290ab3960746fc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6602a87bcc129a18052cf8a002255070cf36d9bec380322468290ab3960746fc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6602a87bcc129a18052cf8a002255070cf36d9bec380322468290ab3960746fc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:18 compute-0 podman[98246]: 2025-11-22 03:26:18.706084737 +0000 UTC m=+0.035807495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:18 compute-0 podman[98246]: 2025-11-22 03:26:18.814169672 +0000 UTC m=+0.143892440 container init 72d147f0d1520ddac24321133f7e6844434ed7782260b5693bf07c1da10f674e (image=quay.io/ceph/ceph:v18, name=festive_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:18 compute-0 podman[98246]: 2025-11-22 03:26:18.823306262 +0000 UTC m=+0.153029030 container start 72d147f0d1520ddac24321133f7e6844434ed7782260b5693bf07c1da10f674e (image=quay.io/ceph/ceph:v18, name=festive_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:18 compute-0 podman[98246]: 2025-11-22 03:26:18.82739545 +0000 UTC m=+0.157118217 container attach 72d147f0d1520ddac24321133f7e6844434ed7782260b5693bf07c1da10f674e (image=quay.io/ceph/ceph:v18, name=festive_mirzakhani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]: {
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:     "0": [
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:         {
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "devices": [
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "/dev/loop3"
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             ],
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_name": "ceph_lv0",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_size": "21470642176",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "name": "ceph_lv0",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "tags": {
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.crush_device_class": "",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.encrypted": "0",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.osd_id": "0",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.type": "block",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.vdo": "0"
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             },
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "type": "block",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "vg_name": "ceph_vg0"
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:         }
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:     ],
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:     "1": [
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:         {
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "devices": [
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "/dev/loop4"
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             ],
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_name": "ceph_lv1",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_size": "21470642176",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "name": "ceph_lv1",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "tags": {
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.crush_device_class": "",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.encrypted": "0",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.osd_id": "1",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.type": "block",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.vdo": "0"
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             },
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "type": "block",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "vg_name": "ceph_vg1"
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:         }
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:     ],
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:     "2": [
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:         {
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "devices": [
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "/dev/loop5"
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             ],
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_name": "ceph_lv2",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_size": "21470642176",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "name": "ceph_lv2",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "tags": {
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.crush_device_class": "",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.encrypted": "0",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.osd_id": "2",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.type": "block",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:                 "ceph.vdo": "0"
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             },
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "type": "block",
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:             "vg_name": "ceph_vg2"
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:         }
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]:     ]
Nov 22 03:26:19 compute-0 eloquent_taussig[98200]: }
Nov 22 03:26:19 compute-0 systemd[1]: libpod-2f2df6eb2efcce98ffa5e2c0bf0aac4a4e6fc87766dc7d8eb2c9b3795d222cac.scope: Deactivated successfully.
Nov 22 03:26:19 compute-0 conmon[98200]: conmon 2f2df6eb2efcce98ffa5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2f2df6eb2efcce98ffa5e2c0bf0aac4a4e6fc87766dc7d8eb2c9b3795d222cac.scope/container/memory.events
Nov 22 03:26:19 compute-0 podman[98183]: 2025-11-22 03:26:19.071871668 +0000 UTC m=+1.030139796 container died 2f2df6eb2efcce98ffa5e2c0bf0aac4a4e6fc87766dc7d8eb2c9b3795d222cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:19 compute-0 ceph-mon[75011]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:19 compute-0 ceph-mon[75011]: from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:26:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Nov 22 03:26:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Nov 22 03:26:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Nov 22 03:26:19 compute-0 ceph-mon[75011]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Nov 22 03:26:19 compute-0 ceph-mon[75011]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Nov 22 03:26:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Nov 22 03:26:19 compute-0 ceph-mon[75011]: osdmap e30: 3 total, 3 up, 3 in
Nov 22 03:26:19 compute-0 ceph-mon[75011]: fsmap cephfs:0
Nov 22 03:26:19 compute-0 ceph-mon[75011]: Saving service mds.cephfs spec with placement compute-0
Nov 22 03:26:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6538fdc54d771834b41b9815613348cce078b0881bac3f58698322d932f50754-merged.mount: Deactivated successfully.
Nov 22 03:26:19 compute-0 podman[98183]: 2025-11-22 03:26:19.157010678 +0000 UTC m=+1.115278816 container remove 2f2df6eb2efcce98ffa5e2c0bf0aac4a4e6fc87766dc7d8eb2c9b3795d222cac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:26:19 compute-0 systemd[1]: libpod-conmon-2f2df6eb2efcce98ffa5e2c0bf0aac4a4e6fc87766dc7d8eb2c9b3795d222cac.scope: Deactivated successfully.
Nov 22 03:26:19 compute-0 sudo[98040]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:19 compute-0 sudo[98303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:19 compute-0 sudo[98303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:19 compute-0 sudo[98303]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:19 compute-0 sudo[98328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:19 compute-0 sudo[98328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:19 compute-0 sudo[98328]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:19 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:26:19 compute-0 ceph-mgr[75294]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Nov 22 03:26:19 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Nov 22 03:26:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 03:26:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:19 compute-0 festive_mirzakhani[98262]: Scheduled mds.cephfs update...
Nov 22 03:26:19 compute-0 systemd[1]: libpod-72d147f0d1520ddac24321133f7e6844434ed7782260b5693bf07c1da10f674e.scope: Deactivated successfully.
Nov 22 03:26:19 compute-0 podman[98246]: 2025-11-22 03:26:19.416966988 +0000 UTC m=+0.746689746 container died 72d147f0d1520ddac24321133f7e6844434ed7782260b5693bf07c1da10f674e (image=quay.io/ceph/ceph:v18, name=festive_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-6602a87bcc129a18052cf8a002255070cf36d9bec380322468290ab3960746fc-merged.mount: Deactivated successfully.
Nov 22 03:26:19 compute-0 podman[98246]: 2025-11-22 03:26:19.471185338 +0000 UTC m=+0.800908066 container remove 72d147f0d1520ddac24321133f7e6844434ed7782260b5693bf07c1da10f674e (image=quay.io/ceph/ceph:v18, name=festive_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:19 compute-0 systemd[1]: libpod-conmon-72d147f0d1520ddac24321133f7e6844434ed7782260b5693bf07c1da10f674e.scope: Deactivated successfully.
Nov 22 03:26:19 compute-0 sudo[98354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:19 compute-0 sudo[98354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:19 compute-0 sudo[98354]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:19 compute-0 sudo[98243]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:19 compute-0 sudo[98391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:26:19 compute-0 sudo[98391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:20 compute-0 podman[98488]: 2025-11-22 03:26:20.039529233 +0000 UTC m=+0.072434557 container create 81cbdacc9218f6f5787898265b3bff70b8a144b872dd71a13dbe107af4d20cc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:26:20 compute-0 systemd[1]: Started libpod-conmon-81cbdacc9218f6f5787898265b3bff70b8a144b872dd71a13dbe107af4d20cc2.scope.
Nov 22 03:26:20 compute-0 podman[98488]: 2025-11-22 03:26:20.010660512 +0000 UTC m=+0.043565895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:20 compute-0 sudo[98549]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raljenmnznsdpeiiriizfkhapiposhih ; /usr/bin/python3'
Nov 22 03:26:20 compute-0 sudo[98549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:20 compute-0 podman[98488]: 2025-11-22 03:26:20.145165443 +0000 UTC m=+0.178070817 container init 81cbdacc9218f6f5787898265b3bff70b8a144b872dd71a13dbe107af4d20cc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:26:20 compute-0 podman[98488]: 2025-11-22 03:26:20.157157364 +0000 UTC m=+0.190062688 container start 81cbdacc9218f6f5787898265b3bff70b8a144b872dd71a13dbe107af4d20cc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:26:20 compute-0 podman[98488]: 2025-11-22 03:26:20.161206012 +0000 UTC m=+0.194111385 container attach 81cbdacc9218f6f5787898265b3bff70b8a144b872dd71a13dbe107af4d20cc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:26:20 compute-0 dreamy_saha[98550]: 167 167
Nov 22 03:26:20 compute-0 systemd[1]: libpod-81cbdacc9218f6f5787898265b3bff70b8a144b872dd71a13dbe107af4d20cc2.scope: Deactivated successfully.
Nov 22 03:26:20 compute-0 podman[98488]: 2025-11-22 03:26:20.164382138 +0000 UTC m=+0.197287472 container died 81cbdacc9218f6f5787898265b3bff70b8a144b872dd71a13dbe107af4d20cc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-67adc3559677f0ff6c0c240f484986205335945f676f303d0524d44148246c09-merged.mount: Deactivated successfully.
Nov 22 03:26:20 compute-0 podman[98488]: 2025-11-22 03:26:20.222458725 +0000 UTC m=+0.255364029 container remove 81cbdacc9218f6f5787898265b3bff70b8a144b872dd71a13dbe107af4d20cc2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_saha, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 03:26:20 compute-0 systemd[1]: libpod-conmon-81cbdacc9218f6f5787898265b3bff70b8a144b872dd71a13dbe107af4d20cc2.scope: Deactivated successfully.
Nov 22 03:26:20 compute-0 python3[98554]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 22 03:26:20 compute-0 sudo[98549]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:20 compute-0 podman[98576]: 2025-11-22 03:26:20.395857413 +0000 UTC m=+0.035411854 container create 22c7d815373a62a48caa3f22f790f808c98b5fc50c06561e3ee91108d7f19262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:26:20 compute-0 ceph-mon[75011]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:20 compute-0 ceph-mon[75011]: from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 03:26:20 compute-0 ceph-mon[75011]: Saving service mds.cephfs spec with placement compute-0
Nov 22 03:26:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:20 compute-0 systemd[1]: Started libpod-conmon-22c7d815373a62a48caa3f22f790f808c98b5fc50c06561e3ee91108d7f19262.scope.
Nov 22 03:26:20 compute-0 podman[98576]: 2025-11-22 03:26:20.379510329 +0000 UTC m=+0.019064800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e454a59fe5bde82abd3c52898431eb968696d531fd9ffa94b86425ec2abb147/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e454a59fe5bde82abd3c52898431eb968696d531fd9ffa94b86425ec2abb147/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e454a59fe5bde82abd3c52898431eb968696d531fd9ffa94b86425ec2abb147/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e454a59fe5bde82abd3c52898431eb968696d531fd9ffa94b86425ec2abb147/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:20 compute-0 podman[98576]: 2025-11-22 03:26:20.504556309 +0000 UTC m=+0.144110769 container init 22c7d815373a62a48caa3f22f790f808c98b5fc50c06561e3ee91108d7f19262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 22 03:26:20 compute-0 podman[98576]: 2025-11-22 03:26:20.509454958 +0000 UTC m=+0.149009398 container start 22c7d815373a62a48caa3f22f790f808c98b5fc50c06561e3ee91108d7f19262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:20 compute-0 podman[98576]: 2025-11-22 03:26:20.512592967 +0000 UTC m=+0.152147408 container attach 22c7d815373a62a48caa3f22f790f808c98b5fc50c06561e3ee91108d7f19262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:20 compute-0 sudo[98668]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emqgnzpwkhivjhtmyjaxdfyevcldypnx ; /usr/bin/python3'
Nov 22 03:26:20 compute-0 sudo[98668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:20 compute-0 python3[98670]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763781979.8109205-36517-162960442846663/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=f53db1b7562575e7551c1a2d8d7268945ce42dda backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:26:20 compute-0 sudo[98668]: pam_unix(sudo:session): session closed for user root
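[editor's note] The two Ansible tasks above first stat /etc/ceph/ceph.client.openstack.keyring with checksum_algorithm=sha1 and then copy the rendered ceph_key.j2 into place with an expected checksum; the modules compare SHA-1 content digests to decide whether the file needs rewriting. A minimal sketch of that comparison, assuming the path and digest from the logged tasks:

    import hashlib

    # Recompute the SHA-1 content checksum that Ansible's stat/copy modules
    # compare (checksum_algorithm=sha1 in the stat task above).
    def sha1_of(path: str) -> str:
        digest = hashlib.sha1()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    expected = "f53db1b7562575e7551c1a2d8d7268945ce42dda"  # from the copy task above
    print(sha1_of("/etc/ceph/ceph.client.openstack.keyring") == expected)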
Nov 22 03:26:21 compute-0 sudo[98718]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evnbwgpxsolfnwfhhvcfttezibxbhrqu ; /usr/bin/python3'
Nov 22 03:26:21 compute-0 sudo[98718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:21 compute-0 python3[98721]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
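[editor's note] The auth import task above shells out to podman with /etc/ceph and the admin keyring mounted into a throwaway ceph:v18 container. The same invocation in list form, every argument copied from the logged command; subprocess.run here is only a sketch of what ansible.legacy.command executes, not the module's implementation:

    import subprocess

    # The logged "ceph auth import" invocation; requires podman, the image,
    # and the mounted host paths to exist on the node.
    subprocess.run(
        [
            "podman", "run", "--rm", "--net=host", "--ipc=host",
            "--volume", "/etc/ceph:/etc/ceph:z",
            "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
            "--entrypoint", "ceph", "quay.io/ceph/ceph:v18",
            "--fsid", "7adcc38b-6484-5de6-b879-33a0309153df",
            "-c", "/etc/ceph/ceph.conf",
            "-k", "/etc/ceph/ceph.client.admin.keyring",
            "auth", "import", "-i", "/etc/ceph/ceph.client.openstack.keyring",
        ],
        check=True,
    )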
Nov 22 03:26:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:21 compute-0 podman[98732]: 2025-11-22 03:26:21.365195872 +0000 UTC m=+0.057495066 container create 3cad7120c8d1c6647906e0f7651e1f574dcca8c3b552394be1ca816fab3291ba (image=quay.io/ceph/ceph:v18, name=xenodochial_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:26:21 compute-0 systemd[1]: Started libpod-conmon-3cad7120c8d1c6647906e0f7651e1f574dcca8c3b552394be1ca816fab3291ba.scope.
Nov 22 03:26:21 compute-0 podman[98732]: 2025-11-22 03:26:21.344324376 +0000 UTC m=+0.036623570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7301e9c75eeff4a5301778d2edffa2466cb483795f60a6fb86acab77cb64297/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7301e9c75eeff4a5301778d2edffa2466cb483795f60a6fb86acab77cb64297/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:21 compute-0 podman[98732]: 2025-11-22 03:26:21.463384268 +0000 UTC m=+0.155683442 container init 3cad7120c8d1c6647906e0f7651e1f574dcca8c3b552394be1ca816fab3291ba (image=quay.io/ceph/ceph:v18, name=xenodochial_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:26:21 compute-0 podman[98732]: 2025-11-22 03:26:21.474855156 +0000 UTC m=+0.167154300 container start 3cad7120c8d1c6647906e0f7651e1f574dcca8c3b552394be1ca816fab3291ba (image=quay.io/ceph/ceph:v18, name=xenodochial_ganguly, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:26:21 compute-0 podman[98732]: 2025-11-22 03:26:21.47808682 +0000 UTC m=+0.170386004 container attach 3cad7120c8d1c6647906e0f7651e1f574dcca8c3b552394be1ca816fab3291ba (image=quay.io/ceph/ceph:v18, name=xenodochial_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:26:21 compute-0 loving_nobel[98624]: {
Nov 22 03:26:21 compute-0 loving_nobel[98624]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "osd_id": 1,
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "type": "bluestore"
Nov 22 03:26:21 compute-0 loving_nobel[98624]:     },
Nov 22 03:26:21 compute-0 loving_nobel[98624]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "osd_id": 0,
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "type": "bluestore"
Nov 22 03:26:21 compute-0 loving_nobel[98624]:     },
Nov 22 03:26:21 compute-0 loving_nobel[98624]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "osd_id": 2,
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:26:21 compute-0 loving_nobel[98624]:         "type": "bluestore"
Nov 22 03:26:21 compute-0 loving_nobel[98624]:     }
Nov 22 03:26:21 compute-0 loving_nobel[98624]: }
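[editor's note] The JSON above (container loving_nobel) is the `ceph-volume raw list --format json` result requested a few lines earlier: a map keyed by osd_uuid with one bluestore entry per OSD. A sketch reducing it to an osd_id -> device map, with the sample trimmed to a single entry copied from the output:

    import json

    # Reduce "ceph-volume raw list --format json" output (shape as logged
    # above) to an osd_id -> device map.
    raw_json_text = '''
    {
        "8bea6992-7a26-4e04-a61e-1d348ad79289": {
            "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
            "type": "bluestore"
        }
    }
    '''
    devices = {osd["osd_id"]: osd["device"] for osd in json.loads(raw_json_text).values()}
    assert devices == {0: "/dev/mapper/ceph_vg0-ceph_lv0"}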
Nov 22 03:26:21 compute-0 systemd[1]: libpod-22c7d815373a62a48caa3f22f790f808c98b5fc50c06561e3ee91108d7f19262.scope: Deactivated successfully.
Nov 22 03:26:21 compute-0 podman[98576]: 2025-11-22 03:26:21.590565743 +0000 UTC m=+1.230120184 container died 22c7d815373a62a48caa3f22f790f808c98b5fc50c06561e3ee91108d7f19262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:26:21 compute-0 systemd[1]: libpod-22c7d815373a62a48caa3f22f790f808c98b5fc50c06561e3ee91108d7f19262.scope: Consumed 1.079s CPU time.
Nov 22 03:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e454a59fe5bde82abd3c52898431eb968696d531fd9ffa94b86425ec2abb147-merged.mount: Deactivated successfully.
Nov 22 03:26:21 compute-0 podman[98576]: 2025-11-22 03:26:21.655536201 +0000 UTC m=+1.295090652 container remove 22c7d815373a62a48caa3f22f790f808c98b5fc50c06561e3ee91108d7f19262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:26:21 compute-0 systemd[1]: libpod-conmon-22c7d815373a62a48caa3f22f790f808c98b5fc50c06561e3ee91108d7f19262.scope: Deactivated successfully.
Nov 22 03:26:21 compute-0 sudo[98391]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:26:21 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:26:21 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:21 compute-0 sudo[98781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:21 compute-0 sudo[98781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:21 compute-0 sudo[98781]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:21 compute-0 sudo[98807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:26:21 compute-0 sudo[98807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:21 compute-0 sudo[98807]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:22 compute-0 sudo[98850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:22 compute-0 sudo[98850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:22 compute-0 sudo[98850]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:22 compute-0 sudo[98875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:22 compute-0 sudo[98875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:22 compute-0 sudo[98875]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Nov 22 03:26:22 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/956146935' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 22 03:26:22 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/956146935' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 22 03:26:22 compute-0 systemd[1]: libpod-3cad7120c8d1c6647906e0f7651e1f574dcca8c3b552394be1ca816fab3291ba.scope: Deactivated successfully.
Nov 22 03:26:22 compute-0 podman[98732]: 2025-11-22 03:26:22.171896551 +0000 UTC m=+0.864195725 container died 3cad7120c8d1c6647906e0f7651e1f574dcca8c3b552394be1ca816fab3291ba (image=quay.io/ceph/ceph:v18, name=xenodochial_ganguly, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:22 compute-0 sudo[98900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:22 compute-0 sudo[98900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7301e9c75eeff4a5301778d2edffa2466cb483795f60a6fb86acab77cb64297-merged.mount: Deactivated successfully.
Nov 22 03:26:22 compute-0 sudo[98900]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:22 compute-0 podman[98732]: 2025-11-22 03:26:22.224959662 +0000 UTC m=+0.917258846 container remove 3cad7120c8d1c6647906e0f7651e1f574dcca8c3b552394be1ca816fab3291ba (image=quay.io/ceph/ceph:v18, name=xenodochial_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:22 compute-0 systemd[1]: libpod-conmon-3cad7120c8d1c6647906e0f7651e1f574dcca8c3b552394be1ca816fab3291ba.scope: Deactivated successfully.
Nov 22 03:26:22 compute-0 sudo[98718]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:22 compute-0 sudo[98937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 03:26:22 compute-0 sudo[98937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:22 compute-0 ceph-mon[75011]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:22 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:22 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:22 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/956146935' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Nov 22 03:26:22 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/956146935' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Nov 22 03:26:22 compute-0 sudo[99059]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzyszdkmmkfcuufxfvgmesthtevfyslu ; /usr/bin/python3'
Nov 22 03:26:22 compute-0 sudo[99059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:22 compute-0 podman[99063]: 2025-11-22 03:26:22.934061269 +0000 UTC m=+0.090558166 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:26:22 compute-0 python3[99064]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
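[editor's note] The shell task above pipes `ceph status --format json` through `jq .monmap.num_mons` to count monitors. The same extraction without jq, reading the status JSON (e.g. the blob logged below) from stdin:

    import json
    import sys

    # Python equivalent of `jq .monmap.num_mons` applied to
    # `ceph status --format json` output piped in on stdin.
    status = json.load(sys.stdin)
    print(status["monmap"]["num_mons"])  # 1 for the single-mon cluster logged here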
Nov 22 03:26:23 compute-0 podman[99063]: 2025-11-22 03:26:23.056388597 +0000 UTC m=+0.212885534 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:26:23 compute-0 podman[99085]: 2025-11-22 03:26:23.086253019 +0000 UTC m=+0.066345565 container create 2cf63a77dcef493283e46d1f6d016c2bb14a76be600ef5afc033a067af8fbd26 (image=quay.io/ceph/ceph:v18, name=infallible_driscoll, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:23 compute-0 systemd[1]: Started libpod-conmon-2cf63a77dcef493283e46d1f6d016c2bb14a76be600ef5afc033a067af8fbd26.scope.
Nov 22 03:26:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d60f6fb5015a821dd54f369e3f439998906ca0c10acd6c4944b03b3e8fc010a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d60f6fb5015a821dd54f369e3f439998906ca0c10acd6c4944b03b3e8fc010a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:23 compute-0 podman[99085]: 2025-11-22 03:26:23.062343373 +0000 UTC m=+0.042435920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:23 compute-0 podman[99085]: 2025-11-22 03:26:23.16903109 +0000 UTC m=+0.149123637 container init 2cf63a77dcef493283e46d1f6d016c2bb14a76be600ef5afc033a067af8fbd26 (image=quay.io/ceph/ceph:v18, name=infallible_driscoll, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:23 compute-0 podman[99085]: 2025-11-22 03:26:23.175658278 +0000 UTC m=+0.155750845 container start 2cf63a77dcef493283e46d1f6d016c2bb14a76be600ef5afc033a067af8fbd26 (image=quay.io/ceph/ceph:v18, name=infallible_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:26:23 compute-0 podman[99085]: 2025-11-22 03:26:23.180812455 +0000 UTC m=+0.160905022 container attach 2cf63a77dcef493283e46d1f6d016c2bb14a76be600ef5afc033a067af8fbd26 (image=quay.io/ceph/ceph:v18, name=infallible_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 03:26:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:23 compute-0 sudo[98937]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:26:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:26:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:26:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:26:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:26:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:26:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:23 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev eac578b3-c338-46cb-9c2a-b61793fd8b38 does not exist
Nov 22 03:26:23 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev c793e661-bcd9-4d1f-9696-eacefa7f8883 does not exist
Nov 22 03:26:23 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev dfdda477-59c2-4ec9-b0f1-1f8b26acd4fb does not exist
Nov 22 03:26:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:26:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:26:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:26:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:26:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:26:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:23 compute-0 sudo[99221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:23 compute-0 sudo[99221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:23 compute-0 sudo[99221]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:23 compute-0 sudo[99246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:23 compute-0 sudo[99246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:23 compute-0 sudo[99246]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 03:26:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576196187' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 03:26:23 compute-0 infallible_driscoll[99122]: 
Nov 22 03:26:23 compute-0 infallible_driscoll[99122]: {"fsid":"7adcc38b-6484-5de6-b879-33a0309153df","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":153,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":30,"num_osds":3,"num_up_osds":3,"osd_up_since":1763781953,"num_in_osds":3,"osd_in_since":1763781923,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83763200,"bytes_avail":64328163328,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-22T03:25:37.325184+0000","services":{}},"progress_events":{}}
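[editor's note] The status blob above reports HEALTH_ERR with the two checks raised when the filesystem was created earlier in this log (MDS_ALL_DOWN, MDS_UP_LESS_THAN_MAX): cephfs exists, but no MDS daemon has started yet. A sketch that lists such checks from the "health" key, with the sample trimmed from the logged blob:

    import json

    # Summarize health checks from a `ceph status --format json` blob
    # (structure exactly as logged above).
    status = json.loads('''
    {"health": {"status": "HEALTH_ERR", "checks": {
        "MDS_ALL_DOWN": {"severity": "HEALTH_ERR",
                         "summary": {"message": "1 filesystem is offline", "count": 1},
                         "muted": false}}}}
    ''')
    for name, check in status["health"]["checks"].items():
        print(f'{check["severity"]}: {name}: {check["summary"]["message"]}')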
Nov 22 03:26:23 compute-0 systemd[1]: libpod-2cf63a77dcef493283e46d1f6d016c2bb14a76be600ef5afc033a067af8fbd26.scope: Deactivated successfully.
Nov 22 03:26:23 compute-0 podman[99085]: 2025-11-22 03:26:23.830749985 +0000 UTC m=+0.810842582 container died 2cf63a77dcef493283e46d1f6d016c2bb14a76be600ef5afc033a067af8fbd26 (image=quay.io/ceph/ceph:v18, name=infallible_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:26:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d60f6fb5015a821dd54f369e3f439998906ca0c10acd6c4944b03b3e8fc010a-merged.mount: Deactivated successfully.
Nov 22 03:26:23 compute-0 podman[99085]: 2025-11-22 03:26:23.883112686 +0000 UTC m=+0.863205233 container remove 2cf63a77dcef493283e46d1f6d016c2bb14a76be600ef5afc033a067af8fbd26 (image=quay.io/ceph/ceph:v18, name=infallible_driscoll, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:26:23 compute-0 sudo[99272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:23 compute-0 sudo[99272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:23 compute-0 sudo[99272]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:23 compute-0 systemd[1]: libpod-conmon-2cf63a77dcef493283e46d1f6d016c2bb14a76be600ef5afc033a067af8fbd26.scope: Deactivated successfully.
Nov 22 03:26:23 compute-0 sudo[99059]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:23 compute-0 sudo[99311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:26:23 compute-0 sudo[99311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:24 compute-0 sudo[99359]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-legypcsfcnovnanllrszdhcyvzrmzmki ; /usr/bin/python3'
Nov 22 03:26:24 compute-0 sudo[99359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:24 compute-0 python3[99361]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:24 compute-0 podman[99388]: 2025-11-22 03:26:24.313811495 +0000 UTC m=+0.049411991 container create 070fffc76b3ceb2cf093a54cef639c72d9d3b07f5ca6f91d39e6c04186386c13 (image=quay.io/ceph/ceph:v18, name=competent_margulis, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:26:24 compute-0 systemd[1]: Started libpod-conmon-070fffc76b3ceb2cf093a54cef639c72d9d3b07f5ca6f91d39e6c04186386c13.scope.
Nov 22 03:26:24 compute-0 podman[99410]: 2025-11-22 03:26:24.347935451 +0000 UTC m=+0.038760342 container create 30f7ed3115750e67c6dc25a3d7530659d637e69a2e77b3a7ae6aa8d0015c9c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bouman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:24 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:24 compute-0 systemd[1]: Started libpod-conmon-30f7ed3115750e67c6dc25a3d7530659d637e69a2e77b3a7ae6aa8d0015c9c36.scope.
Nov 22 03:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab9255b65c21937fe34475d6db1f38e1c885b85cec21058648690626524906e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aab9255b65c21937fe34475d6db1f38e1c885b85cec21058648690626524906e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:24 compute-0 podman[99388]: 2025-11-22 03:26:24.387844796 +0000 UTC m=+0.123445321 container init 070fffc76b3ceb2cf093a54cef639c72d9d3b07f5ca6f91d39e6c04186386c13 (image=quay.io/ceph/ceph:v18, name=competent_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:26:24 compute-0 podman[99388]: 2025-11-22 03:26:24.392717521 +0000 UTC m=+0.128318026 container start 070fffc76b3ceb2cf093a54cef639c72d9d3b07f5ca6f91d39e6c04186386c13 (image=quay.io/ceph/ceph:v18, name=competent_margulis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:26:24 compute-0 podman[99388]: 2025-11-22 03:26:24.297371464 +0000 UTC m=+0.032972000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:24 compute-0 podman[99388]: 2025-11-22 03:26:24.396257457 +0000 UTC m=+0.131857983 container attach 070fffc76b3ceb2cf093a54cef639c72d9d3b07f5ca6f91d39e6c04186386c13 (image=quay.io/ceph/ceph:v18, name=competent_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:26:24 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:24 compute-0 podman[99410]: 2025-11-22 03:26:24.413068033 +0000 UTC m=+0.103892845 container init 30f7ed3115750e67c6dc25a3d7530659d637e69a2e77b3a7ae6aa8d0015c9c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:26:24 compute-0 podman[99410]: 2025-11-22 03:26:24.417382117 +0000 UTC m=+0.108206899 container start 30f7ed3115750e67c6dc25a3d7530659d637e69a2e77b3a7ae6aa8d0015c9c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 03:26:24 compute-0 clever_bouman[99435]: 167 167
Nov 22 03:26:24 compute-0 systemd[1]: libpod-30f7ed3115750e67c6dc25a3d7530659d637e69a2e77b3a7ae6aa8d0015c9c36.scope: Deactivated successfully.
Nov 22 03:26:24 compute-0 conmon[99435]: conmon 30f7ed3115750e67c6dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30f7ed3115750e67c6dc25a3d7530659d637e69a2e77b3a7ae6aa8d0015c9c36.scope/container/memory.events
Nov 22 03:26:24 compute-0 podman[99410]: 2025-11-22 03:26:24.420302824 +0000 UTC m=+0.111127616 container attach 30f7ed3115750e67c6dc25a3d7530659d637e69a2e77b3a7ae6aa8d0015c9c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bouman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:26:24 compute-0 podman[99410]: 2025-11-22 03:26:24.421710299 +0000 UTC m=+0.112535091 container died 30f7ed3115750e67c6dc25a3d7530659d637e69a2e77b3a7ae6aa8d0015c9c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bouman, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 03:26:24 compute-0 podman[99410]: 2025-11-22 03:26:24.331227489 +0000 UTC m=+0.022052301 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-95715c0d24815a0f3fc7d350524acb13701fdf1fd4074db10837f313d416bf08-merged.mount: Deactivated successfully.
Nov 22 03:26:24 compute-0 podman[99410]: 2025-11-22 03:26:24.460760184 +0000 UTC m=+0.151584976 container remove 30f7ed3115750e67c6dc25a3d7530659d637e69a2e77b3a7ae6aa8d0015c9c36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_bouman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:24 compute-0 systemd[1]: libpod-conmon-30f7ed3115750e67c6dc25a3d7530659d637e69a2e77b3a7ae6aa8d0015c9c36.scope: Deactivated successfully.
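
The short-lived clever_bouman container above printed "167 167", which matches the uid/gid of the ceph user in Red Hat-family Ceph images; cephadm runs this kind of throwaway probe before writing files the daemons must own. A minimal sketch of reproducing the probe by hand — the exact command cephadm ran is not shown in the log, so stat(1) on /var/lib/ceph inside the same image is an assumption:

    import subprocess

    # Hedged sketch: one plausible way to reproduce the uid/gid probe. The exact
    # command cephadm ran inside clever_bouman is not visible in the log; stat(1)
    # on /var/lib/ceph in the same image is an assumption that yields "167 167".
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # expected "167 167" on Red Hat-family Ceph images
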
Nov 22 03:26:24 compute-0 ceph-mon[75011]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:26:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:26:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:26:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3576196187' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 03:26:24 compute-0 podman[99460]: 2025-11-22 03:26:24.648782337 +0000 UTC m=+0.050296405 container create 5d391fa475390c7480745b2d54781562f494418a31a585192be90df896d122b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_panini, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:26:24 compute-0 systemd[1]: Started libpod-conmon-5d391fa475390c7480745b2d54781562f494418a31a585192be90df896d122b0.scope.
Nov 22 03:26:24 compute-0 podman[99460]: 2025-11-22 03:26:24.626718683 +0000 UTC m=+0.028232771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:24 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8cd3043ca719933b6534ed749e08a7af0d7b3c05c27cde2fcb218cb5951181c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8cd3043ca719933b6534ed749e08a7af0d7b3c05c27cde2fcb218cb5951181c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8cd3043ca719933b6534ed749e08a7af0d7b3c05c27cde2fcb218cb5951181c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8cd3043ca719933b6534ed749e08a7af0d7b3c05c27cde2fcb218cb5951181c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8cd3043ca719933b6534ed749e08a7af0d7b3c05c27cde2fcb218cb5951181c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:24 compute-0 podman[99460]: 2025-11-22 03:26:24.745756214 +0000 UTC m=+0.147270303 container init 5d391fa475390c7480745b2d54781562f494418a31a585192be90df896d122b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:26:24 compute-0 podman[99460]: 2025-11-22 03:26:24.755415594 +0000 UTC m=+0.156929673 container start 5d391fa475390c7480745b2d54781562f494418a31a585192be90df896d122b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_panini, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:24 compute-0 podman[99460]: 2025-11-22 03:26:24.760626596 +0000 UTC m=+0.162140715 container attach 5d391fa475390c7480745b2d54781562f494418a31a585192be90df896d122b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_panini, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:26:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:26:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/844111970' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:26:25 compute-0 competent_margulis[99430]: 
Nov 22 03:26:25 compute-0 competent_margulis[99430]: {"epoch":1,"fsid":"7adcc38b-6484-5de6-b879-33a0309153df","modified":"2025-11-22T03:23:44.941830Z","created":"2025-11-22T03:23:44.941830Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Nov 22 03:26:25 compute-0 competent_margulis[99430]: dumped monmap epoch 1
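
The monmap JSON dumped by competent_margulis above is directly machine-readable. A minimal sketch for pulling the monitor addresses out of it, assuming the blob has been saved to a file named monmap.json (an illustrative name, not something the playbook does):

    import json

    # Load the output of `ceph mon dump --format json` (saved as monmap.json).
    with open("monmap.json") as f:
        monmap = json.load(f)

    print(f"fsid={monmap['fsid']} epoch={monmap['epoch']} quorum={monmap['quorum']}")
    for mon in monmap["mons"]:
        # Each monitor advertises a msgr2 (v2, port 3300) and a legacy (v1,
        # port 6789) address in its addrvec, as seen in the dump above.
        addrs = ", ".join(f"{a['type']}={a['addr']}"
                          for a in mon["public_addrs"]["addrvec"])
        print(f"rank {mon['rank']} mon.{mon['name']}: {addrs}")
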
Nov 22 03:26:25 compute-0 systemd[1]: libpod-070fffc76b3ceb2cf093a54cef639c72d9d3b07f5ca6f91d39e6c04186386c13.scope: Deactivated successfully.
Nov 22 03:26:25 compute-0 podman[99388]: 2025-11-22 03:26:25.044839718 +0000 UTC m=+0.780440224 container died 070fffc76b3ceb2cf093a54cef639c72d9d3b07f5ca6f91d39e6c04186386c13 (image=quay.io/ceph/ceph:v18, name=competent_margulis, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-aab9255b65c21937fe34475d6db1f38e1c885b85cec21058648690626524906e-merged.mount: Deactivated successfully.
Nov 22 03:26:25 compute-0 podman[99388]: 2025-11-22 03:26:25.083918239 +0000 UTC m=+0.819518745 container remove 070fffc76b3ceb2cf093a54cef639c72d9d3b07f5ca6f91d39e6c04186386c13 (image=quay.io/ceph/ceph:v18, name=competent_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:26:25 compute-0 systemd[1]: libpod-conmon-070fffc76b3ceb2cf093a54cef639c72d9d3b07f5ca6f91d39e6c04186386c13.scope: Deactivated successfully.
Nov 22 03:26:25 compute-0 sudo[99359]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:26:25 compute-0 sudo[99542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htweodwsaaamfkmncskyxcxtjhpfakhv ; /usr/bin/python3'
Nov 22 03:26:25 compute-0 sudo[99542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:25 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/844111970' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:26:25 compute-0 python3[99546]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:25 compute-0 podman[99555]: 2025-11-22 03:26:25.747000127 +0000 UTC m=+0.044155020 container create 7ac6f8841147cb28f82bf1f564364a633890e2c964072596b3adffb12c2cf413 (image=quay.io/ceph/ceph:v18, name=thirsty_leavitt, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:26:25 compute-0 systemd[1]: Started libpod-conmon-7ac6f8841147cb28f82bf1f564364a633890e2c964072596b3adffb12c2cf413.scope.
Nov 22 03:26:25 compute-0 podman[99555]: 2025-11-22 03:26:25.728129701 +0000 UTC m=+0.025284564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ed145695d2ae5e333a3e3d7952df901c3410a588eea957307e1a35683bbafd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3ed145695d2ae5e333a3e3d7952df901c3410a588eea957307e1a35683bbafd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:25 compute-0 podman[99555]: 2025-11-22 03:26:25.847318789 +0000 UTC m=+0.144473692 container init 7ac6f8841147cb28f82bf1f564364a633890e2c964072596b3adffb12c2cf413 (image=quay.io/ceph/ceph:v18, name=thirsty_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:25 compute-0 podman[99555]: 2025-11-22 03:26:25.858940744 +0000 UTC m=+0.156095637 container start 7ac6f8841147cb28f82bf1f564364a633890e2c964072596b3adffb12c2cf413 (image=quay.io/ceph/ceph:v18, name=thirsty_leavitt, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:25 compute-0 podman[99555]: 2025-11-22 03:26:25.86329957 +0000 UTC m=+0.160454492 container attach 7ac6f8841147cb28f82bf1f564364a633890e2c964072596b3adffb12c2cf413 (image=quay.io/ceph/ceph:v18, name=thirsty_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:26:25 compute-0 nostalgic_panini[99477]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:26:25 compute-0 nostalgic_panini[99477]: --> relative data size: 1.0
Nov 22 03:26:25 compute-0 nostalgic_panini[99477]: --> All data devices are unavailable
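
ceph-volume reporting all three LVs as unavailable here is consistent with the `lvm list` output further down: each LV already carries ceph.* lv_tags (including ceph.osd_id), i.e. the OSDs were prepared earlier, so `lvm batch` has nothing left to create. A hedged sketch of that availability test using the standard `lvs` JSON report — the tag check is one plausible reading of ceph-volume's behaviour, not its exact code path:

    import json
    import subprocess

    # Query LVM (as root) for every LV's path and tags in JSON report format.
    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_path,lv_tags"],
        capture_output=True, text=True, check=True).stdout

    for lv in json.loads(out)["report"][0]["lv"]:
        # An LV already tagged with ceph.osd_id was prepared by a previous
        # ceph-volume run, so treat it as unavailable for a new batch.
        taken = "ceph.osd_id=" in lv["lv_tags"]
        print(lv["lv_path"],
              "unavailable (already prepared)" if taken else "available")
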
Nov 22 03:26:25 compute-0 systemd[1]: libpod-5d391fa475390c7480745b2d54781562f494418a31a585192be90df896d122b0.scope: Deactivated successfully.
Nov 22 03:26:25 compute-0 podman[99460]: 2025-11-22 03:26:25.976996744 +0000 UTC m=+1.378510833 container died 5d391fa475390c7480745b2d54781562f494418a31a585192be90df896d122b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_panini, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:25 compute-0 systemd[1]: libpod-5d391fa475390c7480745b2d54781562f494418a31a585192be90df896d122b0.scope: Consumed 1.149s CPU time.
Nov 22 03:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8cd3043ca719933b6534ed749e08a7af0d7b3c05c27cde2fcb218cb5951181c-merged.mount: Deactivated successfully.
Nov 22 03:26:26 compute-0 podman[99460]: 2025-11-22 03:26:26.054968923 +0000 UTC m=+1.456482992 container remove 5d391fa475390c7480745b2d54781562f494418a31a585192be90df896d122b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:26:26 compute-0 systemd[1]: libpod-conmon-5d391fa475390c7480745b2d54781562f494418a31a585192be90df896d122b0.scope: Deactivated successfully.
Nov 22 03:26:26 compute-0 sudo[99311]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:26 compute-0 sudo[99596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:26 compute-0 sudo[99596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:26 compute-0 sudo[99596]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:26 compute-0 sudo[99621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:26 compute-0 sudo[99621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:26 compute-0 sudo[99621]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:26 compute-0 sudo[99665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:26 compute-0 sudo[99665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:26 compute-0 sudo[99665]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:26 compute-0 sudo[99690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:26:26 compute-0 sudo[99690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Nov 22 03:26:26 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/57092587' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 22 03:26:26 compute-0 thirsty_leavitt[99574]: [client.openstack]
Nov 22 03:26:26 compute-0 thirsty_leavitt[99574]:         key = AQClLCFpAAAAABAAr5ufdHAA+2+5xQHYTrvFiA==
Nov 22 03:26:26 compute-0 thirsty_leavitt[99574]:         caps mgr = "allow *"
Nov 22 03:26:26 compute-0 thirsty_leavitt[99574]:         caps mon = "profile rbd"
Nov 22 03:26:26 compute-0 thirsty_leavitt[99574]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
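
The keyring printed by thirsty_leavitt uses Ceph's plain-text keyring format: an [entity] header followed by key = ... and caps <service> = "..." lines. A minimal parser sketch for text in that shape (the function name and input handling are illustrative, not part of any Ceph API; the sample key is deliberately redacted):

    import re

    def parse_keyring(text):
        """Parse Ceph keyring text into {entity: {"key": ..., "caps": {svc: cap}}}."""
        entities, current = {}, None
        for raw in text.splitlines():
            line = raw.strip()
            if line.startswith("[") and line.endswith("]"):
                current = line[1:-1]
                entities[current] = {"key": None, "caps": {}}
            elif current is not None and line.startswith("caps"):
                m = re.match(r'caps\s+(\S+)\s*=\s*"(.*)"$', line)
                if m:
                    entities[current]["caps"][m.group(1)] = m.group(2)
            elif current is not None and line.startswith("key"):
                entities[current]["key"] = line.split("=", 1)[1].strip()
        return entities

    # Example against a stanza shaped like the one logged above:
    sample = '[client.openstack]\n key = <redacted>\n caps mon = "profile rbd"\n'
    print(parse_keyring(sample))
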
Nov 22 03:26:26 compute-0 systemd[1]: libpod-7ac6f8841147cb28f82bf1f564364a633890e2c964072596b3adffb12c2cf413.scope: Deactivated successfully.
Nov 22 03:26:26 compute-0 podman[99555]: 2025-11-22 03:26:26.555329514 +0000 UTC m=+0.852484387 container died 7ac6f8841147cb28f82bf1f564364a633890e2c964072596b3adffb12c2cf413 (image=quay.io/ceph/ceph:v18, name=thirsty_leavitt, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3ed145695d2ae5e333a3e3d7952df901c3410a588eea957307e1a35683bbafd-merged.mount: Deactivated successfully.
Nov 22 03:26:26 compute-0 podman[99555]: 2025-11-22 03:26:26.608565675 +0000 UTC m=+0.905720558 container remove 7ac6f8841147cb28f82bf1f564364a633890e2c964072596b3adffb12c2cf413 (image=quay.io/ceph/ceph:v18, name=thirsty_leavitt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:26 compute-0 systemd[1]: libpod-conmon-7ac6f8841147cb28f82bf1f564364a633890e2c964072596b3adffb12c2cf413.scope: Deactivated successfully.
Nov 22 03:26:26 compute-0 sudo[99542]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:26 compute-0 ceph-mon[75011]: pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/57092587' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Nov 22 03:26:26 compute-0 podman[99768]: 2025-11-22 03:26:26.889782296 +0000 UTC m=+0.068312122 container create c1bd2c2f706f7dacddab2fb59f18af3f3fc9855457ad619afebb7b8d3dcde11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_diffie, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:26 compute-0 systemd[1]: Started libpod-conmon-c1bd2c2f706f7dacddab2fb59f18af3f3fc9855457ad619afebb7b8d3dcde11d.scope.
Nov 22 03:26:26 compute-0 podman[99768]: 2025-11-22 03:26:26.858946719 +0000 UTC m=+0.037476614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:26 compute-0 podman[99768]: 2025-11-22 03:26:26.982292627 +0000 UTC m=+0.160822503 container init c1bd2c2f706f7dacddab2fb59f18af3f3fc9855457ad619afebb7b8d3dcde11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_diffie, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:26 compute-0 podman[99768]: 2025-11-22 03:26:26.993004283 +0000 UTC m=+0.171534079 container start c1bd2c2f706f7dacddab2fb59f18af3f3fc9855457ad619afebb7b8d3dcde11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_diffie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:26 compute-0 podman[99768]: 2025-11-22 03:26:26.996874503 +0000 UTC m=+0.175404379 container attach c1bd2c2f706f7dacddab2fb59f18af3f3fc9855457ad619afebb7b8d3dcde11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:26:26 compute-0 lucid_diffie[99784]: 167 167
Nov 22 03:26:26 compute-0 systemd[1]: libpod-c1bd2c2f706f7dacddab2fb59f18af3f3fc9855457ad619afebb7b8d3dcde11d.scope: Deactivated successfully.
Nov 22 03:26:27 compute-0 podman[99768]: 2025-11-22 03:26:26.999892926 +0000 UTC m=+0.178422752 container died c1bd2c2f706f7dacddab2fb59f18af3f3fc9855457ad619afebb7b8d3dcde11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a48c9fc62107590253a7372cf70454fe0973d62a8d18da9191dbae3f35887b69-merged.mount: Deactivated successfully.
Nov 22 03:26:27 compute-0 podman[99768]: 2025-11-22 03:26:27.050951575 +0000 UTC m=+0.229481391 container remove c1bd2c2f706f7dacddab2fb59f18af3f3fc9855457ad619afebb7b8d3dcde11d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_diffie, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:26:27 compute-0 systemd[1]: libpod-conmon-c1bd2c2f706f7dacddab2fb59f18af3f3fc9855457ad619afebb7b8d3dcde11d.scope: Deactivated successfully.
Nov 22 03:26:27 compute-0 podman[99807]: 2025-11-22 03:26:27.31285599 +0000 UTC m=+0.070613181 container create 9cca3b7587002bcb118691b8dfe539f44564e1206ce42fafecf4aed82f0cfbcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kapitsa, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:26:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:27 compute-0 systemd[1]: Started libpod-conmon-9cca3b7587002bcb118691b8dfe539f44564e1206ce42fafecf4aed82f0cfbcc.scope.
Nov 22 03:26:27 compute-0 podman[99807]: 2025-11-22 03:26:27.284413612 +0000 UTC m=+0.042170843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90645298b4e835a805cfe181b510b9f364e9e9693d7a843f1d546d8013edb1b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90645298b4e835a805cfe181b510b9f364e9e9693d7a843f1d546d8013edb1b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90645298b4e835a805cfe181b510b9f364e9e9693d7a843f1d546d8013edb1b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90645298b4e835a805cfe181b510b9f364e9e9693d7a843f1d546d8013edb1b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:27 compute-0 podman[99807]: 2025-11-22 03:26:27.433291639 +0000 UTC m=+0.191048879 container init 9cca3b7587002bcb118691b8dfe539f44564e1206ce42fafecf4aed82f0cfbcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kapitsa, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 03:26:27 compute-0 podman[99807]: 2025-11-22 03:26:27.444622264 +0000 UTC m=+0.202379455 container start 9cca3b7587002bcb118691b8dfe539f44564e1206ce42fafecf4aed82f0cfbcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kapitsa, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:26:27 compute-0 podman[99807]: 2025-11-22 03:26:27.449478854 +0000 UTC m=+0.207236105 container attach 9cca3b7587002bcb118691b8dfe539f44564e1206ce42fafecf4aed82f0cfbcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kapitsa, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 03:26:28 compute-0 sudo[99975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdszbsainpbmwhdbmvkmyotuzghshoza ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763781987.5928507-36589-195317528170003/async_wrapper.py j846277431769 30 /home/zuul/.ansible/tmp/ansible-tmp-1763781987.5928507-36589-195317528170003/AnsiballZ_command.py _'
Nov 22 03:26:28 compute-0 sudo[99975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]: {
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:     "0": [
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:         {
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "devices": [
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "/dev/loop3"
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             ],
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_name": "ceph_lv0",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_size": "21470642176",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "name": "ceph_lv0",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "tags": {
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.crush_device_class": "",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.encrypted": "0",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.osd_id": "0",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.type": "block",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.vdo": "0"
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             },
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "type": "block",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "vg_name": "ceph_vg0"
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:         }
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:     ],
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:     "1": [
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:         {
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "devices": [
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "/dev/loop4"
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             ],
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_name": "ceph_lv1",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_size": "21470642176",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "name": "ceph_lv1",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "tags": {
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.crush_device_class": "",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.encrypted": "0",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.osd_id": "1",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.type": "block",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.vdo": "0"
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             },
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "type": "block",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "vg_name": "ceph_vg1"
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:         }
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:     ],
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:     "2": [
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:         {
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "devices": [
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "/dev/loop5"
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             ],
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_name": "ceph_lv2",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_size": "21470642176",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "name": "ceph_lv2",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "tags": {
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.crush_device_class": "",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.encrypted": "0",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.osd_id": "2",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.type": "block",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:                 "ceph.vdo": "0"
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             },
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "type": "block",
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:             "vg_name": "ceph_vg2"
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:         }
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]:     ]
Nov 22 03:26:28 compute-0 exciting_kapitsa[99823]: }
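
The `lvm list --format json` document that exciting_kapitsa just emitted keys each OSD id to a list of LV records. A small sketch that reduces it to an osd-to-device summary, assuming the JSON has been captured to lvm_list.json (illustrative file name):

    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)

    # Top-level keys are OSD ids ("0", "1", "2" above); each maps to a list of
    # LV records carrying the ceph.* tags shown in the log.
    for osd_id, lvs in sorted(osds.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")
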
Nov 22 03:26:28 compute-0 systemd[1]: libpod-9cca3b7587002bcb118691b8dfe539f44564e1206ce42fafecf4aed82f0cfbcc.scope: Deactivated successfully.
Nov 22 03:26:28 compute-0 podman[99807]: 2025-11-22 03:26:28.236262248 +0000 UTC m=+0.994019409 container died 9cca3b7587002bcb118691b8dfe539f44564e1206ce42fafecf4aed82f0cfbcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kapitsa, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:28 compute-0 ansible-async_wrapper.py[99979]: Invoked with j846277431769 30 /home/zuul/.ansible/tmp/ansible-tmp-1763781987.5928507-36589-195317528170003/AnsiballZ_command.py _
Nov 22 03:26:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-90645298b4e835a805cfe181b510b9f364e9e9693d7a843f1d546d8013edb1b8-merged.mount: Deactivated successfully.
Nov 22 03:26:28 compute-0 ansible-async_wrapper.py[99991]: Starting module and watcher
Nov 22 03:26:28 compute-0 ansible-async_wrapper.py[99991]: Start watching 99992 (30)
Nov 22 03:26:28 compute-0 ansible-async_wrapper.py[99992]: Start module (99992)
Nov 22 03:26:28 compute-0 ansible-async_wrapper.py[99979]: Return async_wrapper task started.
Nov 22 03:26:28 compute-0 podman[99807]: 2025-11-22 03:26:28.307608407 +0000 UTC m=+1.065365568 container remove 9cca3b7587002bcb118691b8dfe539f44564e1206ce42fafecf4aed82f0cfbcc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:26:28 compute-0 sudo[99975]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:28 compute-0 systemd[1]: libpod-conmon-9cca3b7587002bcb118691b8dfe539f44564e1206ce42fafecf4aed82f0cfbcc.scope: Deactivated successfully.
Nov 22 03:26:28 compute-0 sudo[99690]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:28 compute-0 python3[99996]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:28 compute-0 sudo[99997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:28 compute-0 sudo[99997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:28 compute-0 sudo[99997]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:28 compute-0 podman[100020]: 2025-11-22 03:26:28.486510735 +0000 UTC m=+0.049581239 container create 0b85712fc1bf4110ce0712ee8c563aa052143b682f05da615aa763f7a9630756 (image=quay.io/ceph/ceph:v18, name=objective_goldstine, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:28 compute-0 systemd[1]: Started libpod-conmon-0b85712fc1bf4110ce0712ee8c563aa052143b682f05da615aa763f7a9630756.scope.
Nov 22 03:26:28 compute-0 sudo[100028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:28 compute-0 sudo[100028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:28 compute-0 sudo[100028]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:28 compute-0 podman[100020]: 2025-11-22 03:26:28.467875551 +0000 UTC m=+0.030946025 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6fcd1a337e74804c421960e8ff430f4106ff1e1ad13b4051637ec5cc11ffc09/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6fcd1a337e74804c421960e8ff430f4106ff1e1ad13b4051637ec5cc11ffc09/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:28 compute-0 podman[100020]: 2025-11-22 03:26:28.592227966 +0000 UTC m=+0.155298509 container init 0b85712fc1bf4110ce0712ee8c563aa052143b682f05da615aa763f7a9630756 (image=quay.io/ceph/ceph:v18, name=objective_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:28 compute-0 podman[100020]: 2025-11-22 03:26:28.604010169 +0000 UTC m=+0.167080673 container start 0b85712fc1bf4110ce0712ee8c563aa052143b682f05da615aa763f7a9630756 (image=quay.io/ceph/ceph:v18, name=objective_goldstine, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:26:28 compute-0 podman[100020]: 2025-11-22 03:26:28.60832976 +0000 UTC m=+0.171400264 container attach 0b85712fc1bf4110ce0712ee8c563aa052143b682f05da615aa763f7a9630756 (image=quay.io/ceph/ceph:v18, name=objective_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:26:28 compute-0 sudo[100066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:28 compute-0 sudo[100066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:28 compute-0 sudo[100066]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:28 compute-0 ceph-mon[75011]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:28 compute-0 sudo[100092]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:26:28 compute-0 sudo[100092]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:29 compute-0 podman[100176]: 2025-11-22 03:26:29.083510229 +0000 UTC m=+0.057548210 container create ee55b6c469d9cb92de46e2220fbf09913d0cef0e2d748a3412b72a9c03b6e0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:26:29 compute-0 systemd[1]: Started libpod-conmon-ee55b6c469d9cb92de46e2220fbf09913d0cef0e2d748a3412b72a9c03b6e0a7.scope.
Nov 22 03:26:29 compute-0 podman[100176]: 2025-11-22 03:26:29.054969925 +0000 UTC m=+0.029007966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:29 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:26:29 compute-0 objective_goldstine[100062]: 
Nov 22 03:26:29 compute-0 objective_goldstine[100062]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 03:26:29 compute-0 podman[100176]: 2025-11-22 03:26:29.188063919 +0000 UTC m=+0.162102030 container init ee55b6c469d9cb92de46e2220fbf09913d0cef0e2d748a3412b72a9c03b6e0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_panini, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:26:29 compute-0 podman[100176]: 2025-11-22 03:26:29.197901288 +0000 UTC m=+0.171939239 container start ee55b6c469d9cb92de46e2220fbf09913d0cef0e2d748a3412b72a9c03b6e0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_panini, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:29 compute-0 systemd[1]: libpod-0b85712fc1bf4110ce0712ee8c563aa052143b682f05da615aa763f7a9630756.scope: Deactivated successfully.
Nov 22 03:26:29 compute-0 affectionate_panini[100192]: 167 167
Nov 22 03:26:29 compute-0 systemd[1]: libpod-ee55b6c469d9cb92de46e2220fbf09913d0cef0e2d748a3412b72a9c03b6e0a7.scope: Deactivated successfully.
Nov 22 03:26:29 compute-0 podman[100176]: 2025-11-22 03:26:29.208786204 +0000 UTC m=+0.182824195 container attach ee55b6c469d9cb92de46e2220fbf09913d0cef0e2d748a3412b72a9c03b6e0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_panini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:29 compute-0 podman[100176]: 2025-11-22 03:26:29.209736484 +0000 UTC m=+0.183774435 container died ee55b6c469d9cb92de46e2220fbf09913d0cef0e2d748a3412b72a9c03b6e0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:26:29 compute-0 podman[100020]: 2025-11-22 03:26:29.253869611 +0000 UTC m=+0.816940115 container died 0b85712fc1bf4110ce0712ee8c563aa052143b682f05da615aa763f7a9630756 (image=quay.io/ceph/ceph:v18, name=objective_goldstine, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 03:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6fcd1a337e74804c421960e8ff430f4106ff1e1ad13b4051637ec5cc11ffc09-merged.mount: Deactivated successfully.
Nov 22 03:26:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-aea6aac21f5fcc4ee0c7a911f68c36bec8b6c7ab2304a58e19951d35f7b94af7-merged.mount: Deactivated successfully.
Nov 22 03:26:29 compute-0 podman[100176]: 2025-11-22 03:26:29.428566595 +0000 UTC m=+0.402604576 container remove ee55b6c469d9cb92de46e2220fbf09913d0cef0e2d748a3412b72a9c03b6e0a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_panini, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:26:29 compute-0 systemd[1]: libpod-conmon-ee55b6c469d9cb92de46e2220fbf09913d0cef0e2d748a3412b72a9c03b6e0a7.scope: Deactivated successfully.
Nov 22 03:26:29 compute-0 podman[100020]: 2025-11-22 03:26:29.454310088 +0000 UTC m=+1.017380551 container remove 0b85712fc1bf4110ce0712ee8c563aa052143b682f05da615aa763f7a9630756 (image=quay.io/ceph/ceph:v18, name=objective_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:29 compute-0 systemd[1]: libpod-conmon-0b85712fc1bf4110ce0712ee8c563aa052143b682f05da615aa763f7a9630756.scope: Deactivated successfully.
Nov 22 03:26:29 compute-0 sudo[100275]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgzxwvhvjhzgbbdswxvgsyrysqwfzhnl ; /usr/bin/python3'
Nov 22 03:26:29 compute-0 sudo[100275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:29 compute-0 ansible-async_wrapper.py[99992]: Module complete (99992)
Nov 22 03:26:29 compute-0 python3[100277]: ansible-ansible.legacy.async_status Invoked with jid=j846277431769.99979 mode=status _async_dir=/root/.ansible_async
Nov 22 03:26:29 compute-0 sudo[100275]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:29 compute-0 ceph-mon[75011]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:26:29 compute-0 podman[100283]: 2025-11-22 03:26:29.719868469 +0000 UTC m=+0.115662124 container create e3ae4004526db9c6749c5ff50bcf1f351717ccac445c3d818cb9ffc507a24a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:29 compute-0 podman[100283]: 2025-11-22 03:26:29.638311802 +0000 UTC m=+0.034105506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:29 compute-0 systemd[1]: Started libpod-conmon-e3ae4004526db9c6749c5ff50bcf1f351717ccac445c3d818cb9ffc507a24a6e.scope.
Nov 22 03:26:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c87824ee4aa96d3de700e3bc24d16639f06f3c26bd0ffec0cc818e81554f49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c87824ee4aa96d3de700e3bc24d16639f06f3c26bd0ffec0cc818e81554f49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c87824ee4aa96d3de700e3bc24d16639f06f3c26bd0ffec0cc818e81554f49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c87824ee4aa96d3de700e3bc24d16639f06f3c26bd0ffec0cc818e81554f49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:29 compute-0 podman[100283]: 2025-11-22 03:26:29.852146347 +0000 UTC m=+0.247940022 container init e3ae4004526db9c6749c5ff50bcf1f351717ccac445c3d818cb9ffc507a24a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:26:29 compute-0 podman[100283]: 2025-11-22 03:26:29.865860442 +0000 UTC m=+0.261654096 container start e3ae4004526db9c6749c5ff50bcf1f351717ccac445c3d818cb9ffc507a24a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:29 compute-0 podman[100283]: 2025-11-22 03:26:29.870223029 +0000 UTC m=+0.266016723 container attach e3ae4004526db9c6749c5ff50bcf1f351717ccac445c3d818cb9ffc507a24a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:26:29 compute-0 sudo[100349]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eewclldzchbqhivemiseurosmueiragy ; /usr/bin/python3'
Nov 22 03:26:29 compute-0 sudo[100349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:30 compute-0 python3[100353]: ansible-ansible.legacy.async_status Invoked with jid=j846277431769.99979 mode=cleanup _async_dir=/root/.ansible_async
Nov 22 03:26:30 compute-0 sudo[100349]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:30 compute-0 sudo[100377]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfnhbvwbdnqyauxswtvvtlrgcrzlragb ; /usr/bin/python3'
Nov 22 03:26:30 compute-0 sudo[100377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:26:30 compute-0 python3[100380]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:30 compute-0 podman[100388]: 2025-11-22 03:26:30.718946933 +0000 UTC m=+0.074392152 container create 331919cddc8785fd5c4cc3c2421be1265fe9a8eefca2981e977554ee0d4e9f79 (image=quay.io/ceph/ceph:v18, name=ecstatic_napier, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:30 compute-0 ceph-mon[75011]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:30 compute-0 systemd[1]: Started libpod-conmon-331919cddc8785fd5c4cc3c2421be1265fe9a8eefca2981e977554ee0d4e9f79.scope.
Nov 22 03:26:30 compute-0 podman[100388]: 2025-11-22 03:26:30.687031382 +0000 UTC m=+0.042476591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ce6d2f9199f3a4083dc8eea181671556bc3296e1abd25b969dac16b63c9211/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16ce6d2f9199f3a4083dc8eea181671556bc3296e1abd25b969dac16b63c9211/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:30 compute-0 podman[100388]: 2025-11-22 03:26:30.816413267 +0000 UTC m=+0.171858466 container init 331919cddc8785fd5c4cc3c2421be1265fe9a8eefca2981e977554ee0d4e9f79 (image=quay.io/ceph/ceph:v18, name=ecstatic_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:30 compute-0 podman[100388]: 2025-11-22 03:26:30.82632151 +0000 UTC m=+0.181766739 container start 331919cddc8785fd5c4cc3c2421be1265fe9a8eefca2981e977554ee0d4e9f79 (image=quay.io/ceph/ceph:v18, name=ecstatic_napier, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 03:26:30 compute-0 podman[100388]: 2025-11-22 03:26:30.830713253 +0000 UTC m=+0.186158442 container attach 331919cddc8785fd5c4cc3c2421be1265fe9a8eefca2981e977554ee0d4e9f79 (image=quay.io/ceph/ceph:v18, name=ecstatic_napier, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]: {
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "osd_id": 1,
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "type": "bluestore"
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:     },
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "osd_id": 0,
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "type": "bluestore"
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:     },
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "osd_id": 2,
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:         "type": "bluestore"
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]:     }
Nov 22 03:26:30 compute-0 unruffled_jackson[100323]: }
Nov 22 03:26:30 compute-0 systemd[1]: libpod-e3ae4004526db9c6749c5ff50bcf1f351717ccac445c3d818cb9ffc507a24a6e.scope: Deactivated successfully.
Nov 22 03:26:30 compute-0 podman[100283]: 2025-11-22 03:26:30.88029038 +0000 UTC m=+1.276084025 container died e3ae4004526db9c6749c5ff50bcf1f351717ccac445c3d818cb9ffc507a24a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:30 compute-0 systemd[1]: libpod-e3ae4004526db9c6749c5ff50bcf1f351717ccac445c3d818cb9ffc507a24a6e.scope: Consumed 1.022s CPU time.
Nov 22 03:26:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-92c87824ee4aa96d3de700e3bc24d16639f06f3c26bd0ffec0cc818e81554f49-merged.mount: Deactivated successfully.
Nov 22 03:26:30 compute-0 podman[100283]: 2025-11-22 03:26:30.965149516 +0000 UTC m=+1.360943171 container remove e3ae4004526db9c6749c5ff50bcf1f351717ccac445c3d818cb9ffc507a24a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:26:30 compute-0 systemd[1]: libpod-conmon-e3ae4004526db9c6749c5ff50bcf1f351717ccac445c3d818cb9ffc507a24a6e.scope: Deactivated successfully.
Nov 22 03:26:31 compute-0 sudo[100092]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:26:31 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:26:31 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:31 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev 6a18d6de-cc12-4222-9777-e4fb61d72a9a (Updating rgw.rgw deployment (+1 -> 1))
Nov 22 03:26:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lgafpt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Nov 22 03:26:31 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lgafpt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 22 03:26:31 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lgafpt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 22 03:26:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Nov 22 03:26:31 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:26:31 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:31 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.lgafpt on compute-0
Nov 22 03:26:31 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.lgafpt on compute-0
Nov 22 03:26:31 compute-0 sudo[100440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:31 compute-0 sudo[100440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:31 compute-0 sudo[100440]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:31 compute-0 sudo[100465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:31 compute-0 sudo[100465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:31 compute-0 sudo[100465]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:31 compute-0 sudo[100509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:31 compute-0 sudo[100509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:31 compute-0 sudo[100509]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:31 compute-0 sudo[100534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:26:31 compute-0 sudo[100534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:31 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:26:31 compute-0 ecstatic_napier[100418]: 
Nov 22 03:26:31 compute-0 ecstatic_napier[100418]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Nov 22 03:26:31 compute-0 systemd[1]: libpod-331919cddc8785fd5c4cc3c2421be1265fe9a8eefca2981e977554ee0d4e9f79.scope: Deactivated successfully.
Nov 22 03:26:31 compute-0 conmon[100418]: conmon 331919cddc8785fd5c4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-331919cddc8785fd5c4cc3c2421be1265fe9a8eefca2981e977554ee0d4e9f79.scope/container/memory.events
Nov 22 03:26:31 compute-0 podman[100388]: 2025-11-22 03:26:31.469530687 +0000 UTC m=+0.824975895 container died 331919cddc8785fd5c4cc3c2421be1265fe9a8eefca2981e977554ee0d4e9f79 (image=quay.io/ceph/ceph:v18, name=ecstatic_napier, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:26:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-16ce6d2f9199f3a4083dc8eea181671556bc3296e1abd25b969dac16b63c9211-merged.mount: Deactivated successfully.
Nov 22 03:26:31 compute-0 podman[100388]: 2025-11-22 03:26:31.518869776 +0000 UTC m=+0.874314965 container remove 331919cddc8785fd5c4cc3c2421be1265fe9a8eefca2981e977554ee0d4e9f79 (image=quay.io/ceph/ceph:v18, name=ecstatic_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 03:26:31 compute-0 systemd[1]: libpod-conmon-331919cddc8785fd5c4cc3c2421be1265fe9a8eefca2981e977554ee0d4e9f79.scope: Deactivated successfully.
Nov 22 03:26:31 compute-0 sudo[100377]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:31 compute-0 podman[100614]: 2025-11-22 03:26:31.724677321 +0000 UTC m=+0.055606958 container create ec2f60eeffef198ccf5942d2b48669dcc904800cc2d36e4441150fb945595508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 03:26:31 compute-0 systemd[1]: Started libpod-conmon-ec2f60eeffef198ccf5942d2b48669dcc904800cc2d36e4441150fb945595508.scope.
Nov 22 03:26:31 compute-0 podman[100614]: 2025-11-22 03:26:31.693119633 +0000 UTC m=+0.024049320 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:31 compute-0 podman[100614]: 2025-11-22 03:26:31.80939871 +0000 UTC m=+0.140328397 container init ec2f60eeffef198ccf5942d2b48669dcc904800cc2d36e4441150fb945595508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:31 compute-0 podman[100614]: 2025-11-22 03:26:31.820610337 +0000 UTC m=+0.151539964 container start ec2f60eeffef198ccf5942d2b48669dcc904800cc2d36e4441150fb945595508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:31 compute-0 podman[100614]: 2025-11-22 03:26:31.824561968 +0000 UTC m=+0.155491645 container attach ec2f60eeffef198ccf5942d2b48669dcc904800cc2d36e4441150fb945595508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:31 compute-0 objective_brahmagupta[100631]: 167 167
Nov 22 03:26:31 compute-0 systemd[1]: libpod-ec2f60eeffef198ccf5942d2b48669dcc904800cc2d36e4441150fb945595508.scope: Deactivated successfully.
Nov 22 03:26:31 compute-0 podman[100614]: 2025-11-22 03:26:31.827902502 +0000 UTC m=+0.158832159 container died ec2f60eeffef198ccf5942d2b48669dcc904800cc2d36e4441150fb945595508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9120894846e20fa264147d21153ba89e7b372741020e9445a208cc477a9b2ffe-merged.mount: Deactivated successfully.
Nov 22 03:26:31 compute-0 podman[100614]: 2025-11-22 03:26:31.885520967 +0000 UTC m=+0.216450604 container remove ec2f60eeffef198ccf5942d2b48669dcc904800cc2d36e4441150fb945595508 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:26:31 compute-0 systemd[1]: libpod-conmon-ec2f60eeffef198ccf5942d2b48669dcc904800cc2d36e4441150fb945595508.scope: Deactivated successfully.
Nov 22 03:26:31 compute-0 systemd[1]: Reloading.
Nov 22 03:26:32 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:32 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:32 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lgafpt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Nov 22 03:26:32 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.lgafpt", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Nov 22 03:26:32 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:32 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:32 compute-0 ceph-mon[75011]: Deploying daemon rgw.rgw.compute-0.lgafpt on compute-0
Nov 22 03:26:32 compute-0 systemd-rc-local-generator[100671]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:26:32 compute-0 systemd-sysv-generator[100675]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:26:32 compute-0 sudo[100709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-molwkzfncjcvocqpkqfcrbjxhpuejbka ; /usr/bin/python3'
Nov 22 03:26:32 compute-0 sudo[100709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:32 compute-0 systemd[1]: Reloading.
Nov 22 03:26:32 compute-0 systemd-rc-local-generator[100743]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:26:32 compute-0 systemd-sysv-generator[100746]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:26:32 compute-0 python3[100714]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:32 compute-0 podman[100751]: 2025-11-22 03:26:32.528851625 +0000 UTC m=+0.077301774 container create b576dad25b1926b40c0aac3b83e356a7699b07ce0c0d5b9906a3f49f217cd9ba (image=quay.io/ceph/ceph:v18, name=vigorous_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:32 compute-0 systemd[1]: Started libpod-conmon-b576dad25b1926b40c0aac3b83e356a7699b07ce0c0d5b9906a3f49f217cd9ba.scope.
Nov 22 03:26:32 compute-0 podman[100751]: 2025-11-22 03:26:32.494889544 +0000 UTC m=+0.043339743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:32 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.lgafpt for 7adcc38b-6484-5de6-b879-33a0309153df...
Nov 22 03:26:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa3ab8b2f74b8258b36b7204db73829a9b41fca8fe1c708cd4538aaeb65b6e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aa3ab8b2f74b8258b36b7204db73829a9b41fca8fe1c708cd4538aaeb65b6e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:32 compute-0 podman[100751]: 2025-11-22 03:26:32.6310793 +0000 UTC m=+0.179529488 container init b576dad25b1926b40c0aac3b83e356a7699b07ce0c0d5b9906a3f49f217cd9ba (image=quay.io/ceph/ceph:v18, name=vigorous_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:26:32 compute-0 podman[100751]: 2025-11-22 03:26:32.643733235 +0000 UTC m=+0.192183394 container start b576dad25b1926b40c0aac3b83e356a7699b07ce0c0d5b9906a3f49f217cd9ba (image=quay.io/ceph/ceph:v18, name=vigorous_williamson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:26:32 compute-0 podman[100751]: 2025-11-22 03:26:32.648721348 +0000 UTC m=+0.197171576 container attach b576dad25b1926b40c0aac3b83e356a7699b07ce0c0d5b9906a3f49f217cd9ba (image=quay.io/ceph/ceph:v18, name=vigorous_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:26:32 compute-0 podman[100820]: 2025-11-22 03:26:32.92456125 +0000 UTC m=+0.069489811 container create e0a8ea03d56e7415dd6f7fc78b35a210d612a04db02c18c57cf38f34a203fc0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-rgw-rgw-compute-0-lgafpt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:32 compute-0 podman[100820]: 2025-11-22 03:26:32.888343329 +0000 UTC m=+0.033271810 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ae4aec1d69e9ad32b92a805586a037396a42ad15eff87666cf0c200a4d11fbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ae4aec1d69e9ad32b92a805586a037396a42ad15eff87666cf0c200a4d11fbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ae4aec1d69e9ad32b92a805586a037396a42ad15eff87666cf0c200a4d11fbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ae4aec1d69e9ad32b92a805586a037396a42ad15eff87666cf0c200a4d11fbf/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.lgafpt supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:33 compute-0 podman[100820]: 2025-11-22 03:26:33.01539458 +0000 UTC m=+0.160323070 container init e0a8ea03d56e7415dd6f7fc78b35a210d612a04db02c18c57cf38f34a203fc0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-rgw-rgw-compute-0-lgafpt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:33 compute-0 podman[100820]: 2025-11-22 03:26:33.028242498 +0000 UTC m=+0.173170939 container start e0a8ea03d56e7415dd6f7fc78b35a210d612a04db02c18c57cf38f34a203fc0f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-rgw-rgw-compute-0-lgafpt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:26:33 compute-0 bash[100820]: e0a8ea03d56e7415dd6f7fc78b35a210d612a04db02c18c57cf38f34a203fc0f
Nov 22 03:26:33 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.lgafpt for 7adcc38b-6484-5de6-b879-33a0309153df.
Nov 22 03:26:33 compute-0 sudo[100534]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:33 compute-0 ceph-mon[75011]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:33 compute-0 ceph-mon[75011]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:26:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:26:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:26:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 03:26:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:33 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev 6a18d6de-cc12-4222-9777-e4fb61d72a9a (Updating rgw.rgw deployment (+1 -> 1))
Nov 22 03:26:33 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event 6a18d6de-cc12-4222-9777-e4fb61d72a9a (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Nov 22 03:26:33 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Nov 22 03:26:33 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Nov 22 03:26:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 03:26:33 compute-0 radosgw[100858]: deferred set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:26:33 compute-0 radosgw[100858]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Nov 22 03:26:33 compute-0 radosgw[100858]: framework: beast
Nov 22 03:26:33 compute-0 radosgw[100858]: framework conf key: endpoint, val: 192.168.122.100:8082
Nov 22 03:26:33 compute-0 radosgw[100858]: init_numa not setting numa affinity
Nov 22 03:26:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Nov 22 03:26:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:33 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev 0f17082f-662e-4fd8-a927-a281dcbc6bf2 (Updating mds.cephfs deployment (+1 -> 1))
Nov 22 03:26:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fzlata", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Nov 22 03:26:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fzlata", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 22 03:26:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fzlata", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 22 03:26:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:26:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:33 compute-0 ceph-mgr[75294]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.fzlata on compute-0
Nov 22 03:26:33 compute-0 ceph-mgr[75294]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.fzlata on compute-0
Nov 22 03:26:33 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:26:33 compute-0 sudo[100920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:33 compute-0 vigorous_williamson[100769]: 
Nov 22 03:26:33 compute-0 vigorous_williamson[100769]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Nov 22 03:26:33 compute-0 sudo[100920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:33 compute-0 sudo[100920]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:33 compute-0 systemd[1]: libpod-b576dad25b1926b40c0aac3b83e356a7699b07ce0c0d5b9906a3f49f217cd9ba.scope: Deactivated successfully.
Nov 22 03:26:33 compute-0 conmon[100769]: conmon b576dad25b1926b40c0a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b576dad25b1926b40c0aac3b83e356a7699b07ce0c0d5b9906a3f49f217cd9ba.scope/container/memory.events
Nov 22 03:26:33 compute-0 podman[100751]: 2025-11-22 03:26:33.25916502 +0000 UTC m=+0.807615179 container died b576dad25b1926b40c0aac3b83e356a7699b07ce0c0d5b9906a3f49f217cd9ba (image=quay.io/ceph/ceph:v18, name=vigorous_williamson, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:33 compute-0 ansible-async_wrapper.py[99991]: Done in kid B.
Nov 22 03:26:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-0aa3ab8b2f74b8258b36b7204db73829a9b41fca8fe1c708cd4538aaeb65b6e6-merged.mount: Deactivated successfully.
Nov 22 03:26:33 compute-0 podman[100751]: 2025-11-22 03:26:33.325647902 +0000 UTC m=+0.874098051 container remove b576dad25b1926b40c0aac3b83e356a7699b07ce0c0d5b9906a3f49f217cd9ba (image=quay.io/ceph/ceph:v18, name=vigorous_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:33 compute-0 sudo[100947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:33 compute-0 sudo[100947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:33 compute-0 systemd[1]: libpod-conmon-b576dad25b1926b40c0aac3b83e356a7699b07ce0c0d5b9906a3f49f217cd9ba.scope: Deactivated successfully.
Nov 22 03:26:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:33 compute-0 sudo[100947]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:33 compute-0 sudo[100709]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:33 compute-0 sudo[100982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:33 compute-0 sudo[100982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:33 compute-0 sudo[100982]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:33 compute-0 sudo[101007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 _orch deploy --fsid 7adcc38b-6484-5de6-b879-33a0309153df
Nov 22 03:26:33 compute-0 sudo[101007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:34 compute-0 podman[101074]: 2025-11-22 03:26:34.00748482 +0000 UTC m=+0.066836784 container create 717f3c2d6ab151387ff8b770cf67315a06de13238eb1718795f38ce0727413fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:26:34 compute-0 systemd[1]: Started libpod-conmon-717f3c2d6ab151387ff8b770cf67315a06de13238eb1718795f38ce0727413fc.scope.
Nov 22 03:26:34 compute-0 podman[101074]: 2025-11-22 03:26:33.979901264 +0000 UTC m=+0.039253278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:34 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:34 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:34 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:34 compute-0 ceph-mon[75011]: Saving service rgw.rgw spec with placement compute-0
Nov 22 03:26:34 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:34 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:34 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fzlata", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Nov 22 03:26:34 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.fzlata", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Nov 22 03:26:34 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:34 compute-0 ceph-mon[75011]: Deploying daemon mds.cephfs.compute-0.fzlata on compute-0
Nov 22 03:26:34 compute-0 ceph-mon[75011]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:26:34 compute-0 podman[101074]: 2025-11-22 03:26:34.105401226 +0000 UTC m=+0.164753259 container init 717f3c2d6ab151387ff8b770cf67315a06de13238eb1718795f38ce0727413fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:26:34 compute-0 podman[101074]: 2025-11-22 03:26:34.119340843 +0000 UTC m=+0.178692827 container start 717f3c2d6ab151387ff8b770cf67315a06de13238eb1718795f38ce0727413fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Nov 22 03:26:34 compute-0 nice_elbakyan[101090]: 167 167
Nov 22 03:26:34 compute-0 systemd[1]: libpod-717f3c2d6ab151387ff8b770cf67315a06de13238eb1718795f38ce0727413fc.scope: Deactivated successfully.
Nov 22 03:26:34 compute-0 podman[101074]: 2025-11-22 03:26:34.130962479 +0000 UTC m=+0.190314463 container attach 717f3c2d6ab151387ff8b770cf67315a06de13238eb1718795f38ce0727413fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_elbakyan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:34 compute-0 podman[101074]: 2025-11-22 03:26:34.132114876 +0000 UTC m=+0.191466869 container died 717f3c2d6ab151387ff8b770cf67315a06de13238eb1718795f38ce0727413fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_elbakyan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Nov 22 03:26:34 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Nov 22 03:26:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Nov 22 03:26:34 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1635659822' entity='client.rgw.rgw.compute-0.lgafpt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 22 03:26:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ab887f462208690589b52ea4f8631b91a9fc0b1ab201e05f563f28677a08958-merged.mount: Deactivated successfully.
Nov 22 03:26:34 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 31 pg[8.0( empty local-lis/les=0/0 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [1] r=0 lpr=31 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:34 compute-0 podman[101074]: 2025-11-22 03:26:34.20026544 +0000 UTC m=+0.259617434 container remove 717f3c2d6ab151387ff8b770cf67315a06de13238eb1718795f38ce0727413fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_elbakyan, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:26:34 compute-0 systemd[1]: libpod-conmon-717f3c2d6ab151387ff8b770cf67315a06de13238eb1718795f38ce0727413fc.scope: Deactivated successfully.
Nov 22 03:26:34 compute-0 sudo[101130]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epopwijrunihzoyspcccimjjqcqonfwt ; /usr/bin/python3'
Nov 22 03:26:34 compute-0 sudo[101130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:34 compute-0 systemd[1]: Reloading.
Nov 22 03:26:34 compute-0 systemd-sysv-generator[101164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:26:34 compute-0 systemd-rc-local-generator[101159]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:26:34 compute-0 python3[101134]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:34 compute-0 podman[101170]: 2025-11-22 03:26:34.4728102 +0000 UTC m=+0.052507963 container create 55ca373d679392dfca07b4acb66ca91902a978d5c53939a79eb6631908c01435 (image=quay.io/ceph/ceph:v18, name=elated_mendel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:34 compute-0 podman[101170]: 2025-11-22 03:26:34.451897744 +0000 UTC m=+0.031595487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:34 compute-0 systemd[1]: Started libpod-conmon-55ca373d679392dfca07b4acb66ca91902a978d5c53939a79eb6631908c01435.scope.
Nov 22 03:26:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50928cdc574918deb76cd1f4a66fbeb69385e64585de69a279236316fa1ec2bb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50928cdc574918deb76cd1f4a66fbeb69385e64585de69a279236316fa1ec2bb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:34 compute-0 podman[101170]: 2025-11-22 03:26:34.661427665 +0000 UTC m=+0.241125468 container init 55ca373d679392dfca07b4acb66ca91902a978d5c53939a79eb6631908c01435 (image=quay.io/ceph/ceph:v18, name=elated_mendel, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:26:34 compute-0 systemd[1]: Reloading.
Nov 22 03:26:34 compute-0 podman[101170]: 2025-11-22 03:26:34.676204103 +0000 UTC m=+0.255901856 container start 55ca373d679392dfca07b4acb66ca91902a978d5c53939a79eb6631908c01435 (image=quay.io/ceph/ceph:v18, name=elated_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:26:34 compute-0 podman[101170]: 2025-11-22 03:26:34.681115713 +0000 UTC m=+0.260813516 container attach 55ca373d679392dfca07b4acb66ca91902a978d5c53939a79eb6631908c01435 (image=quay.io/ceph/ceph:v18, name=elated_mendel, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:26:34 compute-0 systemd-sysv-generator[101218]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:26:34 compute-0 systemd-rc-local-generator[101215]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:26:34 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.fzlata for 7adcc38b-6484-5de6-b879-33a0309153df...
Nov 22 03:26:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1635659822' entity='client.rgw.rgw.compute-0.lgafpt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 22 03:26:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Nov 22 03:26:35 compute-0 ceph-mon[75011]: pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:35 compute-0 ceph-mon[75011]: osdmap e31: 3 total, 3 up, 3 in
Nov 22 03:26:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1635659822' entity='client.rgw.rgw.compute-0.lgafpt' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Nov 22 03:26:35 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=31/32 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [1] r=0 lpr=31 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:35 compute-0 podman[101297]: 2025-11-22 03:26:35.249617541 +0000 UTC m=+0.052866575 container create f9a79bf39dbd47288fdb30fdddc7cf5ab32391339f7de9f562217187ef1229f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mds-cephfs-compute-0-fzlata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:26:35 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:26:35 compute-0 elated_mendel[101187]: 
Nov 22 03:26:35 compute-0 elated_mendel[101187]: [{"container_id": "0cd55069c528", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.54%", "created": "2025-11-22T03:25:05.220499Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-11-22T03:25:05.297054Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T03:26:23.603623Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2025-11-22T03:25:05.082973Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-7adcc38b-6484-5de6-b879-33a0309153df@crash.compute-0", "version": "18.2.7"}, {"container_id": "b2284a130048", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "30.49%", "created": "2025-11-22T03:23:52.799975Z", "daemon_id": "compute-0.wbwfxq", "daemon_name": "mgr.compute-0.wbwfxq", "daemon_type": "mgr", "events": ["2025-11-22T03:25:10.466151Z daemon:mgr.compute-0.wbwfxq [INFO] \"Reconfigured mgr.compute-0.wbwfxq on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T03:26:23.603563Z", "memory_usage": 547880960, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-11-22T03:23:52.677024Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-7adcc38b-6484-5de6-b879-33a0309153df@mgr.compute-0.wbwfxq", "version": "18.2.7"}, {"container_id": "ae4dd69c2f80", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "2.37%", "created": "2025-11-22T03:23:47.420836Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-11-22T03:25:09.699630Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T03:26:23.603477Z", "memory_request": 2147483648, "memory_usage": 37717278, "ports": [], "service_name": "mon", "started": "2025-11-22T03:23:50.291697Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-7adcc38b-6484-5de6-b879-33a0309153df@mon.compute-0", "version": "18.2.7"}, {"container_id": "36937a248e0f", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.02%", "created": "2025-11-22T03:25:34.287451Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-11-22T03:25:34.339290Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T03:26:23.603682Z", "memory_request": 4294967296, "memory_usage": 56350474, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-22T03:25:34.165703Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-7adcc38b-6484-5de6-b879-33a0309153df@osd.0", "version": "18.2.7"}, {"container_id": "bddca431f46f", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.07%", "created": "2025-11-22T03:25:39.362502Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-11-22T03:25:39.433348Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T03:26:23.603738Z", "memory_request": 4294967296, "memory_usage": 60104376, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-22T03:25:39.195300Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-7adcc38b-6484-5de6-b879-33a0309153df@osd.1", "version": "18.2.7"}, {"container_id": "3a066733ca2b", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "2.46%", "created": "2025-11-22T03:25:44.401638Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-11-22T03:25:44.519226Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-11-22T03:26:23.603795Z", "memory_request": 4294967296, "memory_usage": 56004444, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-11-22T03:25:44.198065Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-7adcc38b-6484-5de6-b879-33a0309153df@osd.2", "version": "18.2.7"}, {"daemon_id": "rgw.compute-0.lgafpt", "daemon_name": "rgw.rgw.compute-0.lgafpt", "daemon_type": "rgw", "events": ["2025-11-22T03:26:33.097599Z daemon:rgw.rgw.compute-0.lgafpt [INFO] \"Deployed rgw.rgw.compute-0.lgafpt on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "ports": [8082], "service_name": "rgw.rgw", "status": 2, "status_desc": "starting"}]
Nov 22 03:26:35 compute-0 systemd[1]: libpod-55ca373d679392dfca07b4acb66ca91902a978d5c53939a79eb6631908c01435.scope: Deactivated successfully.
Nov 22 03:26:35 compute-0 podman[101170]: 2025-11-22 03:26:35.287711367 +0000 UTC m=+0.867409140 container died 55ca373d679392dfca07b4acb66ca91902a978d5c53939a79eb6631908c01435 (image=quay.io/ceph/ceph:v18, name=elated_mendel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:26:35 compute-0 podman[101297]: 2025-11-22 03:26:35.22700577 +0000 UTC m=+0.030254814 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-50928cdc574918deb76cd1f4a66fbeb69385e64585de69a279236316fa1ec2bb-merged.mount: Deactivated successfully.
Nov 22 03:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98219e46f3eea52a7c20d8b0fab09a3686c85d5f260b289d0bcf70d485af9fdc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98219e46f3eea52a7c20d8b0fab09a3686c85d5f260b289d0bcf70d485af9fdc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98219e46f3eea52a7c20d8b0fab09a3686c85d5f260b289d0bcf70d485af9fdc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98219e46f3eea52a7c20d8b0fab09a3686c85d5f260b289d0bcf70d485af9fdc/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.fzlata supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v82: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:35 compute-0 podman[101170]: 2025-11-22 03:26:35.354931548 +0000 UTC m=+0.934629311 container remove 55ca373d679392dfca07b4acb66ca91902a978d5c53939a79eb6631908c01435 (image=quay.io/ceph/ceph:v18, name=elated_mendel, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:35 compute-0 podman[101297]: 2025-11-22 03:26:35.366573419 +0000 UTC m=+0.169822513 container init f9a79bf39dbd47288fdb30fdddc7cf5ab32391339f7de9f562217187ef1229f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mds-cephfs-compute-0-fzlata, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:26:35 compute-0 systemd[1]: libpod-conmon-55ca373d679392dfca07b4acb66ca91902a978d5c53939a79eb6631908c01435.scope: Deactivated successfully.
Nov 22 03:26:35 compute-0 podman[101297]: 2025-11-22 03:26:35.373430531 +0000 UTC m=+0.176679575 container start f9a79bf39dbd47288fdb30fdddc7cf5ab32391339f7de9f562217187ef1229f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mds-cephfs-compute-0-fzlata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:35 compute-0 bash[101297]: f9a79bf39dbd47288fdb30fdddc7cf5ab32391339f7de9f562217187ef1229f6
Nov 22 03:26:35 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.fzlata for 7adcc38b-6484-5de6-b879-33a0309153df.
Nov 22 03:26:35 compute-0 sudo[101130]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:35 compute-0 sudo[101007]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:35 compute-0 ceph-mds[101332]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:26:35 compute-0 ceph-mds[101332]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Nov 22 03:26:35 compute-0 ceph-mds[101332]: main not setting numa affinity
Nov 22 03:26:35 compute-0 ceph-mds[101332]: pidfile_write: ignore empty --pid-file
Nov 22 03:26:35 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mds-cephfs-compute-0-fzlata[101324]: starting mds.cephfs.compute-0.fzlata at 
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata Updating MDS map to version 2 from mon.0
Nov 22 03:26:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).mds e2 assigned standby [v2:192.168.122.100:6814/3519558385,v1:192.168.122.100:6815/3519558385] as mds.0
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.fzlata assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 03:26:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:26:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 03:26:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).mds e3 new map
Nov 22 03:26:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        3
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-22T03:26:18.161683+0000
                                           modified        2025-11-22T03:26:35.488912+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14269}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.fzlata{0:14269} state up:creating seq 1 addr [v2:192.168.122.100:6814/3519558385,v1:192.168.122.100:6815/3519558385] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata Updating MDS map to version 3 from mon.0
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.3 handle_mds_map i am now mds.0.3
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.3 handle_mds_map state change up:standby --> up:creating
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.cache creating system inode with ino:0x1
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.cache creating system inode with ino:0x100
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.cache creating system inode with ino:0x600
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.cache creating system inode with ino:0x601
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.cache creating system inode with ino:0x602
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.cache creating system inode with ino:0x603
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.cache creating system inode with ino:0x604
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.cache creating system inode with ino:0x605
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.cache creating system inode with ino:0x606
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3519558385,v1:192.168.122.100:6815/3519558385] up:boot
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.fzlata=up:creating}
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.cache creating system inode with ino:0x607
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.cache creating system inode with ino:0x608
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.cache creating system inode with ino:0x609
Nov 22 03:26:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.fzlata"} v 0) v1
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.fzlata"}]: dispatch
Nov 22 03:26:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).mds e3 all = 0
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:35 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev 0f17082f-662e-4fd8-a927-a281dcbc6bf2 (Updating mds.cephfs deployment (+1 -> 1))
Nov 22 03:26:35 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event 0f17082f-662e-4fd8-a927-a281dcbc6bf2 (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Nov 22 03:26:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:35 compute-0 ceph-mds[101332]: mds.0.3 creating_done
Nov 22 03:26:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.fzlata is now active in filesystem cephfs as rank 0
Nov 22 03:26:35 compute-0 sudo[101360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:35 compute-0 sudo[101360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:35 compute-0 sudo[101360]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:35 compute-0 sudo[101385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:26:35 compute-0 sudo[101385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:35 compute-0 sudo[101385]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:35 compute-0 sudo[101410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:35 compute-0 sudo[101410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:35 compute-0 sudo[101410]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:35 compute-0 sudo[101435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:35 compute-0 sudo[101435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:35 compute-0 sudo[101435]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:35 compute-0 sudo[101460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:35 compute-0 sudo[101460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:35 compute-0 sudo[101460]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:36 compute-0 sudo[101485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 03:26:36 compute-0 sudo[101485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:26:36
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Some PGs (0.125000) are unknown; try again later
Nov 22 03:26:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:26:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 16 (current 1)
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:26:36 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1635659822' entity='client.rgw.rgw.compute-0.lgafpt' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Nov 22 03:26:36 compute-0 ceph-mon[75011]: osdmap e32: 3 total, 3 up, 3 in
Nov 22 03:26:36 compute-0 ceph-mon[75011]: from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Nov 22 03:26:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:36 compute-0 ceph-mon[75011]: daemon mds.cephfs.compute-0.fzlata assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Nov 22 03:26:36 compute-0 ceph-mon[75011]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Nov 22 03:26:36 compute-0 ceph-mon[75011]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Nov 22 03:26:36 compute-0 ceph-mon[75011]: Cluster is now healthy
Nov 22 03:26:36 compute-0 ceph-mon[75011]: mds.? [v2:192.168.122.100:6814/3519558385,v1:192.168.122.100:6815/3519558385] up:boot
Nov 22 03:26:36 compute-0 ceph-mon[75011]: fsmap cephfs:1 {0=cephfs.compute-0.fzlata=up:creating}
Nov 22 03:26:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.fzlata"}]: dispatch
Nov 22 03:26:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:36 compute-0 ceph-mon[75011]: daemon mds.cephfs.compute-0.fzlata is now active in filesystem cephfs as rank 0
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:26:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Nov 22 03:26:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:26:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:26:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Nov 22 03:26:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1635659822' entity='client.rgw.rgw.compute-0.lgafpt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:26:36 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 33 pg[9.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:26:36 compute-0 ceph-mgr[75294]: [progress INFO root] Writing back 5 completed events
Nov 22 03:26:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:26:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:36 compute-0 sudo[101552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-posudmfcfjgdjspvljstftvqrgkukstk ; /usr/bin/python3'
Nov 22 03:26:36 compute-0 sudo[101552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:36 compute-0 python3[101559]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:36 compute-0 podman[101594]: 2025-11-22 03:26:36.501363183 +0000 UTC m=+0.065634500 container create 9d391b024a64901e7d9eeded4a2e6992731c7751b1cb7cef3405bc11295748e8 (image=quay.io/ceph/ceph:v18, name=quirky_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).mds e4 new map
Nov 22 03:26:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-11-22T03:26:18.161683+0000
                                           modified        2025-11-22T03:26:36.520688+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
                                           max_mds        1
                                           in        0
                                           up        {0=14269}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           [mds.cephfs.compute-0.fzlata{0:14269} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/3519558385,v1:192.168.122.100:6815/3519558385] compat {c=[1],r=[1],i=[7ff]}]
                                            
                                            
Nov 22 03:26:36 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata Updating MDS map to version 4 from mon.0
Nov 22 03:26:36 compute-0 ceph-mds[101332]: mds.0.3 handle_mds_map i am now mds.0.3
Nov 22 03:26:36 compute-0 ceph-mds[101332]: mds.0.3 handle_mds_map state change up:creating --> up:active
Nov 22 03:26:36 compute-0 ceph-mds[101332]: mds.0.3 recovery_done -- successful recovery!
Nov 22 03:26:36 compute-0 ceph-mds[101332]: mds.0.3 active_start
Nov 22 03:26:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/3519558385,v1:192.168.122.100:6815/3519558385] up:active
Nov 22 03:26:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.fzlata=up:active}
Nov 22 03:26:36 compute-0 systemd[1]: Started libpod-conmon-9d391b024a64901e7d9eeded4a2e6992731c7751b1cb7cef3405bc11295748e8.scope.
Nov 22 03:26:36 compute-0 podman[101616]: 2025-11-22 03:26:36.556633819 +0000 UTC m=+0.072764840 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:36 compute-0 podman[101594]: 2025-11-22 03:26:36.470737774 +0000 UTC m=+0.035009131 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfc2ffda024d8d61345c01d0918ae43e7d0f3b634148233f951b356a7a2d318d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfc2ffda024d8d61345c01d0918ae43e7d0f3b634148233f951b356a7a2d318d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:36 compute-0 podman[101594]: 2025-11-22 03:26:36.604602731 +0000 UTC m=+0.168874068 container init 9d391b024a64901e7d9eeded4a2e6992731c7751b1cb7cef3405bc11295748e8 (image=quay.io/ceph/ceph:v18, name=quirky_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:36 compute-0 podman[101594]: 2025-11-22 03:26:36.611224501 +0000 UTC m=+0.175495778 container start 9d391b024a64901e7d9eeded4a2e6992731c7751b1cb7cef3405bc11295748e8 (image=quay.io/ceph/ceph:v18, name=quirky_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:26:36 compute-0 podman[101594]: 2025-11-22 03:26:36.614477249 +0000 UTC m=+0.178748616 container attach 9d391b024a64901e7d9eeded4a2e6992731c7751b1cb7cef3405bc11295748e8 (image=quay.io/ceph/ceph:v18, name=quirky_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:26:36 compute-0 podman[101616]: 2025-11-22 03:26:36.672989208 +0000 UTC m=+0.189120209 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1635659822' entity='client.rgw.rgw.compute-0.lgafpt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Nov 22 03:26:37 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev 5cc8795e-19ab-49b2-b08b-f5ab8a3e5792 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:26:37 compute-0 ceph-mon[75011]: pgmap v82: 8 pgs: 1 unknown, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:26:37 compute-0 ceph-mon[75011]: osdmap e33: 3 total, 3 up, 3 in
Nov 22 03:26:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:26:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1635659822' entity='client.rgw.rgw.compute-0.lgafpt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Nov 22 03:26:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mds.? [v2:192.168.122.100:6814/3519558385,v1:192.168.122.100:6815/3519558385] up:active
Nov 22 03:26:37 compute-0 ceph-mon[75011]: fsmap cephfs:1 {0=cephfs.compute-0.fzlata=up:active}
Nov 22 03:26:37 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4214605540' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 03:26:37 compute-0 quirky_pare[101638]: 
Nov 22 03:26:37 compute-0 quirky_pare[101638]: {"fsid":"7adcc38b-6484-5de6-b879-33a0309153df","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":166,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":34,"num_osds":3,"num_up_osds":3,"osd_up_since":1763781953,"num_in_osds":3,"osd_in_since":1763781923,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7},{"state_name":"unknown","count":1}],"num_pgs":8,"num_pools":8,"num_objects":2,"data_bytes":459280,"bytes_used":83787776,"bytes_avail":64328138752,"bytes_total":64411926528,"unknown_pgs_ratio":0.125},"fsmap":{"epoch":4,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.fzlata","status":"up:active","gid":14269}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-11-22T03:25:37.325184+0000","services":{}},"progress_events":{"0f17082f-662e-4fd8-a927-a281dcbc6bf2":{"message":"Updating mds.cephfs deployment (+1 -> 1) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Nov 22 03:26:37 compute-0 systemd[1]: libpod-9d391b024a64901e7d9eeded4a2e6992731c7751b1cb7cef3405bc11295748e8.scope: Deactivated successfully.
Nov 22 03:26:37 compute-0 podman[101783]: 2025-11-22 03:26:37.322789262 +0000 UTC m=+0.039892333 container died 9d391b024a64901e7d9eeded4a2e6992731c7751b1cb7cef3405bc11295748e8 (image=quay.io/ceph/ceph:v18, name=quirky_pare, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:26:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v85: 9 pgs: 2 unknown, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfc2ffda024d8d61345c01d0918ae43e7d0f3b634148233f951b356a7a2d318d-merged.mount: Deactivated successfully.
Nov 22 03:26:37 compute-0 podman[101783]: 2025-11-22 03:26:37.378645761 +0000 UTC m=+0.095748802 container remove 9d391b024a64901e7d9eeded4a2e6992731c7751b1cb7cef3405bc11295748e8 (image=quay.io/ceph/ceph:v18, name=quirky_pare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:37 compute-0 systemd[1]: libpod-conmon-9d391b024a64901e7d9eeded4a2e6992731c7751b1cb7cef3405bc11295748e8.scope: Deactivated successfully.
Nov 22 03:26:37 compute-0 sudo[101552]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:37 compute-0 sudo[101485]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:37 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 0cb71637-8c4e-46c7-af91-7d9eba3f1d0a does not exist
Nov 22 03:26:37 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev f1097232-286c-47d9-aa7e-0c937270842a does not exist
Nov 22 03:26:37 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 65c9547c-40ae-4104-974a-3a3aaf271805 does not exist
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:26:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:26:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:37 compute-0 sudo[101811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:37 compute-0 sudo[101811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:37 compute-0 sudo[101811]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:37 compute-0 sudo[101836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:37 compute-0 sudo[101836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:37 compute-0 sudo[101836]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:37 compute-0 sudo[101861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:37 compute-0 sudo[101861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:37 compute-0 sudo[101861]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:37 compute-0 sudo[101886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:26:37 compute-0 sudo[101886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Nov 22 03:26:38 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:26:38 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:26:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Nov 22 03:26:38 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Nov 22 03:26:38 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev 485c9e45-0f19-44db-879b-b33651c64b90 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 22 03:26:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Nov 22 03:26:38 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1635659822' entity='client.rgw.rgw.compute-0.lgafpt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 22 03:26:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:26:38 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:26:38 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 35 pg[10.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:38 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=20/21 n=0 ec=13/13 lis/c=20/20 les/c/f=21/21/0 sis=35 pruub=12.719769478s) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active pruub 64.883689880s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:38 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 35 pg[2.0( empty local-lis/les=20/21 n=0 ec=13/13 lis/c=20/20 les/c/f=21/21/0 sis=35 pruub=12.719769478s) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown pruub 64.883689880s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1635659822' entity='client.rgw.rgw.compute-0.lgafpt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Nov 22 03:26:38 compute-0 ceph-mon[75011]: osdmap e34: 3 total, 3 up, 3 in
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4214605540' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:26:38 compute-0 ceph-mon[75011]: osdmap e35: 3 total, 3 up, 3 in
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1635659822' entity='client.rgw.rgw.compute-0.lgafpt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Nov 22 03:26:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:26:38 compute-0 podman[101958]: 2025-11-22 03:26:38.232854445 +0000 UTC m=+0.035310457 container create 0d3447f83f9a563510b48969951868a7965eb54cde16ae084ae2d4a6ce43545d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:38 compute-0 sudo[101990]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kashewsqrvqfahcsbkzsyxqqbfmocvwj ; /usr/bin/python3'
Nov 22 03:26:38 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:26:38 compute-0 sudo[101990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:38 compute-0 systemd[1]: Started libpod-conmon-0d3447f83f9a563510b48969951868a7965eb54cde16ae084ae2d4a6ce43545d.scope.
Nov 22 03:26:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:38 compute-0 podman[101958]: 2025-11-22 03:26:38.298910297 +0000 UTC m=+0.101366358 container init 0d3447f83f9a563510b48969951868a7965eb54cde16ae084ae2d4a6ce43545d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:38 compute-0 podman[101958]: 2025-11-22 03:26:38.306856664 +0000 UTC m=+0.109312676 container start 0d3447f83f9a563510b48969951868a7965eb54cde16ae084ae2d4a6ce43545d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:38 compute-0 peaceful_ellis[101997]: 167 167
Nov 22 03:26:38 compute-0 systemd[1]: libpod-0d3447f83f9a563510b48969951868a7965eb54cde16ae084ae2d4a6ce43545d.scope: Deactivated successfully.
Nov 22 03:26:38 compute-0 podman[101958]: 2025-11-22 03:26:38.310080345 +0000 UTC m=+0.112536377 container attach 0d3447f83f9a563510b48969951868a7965eb54cde16ae084ae2d4a6ce43545d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:38 compute-0 podman[101958]: 2025-11-22 03:26:38.310631304 +0000 UTC m=+0.113087316 container died 0d3447f83f9a563510b48969951868a7965eb54cde16ae084ae2d4a6ce43545d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:26:38 compute-0 podman[101958]: 2025-11-22 03:26:38.218744791 +0000 UTC m=+0.021200832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-da26aab4d9fa91fc3b52474f753b17811cedb5e9d6772d2df19f70dc366aa24b-merged.mount: Deactivated successfully.
Nov 22 03:26:38 compute-0 podman[101958]: 2025-11-22 03:26:38.353106217 +0000 UTC m=+0.155562229 container remove 0d3447f83f9a563510b48969951868a7965eb54cde16ae084ae2d4a6ce43545d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_ellis, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:38 compute-0 systemd[1]: libpod-conmon-0d3447f83f9a563510b48969951868a7965eb54cde16ae084ae2d4a6ce43545d.scope: Deactivated successfully.
Nov 22 03:26:38 compute-0 python3[101993]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:38 compute-0 podman[102018]: 2025-11-22 03:26:38.483776929 +0000 UTC m=+0.060831461 container create ff1258307119595486567252edbf072f7138af458efcf4493ab88db25d7d593c (image=quay.io/ceph/ceph:v18, name=exciting_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:38 compute-0 systemd[1]: Started libpod-conmon-ff1258307119595486567252edbf072f7138af458efcf4493ab88db25d7d593c.scope.
Nov 22 03:26:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:38 compute-0 podman[102018]: 2025-11-22 03:26:38.451527251 +0000 UTC m=+0.028581803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1bf3cdb20aecfd2bb990cdd066b3f43f1b2e974c44c8868126247d3d1fe7fe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a1bf3cdb20aecfd2bb990cdd066b3f43f1b2e974c44c8868126247d3d1fe7fe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:38 compute-0 podman[102036]: 2025-11-22 03:26:38.558790633 +0000 UTC m=+0.054945154 container create ddc0cb4c4c8029fe1270a3b42309339fd864811d2c1af5bb7c11390ab03d90d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nobel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:38 compute-0 podman[102018]: 2025-11-22 03:26:38.568985117 +0000 UTC m=+0.146039619 container init ff1258307119595486567252edbf072f7138af458efcf4493ab88db25d7d593c (image=quay.io/ceph/ceph:v18, name=exciting_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:26:38 compute-0 podman[102018]: 2025-11-22 03:26:38.576567255 +0000 UTC m=+0.153621797 container start ff1258307119595486567252edbf072f7138af458efcf4493ab88db25d7d593c (image=quay.io/ceph/ceph:v18, name=exciting_ellis, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:26:38 compute-0 podman[102018]: 2025-11-22 03:26:38.581405785 +0000 UTC m=+0.158460287 container attach ff1258307119595486567252edbf072f7138af458efcf4493ab88db25d7d593c (image=quay.io/ceph/ceph:v18, name=exciting_ellis, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 03:26:38 compute-0 systemd[1]: Started libpod-conmon-ddc0cb4c4c8029fe1270a3b42309339fd864811d2c1af5bb7c11390ab03d90d9.scope.
Nov 22 03:26:38 compute-0 podman[102036]: 2025-11-22 03:26:38.534166709 +0000 UTC m=+0.030321119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a4195715b392a61535c525e796e86ec04a58c7f90084696ac2b66cd225e8a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a4195715b392a61535c525e796e86ec04a58c7f90084696ac2b66cd225e8a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a4195715b392a61535c525e796e86ec04a58c7f90084696ac2b66cd225e8a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a4195715b392a61535c525e796e86ec04a58c7f90084696ac2b66cd225e8a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a4195715b392a61535c525e796e86ec04a58c7f90084696ac2b66cd225e8a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:38 compute-0 podman[102036]: 2025-11-22 03:26:38.670697913 +0000 UTC m=+0.166852364 container init ddc0cb4c4c8029fe1270a3b42309339fd864811d2c1af5bb7c11390ab03d90d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Nov 22 03:26:38 compute-0 podman[102036]: 2025-11-22 03:26:38.683952359 +0000 UTC m=+0.180106740 container start ddc0cb4c4c8029fe1270a3b42309339fd864811d2c1af5bb7c11390ab03d90d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nobel, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:26:38 compute-0 podman[102036]: 2025-11-22 03:26:38.68787641 +0000 UTC m=+0.184030891 container attach ddc0cb4c4c8029fe1270a3b42309339fd864811d2c1af5bb7c11390ab03d90d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Nov 22 03:26:39 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1712837986' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:26:39 compute-0 exciting_ellis[102048]: 
Nov 22 03:26:39 compute-0 exciting_ellis[102048]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_
insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.lgafpt","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Nov 22 03:26:39 compute-0 systemd[1]: libpod-ff1258307119595486567252edbf072f7138af458efcf4493ab88db25d7d593c.scope: Deactivated successfully.
Nov 22 03:26:39 compute-0 podman[102018]: 2025-11-22 03:26:39.174670812 +0000 UTC m=+0.751725363 container died ff1258307119595486567252edbf072f7138af458efcf4493ab88db25d7d593c (image=quay.io/ceph/ceph:v18, name=exciting_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Nov 22 03:26:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1635659822' entity='client.rgw.rgw.compute-0.lgafpt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 22 03:26:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:26:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Nov 22 03:26:39 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Nov 22 03:26:39 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev 5b137eb1-100b-4d51-a308-c8c4c1558357 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 22 03:26:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:26:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1d( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1f( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1c( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.6( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.9( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.8( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.b( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=20/21 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:39 compute-0 ceph-mon[75011]: pgmap v85: 9 pgs: 2 unknown, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Nov 22 03:26:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1712837986' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Nov 22 03:26:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1635659822' entity='client.rgw.rgw.compute-0.lgafpt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Nov 22 03:26:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:26:39 compute-0 ceph-mon[75011]: osdmap e36: 3 total, 3 up, 3 in
Nov 22 03:26:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1e( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.0( empty local-lis/les=35/36 n=0 ec=13/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.c( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.10( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.12( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.14( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.e( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 36 pg[2.1a( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=20/20 les/c/f=21/21/0 sis=35) [2] r=0 lpr=35 pi=[20,35)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a1bf3cdb20aecfd2bb990cdd066b3f43f1b2e974c44c8868126247d3d1fe7fe-merged.mount: Deactivated successfully.
Nov 22 03:26:39 compute-0 podman[102018]: 2025-11-22 03:26:39.261411819 +0000 UTC m=+0.838466361 container remove ff1258307119595486567252edbf072f7138af458efcf4493ab88db25d7d593c (image=quay.io/ceph/ceph:v18, name=exciting_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Nov 22 03:26:39 compute-0 systemd[1]: libpod-conmon-ff1258307119595486567252edbf072f7138af458efcf4493ab88db25d7d593c.scope: Deactivated successfully.
Nov 22 03:26:39 compute-0 sudo[101990]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v88: 41 pgs: 1 creating+peering, 1 peering, 31 unknown, 8 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 22 03:26:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:26:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:26:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:39 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Nov 22 03:26:39 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Nov 22 03:26:39 compute-0 goofy_nobel[102058]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:26:39 compute-0 goofy_nobel[102058]: --> relative data size: 1.0
Nov 22 03:26:39 compute-0 goofy_nobel[102058]: --> All data devices are unavailable
Nov 22 03:26:39 compute-0 systemd[1]: libpod-ddc0cb4c4c8029fe1270a3b42309339fd864811d2c1af5bb7c11390ab03d90d9.scope: Deactivated successfully.
Nov 22 03:26:39 compute-0 podman[102036]: 2025-11-22 03:26:39.854928961 +0000 UTC m=+1.351083372 container died ddc0cb4c4c8029fe1270a3b42309339fd864811d2c1af5bb7c11390ab03d90d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:26:39 compute-0 systemd[1]: libpod-ddc0cb4c4c8029fe1270a3b42309339fd864811d2c1af5bb7c11390ab03d90d9.scope: Consumed 1.119s CPU time.
Nov 22 03:26:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-79a4195715b392a61535c525e796e86ec04a58c7f90084696ac2b66cd225e8a6-merged.mount: Deactivated successfully.
Nov 22 03:26:39 compute-0 podman[102036]: 2025-11-22 03:26:39.929388175 +0000 UTC m=+1.425542576 container remove ddc0cb4c4c8029fe1270a3b42309339fd864811d2c1af5bb7c11390ab03d90d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:26:39 compute-0 systemd[1]: libpod-conmon-ddc0cb4c4c8029fe1270a3b42309339fd864811d2c1af5bb7c11390ab03d90d9.scope: Deactivated successfully.
Nov 22 03:26:39 compute-0 sudo[101886]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:40 compute-0 sudo[102148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:40 compute-0 sudo[102148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:40 compute-0 sudo[102148]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:40 compute-0 sudo[102195]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmwyrecitgiktiuhiubtrzzgsbcnhsey ; /usr/bin/python3'
Nov 22 03:26:40 compute-0 sudo[102195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:40 compute-0 sudo[102199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:40 compute-0 sudo[102199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:40 compute-0 sudo[102199]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Nov 22 03:26:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:26:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:26:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:26:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Nov 22 03:26:40 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Nov 22 03:26:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Nov 22 03:26:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1437183010' entity='client.rgw.rgw.compute-0.lgafpt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 22 03:26:40 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev b4ff12ac-3972-44be-915d-ce82379f2df8 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 22 03:26:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Nov 22 03:26:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 22 03:26:40 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 37 pg[11.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:40 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=16/17 n=0 ec=15/15 lis/c=16/16 les/c/f=17/17/0 sis=37 pruub=12.949344635s) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active pruub 72.292442322s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:40 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 37 pg[3.0( empty local-lis/les=16/17 n=0 ec=15/15 lis/c=16/16 les/c/f=17/17/0 sis=37 pruub=12.949344635s) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown pruub 72.292442322s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:40 compute-0 ceph-mon[75011]: 2.1 scrub starts
Nov 22 03:26:40 compute-0 ceph-mon[75011]: 2.1 scrub ok
Nov 22 03:26:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:26:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:26:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:26:40 compute-0 ceph-mon[75011]: osdmap e37: 3 total, 3 up, 3 in
Nov 22 03:26:40 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1437183010' entity='client.rgw.rgw.compute-0.lgafpt' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Nov 22 03:26:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Nov 22 03:26:40 compute-0 sudo[102224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:40 compute-0 sudo[102224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:40 compute-0 sudo[102224]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:40 compute-0 python3[102198]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:40 compute-0 sudo[102249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:26:40 compute-0 sudo[102249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:40 compute-0 podman[102254]: 2025-11-22 03:26:40.341859344 +0000 UTC m=+0.064802000 container create fe3502392b0fe44f8b218ca674117930aca1c7cd243de0a813608c3292bfac11 (image=quay.io/ceph/ceph:v18, name=hopeful_lewin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:40 compute-0 systemd[1]: Started libpod-conmon-fe3502392b0fe44f8b218ca674117930aca1c7cd243de0a813608c3292bfac11.scope.
Nov 22 03:26:40 compute-0 podman[102254]: 2025-11-22 03:26:40.320385801 +0000 UTC m=+0.043328447 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56cbbccd40662ab93720684dcd34e1ebca78d29dbfe62f0f87d5415c1afd7b80/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56cbbccd40662ab93720684dcd34e1ebca78d29dbfe62f0f87d5415c1afd7b80/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:40 compute-0 podman[102254]: 2025-11-22 03:26:40.442675123 +0000 UTC m=+0.165617789 container init fe3502392b0fe44f8b218ca674117930aca1c7cd243de0a813608c3292bfac11 (image=quay.io/ceph/ceph:v18, name=hopeful_lewin, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:26:40 compute-0 podman[102254]: 2025-11-22 03:26:40.454101817 +0000 UTC m=+0.177044473 container start fe3502392b0fe44f8b218ca674117930aca1c7cd243de0a813608c3292bfac11 (image=quay.io/ceph/ceph:v18, name=hopeful_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:26:40 compute-0 podman[102254]: 2025-11-22 03:26:40.46068148 +0000 UTC m=+0.183624145 container attach fe3502392b0fe44f8b218ca674117930aca1c7cd243de0a813608c3292bfac11 (image=quay.io/ceph/ceph:v18, name=hopeful_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:40 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=37 pruub=13.781311035s) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active pruub 78.473297119s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:40 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 37 pg[4.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=37 pruub=13.781311035s) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown pruub 78.473297119s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:26:40 compute-0 podman[102333]: 2025-11-22 03:26:40.716090966 +0000 UTC m=+0.051813021 container create 2b757b81cc5860a6990896169e897d72c43b939e348462cfbca75583c354e8cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hypatia, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:26:40 compute-0 systemd[1]: Started libpod-conmon-2b757b81cc5860a6990896169e897d72c43b939e348462cfbca75583c354e8cc.scope.
Nov 22 03:26:40 compute-0 podman[102333]: 2025-11-22 03:26:40.693036371 +0000 UTC m=+0.028758416 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:40 compute-0 podman[102333]: 2025-11-22 03:26:40.81520346 +0000 UTC m=+0.150925555 container init 2b757b81cc5860a6990896169e897d72c43b939e348462cfbca75583c354e8cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hypatia, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:40 compute-0 podman[102333]: 2025-11-22 03:26:40.821144857 +0000 UTC m=+0.156866902 container start 2b757b81cc5860a6990896169e897d72c43b939e348462cfbca75583c354e8cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:40 compute-0 podman[102333]: 2025-11-22 03:26:40.824903232 +0000 UTC m=+0.160625357 container attach 2b757b81cc5860a6990896169e897d72c43b939e348462cfbca75583c354e8cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hypatia, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:40 compute-0 agitated_hypatia[102350]: 167 167
Nov 22 03:26:40 compute-0 systemd[1]: libpod-2b757b81cc5860a6990896169e897d72c43b939e348462cfbca75583c354e8cc.scope: Deactivated successfully.
Nov 22 03:26:40 compute-0 podman[102333]: 2025-11-22 03:26:40.835324537 +0000 UTC m=+0.171046592 container died 2b757b81cc5860a6990896169e897d72c43b939e348462cfbca75583c354e8cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fb91d4aa295dd75529722906934049bb7eff4b1afacaf9c8e6afbb5f52596cf-merged.mount: Deactivated successfully.
Nov 22 03:26:40 compute-0 podman[102333]: 2025-11-22 03:26:40.886119945 +0000 UTC m=+0.221841959 container remove 2b757b81cc5860a6990896169e897d72c43b939e348462cfbca75583c354e8cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_hypatia, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:40 compute-0 systemd[1]: libpod-conmon-2b757b81cc5860a6990896169e897d72c43b939e348462cfbca75583c354e8cc.scope: Deactivated successfully.
Nov 22 03:26:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Nov 22 03:26:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3415395796' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 22 03:26:41 compute-0 hopeful_lewin[102289]: mimic
Nov 22 03:26:41 compute-0 systemd[1]: libpod-fe3502392b0fe44f8b218ca674117930aca1c7cd243de0a813608c3292bfac11.scope: Deactivated successfully.
Nov 22 03:26:41 compute-0 podman[102254]: 2025-11-22 03:26:41.044586907 +0000 UTC m=+0.767529602 container died fe3502392b0fe44f8b218ca674117930aca1c7cd243de0a813608c3292bfac11 (image=quay.io/ceph/ceph:v18, name=hopeful_lewin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:41 compute-0 podman[102392]: 2025-11-22 03:26:41.071359537 +0000 UTC m=+0.072253625 container create 853e1e3a2bdbd082fdc024d144b5a4b7e603734987b4a460df8825bee35c7563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nash, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-56cbbccd40662ab93720684dcd34e1ebca78d29dbfe62f0f87d5415c1afd7b80-merged.mount: Deactivated successfully.
Nov 22 03:26:41 compute-0 podman[102392]: 2025-11-22 03:26:41.028994282 +0000 UTC m=+0.029888410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:41 compute-0 podman[102254]: 2025-11-22 03:26:41.134204589 +0000 UTC m=+0.857147215 container remove fe3502392b0fe44f8b218ca674117930aca1c7cd243de0a813608c3292bfac11 (image=quay.io/ceph/ceph:v18, name=hopeful_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:26:41 compute-0 systemd[1]: libpod-conmon-fe3502392b0fe44f8b218ca674117930aca1c7cd243de0a813608c3292bfac11.scope: Deactivated successfully.
Nov 22 03:26:41 compute-0 sudo[102195]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Nov 22 03:26:41 compute-0 systemd[1]: Started libpod-conmon-853e1e3a2bdbd082fdc024d144b5a4b7e603734987b4a460df8825bee35c7563.scope.
Nov 22 03:26:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1437183010' entity='client.rgw.rgw.compute-0.lgafpt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 22 03:26:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 22 03:26:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Nov 22 03:26:41 compute-0 ceph-mgr[75294]: [progress WARNING root] Starting Global Recovery Event,96 pgs not in active + clean state
Nov 22 03:26:41 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Nov 22 03:26:41 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev f320bddb-396c-4027-af4c-37b9ebd381a7 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 22 03:26:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Nov 22 03:26:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1437183010' entity='client.rgw.rgw.compute-0.lgafpt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 22 03:26:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:26:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=16/17 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacf166010d7978e2934328f54fe7605c6d0be6b0521a84428c93a0d7bcd3751/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=17/18 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:41 compute-0 ceph-mon[75011]: pgmap v88: 41 pgs: 1 creating+peering, 1 peering, 31 unknown, 8 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.5 KiB/s wr, 12 op/s
Nov 22 03:26:41 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3415395796' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Nov 22 03:26:41 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1437183010' entity='client.rgw.rgw.compute-0.lgafpt' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Nov 22 03:26:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Nov 22 03:26:41 compute-0 ceph-mon[75011]: osdmap e38: 3 total, 3 up, 3 in
Nov 22 03:26:41 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1437183010' entity='client.rgw.rgw.compute-0.lgafpt' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Nov 22 03:26:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacf166010d7978e2934328f54fe7605c6d0be6b0521a84428c93a0d7bcd3751/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacf166010d7978e2934328f54fe7605c6d0be6b0521a84428c93a0d7bcd3751/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aacf166010d7978e2934328f54fe7605c6d0be6b0521a84428c93a0d7bcd3751/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1a( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1c( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.19( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.0( empty local-lis/les=37/38 n=0 ec=15/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.4( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.2( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.b( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.10( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.13( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.14( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 38 pg[3.d( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=16/16 les/c/f=17/17/0 sis=37) [1] r=0 lpr=37 pi=[16,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1d( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1e( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1f( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.b( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.6( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.3( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.0( empty local-lis/les=37/38 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.19( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.c( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.15( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.17( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 38 pg[4.16( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=17/17 les/c/f=18/18/0 sis=37) [0] r=0 lpr=37 pi=[17,37)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:41 compute-0 podman[102392]: 2025-11-22 03:26:41.28434901 +0000 UTC m=+0.285243188 container init 853e1e3a2bdbd082fdc024d144b5a4b7e603734987b4a460df8825bee35c7563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nash, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:26:41 compute-0 podman[102392]: 2025-11-22 03:26:41.295884644 +0000 UTC m=+0.296778762 container start 853e1e3a2bdbd082fdc024d144b5a4b7e603734987b4a460df8825bee35c7563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:41 compute-0 podman[102392]: 2025-11-22 03:26:41.299950783 +0000 UTC m=+0.300844951 container attach 853e1e3a2bdbd082fdc024d144b5a4b7e603734987b4a460df8825bee35c7563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:26:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v91: 104 pgs: 1 creating+peering, 1 peering, 94 unknown, 8 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Nov 22 03:26:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Nov 22 03:26:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 22 03:26:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:26:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:42 compute-0 sudo[102452]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwkxvzuwfdgrayaicyzkphusyjngzovh ; /usr/bin/python3'
Nov 22 03:26:42 compute-0 sudo[102452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:26:42 compute-0 sharp_nash[102420]: {
Nov 22 03:26:42 compute-0 sharp_nash[102420]:     "0": [
Nov 22 03:26:42 compute-0 sharp_nash[102420]:         {
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "devices": [
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "/dev/loop3"
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             ],
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_name": "ceph_lv0",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_size": "21470642176",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "name": "ceph_lv0",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "tags": {
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.crush_device_class": "",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.encrypted": "0",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.osd_id": "0",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.type": "block",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.vdo": "0"
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             },
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "type": "block",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "vg_name": "ceph_vg0"
Nov 22 03:26:42 compute-0 sharp_nash[102420]:         }
Nov 22 03:26:42 compute-0 sharp_nash[102420]:     ],
Nov 22 03:26:42 compute-0 sharp_nash[102420]:     "1": [
Nov 22 03:26:42 compute-0 sharp_nash[102420]:         {
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "devices": [
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "/dev/loop4"
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             ],
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_name": "ceph_lv1",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_size": "21470642176",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "name": "ceph_lv1",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "tags": {
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.crush_device_class": "",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.encrypted": "0",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.osd_id": "1",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.type": "block",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.vdo": "0"
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             },
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "type": "block",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "vg_name": "ceph_vg1"
Nov 22 03:26:42 compute-0 sharp_nash[102420]:         }
Nov 22 03:26:42 compute-0 sharp_nash[102420]:     ],
Nov 22 03:26:42 compute-0 sharp_nash[102420]:     "2": [
Nov 22 03:26:42 compute-0 sharp_nash[102420]:         {
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "devices": [
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "/dev/loop5"
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             ],
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_name": "ceph_lv2",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_size": "21470642176",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "name": "ceph_lv2",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "tags": {
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.crush_device_class": "",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.encrypted": "0",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.osd_id": "2",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.type": "block",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:                 "ceph.vdo": "0"
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             },
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "type": "block",
Nov 22 03:26:42 compute-0 sharp_nash[102420]:             "vg_name": "ceph_vg2"
Nov 22 03:26:42 compute-0 sharp_nash[102420]:         }
Nov 22 03:26:42 compute-0 sharp_nash[102420]:     ]
Nov 22 03:26:42 compute-0 sharp_nash[102420]: }
Nov 22 03:26:42 compute-0 systemd[1]: libpod-853e1e3a2bdbd082fdc024d144b5a4b7e603734987b4a460df8825bee35c7563.scope: Deactivated successfully.
Nov 22 03:26:42 compute-0 podman[102392]: 2025-11-22 03:26:42.092612293 +0000 UTC m=+1.093506431 container died 853e1e3a2bdbd082fdc024d144b5a4b7e603734987b4a460df8825bee35c7563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nash, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 03:26:42 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Nov 22 03:26:42 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Nov 22 03:26:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-aacf166010d7978e2934328f54fe7605c6d0be6b0521a84428c93a0d7bcd3751-merged.mount: Deactivated successfully.
Nov 22 03:26:42 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Nov 22 03:26:42 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Nov 22 03:26:42 compute-0 python3[102454]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:26:42 compute-0 podman[102392]: 2025-11-22 03:26:42.197182656 +0000 UTC m=+1.198076784 container remove 853e1e3a2bdbd082fdc024d144b5a4b7e603734987b4a460df8825bee35c7563 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nash, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:42 compute-0 systemd[1]: libpod-conmon-853e1e3a2bdbd082fdc024d144b5a4b7e603734987b4a460df8825bee35c7563.scope: Deactivated successfully.
Nov 22 03:26:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Nov 22 03:26:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1437183010' entity='client.rgw.rgw.compute-0.lgafpt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 22 03:26:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:26:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 22 03:26:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:26:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Nov 22 03:26:42 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Nov 22 03:26:42 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev a5a7ac51-23fb-488a-a8e3-e5218e909715 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 22 03:26:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:26:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:26:42 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=20/21 n=0 ec=19/19 lis/c=20/20 les/c/f=21/21/0 sis=39 pruub=8.658946037s) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active pruub 64.863655090s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:42 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 39 pg[5.0( empty local-lis/les=20/21 n=0 ec=19/19 lis/c=20/20 les/c/f=21/21/0 sis=39 pruub=8.658946037s) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown pruub 64.863655090s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Nov 22 03:26:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:42 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1437183010' entity='client.rgw.rgw.compute-0.lgafpt' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Nov 22 03:26:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:26:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Nov 22 03:26:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:26:42 compute-0 ceph-mon[75011]: osdmap e39: 3 total, 3 up, 3 in
Nov 22 03:26:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:26:42 compute-0 sudo[102249]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:42 compute-0 podman[102467]: 2025-11-22 03:26:42.285380441 +0000 UTC m=+0.064757059 container create c6bf75a76139534d83b8d32c39770d117640b77ce84b9b4ed9966a225f86e7b8 (image=quay.io/ceph/ceph:v18, name=crazy_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:42 compute-0 systemd[1]: Started libpod-conmon-c6bf75a76139534d83b8d32c39770d117640b77ce84b9b4ed9966a225f86e7b8.scope.
Nov 22 03:26:42 compute-0 podman[102467]: 2025-11-22 03:26:42.262862275 +0000 UTC m=+0.042238924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:26:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:42 compute-0 sudo[102480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb13545b8ae09ae281a10be2592c028cfed1e9bb64654b109c6726ed587b3537/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb13545b8ae09ae281a10be2592c028cfed1e9bb64654b109c6726ed587b3537/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:42 compute-0 sudo[102480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:42 compute-0 sudo[102480]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:42 compute-0 podman[102467]: 2025-11-22 03:26:42.375932618 +0000 UTC m=+0.155309237 container init c6bf75a76139534d83b8d32c39770d117640b77ce84b9b4ed9966a225f86e7b8 (image=quay.io/ceph/ceph:v18, name=crazy_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:42 compute-0 podman[102467]: 2025-11-22 03:26:42.382706373 +0000 UTC m=+0.162082992 container start c6bf75a76139534d83b8d32c39770d117640b77ce84b9b4ed9966a225f86e7b8 (image=quay.io/ceph/ceph:v18, name=crazy_brown, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:42 compute-0 podman[102467]: 2025-11-22 03:26:42.388401201 +0000 UTC m=+0.167777840 container attach c6bf75a76139534d83b8d32c39770d117640b77ce84b9b4ed9966a225f86e7b8 (image=quay.io/ceph/ceph:v18, name=crazy_brown, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:42 compute-0 radosgw[100858]: LDAP not started since no server URIs were provided in the configuration.
Nov 22 03:26:42 compute-0 radosgw[100858]: framework: beast
Nov 22 03:26:42 compute-0 radosgw[100858]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Nov 22 03:26:42 compute-0 radosgw[100858]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Nov 22 03:26:42 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-rgw-rgw-compute-0-lgafpt[100837]: 2025-11-22T03:26:42.403+0000 7f42ecea1940 -1 LDAP not started since no server URIs were provided in the configuration.
Nov 22 03:26:42 compute-0 radosgw[100858]: starting handler: beast
Nov 22 03:26:42 compute-0 radosgw[100858]: set uid:gid to 167:167 (ceph:ceph)
Nov 22 03:26:42 compute-0 sudo[102512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:42 compute-0 sudo[102512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:42 compute-0 sudo[102512]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:42 compute-0 radosgw[100858]: mgrc service_daemon_register rgw.14275 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.lgafpt,kernel_description=#1 SMP PREEMPT_DYNAMIC Sat Nov 15 10:30:41 UTC 2025,kernel_version=5.14.0-639.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864308,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=3448382e-b3c8-4a76-a951-50e1ee7b591d,zone_name=default,zonegroup_id=2704fcd2-b7de-4bb4-9369-ee3497bb8679,zonegroup_name=default}
Nov 22 03:26:42 compute-0 sudo[103080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:42 compute-0 sudo[103080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:42 compute-0 sudo[103080]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:42 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.2 deep-scrub starts
Nov 22 03:26:42 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.2 deep-scrub ok
Nov 22 03:26:42 compute-0 sudo[103105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:26:42 compute-0 sudo[103105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Nov 22 03:26:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3528551172' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 22 03:26:42 compute-0 crazy_brown[102506]: 
Nov 22 03:26:42 compute-0 crazy_brown[102506]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":6}}
Nov 22 03:26:43 compute-0 systemd[1]: libpod-c6bf75a76139534d83b8d32c39770d117640b77ce84b9b4ed9966a225f86e7b8.scope: Deactivated successfully.
Nov 22 03:26:43 compute-0 podman[103191]: 2025-11-22 03:26:43.008549544 +0000 UTC m=+0.055517844 container create 17f4a0c84bff67a6ade5f00de17ddef7330ffca16de32911574e7f337ffa21be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tesla, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:26:43 compute-0 systemd[76613]: Starting Mark boot as successful...
Nov 22 03:26:43 compute-0 systemd[76613]: Finished Mark boot as successful.
Nov 22 03:26:43 compute-0 systemd[1]: Started libpod-conmon-17f4a0c84bff67a6ade5f00de17ddef7330ffca16de32911574e7f337ffa21be.scope.
Nov 22 03:26:43 compute-0 podman[103205]: 2025-11-22 03:26:43.043977254 +0000 UTC m=+0.032780092 container died c6bf75a76139534d83b8d32c39770d117640b77ce84b9b4ed9966a225f86e7b8 (image=quay.io/ceph/ceph:v18, name=crazy_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:26:43 compute-0 podman[103191]: 2025-11-22 03:26:42.985811861 +0000 UTC m=+0.032780171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb13545b8ae09ae281a10be2592c028cfed1e9bb64654b109c6726ed587b3537-merged.mount: Deactivated successfully.
Nov 22 03:26:43 compute-0 podman[103191]: 2025-11-22 03:26:43.104019409 +0000 UTC m=+0.150987748 container init 17f4a0c84bff67a6ade5f00de17ddef7330ffca16de32911574e7f337ffa21be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:43 compute-0 podman[103205]: 2025-11-22 03:26:43.107658267 +0000 UTC m=+0.096461075 container remove c6bf75a76139534d83b8d32c39770d117640b77ce84b9b4ed9966a225f86e7b8 (image=quay.io/ceph/ceph:v18, name=crazy_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:26:43 compute-0 podman[103191]: 2025-11-22 03:26:43.113497079 +0000 UTC m=+0.160465368 container start 17f4a0c84bff67a6ade5f00de17ddef7330ffca16de32911574e7f337ffa21be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:43 compute-0 systemd[1]: libpod-conmon-c6bf75a76139534d83b8d32c39770d117640b77ce84b9b4ed9966a225f86e7b8.scope: Deactivated successfully.
Nov 22 03:26:43 compute-0 laughing_tesla[103217]: 167 167
Nov 22 03:26:43 compute-0 systemd[1]: libpod-17f4a0c84bff67a6ade5f00de17ddef7330ffca16de32911574e7f337ffa21be.scope: Deactivated successfully.
Nov 22 03:26:43 compute-0 podman[103191]: 2025-11-22 03:26:43.117681817 +0000 UTC m=+0.164650197 container attach 17f4a0c84bff67a6ade5f00de17ddef7330ffca16de32911574e7f337ffa21be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tesla, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:26:43 compute-0 conmon[103217]: conmon 17f4a0c84bff67a6ade5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-17f4a0c84bff67a6ade5f00de17ddef7330ffca16de32911574e7f337ffa21be.scope/container/memory.events
Nov 22 03:26:43 compute-0 podman[103191]: 2025-11-22 03:26:43.118954847 +0000 UTC m=+0.165923147 container died 17f4a0c84bff67a6ade5f00de17ddef7330ffca16de32911574e7f337ffa21be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tesla, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:26:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4895ef56e1cc991304d42e90f8e5a467dd1fa06ab8dd755da3929cc7a39b159-merged.mount: Deactivated successfully.
Nov 22 03:26:43 compute-0 sudo[102452]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:43 compute-0 podman[103191]: 2025-11-22 03:26:43.173645211 +0000 UTC m=+0.220613520 container remove 17f4a0c84bff67a6ade5f00de17ddef7330ffca16de32911574e7f337ffa21be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_tesla, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:43 compute-0 systemd[1]: libpod-conmon-17f4a0c84bff67a6ade5f00de17ddef7330ffca16de32911574e7f337ffa21be.scope: Deactivated successfully.
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 39 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=20/21 n=22 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=39 pruub=15.706598282s) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 32'38 mlcod 32'38 active pruub 83.125831604s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 39 pg[6.0( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=39 pruub=15.706598282s) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 32'38 mlcod 0'0 unknown pruub 83.125831604s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Nov 22 03:26:43 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Nov 22 03:26:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Nov 22 03:26:43 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:26:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Nov 22 03:26:43 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev 031552eb-9d57-497c-8c24-dd41ed2c1a8a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev 5cc8795e-19ab-49b2-b08b-f5ab8a3e5792 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event 5cc8795e-19ab-49b2-b08b-f5ab8a3e5792 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 6 seconds
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev 485c9e45-0f19-44db-879b-b33651c64b90 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event 485c9e45-0f19-44db-879b-b33651c64b90 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 5 seconds
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev 5b137eb1-100b-4d51-a308-c8c4c1558357 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event 5b137eb1-100b-4d51-a308-c8c4c1558357 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 4 seconds
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev b4ff12ac-3972-44be-915d-ce82379f2df8 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event b4ff12ac-3972-44be-915d-ce82379f2df8 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 3 seconds
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev f320bddb-396c-4027-af4c-37b9ebd381a7 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event f320bddb-396c-4027-af4c-37b9ebd381a7 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 2 seconds
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev a5a7ac51-23fb-488a-a8e3-e5218e909715 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event a5a7ac51-23fb-488a-a8e3-e5218e909715 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 1 seconds
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev 031552eb-9d57-497c-8c24-dd41ed2c1a8a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event 031552eb-9d57-497c-8c24-dd41ed2c1a8a (PG autoscaler increasing pool 8 PGs from 1 to 32) in 0 seconds
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.a( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.5( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=2 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.4( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=2 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.9( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.8( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.7( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=2 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=2 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=20/21 n=2 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.2( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=2 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.e( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.f( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.c( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.d( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=20/21 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=20/21 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.0( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 32'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.10( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.17( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.8( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.0( empty local-lis/les=39/40 n=0 ec=19/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 40 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=20/20 les/c/f=21/21/0 sis=39) [0] r=0 lpr=39 pi=[20,39)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.6( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1b( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 40 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=20/20 les/c/f=21/21/0 sis=39) [2] r=0 lpr=39 pi=[20,39)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:43 compute-0 ceph-mon[75011]: pgmap v91: 104 pgs: 1 creating+peering, 1 peering, 94 unknown, 8 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 1023 B/s wr, 1 op/s
Nov 22 03:26:43 compute-0 ceph-mon[75011]: 4.1 deep-scrub starts
Nov 22 03:26:43 compute-0 ceph-mon[75011]: 4.1 deep-scrub ok
Nov 22 03:26:43 compute-0 ceph-mon[75011]: 3.1 scrub starts
Nov 22 03:26:43 compute-0 ceph-mon[75011]: 3.1 scrub ok
Nov 22 03:26:43 compute-0 ceph-mon[75011]: 2.2 deep-scrub starts
Nov 22 03:26:43 compute-0 ceph-mon[75011]: 2.2 deep-scrub ok
Nov 22 03:26:43 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3528551172' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Nov 22 03:26:43 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:26:43 compute-0 ceph-mon[75011]: osdmap e40: 3 total, 3 up, 3 in
Nov 22 03:26:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v94: 150 pgs: 1 creating+peering, 1 peering, 109 unknown, 39 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 22 03:26:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:26:43 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:26:43 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:43 compute-0 podman[103249]: 2025-11-22 03:26:43.343173699 +0000 UTC m=+0.041116900 container create 877ccbd8b49136f4e56bca7e2221ce4714f9b10dead0b3c881bce1d9d56c8eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:43 compute-0 systemd[1]: Started libpod-conmon-877ccbd8b49136f4e56bca7e2221ce4714f9b10dead0b3c881bce1d9d56c8eb8.scope.
Nov 22 03:26:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802af0e038fe3125042470ad51b0acd0cfb040c2a55c57ea8aa5dc4c95853455/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802af0e038fe3125042470ad51b0acd0cfb040c2a55c57ea8aa5dc4c95853455/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802af0e038fe3125042470ad51b0acd0cfb040c2a55c57ea8aa5dc4c95853455/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/802af0e038fe3125042470ad51b0acd0cfb040c2a55c57ea8aa5dc4c95853455/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:43 compute-0 podman[103249]: 2025-11-22 03:26:43.32911594 +0000 UTC m=+0.027059151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:43 compute-0 podman[103249]: 2025-11-22 03:26:43.430528466 +0000 UTC m=+0.128471687 container init 877ccbd8b49136f4e56bca7e2221ce4714f9b10dead0b3c881bce1d9d56c8eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:43 compute-0 podman[103249]: 2025-11-22 03:26:43.437381927 +0000 UTC m=+0.135325127 container start 877ccbd8b49136f4e56bca7e2221ce4714f9b10dead0b3c881bce1d9d56c8eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:26:43 compute-0 podman[103249]: 2025-11-22 03:26:43.440911047 +0000 UTC m=+0.138854248 container attach 877ccbd8b49136f4e56bca7e2221ce4714f9b10dead0b3c881bce1d9d56c8eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:43 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Nov 22 03:26:43 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Nov 22 03:26:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Nov 22 03:26:44 compute-0 ceph-mon[75011]: 3.2 scrub starts
Nov 22 03:26:44 compute-0 ceph-mon[75011]: 3.2 scrub ok
Nov 22 03:26:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:44 compute-0 ceph-mon[75011]: 2.3 scrub starts
Nov 22 03:26:44 compute-0 ceph-mon[75011]: 2.3 scrub ok
Nov 22 03:26:44 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:26:44 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:26:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Nov 22 03:26:44 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Nov 22 03:26:44 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Nov 22 03:26:44 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Nov 22 03:26:44 compute-0 great_haibt[103265]: {
Nov 22 03:26:44 compute-0 great_haibt[103265]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "osd_id": 1,
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "type": "bluestore"
Nov 22 03:26:44 compute-0 great_haibt[103265]:     },
Nov 22 03:26:44 compute-0 great_haibt[103265]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "osd_id": 0,
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "type": "bluestore"
Nov 22 03:26:44 compute-0 great_haibt[103265]:     },
Nov 22 03:26:44 compute-0 great_haibt[103265]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "osd_id": 2,
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:26:44 compute-0 great_haibt[103265]:         "type": "bluestore"
Nov 22 03:26:44 compute-0 great_haibt[103265]:     }
Nov 22 03:26:44 compute-0 great_haibt[103265]: }
Nov 22 03:26:44 compute-0 systemd[1]: libpod-877ccbd8b49136f4e56bca7e2221ce4714f9b10dead0b3c881bce1d9d56c8eb8.scope: Deactivated successfully.
Nov 22 03:26:44 compute-0 podman[103249]: 2025-11-22 03:26:44.556706975 +0000 UTC m=+1.254650216 container died 877ccbd8b49136f4e56bca7e2221ce4714f9b10dead0b3c881bce1d9d56c8eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:26:44 compute-0 systemd[1]: libpod-877ccbd8b49136f4e56bca7e2221ce4714f9b10dead0b3c881bce1d9d56c8eb8.scope: Consumed 1.106s CPU time.
Nov 22 03:26:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-802af0e038fe3125042470ad51b0acd0cfb040c2a55c57ea8aa5dc4c95853455-merged.mount: Deactivated successfully.
Nov 22 03:26:44 compute-0 podman[103249]: 2025-11-22 03:26:44.625351155 +0000 UTC m=+1.323294396 container remove 877ccbd8b49136f4e56bca7e2221ce4714f9b10dead0b3c881bce1d9d56c8eb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:26:44 compute-0 systemd[1]: libpod-conmon-877ccbd8b49136f4e56bca7e2221ce4714f9b10dead0b3c881bce1d9d56c8eb8.scope: Deactivated successfully.
Nov 22 03:26:44 compute-0 sudo[103105]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:26:44 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:26:44 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:44 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ce1f4564-7f2e-4ae4-acfb-628f2c74bcf6 does not exist
Nov 22 03:26:44 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev cd69e4ec-a936-4dd0-9b71-1ef8190da339 does not exist
Nov 22 03:26:44 compute-0 sudo[103309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:44 compute-0 sudo[103309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:44 compute-0 sudo[103309]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:44 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=41 pruub=15.983895302s) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active pruub 80.008796692s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:44 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 41 pg[8.0( v 32'4 (0'0,32'4] local-lis/les=31/32 n=4 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=14.275022507s) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 32'3 mlcod 32'3 active pruub 78.299980164s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:44 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 41 pg[7.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=41 pruub=15.983895302s) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown pruub 80.008796692s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:44 compute-0 sudo[103334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:26:44 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 41 pg[8.0( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=41 pruub=14.275022507s) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 32'3 mlcod 0'0 unknown pruub 78.299980164s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:44 compute-0 sudo[103334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:44 compute-0 sudo[103334]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:44 compute-0 sudo[103359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:44 compute-0 sudo[103359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:44 compute-0 sudo[103359]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:45 compute-0 sudo[103384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:45 compute-0 sudo[103384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:45 compute-0 sudo[103384]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:45 compute-0 sudo[103409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:45 compute-0 sudo[103409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:45 compute-0 sudo[103409]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:45 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Nov 22 03:26:45 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Nov 22 03:26:45 compute-0 sudo[103434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 03:26:45 compute-0 sudo[103434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:45 compute-0 ceph-mon[75011]: pgmap v94: 150 pgs: 1 creating+peering, 1 peering, 109 unknown, 39 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Nov 22 03:26:45 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:26:45 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:26:45 compute-0 ceph-mon[75011]: osdmap e41: 3 total, 3 up, 3 in
Nov 22 03:26:45 compute-0 ceph-mon[75011]: 2.4 scrub starts
Nov 22 03:26:45 compute-0 ceph-mon[75011]: 2.4 scrub ok
Nov 22 03:26:45 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:45 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v96: 212 pgs: 77 unknown, 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 8.2 KiB/s wr, 354 op/s
Nov 22 03:26:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:26:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Nov 22 03:26:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Nov 22 03:26:45 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1c( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.12( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1d( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1e( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1f( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.17( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.18( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.19( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1a( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.16( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1b( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.4( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.5( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.6( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.7( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.2( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.d( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.9( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.b( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.f( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1( v 32'4 (0'0,32'4] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.3( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.a( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.7( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.e( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.c( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.d( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.8( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.13( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.12( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.11( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.10( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.17( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.15( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.19( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.16( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.14( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=31/32 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=22/23 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1c( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1d( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1e( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.17( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.10( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1f( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.18( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.19( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.16( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1a( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.14( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1b( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.12( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.15( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.5( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.4( v 32'4 (0'0,32'4] local-lis/les=41/42 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.b( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.7( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.6( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.d( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.0( empty local-lis/les=41/42 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.9( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.f( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.1( v 32'4 (0'0,32'4] local-lis/les=41/42 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.0( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 32'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.3( v 32'4 (0'0,32'4] local-lis/les=41/42 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.a( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.b( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.7( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.c( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.e( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.13( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.d( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.2( v 32'4 (0'0,32'4] local-lis/les=41/42 n=1 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1d( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.11( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1e( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.10( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.12( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.8( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.17( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.15( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.19( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.14( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=22/22 les/c/f=23/23/0 sis=41) [1] r=0 lpr=41 pi=[22,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 42 pg[8.16( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=31/31 les/c/f=32/32/0 sis=41) [1] r=0 lpr=41 pi=[31,41)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:45 compute-0 podman[103530]: 2025-11-22 03:26:45.915471545 +0000 UTC m=+0.086243163 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:46 compute-0 podman[103530]: 2025-11-22 03:26:46.029688956 +0000 UTC m=+0.200460524 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:26:46 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Nov 22 03:26:46 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Nov 22 03:26:46 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Nov 22 03:26:46 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Nov 22 03:26:46 compute-0 ceph-mgr[75294]: [progress INFO root] Writing back 12 completed events
Nov 22 03:26:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:26:46 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:46 compute-0 ceph-mon[75011]: 3.3 scrub starts
Nov 22 03:26:46 compute-0 ceph-mon[75011]: 3.3 scrub ok
Nov 22 03:26:46 compute-0 ceph-mon[75011]: pgmap v96: 212 pgs: 77 unknown, 135 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 159 KiB/s rd, 8.2 KiB/s wr, 354 op/s
Nov 22 03:26:46 compute-0 ceph-mon[75011]: osdmap e42: 3 total, 3 up, 3 in
Nov 22 03:26:46 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:46 compute-0 sudo[103434]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:26:46 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:26:46 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:26:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:26:46 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:26:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:26:46 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:46 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev b43b984e-f8da-47d3-8b0a-73f07f3caf56 does not exist
Nov 22 03:26:46 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ae4729ca-f93c-4543-b13d-90348ca1d42d does not exist
Nov 22 03:26:46 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 55b930af-f2ee-4b42-b029-0d39532cf259 does not exist
Nov 22 03:26:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:26:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:26:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:26:46 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:26:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:26:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:47 compute-0 sudo[103688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:47 compute-0 sudo[103688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:47 compute-0 sudo[103688]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:47 compute-0 sudo[103713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:47 compute-0 sudo[103713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:47 compute-0 sudo[103713]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:47 compute-0 sudo[103738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:47 compute-0 sudo[103738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:47 compute-0 sudo[103738]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:47 compute-0 sudo[103763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:26:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v98: 212 pgs: 62 unknown, 150 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 6.6 KiB/s wr, 285 op/s
Nov 22 03:26:47 compute-0 sudo[103763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:47 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.5 deep-scrub starts
Nov 22 03:26:47 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.5 deep-scrub ok
Nov 22 03:26:47 compute-0 ceph-mon[75011]: 3.4 scrub starts
Nov 22 03:26:47 compute-0 ceph-mon[75011]: 3.4 scrub ok
Nov 22 03:26:47 compute-0 ceph-mon[75011]: 4.2 scrub starts
Nov 22 03:26:47 compute-0 ceph-mon[75011]: 4.2 scrub ok
Nov 22 03:26:47 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:47 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:47 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:47 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:26:47 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:26:47 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:26:47 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:26:47 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:26:47 compute-0 podman[103830]: 2025-11-22 03:26:47.801818909 +0000 UTC m=+0.070883466 container create 4768e78439ebde1231d5d1c12192a6844b59c66f7e17a3777219acc37b8ae819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:26:47 compute-0 systemd[1]: Started libpod-conmon-4768e78439ebde1231d5d1c12192a6844b59c66f7e17a3777219acc37b8ae819.scope.
Nov 22 03:26:47 compute-0 podman[103830]: 2025-11-22 03:26:47.770765911 +0000 UTC m=+0.039830458 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:47 compute-0 podman[103830]: 2025-11-22 03:26:47.894838611 +0000 UTC m=+0.163903168 container init 4768e78439ebde1231d5d1c12192a6844b59c66f7e17a3777219acc37b8ae819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:26:47 compute-0 podman[103830]: 2025-11-22 03:26:47.907032219 +0000 UTC m=+0.176096786 container start 4768e78439ebde1231d5d1c12192a6844b59c66f7e17a3777219acc37b8ae819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:26:47 compute-0 cranky_taussig[103847]: 167 167
Nov 22 03:26:47 compute-0 podman[103830]: 2025-11-22 03:26:47.913147293 +0000 UTC m=+0.182211810 container attach 4768e78439ebde1231d5d1c12192a6844b59c66f7e17a3777219acc37b8ae819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:26:47 compute-0 systemd[1]: libpod-4768e78439ebde1231d5d1c12192a6844b59c66f7e17a3777219acc37b8ae819.scope: Deactivated successfully.
Nov 22 03:26:47 compute-0 conmon[103847]: conmon 4768e78439ebde1231d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4768e78439ebde1231d5d1c12192a6844b59c66f7e17a3777219acc37b8ae819.scope/container/memory.events
Nov 22 03:26:47 compute-0 podman[103830]: 2025-11-22 03:26:47.915962043 +0000 UTC m=+0.185026570 container died 4768e78439ebde1231d5d1c12192a6844b59c66f7e17a3777219acc37b8ae819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-def313c0faf42e31dac71cfdcd3a05d4494ae4ea83765840104f1bcc2b80b304-merged.mount: Deactivated successfully.
Nov 22 03:26:47 compute-0 podman[103830]: 2025-11-22 03:26:47.964985527 +0000 UTC m=+0.234050064 container remove 4768e78439ebde1231d5d1c12192a6844b59c66f7e17a3777219acc37b8ae819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:26:47 compute-0 systemd[1]: libpod-conmon-4768e78439ebde1231d5d1c12192a6844b59c66f7e17a3777219acc37b8ae819.scope: Deactivated successfully.
Nov 22 03:26:48 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Nov 22 03:26:48 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Nov 22 03:26:48 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Nov 22 03:26:48 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Nov 22 03:26:48 compute-0 podman[103869]: 2025-11-22 03:26:48.203396223 +0000 UTC m=+0.065720011 container create 627fca4554705e4add186627022d8c3e9a07d2e16f6c7bcd956a459d9597c747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:48 compute-0 systemd[1]: Started libpod-conmon-627fca4554705e4add186627022d8c3e9a07d2e16f6c7bcd956a459d9597c747.scope.
Nov 22 03:26:48 compute-0 podman[103869]: 2025-11-22 03:26:48.170698091 +0000 UTC m=+0.033021949 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d56b96f6463eb9cd386f3aeb7cd8adf7b6dbe6b375c4b6ca66103647f9291a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d56b96f6463eb9cd386f3aeb7cd8adf7b6dbe6b375c4b6ca66103647f9291a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d56b96f6463eb9cd386f3aeb7cd8adf7b6dbe6b375c4b6ca66103647f9291a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d56b96f6463eb9cd386f3aeb7cd8adf7b6dbe6b375c4b6ca66103647f9291a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3d56b96f6463eb9cd386f3aeb7cd8adf7b6dbe6b375c4b6ca66103647f9291a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:48 compute-0 podman[103869]: 2025-11-22 03:26:48.290090683 +0000 UTC m=+0.152414481 container init 627fca4554705e4add186627022d8c3e9a07d2e16f6c7bcd956a459d9597c747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:48 compute-0 podman[103869]: 2025-11-22 03:26:48.305592656 +0000 UTC m=+0.167916424 container start 627fca4554705e4add186627022d8c3e9a07d2e16f6c7bcd956a459d9597c747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:26:48 compute-0 podman[103869]: 2025-11-22 03:26:48.310058157 +0000 UTC m=+0.172381955 container attach 627fca4554705e4add186627022d8c3e9a07d2e16f6c7bcd956a459d9597c747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:26:48 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Nov 22 03:26:48 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Nov 22 03:26:49 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Nov 22 03:26:49 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Nov 22 03:26:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v99: 212 pgs: 212 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 5.6 KiB/s wr, 259 op/s
Nov 22 03:26:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:26:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:26:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:26:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 22 03:26:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 22 03:26:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:26:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:26:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:26:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Nov 22 03:26:49 compute-0 pedantic_mendeleev[103886]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:26:49 compute-0 pedantic_mendeleev[103886]: --> relative data size: 1.0
Nov 22 03:26:49 compute-0 pedantic_mendeleev[103886]: --> All data devices are unavailable
Nov 22 03:26:49 compute-0 ceph-mon[75011]: pgmap v98: 212 pgs: 62 unknown, 150 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 128 KiB/s rd, 6.6 KiB/s wr, 285 op/s
Nov 22 03:26:49 compute-0 ceph-mon[75011]: 2.5 deep-scrub starts
Nov 22 03:26:49 compute-0 ceph-mon[75011]: 2.5 deep-scrub ok
Nov 22 03:26:49 compute-0 ceph-mon[75011]: 3.5 scrub starts
Nov 22 03:26:49 compute-0 ceph-mon[75011]: 3.5 scrub ok
Nov 22 03:26:49 compute-0 ceph-mon[75011]: 4.3 scrub starts
Nov 22 03:26:49 compute-0 ceph-mon[75011]: 4.3 scrub ok
Nov 22 03:26:49 compute-0 systemd[1]: libpod-627fca4554705e4add186627022d8c3e9a07d2e16f6c7bcd956a459d9597c747.scope: Deactivated successfully.
Nov 22 03:26:49 compute-0 systemd[1]: libpod-627fca4554705e4add186627022d8c3e9a07d2e16f6c7bcd956a459d9597c747.scope: Consumed 1.112s CPU time.
Nov 22 03:26:49 compute-0 podman[103869]: 2025-11-22 03:26:49.939004024 +0000 UTC m=+1.801327881 container died 627fca4554705e4add186627022d8c3e9a07d2e16f6c7bcd956a459d9597c747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:26:50 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Nov 22 03:26:50 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Nov 22 03:26:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Nov 22 03:26:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952767372s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.218704224s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952655792s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.218589783s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.930894852s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.196876526s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.930844307s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.196876526s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952692032s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.218704224s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952570915s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.218589783s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.930527687s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.196876526s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.930411339s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.196868896s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.930335045s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.196868896s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.930331230s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.196868896s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.930336952s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.196876526s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952241898s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.218811035s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.930284500s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.196868896s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952186584s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.218811035s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952770233s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.219474792s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952403069s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.219146729s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952741623s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.219474792s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952375412s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.219146729s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.929731369s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.196571350s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952685356s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.219581604s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952346802s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.219261169s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952655792s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.219581604s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.929659843s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.196571350s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.929587364s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.196563721s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952295303s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.219261169s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.929558754s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.196563721s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952917099s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.220008850s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952861786s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.220008850s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.928296089s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.195533752s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.927900314s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.195182800s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.928248405s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.195533752s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.929636002s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.196968079s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952385902s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.219718933s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.927827835s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.195182800s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.929606438s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.196968079s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952334404s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.219718933s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.927564621s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.195091248s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.959266663s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.226791382s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952776909s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.220306396s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952749252s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.220306396s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.927500725s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.195091248s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.959203720s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.226791382s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.929197311s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.196876526s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.927366257s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.195060730s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952653885s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.220413208s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.929153442s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.196876526s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.927317619s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.195060730s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952623367s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.220413208s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.926629066s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.194526672s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952744484s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.220680237s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952685356s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.220642090s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.918429375s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.186370850s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.926575661s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.194526672s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952696800s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.220680237s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952656746s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.220642090s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.918355942s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.186370850s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.918340683s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.186378479s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.918288231s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.186378479s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952499390s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.220687866s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.917984009s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.186195374s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.958250046s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.226478577s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.952466011s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.220687866s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.917951584s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.186195374s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.958218575s) [0] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.226478577s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.917926788s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.186218262s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.917875290s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.186241150s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.917873383s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.186218262s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.917845726s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.186241150s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.958238602s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.226646423s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.927074432s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.195533752s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.917686462s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.186172485s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.958189964s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.226646423s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.927046776s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.195533752s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.917657852s) [1] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.186172485s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.958067894s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.226737976s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.916846275s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.185539246s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.926465034s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.195182800s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.958037376s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.226737976s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.958010674s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.226745605s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.917506218s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active pruub 77.186187744s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.926436424s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.195182800s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.916799545s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.185539246s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.957961082s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.226745605s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=35/36 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43 pruub=12.917371750s) [0] r=-1 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 77.186187744s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.957932472s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active pruub 73.226791382s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=39/40 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.957905769s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.226791382s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3d56b96f6463eb9cd386f3aeb7cd8adf7b6dbe6b375c4b6ca66103647f9291a-merged.mount: Deactivated successfully.
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[2.19( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[5.1e( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[2.18( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[5.19( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[5.7( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[5.18( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[2.1d( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[5.1a( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[5.1d( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[2.1b( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[5.4( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[2.6( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[5.1( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[2.7( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[2.1c( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[2.4( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[2.f( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[2.9( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[2.2( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[5.c( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[5.5( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[2.1f( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[2.a( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[5.2( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[2.5( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[2.3( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[5.3( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[5.f( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[2.b( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[2.8( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[2.d( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[5.9( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[5.16( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[2.16( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[5.15( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[5.14( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[5.12( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[2.15( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[2.13( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[2.11( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.865406990s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.504150391s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.865357399s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.504150391s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.865130424s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.504135132s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.865050316s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.504135132s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.849911690s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.489265442s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.849879265s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.489265442s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.849324226s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488914490s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.849299431s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488914490s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[5.13( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[2.17( empty local-lis/les=0/0 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[5.11( empty local-lis/les=0/0 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.1c( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.317130089s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.862571716s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.1c( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.317080498s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.862571716s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847494125s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.393402100s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.849119186s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488883972s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847452164s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.393402100s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.849091530s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488883972s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.849394798s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.489242554s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.849360466s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.489242554s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[4.18( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[4.13( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.320403099s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869277954s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.13( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.320358276s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869277954s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.1d( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.320278168s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.869255066s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.844398499s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.393386841s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.844352722s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.393386841s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.1d( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.320222855s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.869255066s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.844285011s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.393417358s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.844259262s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.393417358s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.319962502s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869262695s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.1f( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.319966316s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.869354248s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.11( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.319864273s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869262695s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.1f( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.319920540s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.869354248s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.18( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.319919586s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.869392395s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.18( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.319886208s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.869392395s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.843704224s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.393463135s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.843561172s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.393333435s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.843659401s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.393463135s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.843516350s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.393333435s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.1a( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.319651604s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.869522095s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.319633484s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869644165s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.1b( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.319479942s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.869590759s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.843161583s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.393318176s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.838723183s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 83.478889465s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.1b( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.319427490s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.869590759s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.848628998s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488845825s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.843094826s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.393318176s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.838680267s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.478889465s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.848599434s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488845825s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.1a( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.319605827s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.869522095s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[4.e( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[4.1( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.4( v 32'4 (0'0,32'4] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.318469048s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.869651794s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.4( v 32'4 (0'0,32'4] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.318387985s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.869651794s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.848438263s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488792419s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.838384628s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 83.478775024s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.838346481s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.478775024s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.848365784s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488792419s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.15( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.319528580s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869644165s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.848266602s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488807678s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.834054947s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 83.474662781s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.841232300s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.392936707s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.834027290s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.474662781s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.848214149s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488807678s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.841187477s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.392936707s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.838063240s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 83.478767395s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.317848206s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869735718s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847996712s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488754272s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.317796707s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869735718s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.6( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.317722321s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.869750977s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847899437s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488777161s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.6( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.317674637s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.869750977s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847869873s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488777161s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847980499s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488899231s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.317430496s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869697571s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847913742s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488899231s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.833484650s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 83.474601746s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.833442688s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.474601746s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.9( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.317378998s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869697571s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847490311s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488731384s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.841033936s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.393486023s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847451210s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488731384s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.317202568s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869697571s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847202301s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488533020s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847175598s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488533020s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.840980530s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.393486023s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.8( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.317153931s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869697571s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847199440s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488677979s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.832949638s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 83.474433899s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847177505s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488677979s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.832909584s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.474433899s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847019196s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488685608s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.846996307s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488685608s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.2( v 32'4 (0'0,32'4] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316942215s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.869728088s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.832629204s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 83.474456787s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.9( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316929817s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.869789124s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.832605362s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.474456787s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.2( v 32'4 (0'0,32'4] local-lis/les=41/42 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316890717s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.869728088s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.846815109s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488761902s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316863060s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869758606s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.832339287s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 83.474327087s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.9( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316880226s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.869789124s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.846772194s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488761902s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.b( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316949844s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.869918823s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.832257271s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.474327087s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.6( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316814423s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869758606s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.846267700s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488510132s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.846224785s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488510132s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.846105576s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488456726s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.846074104s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488456726s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316722870s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869827271s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[3.17( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.f( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316586494s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.869796753s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.4( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316633224s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869827271s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.840634346s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.393371582s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.f( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316538811s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.869796753s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316528320s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869842529s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316478729s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869842529s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.840076447s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.393371582s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.839189529s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.392662048s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316349030s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869873047s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.839141846s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.392662048s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.e( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316302299s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869873047s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.839021683s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.392669678s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.838981628s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.392669678s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43 pruub=8.838014603s) [1] r=-1 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 83.478767395s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316090584s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869949341s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.838753700s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.392631531s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.847960472s) [1] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488754272s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.5( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316066742s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869949341s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.838704109s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.392631531s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.838762283s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.392829895s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.838727951s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.392829895s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315756798s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869918823s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[7.13( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.e( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315754890s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.869987488s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.842773438s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 89.488937378s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.838468552s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.392730713s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315656662s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869918823s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=37/38 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.842634201s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 89.488937378s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315643311s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869949341s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.e( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315701485s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.869987488s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[8.1d( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.1( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315611839s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869949341s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[4.1a( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.838410378s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.392730713s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.838027000s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.392509460s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[3.15( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.838001251s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.392509460s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315304756s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.869956970s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.c( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315323830s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.869979858s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[8.1f( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[4.a( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.2( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315249443s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.869956970s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.c( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315267563s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.869979858s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[8.18( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.d( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315221786s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.870025635s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.837761879s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.392570496s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[3.12( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.837729454s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.392570496s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[3.f( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.b( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.316911697s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.869918823s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[8.6( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315296173s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.870231628s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[8.1a( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.d( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315185547s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.870025635s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315044403s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.870048523s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.3( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.315256119s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.870231628s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[7.9( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.837465286s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.392494202s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.837436676s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.392494202s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.1c( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.314998627s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.870048523s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.12( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.314962387s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.870109558s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.12( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.314930916s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.870109558s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.10( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.314788818s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.870094299s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.10( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.314758301s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.870094299s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.314697266s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.870071411s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.1f( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.314672470s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.870071411s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.836955070s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.392608643s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.836924553s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.392608643s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[4.1b( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.11( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.314066887s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.870063782s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.11( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.314011574s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.870063782s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[4.1c( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.313818932s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.870140076s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.18( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.313787460s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.870140076s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[3.c( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.835384369s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.392570496s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.835326195s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.392570496s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.312893867s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.870216370s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.834908485s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.392372131s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.1a( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.312847137s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.870216370s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.834863663s) [2] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.392372131s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.14( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.312613487s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.870193481s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.14( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.312539101s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.870193481s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.15( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.312404633s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active pruub 80.870162964s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.817972183s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active pruub 84.375862122s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[8.15( v 32'4 (0'0,32'4] local-lis/les=41/42 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.312276840s) [2] r=-1 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 80.870162964s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[4.14( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=37/38 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43 pruub=14.817934990s) [0] r=-1 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 84.375862122s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.312453270s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active pruub 80.870216370s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[7.1b( empty local-lis/les=41/42 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43 pruub=11.312129974s) [0] r=-1 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 80.870216370s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[8.9( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[7.6( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[4.12( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[4.10( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[4.f( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[7.4( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[4.d( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[8.f( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[4.4( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[4.9( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[7.f( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[3.9( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[3.a( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[4.5( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[3.1( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[3.3( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[4.7( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[8.e( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[4.8( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[3.6( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[8.c( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[6.1( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[8.b( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 43 pg[4.2( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[3.1b( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[8.10( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[7.1f( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[7.3( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[7.18( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[8.14( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[3.1f( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 43 pg[7.1b( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[4.11( empty local-lis/les=0/0 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[8.1c( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[3.16( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[7.11( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[3.11( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[8.1b( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[8.4( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[7.15( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[3.e( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[7.a( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[7.8( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[8.2( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[7.e( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[3.8( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[7.5( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[7.c( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[7.1( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[3.5( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[7.2( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[3.7( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[8.d( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[3.18( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[7.1c( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[8.12( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[8.11( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[3.1d( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[7.1a( empty local-lis/les=0/0 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[3.1e( empty local-lis/les=0/0 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 43 pg[8.15( empty local-lis/les=0/0 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:26:50 compute-0 podman[103869]: 2025-11-22 03:26:50.589167489 +0000 UTC m=+2.451491307 container remove 627fca4554705e4add186627022d8c3e9a07d2e16f6c7bcd956a459d9597c747 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mendeleev, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:26:50 compute-0 systemd[1]: libpod-conmon-627fca4554705e4add186627022d8c3e9a07d2e16f6c7bcd956a459d9597c747.scope: Deactivated successfully.
Nov 22 03:26:50 compute-0 sudo[103763]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:50 compute-0 sudo[103929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:50 compute-0 sudo[103929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:50 compute-0 sudo[103929]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:50 compute-0 sudo[103954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:26:50 compute-0 sudo[103954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:50 compute-0 sudo[103954]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:50 compute-0 sudo[103979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:26:50 compute-0 sudo[103979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:50 compute-0 sudo[103979]: pam_unix(sudo:session): session closed for user root
Nov 22 03:26:50 compute-0 ceph-mon[75011]: 2.6 scrub starts
Nov 22 03:26:50 compute-0 ceph-mon[75011]: 2.6 scrub ok
Nov 22 03:26:50 compute-0 ceph-mon[75011]: 4.4 scrub starts
Nov 22 03:26:50 compute-0 ceph-mon[75011]: 4.4 scrub ok
Nov 22 03:26:50 compute-0 ceph-mon[75011]: pgmap v99: 212 pgs: 212 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 5.6 KiB/s wr, 259 op/s
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:26:50 compute-0 ceph-mon[75011]: 4.5 scrub starts
Nov 22 03:26:50 compute-0 ceph-mon[75011]: 4.5 scrub ok
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:26:50 compute-0 ceph-mon[75011]: osdmap e43: 3 total, 3 up, 3 in
Nov 22 03:26:50 compute-0 sudo[104004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:26:50 compute-0 sudo[104004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:26:51 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Nov 22 03:26:51 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Nov 22 03:26:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Nov 22 03:26:51 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event d3cbb8e1-d404-4cb8-b138-30aec2b70b27 (Global Recovery Event) in 10 seconds
Nov 22 03:26:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Nov 22 03:26:51 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Nov 22 03:26:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v102: 212 pgs: 212 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 03:26:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 22 03:26:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[7.1b( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 podman[104069]: 2025-11-22 03:26:51.38305553 +0000 UTC m=+0.030293091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[4.1a( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[4.1( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[4.e( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[4.a( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[4.13( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[4.18( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 podman[104069]: 2025-11-22 03:26:51.509510901 +0000 UTC m=+0.156748451 container create f1c7c1e814865262e761217c1d388dc5a7a85f6f82f8479aa8454303ae7f33c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[4.10( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[4.8( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[4.14( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[4.9( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[4.5( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[4.7( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[6.5( v 37'39 lc 32'11 (0'0,37'39] local-lis/les=43/44 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[6.7( v 37'39 lc 32'21 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[4.12( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[4.f( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[6.d( v 37'39 lc 32'13 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[6.f( v 37'39 lc 32'1 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[4.2( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=43/44 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[5.19( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[4.4( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[4.d( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [1] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[5.1a( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[5.18( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[5.1d( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[2.1b( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[2.6( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[2.7( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[6.1( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[2.4( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[5.1( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[8.14( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[2.11( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[8.1a( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[2.13( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[3.12( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[5.15( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[8.18( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[8.1f( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[5.14( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[3.15( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[8.1d( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[7.13( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[3.9( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[3.1f( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[3.a( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[2.8( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[7.3( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[7.f( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[2.b( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[8.c( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[5.3( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[8.f( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=32'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[5.2( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[2.1f( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[3.17( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[5.5( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[2.2( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[3.6( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[2.16( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[2.f( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[7.6( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[2.1c( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[5.4( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[7.9( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[8.e( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[8.9( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[8.6( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[2.1d( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[7.18( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[3.1( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[3.c( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[7.4( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[5.7( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[8.b( v 32'4 lc 0'0 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=32'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[3.f( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[7.1f( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[3.1b( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[5.1e( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [0] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[3.3( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [0] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[8.10( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [0] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[2.19( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 44 pg[2.18( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [0] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[2.9( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[2.a( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[2.5( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[5.c( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[2.3( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[2.d( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[5.f( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[5.16( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[5.9( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[5.12( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[5.13( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[2.15( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[2.17( empty local-lis/les=43/44 n=0 ec=35/13 lis/c=35/35 les/c/f=36/36/0 sis=43) [1] r=0 lpr=43 pi=[35,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 44 pg[5.11( empty local-lis/les=43/44 n=0 ec=39/19 lis/c=39/39 les/c/f=40/40/0 sis=43) [1] r=0 lpr=43 pi=[39,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[4.11( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[4.1c( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[7.1c( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[3.18( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[8.12( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[3.16( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[8.1c( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[7.11( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[8.1b( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[7.15( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[3.11( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[3.e( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[7.a( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[8.4( v 32'4 (0'0,32'4] local-lis/les=43/44 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[7.8( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[7.e( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[8.d( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[8.2( v 32'4 (0'0,32'4] local-lis/les=43/44 n=1 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[7.5( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[3.5( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[7.1( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[7.2( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[3.7( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[7.c( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[8.11( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[3.1d( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[3.1e( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[7.1a( empty local-lis/les=43/44 n=0 ec=41/22 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[8.15( v 32'4 (0'0,32'4] local-lis/les=43/44 n=0 ec=41/31 lis/c=41/41 les/c/f=42/42/0 sis=43) [2] r=0 lpr=43 pi=[41,43)/1 crt=32'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[3.8( empty local-lis/les=43/44 n=0 ec=37/15 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 44 pg[4.1b( empty local-lis/les=43/44 n=0 ec=37/17 lis/c=37/37 les/c/f=38/38/0 sis=43) [2] r=0 lpr=43 pi=[37,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:51 compute-0 systemd[1]: Started libpod-conmon-f1c7c1e814865262e761217c1d388dc5a7a85f6f82f8479aa8454303ae7f33c1.scope.
Nov 22 03:26:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:51 compute-0 podman[104069]: 2025-11-22 03:26:51.881155472 +0000 UTC m=+0.528393082 container init f1c7c1e814865262e761217c1d388dc5a7a85f6f82f8479aa8454303ae7f33c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:26:51 compute-0 podman[104069]: 2025-11-22 03:26:51.892538473 +0000 UTC m=+0.539776024 container start f1c7c1e814865262e761217c1d388dc5a7a85f6f82f8479aa8454303ae7f33c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 03:26:51 compute-0 agitated_cerf[104085]: 167 167
Nov 22 03:26:51 compute-0 systemd[1]: libpod-f1c7c1e814865262e761217c1d388dc5a7a85f6f82f8479aa8454303ae7f33c1.scope: Deactivated successfully.
Nov 22 03:26:51 compute-0 podman[104069]: 2025-11-22 03:26:51.983215614 +0000 UTC m=+0.630453135 container attach f1c7c1e814865262e761217c1d388dc5a7a85f6f82f8479aa8454303ae7f33c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:26:51 compute-0 podman[104069]: 2025-11-22 03:26:51.984236729 +0000 UTC m=+0.631474260 container died f1c7c1e814865262e761217c1d388dc5a7a85f6f82f8479aa8454303ae7f33c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:26:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Nov 22 03:26:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-80fa21fa3d5c4187e8cc0dd393d3ad48876506c037fd79f100e42226b7805c22-merged.mount: Deactivated successfully.
Nov 22 03:26:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 22 03:26:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Nov 22 03:26:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Nov 22 03:26:52 compute-0 ceph-mon[75011]: 4.6 scrub starts
Nov 22 03:26:52 compute-0 ceph-mon[75011]: 4.6 scrub ok
Nov 22 03:26:52 compute-0 ceph-mon[75011]: osdmap e44: 3 total, 3 up, 3 in
Nov 22 03:26:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 22 03:26:52 compute-0 podman[104069]: 2025-11-22 03:26:52.925822514 +0000 UTC m=+1.573060065 container remove f1c7c1e814865262e761217c1d388dc5a7a85f6f82f8479aa8454303ae7f33c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_cerf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:26:52 compute-0 systemd[1]: libpod-conmon-f1c7c1e814865262e761217c1d388dc5a7a85f6f82f8479aa8454303ae7f33c1.scope: Deactivated successfully.
Nov 22 03:26:53 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.b scrub starts
Nov 22 03:26:53 compute-0 podman[104111]: 2025-11-22 03:26:53.163742728 +0000 UTC m=+0.044359830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:26:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v104: 212 pgs: 2 activating+degraded, 50 activating, 160 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 20 op/s; 3/245 objects degraded (1.224%)
Nov 22 03:26:53 compute-0 podman[104111]: 2025-11-22 03:26:53.351268373 +0000 UTC m=+0.231885455 container create 6fb114a4ce36ba49cda0728c6b883c91dcae81c120b712400dc0f322a5d9c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_brahmagupta, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:26:53 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 45 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=13.836411476s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 91.478973389s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:53 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 45 pg[6.e( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=13.836331367s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.478973389s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:53 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 45 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=13.836409569s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 91.479087830s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:53 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 45 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=13.836360931s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.479087830s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:53 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 45 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=13.831878662s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 91.474670410s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:53 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 45 pg[6.6( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=13.831831932s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.474670410s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:53 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 45 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=13.830527306s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 91.474533081s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:26:53 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 45 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45 pruub=13.830407143s) [1] r=-1 lpr=45 pi=[39,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 91.474533081s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:26:53 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 45 pg[6.e( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:53 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 45 pg[6.2( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:53 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 45 pg[6.6( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:53 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 45 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:26:53 compute-0 systemd[1]: Started libpod-conmon-6fb114a4ce36ba49cda0728c6b883c91dcae81c120b712400dc0f322a5d9c6b4.scope.
Nov 22 03:26:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe3b5c37a3deacbc7b1a1b710459ed58f3b627c1aeb6982ff98d46fbf5372854/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe3b5c37a3deacbc7b1a1b710459ed58f3b627c1aeb6982ff98d46fbf5372854/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe3b5c37a3deacbc7b1a1b710459ed58f3b627c1aeb6982ff98d46fbf5372854/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe3b5c37a3deacbc7b1a1b710459ed58f3b627c1aeb6982ff98d46fbf5372854/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:26:53 compute-0 podman[104111]: 2025-11-22 03:26:53.661285066 +0000 UTC m=+0.541902168 container init 6fb114a4ce36ba49cda0728c6b883c91dcae81c120b712400dc0f322a5d9c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_brahmagupta, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:53 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.b scrub ok
Nov 22 03:26:53 compute-0 podman[104111]: 2025-11-22 03:26:53.673467687 +0000 UTC m=+0.554084769 container start 6fb114a4ce36ba49cda0728c6b883c91dcae81c120b712400dc0f322a5d9c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_brahmagupta, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:53 compute-0 podman[104111]: 2025-11-22 03:26:53.736453569 +0000 UTC m=+0.617070721 container attach 6fb114a4ce36ba49cda0728c6b883c91dcae81c120b712400dc0f322a5d9c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:26:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Nov 22 03:26:53 compute-0 ceph-mon[75011]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 3/245 objects degraded (1.224%), 2 pgs degraded (PG_DEGRADED)
Nov 22 03:26:53 compute-0 ceph-mon[75011]: pgmap v102: 212 pgs: 212 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 20 op/s
Nov 22 03:26:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 22 03:26:53 compute-0 ceph-mon[75011]: osdmap e45: 3 total, 3 up, 3 in
Nov 22 03:26:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Nov 22 03:26:54 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Nov 22 03:26:54 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.c deep-scrub starts
Nov 22 03:26:54 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.c deep-scrub ok
Nov 22 03:26:54 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.c deep-scrub starts
Nov 22 03:26:54 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.c deep-scrub ok
Nov 22 03:26:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 46 pg[6.e( v 37'39 lc 32'19 (0'0,37'39] local-lis/les=45/46 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 46 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 46 pg[6.6( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=45/46 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 46 pg[6.2( v 37'39 (0'0,37'39] local-lis/les=45/46 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=45) [1] r=0 lpr=45 pi=[39,45)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]: {
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:     "0": [
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:         {
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "devices": [
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "/dev/loop3"
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             ],
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_name": "ceph_lv0",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_size": "21470642176",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "name": "ceph_lv0",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "tags": {
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.crush_device_class": "",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.encrypted": "0",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.osd_id": "0",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.type": "block",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.vdo": "0"
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             },
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "type": "block",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "vg_name": "ceph_vg0"
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:         }
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:     ],
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:     "1": [
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:         {
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "devices": [
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "/dev/loop4"
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             ],
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_name": "ceph_lv1",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_size": "21470642176",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "name": "ceph_lv1",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "tags": {
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.crush_device_class": "",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.encrypted": "0",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.osd_id": "1",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.type": "block",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.vdo": "0"
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             },
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "type": "block",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "vg_name": "ceph_vg1"
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:         }
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:     ],
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:     "2": [
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:         {
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "devices": [
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "/dev/loop5"
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             ],
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_name": "ceph_lv2",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_size": "21470642176",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "name": "ceph_lv2",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "tags": {
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.cluster_name": "ceph",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.crush_device_class": "",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.encrypted": "0",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.osd_id": "2",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.type": "block",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:                 "ceph.vdo": "0"
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             },
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "type": "block",
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:             "vg_name": "ceph_vg2"
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:         }
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]:     ]
Nov 22 03:26:54 compute-0 compassionate_brahmagupta[104128]: }
Nov 22 03:26:54 compute-0 systemd[1]: libpod-6fb114a4ce36ba49cda0728c6b883c91dcae81c120b712400dc0f322a5d9c6b4.scope: Deactivated successfully.
Nov 22 03:26:54 compute-0 podman[104111]: 2025-11-22 03:26:54.619341122 +0000 UTC m=+1.499958214 container died 6fb114a4ce36ba49cda0728c6b883c91dcae81c120b712400dc0f322a5d9c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_brahmagupta, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:26:54 compute-0 ceph-mon[75011]: 4.b scrub starts
Nov 22 03:26:54 compute-0 ceph-mon[75011]: pgmap v104: 212 pgs: 2 activating+degraded, 50 activating, 160 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 0 B/s wr, 20 op/s; 3/245 objects degraded (1.224%)
Nov 22 03:26:54 compute-0 ceph-mon[75011]: 4.b scrub ok
Nov 22 03:26:54 compute-0 ceph-mon[75011]: Health check failed: Degraded data redundancy: 3/245 objects degraded (1.224%), 2 pgs degraded (PG_DEGRADED)
Nov 22 03:26:54 compute-0 ceph-mon[75011]: osdmap e46: 3 total, 3 up, 3 in
Nov 22 03:26:54 compute-0 ceph-mon[75011]: 4.c deep-scrub starts
Nov 22 03:26:54 compute-0 ceph-mon[75011]: 4.c deep-scrub ok
Nov 22 03:26:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe3b5c37a3deacbc7b1a1b710459ed58f3b627c1aeb6982ff98d46fbf5372854-merged.mount: Deactivated successfully.
Nov 22 03:26:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v106: 212 pgs: 1 active+recovery_wait+degraded, 2 activating+degraded, 4 peering, 50 activating, 1 active+recovering, 154 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 5/245 objects degraded (2.041%); 3/245 objects misplaced (1.224%); 31 B/s, 1 objects/s recovering
Nov 22 03:26:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:26:56 compute-0 ceph-mgr[75294]: [progress INFO root] Writing back 13 completed events
Nov 22 03:26:57 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.15 deep-scrub starts
Nov 22 03:26:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v107: 212 pgs: 1 active+recovery_wait+degraded, 4 peering, 1 active+recovering, 206 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2/245 objects degraded (0.816%); 3/245 objects misplaced (1.224%); 110 B/s, 1 objects/s recovering
Nov 22 03:26:57 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.e scrub starts
Nov 22 03:26:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v108: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 163 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:27:01 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.083357334s, txc = 0x5603709a6600
Nov 22 03:27:01 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.075868607s, txc = 0x56036fbde300
Nov 22 03:27:01 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.068603516s, txc = 0x56036fbe0000
Nov 22 03:27:01 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 5.957019329s
Nov 22 03:27:01 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 5.957019806s
Nov 22 03:27:01 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.15 deep-scrub ok
Nov 22 03:27:01 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.b scrub starts
Nov 22 03:27:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v109: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:27:01 compute-0 ceph-mon[75011]: log_channel(cluster) log [WRN] : Health check update: Degraded data redundancy: 5/245 objects degraded (2.041%), 3 pgs degraded (PG_DEGRADED)
Nov 22 03:27:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:27:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 22 03:27:01 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 03:27:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 22 03:27:01 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 03:27:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:27:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 6.909700394s
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 6.909700394s
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.100356102s, txc = 0x560951968900
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.100265503s, txc = 0x560951968300
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.100111961s, txc = 0x560951a29200
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.099829674s, txc = 0x5609518b6300
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.099627018s, txc = 0x5609518b6900
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.099358559s, txc = 0x5609518b6c00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.099168777s, txc = 0x5609518b7500
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.098940372s, txc = 0x5609518afb00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.098720074s, txc = 0x5609518aec00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.098571777s, txc = 0x5609518b6f00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.098399639s, txc = 0x5609518aef00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.098219872s, txc = 0x56095196db00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.098035336s, txc = 0x56095196c600
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.097881317s, txc = 0x56095196c900
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.097639561s, txc = 0x56095196cc00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.097458839s, txc = 0x56095196cf00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.097268105s, txc = 0x56095196d200
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.097067356s, txc = 0x56095196d500
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.096895695s, txc = 0x56095196d800
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.096712589s, txc = 0x560951b7a000
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.096574306s, txc = 0x560951b7a300
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.096372604s, txc = 0x560951b7a600
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.096140862s, txc = 0x560951b7a900
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.095954895s, txc = 0x560951b7ac00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.095779896s, txc = 0x560951b7af00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.095615387s, txc = 0x560951b7b200
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.095415115s, txc = 0x560951b7b500
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.095211983s, txc = 0x560951b7b800
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.095015526s, txc = 0x560951a28f00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.094906807s, txc = 0x5609518b1200
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.094830990s, txc = 0x560951a2a900
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.094670296s, txc = 0x5609518b0f00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.094571114s, txc = 0x5609518ae600
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.094485283s, txc = 0x5609518b7b00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.094403267s, txc = 0x560951a2a600
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.094330311s, txc = 0x560951934000
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.094217300s, txc = 0x560951a2a300
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.094141960s, txc = 0x560951a2a000
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.094058514s, txc = 0x560951934300
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.093984127s, txc = 0x560951934600
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.093904972s, txc = 0x5609518b0c00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.093825817s, txc = 0x560951934900
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.093752861s, txc = 0x560951a28c00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.093671322s, txc = 0x5609518b2900
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.093592644s, txc = 0x560951a17b00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.093516827s, txc = 0x5609518b0900
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.093439579s, txc = 0x560951a28900
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.093292236s, txc = 0x560951592f00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.093207359s, txc = 0x5609518b2f00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.093132973s, txc = 0x560951981b00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.093056202s, txc = 0x560951887800
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.092979908s, txc = 0x560951887500
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.092908859s, txc = 0x560951981500
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.092839241s, txc = 0x5609519c2f00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.092760563s, txc = 0x560951593800
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.092689037s, txc = 0x560951981200
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.092579842s, txc = 0x560951593b00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.092505932s, txc = 0x5609518b5b00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.092364788s, txc = 0x560951887200
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.092280388s, txc = 0x560951886f00
Nov 22 03:27:02 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.092199802s, txc = 0x560951a29500
Nov 22 03:27:02 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.b scrub ok
Nov 22 03:27:02 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Nov 22 03:27:02 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_commit, latency = 6.555882931s
Nov 22 03:27:02 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency slow operation observed for kv_sync, latency = 6.555883408s
Nov 22 03:27:02 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.556265831s, txc = 0x55936e6ee300
Nov 22 03:27:02 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.e scrub ok
Nov 22 03:27:02 compute-0 ceph-mon[75011]: 2.c deep-scrub starts
Nov 22 03:27:02 compute-0 ceph-mon[75011]: 2.c deep-scrub ok
Nov 22 03:27:02 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.632361412s, txc = 0x5603709a6900
Nov 22 03:27:02 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 7.511725426s, txc = 0x56036fbde600
Nov 22 03:27:02 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.465908051s, txc = 0x56036fbdec00
Nov 22 03:27:02 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Nov 22 03:27:03 compute-0 podman[104111]: 2025-11-22 03:27:03.244147507 +0000 UTC m=+10.124764589 container remove 6fb114a4ce36ba49cda0728c6b883c91dcae81c120b712400dc0f322a5d9c6b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:27:03 compute-0 sudo[104004]: pam_unix(sudo:session): session closed for user root
Nov 22 03:27:03 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.827808380s, txc = 0x55936e6c7800
Nov 22 03:27:03 compute-0 systemd[1]: libpod-conmon-6fb114a4ce36ba49cda0728c6b883c91dcae81c120b712400dc0f322a5d9c6b4.scope: Deactivated successfully.
Nov 22 03:27:03 compute-0 sudo[104149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:27:03 compute-0 sudo[104149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:27:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v110: 212 pgs: 2 active+clean+scrubbing, 210 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 130 B/s, 0 keys/s, 1 objects/s recovering
Nov 22 03:27:03 compute-0 sudo[104149]: pam_unix(sudo:session): session closed for user root
Nov 22 03:27:03 compute-0 sudo[104176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:27:03 compute-0 sudo[104176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:27:03 compute-0 sudo[104176]: pam_unix(sudo:session): session closed for user root
Nov 22 03:27:03 compute-0 sudo[104201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:27:03 compute-0 sudo[104201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:27:03 compute-0 sudo[104201]: pam_unix(sudo:session): session closed for user root
Nov 22 03:27:03 compute-0 sudo[104226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:27:03 compute-0 sudo[104226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:27:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 22 03:27:03 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 03:27:03 compute-0 podman[104290]: 2025-11-22 03:27:03.828467898 +0000 UTC m=+0.029563114 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:27:03 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 03:27:03 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 03:27:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Nov 22 03:27:04 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:27:04 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Nov 22 03:27:04 compute-0 podman[104290]: 2025-11-22 03:27:04.522156836 +0000 UTC m=+0.723252052 container create 0695165c6dc2103c5f8104c1538aed2514c367b8c5c47aa4f1159130bf1daa5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ganguly, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:27:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Nov 22 03:27:04 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 5/245 objects degraded (2.041%), 3 pgs degraded)
Nov 22 03:27:04 compute-0 ceph-mon[75011]: log_channel(cluster) log [INF] : Cluster is now healthy
Nov 22 03:27:05 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Nov 22 03:27:05 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Nov 22 03:27:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v112: 212 pgs: 3 active+clean+scrubbing, 209 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 114 B/s, 0 keys/s, 0 objects/s recovering
Nov 22 03:27:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 22 03:27:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 03:27:05 compute-0 ceph-mon[75011]: pgmap v106: 212 pgs: 1 active+recovery_wait+degraded, 2 activating+degraded, 4 peering, 50 activating, 1 active+recovering, 154 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 5/245 objects degraded (2.041%); 3/245 objects misplaced (1.224%); 31 B/s, 1 objects/s recovering
Nov 22 03:27:05 compute-0 ceph-mon[75011]: 4.15 deep-scrub starts
Nov 22 03:27:05 compute-0 ceph-mon[75011]: pgmap v107: 212 pgs: 1 active+recovery_wait+degraded, 4 peering, 1 active+recovering, 206 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 2/245 objects degraded (0.816%); 3/245 objects misplaced (1.224%); 110 B/s, 1 objects/s recovering
Nov 22 03:27:05 compute-0 ceph-mon[75011]: 2.e scrub starts
Nov 22 03:27:05 compute-0 ceph-mon[75011]: pgmap v108: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 163 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:27:05 compute-0 ceph-mon[75011]: 4.15 deep-scrub ok
Nov 22 03:27:05 compute-0 ceph-mon[75011]: 3.b scrub starts
Nov 22 03:27:05 compute-0 ceph-mon[75011]: pgmap v109: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:27:05 compute-0 ceph-mon[75011]: Health check update: Degraded data redundancy: 5/245 objects degraded (2.041%), 3 pgs degraded (PG_DEGRADED)
Nov 22 03:27:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 03:27:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 03:27:05 compute-0 ceph-mon[75011]: 3.b scrub ok
Nov 22 03:27:05 compute-0 ceph-mon[75011]: 4.16 scrub starts
Nov 22 03:27:05 compute-0 ceph-mon[75011]: 2.e scrub ok
Nov 22 03:27:05 compute-0 ceph-mon[75011]: 4.16 scrub ok
Nov 22 03:27:05 compute-0 systemd[1]: Started libpod-conmon-0695165c6dc2103c5f8104c1538aed2514c367b8c5c47aa4f1159130bf1daa5f.scope.
Nov 22 03:27:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:27:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:27:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:27:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:27:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:27:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:27:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:27:06 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.12 deep-scrub starts
Nov 22 03:27:06 compute-0 podman[104290]: 2025-11-22 03:27:06.951235525 +0000 UTC m=+3.152330771 container init 0695165c6dc2103c5f8104c1538aed2514c367b8c5c47aa4f1159130bf1daa5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:27:06 compute-0 podman[104290]: 2025-11-22 03:27:06.963672635 +0000 UTC m=+3.164767851 container start 0695165c6dc2103c5f8104c1538aed2514c367b8c5c47aa4f1159130bf1daa5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:27:06 compute-0 naughty_ganguly[104306]: 167 167
Nov 22 03:27:06 compute-0 systemd[1]: libpod-0695165c6dc2103c5f8104c1538aed2514c367b8c5c47aa4f1159130bf1daa5f.scope: Deactivated successfully.
Nov 22 03:27:07 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 47 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=39/20 lis/c=43/43 les/c/f=44/46/0 sis=47 pruub=8.220870018s) [0] r=-1 lpr=47 pi=[43,47)/1 crt=37'39 mlcod 37'39 active pruub 94.657356262s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:07 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 47 pg[6.3( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=39/20 lis/c=43/43 les/c/f=44/46/0 sis=47 pruub=8.220787048s) [0] r=-1 lpr=47 pi=[43,47)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 94.657356262s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:07 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 47 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=43/43 les/c/f=44/45/0 sis=47 pruub=8.220350266s) [0] r=-1 lpr=47 pi=[43,47)/1 crt=37'39 mlcod 37'39 active pruub 94.657226562s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:07 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 47 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=43/43 les/c/f=44/45/0 sis=47 pruub=8.220290184s) [0] r=-1 lpr=47 pi=[43,47)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 94.657226562s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:07 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 47 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.219831467s) [0] r=-1 lpr=47 pi=[43,47)/1 crt=37'39 mlcod 37'39 active pruub 94.657157898s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:07 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 47 pg[6.7( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.219786644s) [0] r=-1 lpr=47 pi=[43,47)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 94.657157898s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:07 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 47 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.218293190s) [0] r=-1 lpr=47 pi=[43,47)/1 crt=37'39 mlcod 37'39 active pruub 94.655899048s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:07 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 47 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=47 pruub=8.218194962s) [0] r=-1 lpr=47 pi=[43,47)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 94.655899048s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:07 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.12 deep-scrub ok
Nov 22 03:27:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v113: 212 pgs: 1 active+clean+scrubbing+deep, 2 active+clean+scrubbing, 209 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 0 keys/s, 0 objects/s recovering
Nov 22 03:27:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 22 03:27:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 03:27:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 03:27:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Nov 22 03:27:07 compute-0 podman[104290]: 2025-11-22 03:27:07.938385244 +0000 UTC m=+4.139480470 container attach 0695165c6dc2103c5f8104c1538aed2514c367b8c5c47aa4f1159130bf1daa5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ganguly, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:27:07 compute-0 podman[104290]: 2025-11-22 03:27:07.93935178 +0000 UTC m=+4.140446966 container died 0695165c6dc2103c5f8104c1538aed2514c367b8c5c47aa4f1159130bf1daa5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ganguly, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:27:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Nov 22 03:27:08 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 47 pg[6.7( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=47) [0] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:08 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 47 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=47) [0] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:08 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 47 pg[6.3( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/46/0 sis=47) [0] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:08 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 47 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/45/0 sis=47) [0] r=0 lpr=47 pi=[43,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Nov 22 03:27:09 compute-0 ceph-mon[75011]: pgmap v110: 212 pgs: 2 active+clean+scrubbing, 210 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 130 B/s, 0 keys/s, 1 objects/s recovering
Nov 22 03:27:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 03:27:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 03:27:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 03:27:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:27:09 compute-0 ceph-mon[75011]: osdmap e47: 3 total, 3 up, 3 in
Nov 22 03:27:09 compute-0 ceph-mon[75011]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 5/245 objects degraded (2.041%), 3 pgs degraded)
Nov 22 03:27:09 compute-0 ceph-mon[75011]: Cluster is now healthy
Nov 22 03:27:09 compute-0 ceph-mon[75011]: 2.10 scrub starts
Nov 22 03:27:09 compute-0 ceph-mon[75011]: 2.10 scrub ok
Nov 22 03:27:09 compute-0 ceph-mon[75011]: pgmap v112: 212 pgs: 3 active+clean+scrubbing, 209 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 114 B/s, 0 keys/s, 0 objects/s recovering
Nov 22 03:27:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 03:27:09 compute-0 ceph-mon[75011]: 2.12 deep-scrub starts
Nov 22 03:27:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v115: 212 pgs: 1 active+clean+scrubbing+deep, 1 active+clean+scrubbing, 210 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-77de8d223a8f863aa5f5493a0db467c02e77345c35d7c9951bac5dd88b50715d-merged.mount: Deactivated successfully.
Nov 22 03:27:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 22 03:27:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 03:27:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 03:27:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 03:27:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Nov 22 03:27:10 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Nov 22 03:27:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 49 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=49 pruub=12.778009415s) [1] r=-1 lpr=49 pi=[39,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 107.479553223s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 49 pg[6.c( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=49 pruub=12.777941704s) [1] r=-1 lpr=49 pi=[39,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.479553223s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 49 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=49 pruub=12.776721001s) [1] r=-1 lpr=49 pi=[39,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 107.479522705s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 49 pg[6.4( v 37'39 (0'0,37'39] local-lis/les=39/40 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=49 pruub=12.776659012s) [1] r=-1 lpr=49 pi=[39,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 107.479522705s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:10 compute-0 podman[104290]: 2025-11-22 03:27:10.493708515 +0000 UTC m=+6.694803691 container remove 0695165c6dc2103c5f8104c1538aed2514c367b8c5c47aa4f1159130bf1daa5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 22 03:27:10 compute-0 ceph-mon[75011]: 2.12 deep-scrub ok
Nov 22 03:27:10 compute-0 ceph-mon[75011]: pgmap v113: 212 pgs: 1 active+clean+scrubbing+deep, 2 active+clean+scrubbing, 209 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 63 B/s, 0 keys/s, 0 objects/s recovering
Nov 22 03:27:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 03:27:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 03:27:10 compute-0 ceph-mon[75011]: osdmap e48: 3 total, 3 up, 3 in
Nov 22 03:27:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 03:27:10 compute-0 systemd[1]: libpod-conmon-0695165c6dc2103c5f8104c1538aed2514c367b8c5c47aa4f1159130bf1daa5f.scope: Deactivated successfully.
Nov 22 03:27:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 49 pg[6.f( v 37'39 lc 32'1 (0'0,37'39] local-lis/les=47/49 n=1 ec=39/20 lis/c=43/43 les/c/f=44/45/0 sis=47) [0] r=0 lpr=47 pi=[43,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 49 pg[6.3( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=47/49 n=2 ec=39/20 lis/c=43/43 les/c/f=44/46/0 sis=47) [0] r=0 lpr=47 pi=[43,47)/1 crt=37'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:10 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 49 pg[6.c( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=49) [1] r=0 lpr=49 pi=[39,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:10 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 49 pg[6.4( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=49) [1] r=0 lpr=49 pi=[39,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 49 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=47/49 n=1 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=47) [0] r=0 lpr=47 pi=[43,47)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 49 pg[6.7( v 37'39 lc 32'21 (0'0,37'39] local-lis/les=47/49 n=1 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=47) [0] r=0 lpr=47 pi=[43,47)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:10 compute-0 podman[104330]: 2025-11-22 03:27:10.726569702 +0000 UTC m=+0.029009799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:27:10 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.d scrub starts
Nov 22 03:27:10 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Nov 22 03:27:10 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Nov 22 03:27:10 compute-0 podman[104330]: 2025-11-22 03:27:10.85410243 +0000 UTC m=+0.156542527 container create 8b3ec46a0774439f72e0f6764a4e9823f7ea10455e128a3f83638c090f7ab9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:27:10 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Nov 22 03:27:10 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Nov 22 03:27:10 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.d scrub ok
Nov 22 03:27:10 compute-0 systemd[1]: Started libpod-conmon-8b3ec46a0774439f72e0f6764a4e9823f7ea10455e128a3f83638c090f7ab9ab.scope.
Nov 22 03:27:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9513ae9cc0aba49c33a5d82259828ed1c71944858aac0aeb26ffc048d8b3537/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9513ae9cc0aba49c33a5d82259828ed1c71944858aac0aeb26ffc048d8b3537/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9513ae9cc0aba49c33a5d82259828ed1c71944858aac0aeb26ffc048d8b3537/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:27:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9513ae9cc0aba49c33a5d82259828ed1c71944858aac0aeb26ffc048d8b3537/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:27:11 compute-0 podman[104330]: 2025-11-22 03:27:11.137622866 +0000 UTC m=+0.440063013 container init 8b3ec46a0774439f72e0f6764a4e9823f7ea10455e128a3f83638c090f7ab9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:27:11 compute-0 podman[104330]: 2025-11-22 03:27:11.149562883 +0000 UTC m=+0.452002950 container start 8b3ec46a0774439f72e0f6764a4e9823f7ea10455e128a3f83638c090f7ab9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:27:11 compute-0 podman[104330]: 2025-11-22 03:27:11.271486631 +0000 UTC m=+0.573926728 container attach 8b3ec46a0774439f72e0f6764a4e9823f7ea10455e128a3f83638c090f7ab9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:27:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Nov 22 03:27:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v117: 212 pgs: 2 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 209 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 22 03:27:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 22 03:27:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 03:27:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Nov 22 03:27:11 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Nov 22 03:27:11 compute-0 ceph-mon[75011]: pgmap v115: 212 pgs: 1 active+clean+scrubbing+deep, 1 active+clean+scrubbing, 210 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:11 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 03:27:11 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 03:27:11 compute-0 ceph-mon[75011]: osdmap e49: 3 total, 3 up, 3 in
Nov 22 03:27:11 compute-0 ceph-mon[75011]: 3.d scrub starts
Nov 22 03:27:11 compute-0 ceph-mon[75011]: 3.10 scrub starts
Nov 22 03:27:11 compute-0 ceph-mon[75011]: 3.13 scrub starts
Nov 22 03:27:11 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 22 03:27:11 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 03:27:11 compute-0 ceph-mon[75011]: osdmap e50: 3 total, 3 up, 3 in
Nov 22 03:27:11 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 50 pg[6.4( v 37'39 lc 32'15 (0'0,37'39] local-lis/les=49/50 n=2 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=49) [1] r=0 lpr=49 pi=[39,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:11 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 50 pg[6.c( v 37'39 lc 32'16 (0'0,37'39] local-lis/les=49/50 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=49) [1] r=0 lpr=49 pi=[39,49)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:27:12 compute-0 peaceful_pike[104347]: {
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "osd_id": 1,
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "type": "bluestore"
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:     },
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "osd_id": 0,
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "type": "bluestore"
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:     },
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "osd_id": 2,
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:         "type": "bluestore"
Nov 22 03:27:12 compute-0 peaceful_pike[104347]:     }
Nov 22 03:27:12 compute-0 peaceful_pike[104347]: }
Nov 22 03:27:12 compute-0 systemd[1]: libpod-8b3ec46a0774439f72e0f6764a4e9823f7ea10455e128a3f83638c090f7ab9ab.scope: Deactivated successfully.
Nov 22 03:27:12 compute-0 podman[104330]: 2025-11-22 03:27:12.221588218 +0000 UTC m=+1.524028285 container died 8b3ec46a0774439f72e0f6764a4e9823f7ea10455e128a3f83638c090f7ab9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:27:12 compute-0 systemd[1]: libpod-8b3ec46a0774439f72e0f6764a4e9823f7ea10455e128a3f83638c090f7ab9ab.scope: Consumed 1.077s CPU time.
Nov 22 03:27:12 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Nov 22 03:27:12 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Nov 22 03:27:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9513ae9cc0aba49c33a5d82259828ed1c71944858aac0aeb26ffc048d8b3537-merged.mount: Deactivated successfully.
Nov 22 03:27:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Nov 22 03:27:12 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 22 03:27:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Nov 22 03:27:12 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Nov 22 03:27:12 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Nov 22 03:27:12 compute-0 podman[104330]: 2025-11-22 03:27:12.74773021 +0000 UTC m=+2.050170297 container remove 8b3ec46a0774439f72e0f6764a4e9823f7ea10455e128a3f83638c090f7ab9ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_pike, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:27:12 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Nov 22 03:27:12 compute-0 sudo[104226]: pam_unix(sudo:session): session closed for user root
Nov 22 03:27:12 compute-0 systemd[1]: libpod-conmon-8b3ec46a0774439f72e0f6764a4e9823f7ea10455e128a3f83638c090f7ab9ab.scope: Deactivated successfully.
Nov 22 03:27:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:27:12 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:27:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:27:13 compute-0 ceph-mon[75011]: 3.10 scrub ok
Nov 22 03:27:13 compute-0 ceph-mon[75011]: 3.13 scrub ok
Nov 22 03:27:13 compute-0 ceph-mon[75011]: 3.d scrub ok
Nov 22 03:27:13 compute-0 ceph-mon[75011]: pgmap v117: 212 pgs: 2 active+clean+scrubbing, 1 active+clean+scrubbing+deep, 209 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:13 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 22 03:27:13 compute-0 ceph-mon[75011]: osdmap e51: 3 total, 3 up, 3 in
Nov 22 03:27:13 compute-0 ceph-mon[75011]: 4.17 scrub starts
Nov 22 03:27:13 compute-0 ceph-mon[75011]: 4.17 scrub ok
Nov 22 03:27:13 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:27:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:27:13 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 7e25af8f-e9f3-414c-a686-c5cee4a2fd51 does not exist
Nov 22 03:27:13 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev f1aed88f-15dd-4228-bbaf-cf13ea744a97 does not exist
Nov 22 03:27:13 compute-0 sudo[104393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:27:13 compute-0 sudo[104393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:27:13 compute-0 sudo[104393]: pam_unix(sudo:session): session closed for user root
Nov 22 03:27:13 compute-0 sudo[104418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:27:13 compute-0 sudo[104418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:27:13 compute-0 sudo[104418]: pam_unix(sudo:session): session closed for user root
Nov 22 03:27:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v120: 212 pgs: 3 active+clean+scrubbing, 209 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 124 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:27:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 22 03:27:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 22 03:27:13 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Nov 22 03:27:13 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Nov 22 03:27:13 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Nov 22 03:27:13 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Nov 22 03:27:13 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Nov 22 03:27:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=43/43 les/c/f=44/45/0 sis=51 pruub=10.881677628s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=37'39 mlcod 37'39 active pruub 103.848526001s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 51 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=43/43 les/c/f=44/45/0 sis=51 pruub=10.881585121s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 103.848526001s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.881602287s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=37'39 mlcod 37'39 active pruub 103.848579407s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 51 pg[6.5( v 37'39 (0'0,37'39] local-lis/les=43/44 n=2 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=51 pruub=10.881525040s) [0] r=-1 lpr=51 pi=[43,51)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 103.848579407s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:13 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 51 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/45/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:13 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 51 pg[6.5( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:13 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Nov 22 03:27:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Nov 22 03:27:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 22 03:27:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Nov 22 03:27:14 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Nov 22 03:27:14 compute-0 ceph-mon[75011]: 2.14 scrub starts
Nov 22 03:27:14 compute-0 ceph-mon[75011]: 2.14 scrub ok
Nov 22 03:27:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:27:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 22 03:27:14 compute-0 ceph-mon[75011]: 2.1a scrub starts
Nov 22 03:27:14 compute-0 ceph-mon[75011]: 2.1a scrub ok
Nov 22 03:27:14 compute-0 ceph-mon[75011]: 4.19 scrub starts
Nov 22 03:27:14 compute-0 ceph-mon[75011]: 4.19 scrub ok
Nov 22 03:27:14 compute-0 ceph-mon[75011]: 3.14 scrub starts
Nov 22 03:27:14 compute-0 ceph-mon[75011]: 3.14 scrub ok
Nov 22 03:27:14 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 52 pg[6.d( v 37'39 lc 32'13 (0'0,37'39] local-lis/les=51/52 n=1 ec=39/20 lis/c=43/43 les/c/f=44/45/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:14 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 52 pg[6.5( v 37'39 lc 32'11 (0'0,37'39] local-lis/les=51/52 n=2 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=51) [0] r=0 lpr=51 pi=[43,51)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:14 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Nov 22 03:27:14 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Nov 22 03:27:15 compute-0 ceph-mon[75011]: pgmap v120: 212 pgs: 3 active+clean+scrubbing, 209 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 124 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:27:15 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 22 03:27:15 compute-0 ceph-mon[75011]: osdmap e52: 3 total, 3 up, 3 in
Nov 22 03:27:15 compute-0 ceph-mon[75011]: 3.19 scrub starts
Nov 22 03:27:15 compute-0 ceph-mon[75011]: 3.19 scrub ok
Nov 22 03:27:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v122: 212 pgs: 3 active+clean+scrubbing, 209 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 127 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:27:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 22 03:27:15 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 22 03:27:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Nov 22 03:27:16 compute-0 ceph-mon[75011]: pgmap v122: 212 pgs: 3 active+clean+scrubbing, 209 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 127 B/s, 1 keys/s, 1 objects/s recovering
Nov 22 03:27:16 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 22 03:27:16 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Nov 22 03:27:16 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Nov 22 03:27:16 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 22 03:27:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Nov 22 03:27:16 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Nov 22 03:27:16 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Nov 22 03:27:16 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Nov 22 03:27:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:27:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v124: 212 pgs: 1 active+clean+scrubbing, 211 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 379 B/s, 1 keys/s, 2 objects/s recovering
Nov 22 03:27:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 22 03:27:17 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 22 03:27:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Nov 22 03:27:17 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 22 03:27:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Nov 22 03:27:17 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Nov 22 03:27:17 compute-0 ceph-mon[75011]: 2.1e scrub starts
Nov 22 03:27:17 compute-0 ceph-mon[75011]: 2.1e scrub ok
Nov 22 03:27:17 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 22 03:27:17 compute-0 ceph-mon[75011]: osdmap e53: 3 total, 3 up, 3 in
Nov 22 03:27:17 compute-0 ceph-mon[75011]: 3.1a scrub starts
Nov 22 03:27:17 compute-0 ceph-mon[75011]: 3.1a scrub ok
Nov 22 03:27:17 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 22 03:27:17 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Nov 22 03:27:17 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Nov 22 03:27:18 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 54 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=54 pruub=12.940479279s) [2] r=-1 lpr=54 pi=[39,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 115.475669861s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:18 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 54 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=39/40 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=54 pruub=12.940379143s) [2] r=-1 lpr=54 pi=[39,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 115.475669861s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:18 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 54 pg[6.8( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=54) [2] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:18 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Nov 22 03:27:18 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Nov 22 03:27:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Nov 22 03:27:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Nov 22 03:27:18 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Nov 22 03:27:18 compute-0 ceph-mon[75011]: pgmap v124: 212 pgs: 1 active+clean+scrubbing, 211 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 379 B/s, 1 keys/s, 2 objects/s recovering
Nov 22 03:27:18 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 22 03:27:18 compute-0 ceph-mon[75011]: osdmap e54: 3 total, 3 up, 3 in
Nov 22 03:27:18 compute-0 ceph-mon[75011]: 4.1d scrub starts
Nov 22 03:27:18 compute-0 ceph-mon[75011]: 4.1d scrub ok
Nov 22 03:27:18 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 55 pg[6.8( v 37'39 (0'0,37'39] local-lis/les=54/55 n=1 ec=39/20 lis/c=39/39 les/c/f=40/40/0 sis=54) [2] r=0 lpr=54 pi=[39,54)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v127: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 333 B/s, 1 objects/s recovering
Nov 22 03:27:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 22 03:27:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 22 03:27:19 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Nov 22 03:27:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Nov 22 03:27:19 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Nov 22 03:27:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 22 03:27:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Nov 22 03:27:19 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Nov 22 03:27:19 compute-0 ceph-mon[75011]: 5.6 scrub starts
Nov 22 03:27:19 compute-0 ceph-mon[75011]: 5.6 scrub ok
Nov 22 03:27:19 compute-0 ceph-mon[75011]: osdmap e55: 3 total, 3 up, 3 in
Nov 22 03:27:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 22 03:27:20 compute-0 ceph-mon[75011]: pgmap v127: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 333 B/s, 1 objects/s recovering
Nov 22 03:27:20 compute-0 ceph-mon[75011]: 3.1c scrub starts
Nov 22 03:27:20 compute-0 ceph-mon[75011]: 3.1c scrub ok
Nov 22 03:27:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 22 03:27:20 compute-0 ceph-mon[75011]: osdmap e56: 3 total, 3 up, 3 in
Nov 22 03:27:20 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Nov 22 03:27:20 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Nov 22 03:27:20 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 56 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.957586288s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 111.848587036s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:20 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 56 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=43/44 n=1 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=56 pruub=11.957526207s) [0] r=-1 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 111.848587036s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:20 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 56 pg[6.9( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v129: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 0 objects/s recovering
Nov 22 03:27:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 22 03:27:21 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 22 03:27:21 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Nov 22 03:27:21 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Nov 22 03:27:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Nov 22 03:27:21 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 22 03:27:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Nov 22 03:27:21 compute-0 ceph-mon[75011]: 4.1e scrub starts
Nov 22 03:27:21 compute-0 ceph-mon[75011]: 4.1e scrub ok
Nov 22 03:27:21 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 22 03:27:21 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Nov 22 03:27:21 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 57 pg[6.9( v 37'39 (0'0,37'39] local-lis/les=56/57 n=1 ec=39/20 lis/c=43/43 les/c/f=44/44/0 sis=56) [0] r=0 lpr=56 pi=[43,56)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:27:22 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 57 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=39/20 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=11.947973251s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active pruub 113.678535461s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:22 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 57 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=45/46 n=1 ec=39/20 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=11.947881699s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.678535461s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:22 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 57 pg[6.a( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Nov 22 03:27:22 compute-0 ceph-mon[75011]: pgmap v129: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 32 B/s, 0 objects/s recovering
Nov 22 03:27:22 compute-0 ceph-mon[75011]: 8.1 scrub starts
Nov 22 03:27:22 compute-0 ceph-mon[75011]: 8.1 scrub ok
Nov 22 03:27:22 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 22 03:27:22 compute-0 ceph-mon[75011]: osdmap e57: 3 total, 3 up, 3 in
Nov 22 03:27:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Nov 22 03:27:22 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Nov 22 03:27:22 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 58 pg[6.a( v 37'39 (0'0,37'39] local-lis/les=57/58 n=1 ec=39/20 lis/c=45/45 les/c/f=46/46/0 sis=57) [0] r=0 lpr=57 pi=[45,57)/1 crt=37'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v132: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 22 03:27:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 22 03:27:23 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Nov 22 03:27:23 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Nov 22 03:27:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Nov 22 03:27:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 22 03:27:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Nov 22 03:27:23 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Nov 22 03:27:23 compute-0 ceph-mon[75011]: osdmap e58: 3 total, 3 up, 3 in
Nov 22 03:27:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 22 03:27:24 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.a scrub starts
Nov 22 03:27:24 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.a scrub ok
Nov 22 03:27:24 compute-0 ceph-mon[75011]: pgmap v132: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:24 compute-0 ceph-mon[75011]: 5.8 scrub starts
Nov 22 03:27:24 compute-0 ceph-mon[75011]: 5.8 scrub ok
Nov 22 03:27:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 22 03:27:24 compute-0 ceph-mon[75011]: osdmap e59: 3 total, 3 up, 3 in
Nov 22 03:27:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v134: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 22 03:27:25 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 22 03:27:25 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.b scrub starts
Nov 22 03:27:25 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.b scrub ok
Nov 22 03:27:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Nov 22 03:27:25 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 22 03:27:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Nov 22 03:27:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Nov 22 03:27:25 compute-0 ceph-mon[75011]: 5.a scrub starts
Nov 22 03:27:25 compute-0 ceph-mon[75011]: 5.a scrub ok
Nov 22 03:27:25 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 22 03:27:25 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=39/20 lis/c=47/47 les/c/f=49/50/0 sis=59 pruub=8.739512444s) [1] r=-1 lpr=59 pi=[47,59)/1 crt=37'39 mlcod 37'39 active pruub 118.927970886s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:25 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 59 pg[6.b( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=39/20 lis/c=47/47 les/c/f=49/50/0 sis=59 pruub=8.739431381s) [1] r=-1 lpr=59 pi=[47,59)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 118.927970886s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:25 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=47/47 les/c/f=49/50/0 sis=59) [1] r=0 lpr=59 pi=[47,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:26 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Nov 22 03:27:26 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Nov 22 03:27:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Nov 22 03:27:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Nov 22 03:27:26 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Nov 22 03:27:27 compute-0 ceph-mon[75011]: pgmap v134: 212 pgs: 212 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:27 compute-0 ceph-mon[75011]: 5.b scrub starts
Nov 22 03:27:27 compute-0 ceph-mon[75011]: 5.b scrub ok
Nov 22 03:27:27 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 22 03:27:27 compute-0 ceph-mon[75011]: osdmap e60: 3 total, 3 up, 3 in
Nov 22 03:27:27 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 61 pg[6.b( v 37'39 lc 0'0 (0'0,37'39] local-lis/les=59/61 n=1 ec=39/20 lis/c=47/47 les/c/f=49/50/0 sis=59) [1] r=0 lpr=59 pi=[47,59)/1 crt=37'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:27:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v137: 212 pgs: 212 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 22 03:27:27 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 22 03:27:27 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.d scrub starts
Nov 22 03:27:27 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.d scrub ok
Nov 22 03:27:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Nov 22 03:27:28 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 22 03:27:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Nov 22 03:27:28 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Nov 22 03:27:28 compute-0 ceph-mon[75011]: 4.1f scrub starts
Nov 22 03:27:28 compute-0 ceph-mon[75011]: 4.1f scrub ok
Nov 22 03:27:28 compute-0 ceph-mon[75011]: osdmap e61: 3 total, 3 up, 3 in
Nov 22 03:27:28 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 22 03:27:28 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Nov 22 03:27:28 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Nov 22 03:27:28 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.e scrub starts
Nov 22 03:27:28 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.e scrub ok
Nov 22 03:27:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v139: 212 pgs: 212 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 22 03:27:29 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 22 03:27:29 compute-0 ceph-mon[75011]: pgmap v137: 212 pgs: 212 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:29 compute-0 ceph-mon[75011]: 5.d scrub starts
Nov 22 03:27:29 compute-0 ceph-mon[75011]: 5.d scrub ok
Nov 22 03:27:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 22 03:27:29 compute-0 ceph-mon[75011]: osdmap e62: 3 total, 3 up, 3 in
Nov 22 03:27:29 compute-0 ceph-mon[75011]: 8.3 scrub starts
Nov 22 03:27:29 compute-0 ceph-mon[75011]: 8.3 scrub ok
Nov 22 03:27:29 compute-0 ceph-mon[75011]: 5.e scrub starts
Nov 22 03:27:29 compute-0 ceph-mon[75011]: 5.e scrub ok
Nov 22 03:27:29 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 62 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=51/52 n=1 ec=39/20 lis/c=51/51 les/c/f=52/52/0 sis=62 pruub=8.892234802s) [1] r=-1 lpr=62 pi=[51,62)/1 crt=37'39 mlcod 37'39 active pruub 122.595130920s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:29 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 62 pg[6.d( v 37'39 (0'0,37'39] local-lis/les=51/52 n=1 ec=39/20 lis/c=51/51 les/c/f=52/52/0 sis=62 pruub=8.892162323s) [1] r=-1 lpr=62 pi=[51,62)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 122.595130920s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:29 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Nov 22 03:27:29 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 22 03:27:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Nov 22 03:27:29 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Nov 22 03:27:29 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 63 pg[6.d( v 37'39 lc 32'13 (0'0,37'39] local-lis/les=62/63 n=1 ec=39/20 lis/c=51/51 les/c/f=52/52/0 sis=62) [1] r=0 lpr=62 pi=[51,62)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:29 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Nov 22 03:27:29 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Nov 22 03:27:30 compute-0 ceph-mon[75011]: pgmap v139: 212 pgs: 212 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 22 03:27:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 22 03:27:30 compute-0 ceph-mon[75011]: osdmap e63: 3 total, 3 up, 3 in
Nov 22 03:27:30 compute-0 ceph-mon[75011]: 2.19 scrub starts
Nov 22 03:27:30 compute-0 ceph-mon[75011]: 2.19 scrub ok
Nov 22 03:27:30 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.7 deep-scrub starts
Nov 22 03:27:30 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.7 deep-scrub ok
Nov 22 03:27:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v141: 212 pgs: 212 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 22 03:27:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 22 03:27:31 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 22 03:27:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Nov 22 03:27:31 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 22 03:27:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Nov 22 03:27:31 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Nov 22 03:27:31 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 64 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=39/20 lis/c=47/47 les/c/f=49/49/0 sis=64 pruub=11.078885078s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=37'39 mlcod 37'39 active pruub 126.830223083s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:31 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 64 pg[6.f( v 37'39 (0'0,37'39] local-lis/les=47/49 n=1 ec=39/20 lis/c=47/47 les/c/f=49/49/0 sis=64 pruub=11.078794479s) [2] r=-1 lpr=64 pi=[47,64)/1 crt=37'39 mlcod 0'0 unknown NOTIFY pruub 126.830223083s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:31 compute-0 ceph-mon[75011]: 5.7 deep-scrub starts
Nov 22 03:27:31 compute-0 ceph-mon[75011]: 5.7 deep-scrub ok
Nov 22 03:27:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 22 03:27:31 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 64 pg[6.f( empty local-lis/les=0/0 n=0 ec=39/20 lis/c=47/47 les/c/f=49/49/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:31 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Nov 22 03:27:31 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Nov 22 03:27:31 compute-0 sudo[104466]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umgwtqbaxkzzifednzbbinjcnjifdaym ; /usr/bin/python3'
Nov 22 03:27:31 compute-0 sudo[104466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:27:31 compute-0 python3[104468]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:27:32 compute-0 podman[104469]: 2025-11-22 03:27:32.062261405 +0000 UTC m=+0.069749316 container create 860db35ccef9744b8ce4a63abc2b2df440b72bf588115276aae2904b17fcd902 (image=quay.io/ceph/ceph:v18, name=busy_euler, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:27:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:27:32 compute-0 systemd[1]: Started libpod-conmon-860db35ccef9744b8ce4a63abc2b2df440b72bf588115276aae2904b17fcd902.scope.
Nov 22 03:27:32 compute-0 podman[104469]: 2025-11-22 03:27:32.03609862 +0000 UTC m=+0.043586571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:27:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af12ae7336d112642a69b6cd9e0206cdae5fa6c259d503da70490255d55435c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9af12ae7336d112642a69b6cd9e0206cdae5fa6c259d503da70490255d55435c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:27:32 compute-0 podman[104469]: 2025-11-22 03:27:32.168974989 +0000 UTC m=+0.176462890 container init 860db35ccef9744b8ce4a63abc2b2df440b72bf588115276aae2904b17fcd902 (image=quay.io/ceph/ceph:v18, name=busy_euler, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:27:32 compute-0 podman[104469]: 2025-11-22 03:27:32.177041317 +0000 UTC m=+0.184529268 container start 860db35ccef9744b8ce4a63abc2b2df440b72bf588115276aae2904b17fcd902 (image=quay.io/ceph/ceph:v18, name=busy_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:27:32 compute-0 podman[104469]: 2025-11-22 03:27:32.182586876 +0000 UTC m=+0.190074837 container attach 860db35ccef9744b8ce4a63abc2b2df440b72bf588115276aae2904b17fcd902 (image=quay.io/ceph/ceph:v18, name=busy_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:27:32 compute-0 busy_euler[104484]: could not fetch user info: no user info saved
Nov 22 03:27:32 compute-0 systemd[1]: libpod-860db35ccef9744b8ce4a63abc2b2df440b72bf588115276aae2904b17fcd902.scope: Deactivated successfully.
Nov 22 03:27:32 compute-0 podman[104569]: 2025-11-22 03:27:32.495730124 +0000 UTC m=+0.029114415 container died 860db35ccef9744b8ce4a63abc2b2df440b72bf588115276aae2904b17fcd902 (image=quay.io/ceph/ceph:v18, name=busy_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:27:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9af12ae7336d112642a69b6cd9e0206cdae5fa6c259d503da70490255d55435c-merged.mount: Deactivated successfully.
Nov 22 03:27:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Nov 22 03:27:32 compute-0 ceph-mon[75011]: pgmap v141: 212 pgs: 212 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Nov 22 03:27:32 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 22 03:27:32 compute-0 ceph-mon[75011]: osdmap e64: 3 total, 3 up, 3 in
Nov 22 03:27:32 compute-0 ceph-mon[75011]: 5.10 scrub starts
Nov 22 03:27:32 compute-0 ceph-mon[75011]: 5.10 scrub ok
Nov 22 03:27:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Nov 22 03:27:32 compute-0 podman[104569]: 2025-11-22 03:27:32.549595318 +0000 UTC m=+0.082979609 container remove 860db35ccef9744b8ce4a63abc2b2df440b72bf588115276aae2904b17fcd902 (image=quay.io/ceph/ceph:v18, name=busy_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:27:32 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Nov 22 03:27:32 compute-0 systemd[1]: libpod-conmon-860db35ccef9744b8ce4a63abc2b2df440b72bf588115276aae2904b17fcd902.scope: Deactivated successfully.
Nov 22 03:27:32 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 65 pg[6.f( v 37'39 lc 32'1 (0'0,37'39] local-lis/les=64/65 n=1 ec=39/20 lis/c=47/47 les/c/f=49/49/0 sis=64) [2] r=0 lpr=64 pi=[47,64)/1 crt=37'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:32 compute-0 sudo[104466]: pam_unix(sudo:session): session closed for user root
Nov 22 03:27:32 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Nov 22 03:27:32 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Nov 22 03:27:32 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Nov 22 03:27:32 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Nov 22 03:27:32 compute-0 sudo[104607]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmcoiuhqwqivwdydqktiwoqhiqxmxmfu ; /usr/bin/python3'
Nov 22 03:27:32 compute-0 sudo[104607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:27:32 compute-0 python3[104609]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 7adcc38b-6484-5de6-b879-33a0309153df -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:27:32 compute-0 podman[104610]: 2025-11-22 03:27:32.990610402 +0000 UTC m=+0.049830337 container create b82a862941e440d017f7c2a82efed1a068d9ecad1b25fc8abcaac2d8e0cbeab1 (image=quay.io/ceph/ceph:v18, name=charming_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:27:33 compute-0 systemd[1]: Started libpod-conmon-b82a862941e440d017f7c2a82efed1a068d9ecad1b25fc8abcaac2d8e0cbeab1.scope.
Nov 22 03:27:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f34ed1badbaac5df14a1aa5a7cef72db992f2168ceb666953bbd809abf666ef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:27:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f34ed1badbaac5df14a1aa5a7cef72db992f2168ceb666953bbd809abf666ef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:27:33 compute-0 podman[104610]: 2025-11-22 03:27:32.968035991 +0000 UTC m=+0.027255966 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Nov 22 03:27:33 compute-0 podman[104610]: 2025-11-22 03:27:33.074754887 +0000 UTC m=+0.133974842 container init b82a862941e440d017f7c2a82efed1a068d9ecad1b25fc8abcaac2d8e0cbeab1 (image=quay.io/ceph/ceph:v18, name=charming_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:27:33 compute-0 podman[104610]: 2025-11-22 03:27:33.079396906 +0000 UTC m=+0.138616831 container start b82a862941e440d017f7c2a82efed1a068d9ecad1b25fc8abcaac2d8e0cbeab1 (image=quay.io/ceph/ceph:v18, name=charming_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:27:33 compute-0 podman[104610]: 2025-11-22 03:27:33.08591533 +0000 UTC m=+0.145135305 container attach b82a862941e440d017f7c2a82efed1a068d9ecad1b25fc8abcaac2d8e0cbeab1 (image=quay.io/ceph/ceph:v18, name=charming_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:27:33 compute-0 charming_goodall[104625]: {
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "user_id": "openstack",
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "display_name": "openstack",
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "email": "",
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "suspended": 0,
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "max_buckets": 1000,
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "subusers": [],
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "keys": [
Nov 22 03:27:33 compute-0 charming_goodall[104625]:         {
Nov 22 03:27:33 compute-0 charming_goodall[104625]:             "user": "openstack",
Nov 22 03:27:33 compute-0 charming_goodall[104625]:             "access_key": "ZF0GKUU1YXMN94L9P38G",
Nov 22 03:27:33 compute-0 charming_goodall[104625]:             "secret_key": "jc1KULSMUU20usw648NLRWANdFZfEkCPjq2mcTcf"
Nov 22 03:27:33 compute-0 charming_goodall[104625]:         }
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     ],
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "swift_keys": [],
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "caps": [],
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "op_mask": "read, write, delete",
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "default_placement": "",
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "default_storage_class": "",
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "placement_tags": [],
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "bucket_quota": {
Nov 22 03:27:33 compute-0 charming_goodall[104625]:         "enabled": false,
Nov 22 03:27:33 compute-0 charming_goodall[104625]:         "check_on_raw": false,
Nov 22 03:27:33 compute-0 charming_goodall[104625]:         "max_size": -1,
Nov 22 03:27:33 compute-0 charming_goodall[104625]:         "max_size_kb": 0,
Nov 22 03:27:33 compute-0 charming_goodall[104625]:         "max_objects": -1
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     },
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "user_quota": {
Nov 22 03:27:33 compute-0 charming_goodall[104625]:         "enabled": false,
Nov 22 03:27:33 compute-0 charming_goodall[104625]:         "check_on_raw": false,
Nov 22 03:27:33 compute-0 charming_goodall[104625]:         "max_size": -1,
Nov 22 03:27:33 compute-0 charming_goodall[104625]:         "max_size_kb": 0,
Nov 22 03:27:33 compute-0 charming_goodall[104625]:         "max_objects": -1
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     },
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "temp_url_keys": [],
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "type": "rgw",
Nov 22 03:27:33 compute-0 charming_goodall[104625]:     "mfa_ids": []
Nov 22 03:27:33 compute-0 charming_goodall[104625]: }
Nov 22 03:27:33 compute-0 charming_goodall[104625]: 
Nov 22 03:27:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v144: 212 pgs: 1 peering, 211 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 404 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Nov 22 03:27:33 compute-0 systemd[1]: libpod-b82a862941e440d017f7c2a82efed1a068d9ecad1b25fc8abcaac2d8e0cbeab1.scope: Deactivated successfully.
Nov 22 03:27:33 compute-0 podman[104610]: 2025-11-22 03:27:33.376208523 +0000 UTC m=+0.435428468 container died b82a862941e440d017f7c2a82efed1a068d9ecad1b25fc8abcaac2d8e0cbeab1 (image=quay.io/ceph/ceph:v18, name=charming_goodall, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:27:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f34ed1badbaac5df14a1aa5a7cef72db992f2168ceb666953bbd809abf666ef-merged.mount: Deactivated successfully.
Nov 22 03:27:33 compute-0 podman[104610]: 2025-11-22 03:27:33.418101715 +0000 UTC m=+0.477321640 container remove b82a862941e440d017f7c2a82efed1a068d9ecad1b25fc8abcaac2d8e0cbeab1 (image=quay.io/ceph/ceph:v18, name=charming_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:27:33 compute-0 systemd[1]: libpod-conmon-b82a862941e440d017f7c2a82efed1a068d9ecad1b25fc8abcaac2d8e0cbeab1.scope: Deactivated successfully.
Nov 22 03:27:33 compute-0 sudo[104607]: pam_unix(sudo:session): session closed for user root
Nov 22 03:27:33 compute-0 ceph-mon[75011]: osdmap e65: 3 total, 3 up, 3 in
Nov 22 03:27:33 compute-0 ceph-mon[75011]: 2.18 scrub starts
Nov 22 03:27:33 compute-0 ceph-mon[75011]: 2.18 scrub ok
Nov 22 03:27:33 compute-0 ceph-mon[75011]: 5.17 scrub starts
Nov 22 03:27:33 compute-0 ceph-mon[75011]: 5.17 scrub ok
Nov 22 03:27:34 compute-0 ceph-mon[75011]: pgmap v144: 212 pgs: 1 peering, 211 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 404 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Nov 22 03:27:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v145: 212 pgs: 1 peering, 211 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 11 B/s, 0 objects/s recovering
Nov 22 03:27:35 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Nov 22 03:27:35 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:27:36
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Some PGs (0.004717) are inactive; try again later
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:27:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:27:36 compute-0 ceph-mon[75011]: pgmap v145: 212 pgs: 1 peering, 211 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 11 B/s, 0 objects/s recovering
Nov 22 03:27:36 compute-0 ceph-mon[75011]: 8.5 scrub starts
Nov 22 03:27:36 compute-0 ceph-mon[75011]: 8.5 scrub ok
Nov 22 03:27:36 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Nov 22 03:27:36 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Nov 22 03:27:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:27:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v146: 212 pgs: 212 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 130 B/s wr, 1 op/s; 86 B/s, 0 objects/s recovering
Nov 22 03:27:37 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Nov 22 03:27:37 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Nov 22 03:27:37 compute-0 ceph-mon[75011]: 5.1e scrub starts
Nov 22 03:27:37 compute-0 ceph-mon[75011]: 5.1e scrub ok
Nov 22 03:27:37 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Nov 22 03:27:37 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Nov 22 03:27:38 compute-0 ceph-mon[75011]: pgmap v146: 212 pgs: 212 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 130 B/s wr, 1 op/s; 86 B/s, 0 objects/s recovering
Nov 22 03:27:38 compute-0 ceph-mon[75011]: 7.7 scrub starts
Nov 22 03:27:38 compute-0 ceph-mon[75011]: 7.7 scrub ok
Nov 22 03:27:38 compute-0 ceph-mon[75011]: 5.4 scrub starts
Nov 22 03:27:38 compute-0 ceph-mon[75011]: 5.4 scrub ok
Nov 22 03:27:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v147: 212 pgs: 212 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s; 75 B/s, 0 objects/s recovering
Nov 22 03:27:39 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Nov 22 03:27:39 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Nov 22 03:27:39 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Nov 22 03:27:39 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Nov 22 03:27:40 compute-0 ceph-mon[75011]: pgmap v147: 212 pgs: 212 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s; 75 B/s, 0 objects/s recovering
Nov 22 03:27:40 compute-0 ceph-mon[75011]: 8.7 scrub starts
Nov 22 03:27:40 compute-0 ceph-mon[75011]: 8.7 scrub ok
Nov 22 03:27:40 compute-0 ceph-mon[75011]: 2.1d scrub starts
Nov 22 03:27:40 compute-0 ceph-mon[75011]: 2.1d scrub ok
Nov 22 03:27:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v148: 212 pgs: 212 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 207 B/s wr, 1 op/s; 61 B/s, 0 objects/s recovering
Nov 22 03:27:41 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Nov 22 03:27:41 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Nov 22 03:27:41 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Nov 22 03:27:41 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Nov 22 03:27:41 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Nov 22 03:27:41 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Nov 22 03:27:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:27:42 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.b scrub starts
Nov 22 03:27:42 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.b scrub ok
Nov 22 03:27:42 compute-0 ceph-mon[75011]: pgmap v148: 212 pgs: 212 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 207 B/s wr, 1 op/s; 61 B/s, 0 objects/s recovering
Nov 22 03:27:42 compute-0 ceph-mon[75011]: 8.8 scrub starts
Nov 22 03:27:42 compute-0 ceph-mon[75011]: 8.8 scrub ok
Nov 22 03:27:42 compute-0 ceph-mon[75011]: 5.5 scrub starts
Nov 22 03:27:42 compute-0 ceph-mon[75011]: 5.5 scrub ok
Nov 22 03:27:42 compute-0 ceph-mon[75011]: 5.1b scrub starts
Nov 22 03:27:42 compute-0 ceph-mon[75011]: 5.1b scrub ok
Nov 22 03:27:42 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.1c deep-scrub starts
Nov 22 03:27:42 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.1c deep-scrub ok
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Nov 22 03:27:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:27:43 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v149: 212 pgs: 212 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 189 B/s wr, 1 op/s; 55 B/s, 0 objects/s recovering
Nov 22 03:27:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Nov 22 03:27:43 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:27:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Nov 22 03:27:43 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Nov 22 03:27:43 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev 3c39e20a-6b56-401f-8dab-1d5e03e3b49d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 22 03:27:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:27:43 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:27:43 compute-0 ceph-mon[75011]: 7.b scrub starts
Nov 22 03:27:43 compute-0 ceph-mon[75011]: 7.b scrub ok
Nov 22 03:27:43 compute-0 ceph-mon[75011]: 5.1c deep-scrub starts
Nov 22 03:27:43 compute-0 ceph-mon[75011]: 5.1c deep-scrub ok
Nov 22 03:27:43 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:27:44 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.a scrub starts
Nov 22 03:27:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Nov 22 03:27:44 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:27:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Nov 22 03:27:44 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.a scrub ok
Nov 22 03:27:44 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Nov 22 03:27:44 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev a40cf874-f386-485e-b55a-c0d83d998e34 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 22 03:27:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Nov 22 03:27:44 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:27:44 compute-0 ceph-mon[75011]: pgmap v149: 212 pgs: 212 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 189 B/s wr, 1 op/s; 55 B/s, 0 objects/s recovering
Nov 22 03:27:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:27:44 compute-0 ceph-mon[75011]: osdmap e66: 3 total, 3 up, 3 in
Nov 22 03:27:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:27:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:27:44 compute-0 ceph-mon[75011]: osdmap e67: 3 total, 3 up, 3 in
Nov 22 03:27:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Nov 22 03:27:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v152: 212 pgs: 212 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Nov 22 03:27:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:27:45 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:27:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:27:45 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:27:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Nov 22 03:27:45 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.1c deep-scrub starts
Nov 22 03:27:45 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.1c deep-scrub ok
Nov 22 03:27:45 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:27:45 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:27:45 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:27:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Nov 22 03:27:45 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Nov 22 03:27:45 compute-0 ceph-mgr[75294]: [progress INFO root] update: starting ev 1b8efcfe-6cf3-4a15-83c7-ddb2c08f9c4f (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 22 03:27:45 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev 3c39e20a-6b56-401f-8dab-1d5e03e3b49d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Nov 22 03:27:45 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event 3c39e20a-6b56-401f-8dab-1d5e03e3b49d (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Nov 22 03:27:45 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev a40cf874-f386-485e-b55a-c0d83d998e34 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Nov 22 03:27:45 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event a40cf874-f386-485e-b55a-c0d83d998e34 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Nov 22 03:27:45 compute-0 ceph-mgr[75294]: [progress INFO root] complete: finished ev 1b8efcfe-6cf3-4a15-83c7-ddb2c08f9c4f (PG autoscaler increasing pool 11 PGs from 1 to 32)
Nov 22 03:27:45 compute-0 ceph-mgr[75294]: [progress INFO root] Completed event 1b8efcfe-6cf3-4a15-83c7-ddb2c08f9c4f (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Nov 22 03:27:45 compute-0 ceph-mon[75011]: 8.a scrub starts
Nov 22 03:27:45 compute-0 ceph-mon[75011]: 8.a scrub ok
Nov 22 03:27:45 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:27:45 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:27:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 68 pg[9.0( v 65'583 (0'0,65'583] local-lis/les=33/34 n=209 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=68 pruub=11.855522156s) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 65'582 mlcod 65'582 active pruub 136.898757935s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:45 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 68 pg[9.0( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=68 pruub=11.855522156s) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 65'582 mlcod 0'0 unknown pruub 136.898757935s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 68 pg[10.0( v 65'64 (0'0,65'64] local-lis/les=35/36 n=8 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=68 pruub=12.869850159s) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 65'63 mlcod 65'63 active pruub 133.197814941s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 68 pg[10.0( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=68 pruub=12.869850159s) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 65'63 mlcod 0'0 unknown pruub 133.197814941s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Nov 22 03:27:46 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Nov 22 03:27:46 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Nov 22 03:27:46 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Nov 22 03:27:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Nov 22 03:27:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Nov 22 03:27:46 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Nov 22 03:27:46 compute-0 ceph-mon[75011]: pgmap v152: 212 pgs: 212 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Nov 22 03:27:46 compute-0 ceph-mon[75011]: 2.1c deep-scrub starts
Nov 22 03:27:46 compute-0 ceph-mon[75011]: 2.1c deep-scrub ok
Nov 22 03:27:46 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Nov 22 03:27:46 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:27:46 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:27:46 compute-0 ceph-mon[75011]: osdmap e68: 3 total, 3 up, 3 in
Nov 22 03:27:46 compute-0 ceph-mon[75011]: 5.2 scrub starts
Nov 22 03:27:46 compute-0 ceph-mon[75011]: 5.2 scrub ok
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.17( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.14( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1b( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.11( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.10( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.16( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.13( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.12( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.d( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.c( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.f( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.9( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.b( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.2( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.e( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.8( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.6( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.3( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.7( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.4( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.5( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1a( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.18( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.19( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1e( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1f( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.15( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1c( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1b( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.b( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.a( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.12( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.d( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1d( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.a( v 65'583 lc 0'0 (0'0,65'583] local-lis/les=33/34 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.11( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.10( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1f( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1e( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1d( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1c( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1a( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.19( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.7( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.6( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.5( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.4( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.18( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.3( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.f( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.c( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.9( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.e( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1( v 65'64 (0'0,65'64] local-lis/les=35/36 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.2( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.8( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.13( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.14( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.15( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.16( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.17( v 65'64 lc 0'0 (0'0,65'64] local-lis/les=35/36 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1b( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.14( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.12( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.a( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.b( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.10( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.11( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1f( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1e( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1d( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1a( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1c( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.19( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.7( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.6( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.4( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.3( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.f( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.5( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.18( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.0( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 65'63 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.c( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.e( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.9( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.2( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.1( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.14( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.13( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.8( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.16( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.15( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.d( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 69 pg[10.17( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=35/35 les/c/f=36/36/0 sis=68) [2] r=0 lpr=68 pi=[35,68)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1b( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.11( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.10( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.12( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.c( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.d( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.9( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.b( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.2( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.0( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 65'582 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.e( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.8( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.6( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.3( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.4( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.5( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.18( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1a( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.a( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 69 pg[9.1d( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=33/33 les/c/f=34/34/0 sis=68) [1] r=0 lpr=68 pi=[33,68)/1 crt=65'583 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:27:47 compute-0 sshd-session[104721]: Accepted publickey for zuul from 192.168.122.30 port 52460 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:27:47 compute-0 systemd-logind[799]: New session 34 of user zuul.
Nov 22 03:27:47 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 22 03:27:47 compute-0 sshd-session[104721]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:27:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v155: 274 pgs: 31 unknown, 243 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Nov 22 03:27:47 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:27:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Nov 22 03:27:47 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:27:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Nov 22 03:27:47 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Nov 22 03:27:47 compute-0 ceph-mon[75011]: 5.1f scrub starts
Nov 22 03:27:47 compute-0 ceph-mon[75011]: 5.1f scrub ok
Nov 22 03:27:47 compute-0 ceph-mon[75011]: osdmap e69: 3 total, 3 up, 3 in
Nov 22 03:27:47 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 70 pg[11.0( v 65'2 (0'0,65'2] local-lis/les=37/38 n=2 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=70 pruub=13.967015266s) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 65'1 mlcod 65'1 active pruub 141.239486694s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 70 pg[11.0( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=70 pruub=13.967015266s) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 65'1 mlcod 0'0 unknown pruub 141.239486694s@ mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 python3.9[104874]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:27:48 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Nov 22 03:27:48 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.d scrub starts
Nov 22 03:27:48 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Nov 22 03:27:48 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.d scrub ok
Nov 22 03:27:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Nov 22 03:27:48 compute-0 ceph-mon[75011]: pgmap v155: 274 pgs: 31 unknown, 243 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Nov 22 03:27:48 compute-0 ceph-mon[75011]: osdmap e70: 3 total, 3 up, 3 in
Nov 22 03:27:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Nov 22 03:27:48 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.19( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.17( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.16( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.15( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.14( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.13( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.12( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.11( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.10( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.f( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.e( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.d( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.b( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.9( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.2( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=1 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.3( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.c( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.8( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.a( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1( v 65'2 (0'0,65'2] local-lis/les=37/38 n=1 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.4( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.5( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.6( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.7( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.18( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1a( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1b( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1c( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1d( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1f( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1e( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=37/38 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.19( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.14( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.16( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.15( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.13( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.12( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.17( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.11( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.10( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.e( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.d( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.f( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.b( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.9( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.0( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 65'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.3( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.2( v 65'2 (0'0,65'2] local-lis/les=70/71 n=1 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.c( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.8( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1( v 65'2 (0'0,65'2] local-lis/les=70/71 n=1 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.4( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.a( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.5( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.6( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.7( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.18( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1a( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1b( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1c( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1d( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1f( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:48 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 71 pg[11.1e( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=37/37 les/c/f=38/38/0 sis=70) [1] r=0 lpr=70 pi=[37,70)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:49 compute-0 ceph-mgr[75294]: [progress INFO root] Writing back 16 completed events
Nov 22 03:27:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Nov 22 03:27:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:27:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v158: 305 pgs: 62 unknown, 243 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:49 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Nov 22 03:27:49 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Nov 22 03:27:49 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Nov 22 03:27:49 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Nov 22 03:27:49 compute-0 ceph-mon[75011]: 4.13 scrub starts
Nov 22 03:27:49 compute-0 ceph-mon[75011]: 7.d scrub starts
Nov 22 03:27:49 compute-0 ceph-mon[75011]: 4.13 scrub ok
Nov 22 03:27:49 compute-0 ceph-mon[75011]: 7.d scrub ok
Nov 22 03:27:49 compute-0 ceph-mon[75011]: osdmap e71: 3 total, 3 up, 3 in
Nov 22 03:27:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:27:49 compute-0 ceph-mon[75011]: 5.3 scrub starts
Nov 22 03:27:49 compute-0 ceph-mon[75011]: 5.3 scrub ok
Nov 22 03:27:49 compute-0 ceph-mon[75011]: 7.10 scrub starts
Nov 22 03:27:49 compute-0 ceph-mon[75011]: 7.10 scrub ok
Nov 22 03:27:50 compute-0 sudo[105091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emxpyqiooygzddilazxvqduedenkzkxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782070.067264-32-122675180101658/AnsiballZ_command.py'
Nov 22 03:27:50 compute-0 sudo[105091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:27:50 compute-0 python3.9[105093]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:27:51 compute-0 ceph-mon[75011]: pgmap v158: 305 pgs: 62 unknown, 243 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:27:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:27:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Nov 22 03:27:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 22 03:27:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:27:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:27:51 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.e scrub starts
Nov 22 03:27:51 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.e scrub ok
Nov 22 03:27:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:27:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Nov 22 03:27:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:27:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Nov 22 03:27:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:27:52 compute-0 ceph-mon[75011]: 4.e scrub starts
Nov 22 03:27:52 compute-0 ceph-mon[75011]: 4.e scrub ok
Nov 22 03:27:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:27:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 22 03:27:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:27:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Nov 22 03:27:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Nov 22 03:27:52 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.b scrub starts
Nov 22 03:27:52 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.b scrub ok
Nov 22 03:27:53 compute-0 ceph-mon[75011]: pgmap v159: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:27:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Nov 22 03:27:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:27:53 compute-0 ceph-mon[75011]: osdmap e72: 3 total, 3 up, 3 in
Nov 22 03:27:53 compute-0 ceph-mon[75011]: 2.b scrub starts
Nov 22 03:27:53 compute-0 ceph-mon[75011]: 2.b scrub ok
Nov 22 03:27:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v161: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Nov 22 03:27:53 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 22 03:27:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Nov 22 03:27:54 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 22 03:27:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Nov 22 03:27:54 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.b( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.652436256s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.913787842s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.10( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.652418137s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.913803101s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.b( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.652376175s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.913787842s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.10( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.652355194s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.913803101s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.1e( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.652122498s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.913833618s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.1e( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.652099609s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.913833618s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.12( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651877403s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.913650513s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.12( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651838303s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.913650513s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.1a( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651921272s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.913864136s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.1a( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651899338s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.913864136s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.19( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651886940s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.913909912s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.d( v 70'65 (0'0,70'65] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.652420044s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 65'64 mlcod 65'64 active pruub 136.914505005s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.7( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651782990s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.913940430s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.d( v 70'65 (0'0,70'65] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.652361870s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 65'64 mlcod 0'0 unknown NOTIFY pruub 136.914505005s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.7( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651762009s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.913940430s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.6( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651659012s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.913970947s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.6( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651627541s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.913970947s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.4( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651575089s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.913986206s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.4( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651552200s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.913986206s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.19( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651374817s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.913909912s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.f( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651429176s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.914031982s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.f( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651404381s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.914031982s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.8( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651670456s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.914398193s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.8( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651639938s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.914398193s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.9( v 70'65 (0'0,70'65] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651347160s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 65'64 mlcod 65'64 active pruub 136.914215088s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.9( v 70'65 (0'0,70'65] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651312828s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 65'64 mlcod 0'0 unknown NOTIFY pruub 136.914215088s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.e( v 70'65 (0'0,70'65] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651171684s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 65'64 mlcod 65'64 active pruub 136.914169312s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.1( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651232719s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.914276123s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.1( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651206970s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.914276123s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.e( v 70'65 (0'0,70'65] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.651117325s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 65'64 mlcod 0'0 unknown NOTIFY pruub 136.914169312s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.13( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.650915146s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.914367676s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.2( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.650792122s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.914245605s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.13( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.650895119s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.914367676s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.2( v 65'64 (0'0,65'64] local-lis/les=68/69 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.650754929s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.914245605s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.14( v 70'65 (0'0,70'65] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.650735855s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 65'64 mlcod 65'64 active pruub 136.914337158s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.14( v 70'65 (0'0,70'65] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.650708199s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 65'64 mlcod 0'0 unknown NOTIFY pruub 136.914337158s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.15( v 70'65 (0'0,70'65] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.650786400s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 65'64 mlcod 65'64 active pruub 136.914474487s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.16( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.650688171s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.914443970s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.15( v 70'65 (0'0,70'65] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.650721550s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 65'64 mlcod 0'0 unknown NOTIFY pruub 136.914474487s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.16( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.650661469s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.914443970s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.17( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.650618553s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.914581299s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.17( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.650586128s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.914581299s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 72 pg[10.11( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.648262978s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active pruub 136.913803101s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[10.11( v 65'64 (0'0,65'64] local-lis/les=68/69 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.648067474s) [1] r=-1 lpr=72 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 136.913803101s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[10.1a( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[10.19( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[10.6( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[10.b( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[10.2( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[10.f( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[10.11( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[10.10( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[10.13( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[10.12( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[10.14( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.19( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.680093765s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.115005493s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.19( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.680065155s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.115005493s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.17( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.683902740s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.118988037s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.647751808s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.082855225s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.1b( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.644657135s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.080688477s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.646953583s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.082855225s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.1b( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.644626617s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.080688477s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.17( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.683053970s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.118988037s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.15( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.682636261s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.118835449s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.14( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.682567596s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.118804932s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.14( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.682501793s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.118804932s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.15( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.682535172s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.118835449s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.11( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.644210815s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.080703735s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.12( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.682375908s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.118957520s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.12( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.682321548s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.118957520s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.11( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.644106865s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.080703735s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.11( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.682188988s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119003296s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.11( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.682152748s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119003296s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.644197464s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.081069946s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.644158363s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.081069946s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.10( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.682057381s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119018555s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.10( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.682038307s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119018555s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.f( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.681998253s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119049072s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.f( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.681978226s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119049072s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.d( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.644018173s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.081130981s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.d( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.643972397s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.081130981s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.e( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.681797028s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119033813s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.e( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.681775093s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119033813s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.643806458s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.081146240s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.d( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.681687355s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119033813s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.643786430s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.081146240s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.d( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.681651115s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119033813s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.b( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.681575775s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119064331s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.b( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.681547165s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119064331s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.9( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.643639565s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.081176758s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.9( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.643604279s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.081176758s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.12( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.11( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.9( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.681391716s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119079590s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.9( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.681368828s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119079590s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.b( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.643646240s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.081405640s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.b( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.643298149s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.081405640s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.2( v 65'2 (0'0,65'2] local-lis/les=70/71 n=1 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.680951118s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119155884s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.2( v 65'2 (0'0,65'2] local-lis/les=70/71 n=1 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.680904388s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119155884s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.d( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.3( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.680774689s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119140625s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.3( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.680751801s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119140625s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.8( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.680428505s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119186401s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.8( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.680404663s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119186401s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.b( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.1( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.642793655s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.081710815s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.1( v 65'2 (0'0,65'2] local-lis/les=70/71 n=1 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.680264473s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119186401s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.1( v 65'2 (0'0,65'2] local-lis/les=70/71 n=1 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.680218697s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119186401s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.1( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.642735481s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.081710815s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.9( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.3( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.643077850s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.082122803s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.15( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.4( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.680098534s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119201660s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[11.10( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.2( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.3( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.643012047s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.082122803s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.4( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.680078506s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119201660s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.642531395s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.082000732s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.3( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.642505646s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.082000732s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.5( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.642556190s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.082321167s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.5( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.642522812s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.082321167s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.8( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.13( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.11( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.1b( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.679358482s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119323730s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.1b( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.679338455s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119323730s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.1a( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.679327011s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119323730s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.642337799s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.082489014s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.6( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.679181099s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119262695s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.642230034s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.082489014s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.1a( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.679028511s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119323730s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.1c( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.678961754s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119338989s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.1c( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.678943634s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119338989s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.6( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.678817749s) [0] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119262695s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.18( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.678495407s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119293213s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.18( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.678411484s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119293213s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.1b( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.5( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[10.9( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[11.4( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.628843307s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.071105957s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.628814697s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.071105957s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.640244484s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.082702637s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.640224457s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.082702637s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.1e( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.676713943s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119384766s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.1e( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.676692009s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119384766s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[10.8( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.b( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[10.15( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[11.14( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[10.4( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[11.1f( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.674530029s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active pruub 144.119354248s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.1a( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.1c( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.18( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.7( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[11.1f( v 65'2 (0'0,65'2] local-lis/les=70/71 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72 pruub=10.674503326s) [2] r=-1 lpr=72 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 144.119354248s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 72 pg[9.1d( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.638029099s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 142.082962036s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:54 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 73 pg[9.1d( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72 pruub=8.637969971s) [0] r=-1 lpr=72 pi=[68,72)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 142.082962036s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.1f( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 73 pg[11.1e( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[10.7( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.9( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[10.17( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[10.d( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[11.e( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[10.e( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[11.f( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.d( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.1( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[11.1( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.3( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[10.1e( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[11.19( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.1b( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[10.16( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.15( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[11.17( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[10.1( empty local-lis/les=0/0 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.19( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[11.6( empty local-lis/les=0/0 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.17( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.1f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:54 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 73 pg[9.1d( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Nov 22 03:27:55 compute-0 ceph-mon[75011]: pgmap v161: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:55 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Nov 22 03:27:55 compute-0 ceph-mon[75011]: osdmap e73: 3 total, 3 up, 3 in
Nov 22 03:27:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Nov 22 03:27:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.1b( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.1b( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.15( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.15( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.19( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.1f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.19( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.1d( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.1d( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.3( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.3( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.1( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.1f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.1b( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.1( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.1b( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.11( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.11( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.d( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.d( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.d( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.9( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.9( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.b( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.b( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.1( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.1( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.d( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.3( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.3( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.5( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.9( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.9( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.17( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.17( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.5( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.5( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.b( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.13( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.11( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.13( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.11( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.7( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.7( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.b( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[9.f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=-1 lpr=74 pi=[68,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.5( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.1d( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.1d( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.1a( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.11( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.b( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[10.8( v 65'64 (0'0,65'64] local-lis/les=72/74 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[10.12( v 65'64 (0'0,65'64] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[10.10( v 65'64 (0'0,65'64] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[10.11( v 65'64 (0'0,65'64] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[10.f( v 65'64 (0'0,65'64] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[10.2( v 65'64 (0'0,65'64] local-lis/les=72/74 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[10.b( v 65'64 (0'0,65'64] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[10.19( v 65'64 (0'0,65'64] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[10.6( v 65'64 (0'0,65'64] local-lis/les=72/74 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[10.13( v 65'64 (0'0,65'64] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.1c( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.1b( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.18( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.1f( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.2( v 65'2 (0'0,65'2] local-lis/les=72/74 n=1 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.8( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.9( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.d( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.3( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.15( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.12( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 74 pg[11.1e( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [2] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[10.d( v 70'65 lc 65'50 (0'0,70'65] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=70'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[10.15( v 70'65 lc 65'46 (0'0,70'65] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=70'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[10.7( v 65'64 (0'0,65'64] local-lis/les=72/74 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[10.4( v 65'64 (0'0,65'64] local-lis/les=72/74 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[11.17( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[10.9( v 70'65 lc 65'56 (0'0,70'65] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=70'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[10.1( v 65'64 (0'0,65'64] local-lis/les=72/74 n=1 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[10.16( v 65'64 (0'0,65'64] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[11.19( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[10.1e( v 65'64 (0'0,65'64] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[10.e( v 70'65 lc 65'48 (0'0,70'65] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=70'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[11.1( v 65'2 (0'0,65'2] local-lis/les=72/74 n=1 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[10.17( v 65'64 (0'0,65'64] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [0] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[11.6( v 65'2 lc 0'0 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=65'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[11.e( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[11.14( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[11.4( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[11.10( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 74 pg[11.f( v 65'2 (0'0,65'2] local-lis/les=72/74 n=0 ec=70/37 lis/c=70/70 les/c/f=71/71/0 sis=72) [0] r=0 lpr=73 pi=[70,72)/1 crt=65'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[10.1a( v 65'64 (0'0,65'64] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=65'64 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 74 pg[10.14( v 70'65 lc 65'54 (0'0,70'65] local-lis/les=72/74 n=0 ec=68/35 lis/c=68/68 les/c/f=69/69/0 sis=72) [1] r=0 lpr=73 pi=[68,72)/1 crt=70'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Nov 22 03:27:55 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 03:27:55 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Nov 22 03:27:55 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Nov 22 03:27:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Nov 22 03:27:56 compute-0 ceph-mon[75011]: osdmap e74: 3 total, 3 up, 3 in
Nov 22 03:27:56 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Nov 22 03:27:56 compute-0 ceph-mon[75011]: 7.12 scrub starts
Nov 22 03:27:56 compute-0 ceph-mon[75011]: 7.12 scrub ok
Nov 22 03:27:56 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 03:27:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Nov 22 03:27:56 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.11( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.b( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.9( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.5( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.d( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.1d( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.1( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.1b( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.3( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 75 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=74) [0]/[1] async=[0] r=0 lpr=74 pi=[68,74)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:56 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Nov 22 03:27:56 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Nov 22 03:27:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:27:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Nov 22 03:27:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Nov 22 03:27:57 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Nov 22 03:27:57 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 76 pg[9.11( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76 pruub=15.586384773s) [0] async=[0] r=-1 lpr=76 pi=[68,76)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.844818115s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:57 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 76 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:57 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 76 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:57 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 76 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76 pruub=15.587124825s) [0] async=[0] r=-1 lpr=76 pi=[68,76)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.845870972s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:57 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 76 pg[9.9( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:57 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 76 pg[9.9( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:57 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 76 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76 pruub=15.587065697s) [0] r=-1 lpr=76 pi=[68,76)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.845870972s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:57 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 76 pg[9.9( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76 pruub=15.586777687s) [0] async=[0] r=-1 lpr=76 pi=[68,76)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.845733643s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:57 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 76 pg[9.9( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76 pruub=15.586742401s) [0] r=-1 lpr=76 pi=[68,76)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.845733643s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:57 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 76 pg[9.b( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76 pruub=15.585706711s) [0] async=[0] r=-1 lpr=76 pi=[68,76)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.844818115s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:57 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 76 pg[9.b( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76 pruub=15.585680962s) [0] r=-1 lpr=76 pi=[68,76)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.844818115s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:57 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 76 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76 pruub=15.569645882s) [0] async=[0] r=-1 lpr=76 pi=[68,76)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.829376221s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:57 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 76 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76 pruub=15.569566727s) [0] r=-1 lpr=76 pi=[68,76)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.829376221s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:57 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 76 pg[9.5( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76 pruub=15.585826874s) [0] async=[0] r=-1 lpr=76 pi=[68,76)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.845764160s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:57 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 76 pg[9.5( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76 pruub=15.585749626s) [0] r=-1 lpr=76 pi=[68,76)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.845764160s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:57 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 76 pg[9.b( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:57 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 76 pg[9.b( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:57 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 76 pg[9.11( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76 pruub=15.586072922s) [0] r=-1 lpr=76 pi=[68,76)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.844818115s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:57 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 76 pg[9.5( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:57 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 76 pg[9.5( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:57 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 76 pg[9.11( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:57 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 76 pg[9.11( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:57 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 76 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:57 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 76 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:57 compute-0 ceph-mon[75011]: pgmap v164: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:27:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Nov 22 03:27:57 compute-0 ceph-mon[75011]: osdmap e75: 3 total, 3 up, 3 in
Nov 22 03:27:57 compute-0 ceph-mon[75011]: 4.18 scrub starts
Nov 22 03:27:57 compute-0 ceph-mon[75011]: 4.18 scrub ok
Nov 22 03:27:57 compute-0 ceph-mon[75011]: osdmap e76: 3 total, 3 up, 3 in
Nov 22 03:27:57 compute-0 sudo[105091]: pam_unix(sudo:session): session closed for user root
Nov 22 03:27:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 16 remapped+peering, 289 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 90 B/s, 0 objects/s recovering
Nov 22 03:27:57 compute-0 sshd-session[104724]: Connection closed by 192.168.122.30 port 52460
Nov 22 03:27:57 compute-0 sshd-session[104721]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:27:57 compute-0 systemd-logind[799]: Session 34 logged out. Waiting for processes to exit.
Nov 22 03:27:57 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 22 03:27:57 compute-0 systemd[1]: session-34.scope: Consumed 8.867s CPU time.
Nov 22 03:27:57 compute-0 systemd-logind[799]: Removed session 34.
Nov 22 03:27:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Nov 22 03:27:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Nov 22 03:27:58 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.579493523s) [0] async=[0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.845687866s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.1b( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.580215454s) [0] async=[0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.846221924s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.579414368s) [0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.845687866s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.1b( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.579932213s) [0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.846221924s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.d( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.579256058s) [0] async=[0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.845794678s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.d( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.579214096s) [0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.845794678s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.579676628s) [0] async=[0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.846466064s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.579564095s) [0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.846466064s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.1( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.579084396s) [0] async=[0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.846221924s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.1( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.579040527s) [0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.846221924s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.1b( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.1b( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.3( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.579004288s) [0] async=[0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.846359253s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.578232765s) [0] async=[0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.845611572s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.3( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.578946114s) [0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.846359253s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=74/75 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.578184128s) [0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.845611572s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.578820229s) [0] async=[0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.846374512s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.578781128s) [0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.846374512s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.578437805s) [0] async=[0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.846176147s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.578379631s) [0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.846176147s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.1d( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.578235626s) [0] async=[0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 151.846145630s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:58 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 77 pg[9.1d( v 65'583 (0'0,65'583] local-lis/les=74/75 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77 pruub=14.578171730s) [0] r=-1 lpr=77 pi=[68,77)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 151.846145630s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.d( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.d( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.3( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.3( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.1( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.1( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.1d( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.1d( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=76/77 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.9( v 65'583 (0'0,65'583] local-lis/les=76/77 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.b( v 65'583 (0'0,65'583] local-lis/les=76/77 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=76/77 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.5( v 65'583 (0'0,65'583] local-lis/les=76/77 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 77 pg[9.11( v 65'583 (0'0,65'583] local-lis/les=76/77 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=76) [0] r=0 lpr=76 pi=[68,76)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:58 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Nov 22 03:27:58 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Nov 22 03:27:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Nov 22 03:27:59 compute-0 ceph-mon[75011]: pgmap v167: 305 pgs: 16 remapped+peering, 289 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 90 B/s, 0 objects/s recovering
Nov 22 03:27:59 compute-0 ceph-mon[75011]: osdmap e77: 3 total, 3 up, 3 in
Nov 22 03:27:59 compute-0 ceph-mon[75011]: 4.1a scrub starts
Nov 22 03:27:59 compute-0 ceph-mon[75011]: 4.1a scrub ok
Nov 22 03:27:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Nov 22 03:27:59 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Nov 22 03:27:59 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 78 pg[9.1b( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:59 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 78 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:59 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 78 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:59 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 78 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:59 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 78 pg[9.3( v 65'583 (0'0,65'583] local-lis/les=77/78 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:59 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 78 pg[9.1( v 65'583 (0'0,65'583] local-lis/les=77/78 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:59 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 78 pg[9.d( v 65'583 (0'0,65'583] local-lis/les=77/78 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:59 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 78 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=77/78 n=7 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:59 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 78 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:59 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 78 pg[9.1d( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=74/68 les/c/f=75/69/0 sis=77) [0] r=0 lpr=77 pi=[68,77)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:27:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 6 peering, 10 remapped+peering, 289 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 297 B/s, 10 objects/s recovering
Nov 22 03:27:59 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Nov 22 03:27:59 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Nov 22 03:28:00 compute-0 ceph-mon[75011]: osdmap e78: 3 total, 3 up, 3 in
Nov 22 03:28:00 compute-0 ceph-mon[75011]: 2.1f scrub starts
Nov 22 03:28:00 compute-0 ceph-mon[75011]: 2.1f scrub ok
Nov 22 03:28:00 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.f scrub starts
Nov 22 03:28:00 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.f scrub ok
Nov 22 03:28:00 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Nov 22 03:28:00 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Nov 22 03:28:01 compute-0 ceph-mon[75011]: pgmap v170: 305 pgs: 6 peering, 10 remapped+peering, 289 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 297 B/s, 10 objects/s recovering
Nov 22 03:28:01 compute-0 ceph-mon[75011]: 2.f scrub starts
Nov 22 03:28:01 compute-0 ceph-mon[75011]: 2.f scrub ok
Nov 22 03:28:01 compute-0 ceph-mon[75011]: 7.14 scrub starts
Nov 22 03:28:01 compute-0 ceph-mon[75011]: 7.14 scrub ok
Nov 22 03:28:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v171: 305 pgs: 6 peering, 10 remapped+peering, 289 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 236 B/s, 8 objects/s recovering
Nov 22 03:28:01 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.a scrub starts
Nov 22 03:28:01 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.a scrub ok
Nov 22 03:28:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:28:02 compute-0 ceph-mon[75011]: 4.a scrub starts
Nov 22 03:28:02 compute-0 ceph-mon[75011]: 4.a scrub ok
Nov 22 03:28:03 compute-0 ceph-mon[75011]: pgmap v171: 305 pgs: 6 peering, 10 remapped+peering, 289 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 236 B/s, 8 objects/s recovering
Nov 22 03:28:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 418 B/s, 17 objects/s recovering
Nov 22 03:28:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Nov 22 03:28:03 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 03:28:03 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Nov 22 03:28:03 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Nov 22 03:28:03 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Nov 22 03:28:03 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Nov 22 03:28:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Nov 22 03:28:04 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Nov 22 03:28:04 compute-0 ceph-mon[75011]: 4.1c scrub starts
Nov 22 03:28:04 compute-0 ceph-mon[75011]: 4.1c scrub ok
Nov 22 03:28:04 compute-0 ceph-mon[75011]: 7.16 scrub starts
Nov 22 03:28:04 compute-0 ceph-mon[75011]: 7.16 scrub ok
Nov 22 03:28:04 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 03:28:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Nov 22 03:28:04 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Nov 22 03:28:05 compute-0 ceph-mon[75011]: pgmap v172: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 418 B/s, 17 objects/s recovering
Nov 22 03:28:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Nov 22 03:28:05 compute-0 ceph-mon[75011]: osdmap e79: 3 total, 3 up, 3 in
Nov 22 03:28:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 197 B/s, 9 objects/s recovering
Nov 22 03:28:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Nov 22 03:28:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 22 03:28:05 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Nov 22 03:28:05 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Nov 22 03:28:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:28:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:28:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:28:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:28:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:28:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:28:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Nov 22 03:28:06 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Nov 22 03:28:06 compute-0 ceph-mon[75011]: 7.17 scrub starts
Nov 22 03:28:06 compute-0 ceph-mon[75011]: 7.17 scrub ok
Nov 22 03:28:06 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 22 03:28:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Nov 22 03:28:06 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Nov 22 03:28:06 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Nov 22 03:28:06 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Nov 22 03:28:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:28:07 compute-0 ceph-mon[75011]: pgmap v174: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 197 B/s, 9 objects/s recovering
Nov 22 03:28:07 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Nov 22 03:28:07 compute-0 ceph-mon[75011]: osdmap e80: 3 total, 3 up, 3 in
Nov 22 03:28:07 compute-0 ceph-mon[75011]: 4.1b scrub starts
Nov 22 03:28:07 compute-0 ceph-mon[75011]: 4.1b scrub ok
Nov 22 03:28:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 178 B/s, 8 objects/s recovering
Nov 22 03:28:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Nov 22 03:28:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 22 03:28:07 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Nov 22 03:28:07 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Nov 22 03:28:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Nov 22 03:28:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Nov 22 03:28:08 compute-0 ceph-mon[75011]: 2.8 scrub starts
Nov 22 03:28:08 compute-0 ceph-mon[75011]: 2.8 scrub ok
Nov 22 03:28:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 22 03:28:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Nov 22 03:28:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Nov 22 03:28:08 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Nov 22 03:28:08 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Nov 22 03:28:09 compute-0 ceph-mon[75011]: pgmap v176: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 178 B/s, 8 objects/s recovering
Nov 22 03:28:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Nov 22 03:28:09 compute-0 ceph-mon[75011]: osdmap e81: 3 total, 3 up, 3 in
Nov 22 03:28:09 compute-0 ceph-mon[75011]: 8.1c scrub starts
Nov 22 03:28:09 compute-0 ceph-mon[75011]: 8.1c scrub ok
Nov 22 03:28:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v178: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Nov 22 03:28:09 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 22 03:28:09 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Nov 22 03:28:09 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Nov 22 03:28:09 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Nov 22 03:28:09 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Nov 22 03:28:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Nov 22 03:28:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 22 03:28:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Nov 22 03:28:10 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Nov 22 03:28:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 82 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=82 pruub=12.806990623s) [2] r=-1 lpr=82 pi=[77,82)/1 crt=65'583 mlcod 0'0 active pruub 167.388107300s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 82 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=82 pruub=12.806902885s) [2] r=-1 lpr=82 pi=[77,82)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 167.388107300s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 82 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=76/77 n=7 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=82 pruub=11.790916443s) [2] r=-1 lpr=82 pi=[76,82)/1 crt=65'583 mlcod 0'0 active pruub 166.372360229s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 82 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=76/77 n=7 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=82 pruub=11.790869713s) [2] r=-1 lpr=82 pi=[76,82)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 166.372360229s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Nov 22 03:28:10 compute-0 ceph-mon[75011]: 2.16 scrub starts
Nov 22 03:28:10 compute-0 ceph-mon[75011]: 2.16 scrub ok
Nov 22 03:28:10 compute-0 ceph-mon[75011]: 3.16 deep-scrub starts
Nov 22 03:28:10 compute-0 ceph-mon[75011]: 3.16 deep-scrub ok
Nov 22 03:28:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 82 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=82 pruub=12.806462288s) [2] r=-1 lpr=82 pi=[77,82)/1 crt=65'583 mlcod 0'0 active pruub 167.388244629s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 82 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=82 pruub=12.806386948s) [2] r=-1 lpr=82 pi=[77,82)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 167.388244629s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 82 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=77/78 n=7 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=82 pruub=12.806280136s) [2] r=-1 lpr=82 pi=[77,82)/1 crt=65'583 mlcod 0'0 active pruub 167.388229370s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:10 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 82 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=77/78 n=7 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=82 pruub=12.806190491s) [2] r=-1 lpr=82 pi=[77,82)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 167.388229370s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:10 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 82 pg[9.17( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=82) [2] r=0 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:10 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 82 pg[9.f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=82) [2] r=0 lpr=82 pi=[76,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:10 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 82 pg[9.7( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=82) [2] r=0 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:10 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 82 pg[9.1f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=82) [2] r=0 lpr=82 pi=[77,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:10 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 81 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=81 pruub=8.334571838s) [2] r=-1 lpr=81 pi=[68,81)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 158.081283569s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:10 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 82 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=81 pruub=8.334495544s) [2] r=-1 lpr=81 pi=[68,81)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.081283569s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:10 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 81 pg[9.e( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=81 pruub=8.334467888s) [2] r=-1 lpr=81 pi=[68,81)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 158.082077026s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:10 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 82 pg[9.e( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=81 pruub=8.334383011s) [2] r=-1 lpr=81 pi=[68,81)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.082077026s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:10 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 81 pg[9.6( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=81 pruub=8.334571838s) [2] r=-1 lpr=81 pi=[68,81)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 158.082443237s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:10 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 82 pg[9.16( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=81) [2] r=0 lpr=82 pi=[68,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:10 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 82 pg[9.6( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=81 pruub=8.334434509s) [2] r=-1 lpr=81 pi=[68,81)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.082443237s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:10 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 81 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=81 pruub=8.334692001s) [2] r=-1 lpr=81 pi=[68,81)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 158.082870483s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:10 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 82 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=81 pruub=8.334594727s) [2] r=-1 lpr=81 pi=[68,81)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 158.082870483s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:10 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 82 pg[9.e( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=81) [2] r=0 lpr=82 pi=[68,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:10 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 82 pg[9.6( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=81) [2] r=0 lpr=82 pi=[68,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:10 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 82 pg[9.1e( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=81) [2] r=0 lpr=82 pi=[68,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:10 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Nov 22 03:28:10 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Nov 22 03:28:10 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Nov 22 03:28:10 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Nov 22 03:28:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Nov 22 03:28:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Nov 22 03:28:11 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.e( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.e( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.1f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] r=-1 lpr=83 pi=[77,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.1f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] r=-1 lpr=83 pi=[77,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.6( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.6( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:11 compute-0 ceph-mon[75011]: pgmap v178: 305 pgs: 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:11 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Nov 22 03:28:11 compute-0 ceph-mon[75011]: osdmap e82: 3 total, 3 up, 3 in
Nov 22 03:28:11 compute-0 ceph-mon[75011]: 7.11 scrub starts
Nov 22 03:28:11 compute-0 ceph-mon[75011]: 7.11 scrub ok
Nov 22 03:28:11 compute-0 ceph-mon[75011]: 7.19 scrub starts
Nov 22 03:28:11 compute-0 ceph-mon[75011]: 7.19 scrub ok
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.1e( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=83) [2]/[0] r=-1 lpr=83 pi=[76,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.1e( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=83) [2]/[0] r=-1 lpr=83 pi=[76,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.17( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] r=-1 lpr=83 pi=[77,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.7( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] r=-1 lpr=83 pi=[77,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.17( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] r=-1 lpr=83 pi=[77,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.7( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] r=-1 lpr=83 pi=[77,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.16( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 83 pg[9.16( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[68,83)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:11 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 83 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=0 lpr=83 pi=[68,83)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 83 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=0 lpr=83 pi=[68,83)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:11 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 83 pg[9.6( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=0 lpr=83 pi=[68,83)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 83 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=0 lpr=83 pi=[68,83)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 83 pg[9.6( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=0 lpr=83 pi=[68,83)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:11 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 83 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=0 lpr=83 pi=[68,83)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:11 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 83 pg[9.e( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=0 lpr=83 pi=[68,83)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 83 pg[9.e( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] r=0 lpr=83 pi=[68,83)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:11 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 83 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=77/78 n=7 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] r=0 lpr=83 pi=[77,83)/1 crt=65'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 83 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=77/78 n=7 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] r=0 lpr=83 pi=[77,83)/1 crt=65'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:11 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 83 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] r=0 lpr=83 pi=[77,83)/1 crt=65'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 83 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] r=0 lpr=83 pi=[77,83)/1 crt=65'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:11 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 83 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] r=0 lpr=83 pi=[77,83)/1 crt=65'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 83 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] r=0 lpr=83 pi=[77,83)/1 crt=65'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:11 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 83 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=76/77 n=7 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=83) [2]/[0] r=0 lpr=83 pi=[76,83)/1 crt=65'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:11 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 83 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=76/77 n=7 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=83) [2]/[0] r=0 lpr=83 pi=[76,83)/1 crt=65'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Nov 22 03:28:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 22 03:28:11 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Nov 22 03:28:11 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Nov 22 03:28:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:28:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Nov 22 03:28:12 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 22 03:28:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Nov 22 03:28:12 compute-0 ceph-mon[75011]: osdmap e83: 3 total, 3 up, 3 in
Nov 22 03:28:12 compute-0 ceph-mon[75011]: pgmap v181: 305 pgs: 305 active+clean; 457 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:12 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Nov 22 03:28:12 compute-0 ceph-mon[75011]: 8.13 scrub starts
Nov 22 03:28:12 compute-0 ceph-mon[75011]: 8.13 scrub ok
Nov 22 03:28:12 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Nov 22 03:28:12 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 84 pg[9.8( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=84 pruub=14.552654266s) [2] r=-1 lpr=84 pi=[68,84)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 166.082351685s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:12 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 84 pg[9.8( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=84 pruub=14.552339554s) [2] r=-1 lpr=84 pi=[68,84)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.082351685s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:12 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 84 pg[9.18( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=84 pruub=14.552721977s) [2] r=-1 lpr=84 pi=[68,84)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 166.082839966s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:12 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 84 pg[9.18( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=84 pruub=14.552634239s) [2] r=-1 lpr=84 pi=[68,84)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 166.082839966s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:12 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 84 pg[9.18( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=84) [2] r=0 lpr=84 pi=[68,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:12 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 84 pg[9.8( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=84) [2] r=0 lpr=84 pi=[68,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:12 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 84 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=83/84 n=7 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=83) [2]/[0] async=[2] r=0 lpr=83 pi=[76,83)/1 crt=65'583 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:12 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 84 pg[9.6( v 65'583 (0'0,65'583] local-lis/les=83/84 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[68,83)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:12 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 84 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=83/84 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] async=[2] r=0 lpr=83 pi=[77,83)/1 crt=65'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:12 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 84 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=83/84 n=7 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] async=[2] r=0 lpr=83 pi=[77,83)/1 crt=65'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:12 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 84 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=83/84 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=83) [2]/[0] async=[2] r=0 lpr=83 pi=[77,83)/1 crt=65'583 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:12 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 84 pg[9.e( v 65'583 (0'0,65'583] local-lis/les=83/84 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[68,83)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:12 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 84 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=83/84 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[68,83)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:12 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 84 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=83/84 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[68,83)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 1 active+recovery_wait+remapped, 3 active+remapped, 301 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 4/246 objects misplaced (1.626%); 108 B/s, 3 objects/s recovering
Nov 22 03:28:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Nov 22 03:28:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 22 03:28:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Nov 22 03:28:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 22 03:28:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Nov 22 03:28:13 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Nov 22 03:28:13 compute-0 ceph-mon[75011]: osdmap e84: 3 total, 3 up, 3 in
Nov 22 03:28:13 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Nov 22 03:28:13 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.8( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=85) [2]/[1] r=-1 lpr=85 pi=[68,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.8( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=85) [2]/[1] r=-1 lpr=85 pi=[68,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.18( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=85) [2]/[1] r=-1 lpr=85 pi=[68,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.18( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=85) [2]/[1] r=-1 lpr=85 pi=[68,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85) [2] r=0 lpr=85 pi=[77,85)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85) [2] r=0 lpr=85 pi=[77,85)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.e( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85) [2] r=0 lpr=85 pi=[68,85)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85) [2] r=0 lpr=85 pi=[68,85)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85) [2] r=0 lpr=85 pi=[68,85)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.e( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85) [2] r=0 lpr=85 pi=[68,85)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.6( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85) [2] r=0 lpr=85 pi=[68,85)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.6( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85) [2] r=0 lpr=85 pi=[68,85)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85) [2] r=0 lpr=85 pi=[77,85)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85) [2] r=0 lpr=85 pi=[77,85)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=83/76 les/c/f=84/77/0 sis=85) [2] r=0 lpr=85 pi=[76,85)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=83/76 les/c/f=84/77/0 sis=85) [2] r=0 lpr=85 pi=[76,85)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85) [2] r=0 lpr=85 pi=[77,85)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85) [2] r=0 lpr=85 pi=[77,85)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85) [2] r=0 lpr=85 pi=[68,85)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 85 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85) [2] r=0 lpr=85 pi=[68,85)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 85 pg[9.18( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=85) [2]/[1] r=0 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 85 pg[9.18( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=85) [2]/[1] r=0 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 85 pg[9.6( v 65'583 (0'0,65'583] local-lis/les=83/84 n=7 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85 pruub=15.006658554s) [2] async=[2] r=-1 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 167.548721313s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 85 pg[9.8( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=85) [2]/[1] r=0 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 85 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=83/84 n=7 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85 pruub=15.006406784s) [2] async=[2] r=-1 lpr=85 pi=[77,85)/1 crt=65'583 mlcod 65'583 active pruub 172.643218994s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 85 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=83/84 n=7 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85 pruub=15.006294250s) [2] r=-1 lpr=85 pi=[77,85)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 172.643218994s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 85 pg[9.6( v 65'583 (0'0,65'583] local-lis/les=83/84 n=7 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85 pruub=15.006536484s) [2] r=-1 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.548721313s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:13 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 85 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=83/84 n=6 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85 pruub=15.005842209s) [2] async=[2] r=-1 lpr=85 pi=[77,85)/1 crt=65'583 mlcod 65'583 active pruub 172.643142700s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 85 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=83/84 n=6 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85 pruub=15.005842209s) [2] async=[2] r=-1 lpr=85 pi=[77,85)/1 crt=65'583 mlcod 65'583 active pruub 172.643188477s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 85 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=83/84 n=6 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85 pruub=15.005789757s) [2] r=-1 lpr=85 pi=[77,85)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 172.643142700s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 85 pg[9.8( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=85) [2]/[1] r=0 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:13 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 85 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=83/84 n=6 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85 pruub=15.005758286s) [2] r=-1 lpr=85 pi=[77,85)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 172.643188477s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 85 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=83/84 n=6 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85 pruub=15.012905121s) [2] async=[2] r=-1 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 167.555206299s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 85 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=83/84 n=6 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85 pruub=15.012851715s) [2] r=-1 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.555206299s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 85 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=83/84 n=6 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85 pruub=15.012298584s) [2] async=[2] r=-1 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 167.555206299s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 85 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=83/84 n=6 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85 pruub=15.012242317s) [2] r=-1 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.555206299s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 85 pg[9.e( v 65'583 (0'0,65'583] local-lis/les=83/84 n=7 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85 pruub=15.011571884s) [2] async=[2] r=-1 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 167.555191040s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 85 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=83/84 n=7 ec=68/33 lis/c=83/76 les/c/f=84/77/0 sis=85 pruub=15.005183220s) [2] async=[2] r=-1 lpr=85 pi=[76,85)/1 crt=65'583 mlcod 65'583 active pruub 172.643157959s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:13 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 85 pg[9.e( v 65'583 (0'0,65'583] local-lis/les=83/84 n=7 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85 pruub=15.011510849s) [2] r=-1 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 167.555191040s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:13 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 85 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=83/84 n=7 ec=68/33 lis/c=83/76 les/c/f=84/77/0 sis=85 pruub=15.004100800s) [2] r=-1 lpr=85 pi=[76,85)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 172.643157959s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:13 compute-0 sudo[105150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:28:13 compute-0 sudo[105150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:13 compute-0 sudo[105150]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:13 compute-0 sudo[105175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:28:13 compute-0 sudo[105175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:13 compute-0 sudo[105175]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:13 compute-0 sshd-session[105180]: Accepted publickey for zuul from 192.168.122.30 port 48398 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:28:13 compute-0 systemd-logind[799]: New session 35 of user zuul.
Nov 22 03:28:13 compute-0 systemd[1]: Started Session 35 of User zuul.
Nov 22 03:28:13 compute-0 sudo[105202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:28:13 compute-0 sshd-session[105180]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:28:13 compute-0 sudo[105202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:13 compute-0 sudo[105202]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:13 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.1b deep-scrub starts
Nov 22 03:28:13 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.1b deep-scrub ok
Nov 22 03:28:13 compute-0 sudo[105229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:28:13 compute-0 sudo[105229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:13 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Nov 22 03:28:13 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Nov 22 03:28:14 compute-0 sudo[105229]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:28:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:28:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:28:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:28:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:28:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:28:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev e15e88b9-3354-49ab-8fda-d77465e216a2 does not exist
Nov 22 03:28:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 804d6296-e040-4c2b-b398-4f16fb5e1f8e does not exist
Nov 22 03:28:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 97d6569a-eb1b-41fc-b8a2-321e15a06089 does not exist
Nov 22 03:28:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:28:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:28:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:28:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:28:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:28:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:28:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Nov 22 03:28:14 compute-0 ceph-mon[75011]: pgmap v183: 305 pgs: 1 active+recovery_wait+remapped, 3 active+remapped, 301 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 4/246 objects misplaced (1.626%); 108 B/s, 3 objects/s recovering
Nov 22 03:28:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Nov 22 03:28:14 compute-0 ceph-mon[75011]: osdmap e85: 3 total, 3 up, 3 in
Nov 22 03:28:14 compute-0 ceph-mon[75011]: 8.1b deep-scrub starts
Nov 22 03:28:14 compute-0 ceph-mon[75011]: 8.1b deep-scrub ok
Nov 22 03:28:14 compute-0 ceph-mon[75011]: 8.16 scrub starts
Nov 22 03:28:14 compute-0 ceph-mon[75011]: 8.16 scrub ok
Nov 22 03:28:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:28:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:28:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:28:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:28:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:28:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:28:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Nov 22 03:28:14 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Nov 22 03:28:14 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 86 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85) [2] r=0 lpr=85 pi=[68,85)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:14 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 86 pg[9.6( v 65'583 (0'0,65'583] local-lis/les=85/86 n=7 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85) [2] r=0 lpr=85 pi=[68,85)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:14 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 86 pg[9.7( v 65'583 (0'0,65'583] local-lis/les=85/86 n=7 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85) [2] r=0 lpr=85 pi=[77,85)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:14 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 86 pg[9.f( v 65'583 (0'0,65'583] local-lis/les=85/86 n=7 ec=68/33 lis/c=83/76 les/c/f=84/77/0 sis=85) [2] r=0 lpr=85 pi=[76,85)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:14 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 86 pg[9.17( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85) [2] r=0 lpr=85 pi=[77,85)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:14 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 86 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85) [2] r=0 lpr=85 pi=[68,85)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:14 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 86 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=83/77 les/c/f=84/78/0 sis=85) [2] r=0 lpr=85 pi=[77,85)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:14 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 86 pg[9.e( v 65'583 (0'0,65'583] local-lis/les=85/86 n=7 ec=68/33 lis/c=83/68 les/c/f=84/69/0 sis=85) [2] r=0 lpr=85 pi=[68,85)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:14 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 86 pg[9.18( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=85) [2]/[1] async=[2] r=0 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:14 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 86 pg[9.8( v 65'583 (0'0,65'583] local-lis/les=85/86 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=85) [2]/[1] async=[2] r=0 lpr=85 pi=[68,85)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:14 compute-0 sudo[105435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:28:14 compute-0 sudo[105435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:14 compute-0 sudo[105435]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:14 compute-0 python3.9[105434]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 22 03:28:14 compute-0 sudo[105460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:28:14 compute-0 sudo[105460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:14 compute-0 sudo[105460]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:14 compute-0 sudo[105485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:28:14 compute-0 sudo[105485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:14 compute-0 sudo[105485]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:14 compute-0 sudo[105534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:28:14 compute-0 sudo[105534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:14 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Nov 22 03:28:14 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Nov 22 03:28:15 compute-0 podman[105651]: 2025-11-22 03:28:15.149784279 +0000 UTC m=+0.059002383 container create 9309b7e8160b83aa1aba81bde70700272c91c7a92c0a6879457b22379f7af507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khayyam, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:28:15 compute-0 systemd[1]: Started libpod-conmon-9309b7e8160b83aa1aba81bde70700272c91c7a92c0a6879457b22379f7af507.scope.
Nov 22 03:28:15 compute-0 podman[105651]: 2025-11-22 03:28:15.127752086 +0000 UTC m=+0.036970190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:28:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:28:15 compute-0 podman[105651]: 2025-11-22 03:28:15.25669441 +0000 UTC m=+0.165912574 container init 9309b7e8160b83aa1aba81bde70700272c91c7a92c0a6879457b22379f7af507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:28:15 compute-0 podman[105651]: 2025-11-22 03:28:15.266839158 +0000 UTC m=+0.176057262 container start 9309b7e8160b83aa1aba81bde70700272c91c7a92c0a6879457b22379f7af507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khayyam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:28:15 compute-0 podman[105651]: 2025-11-22 03:28:15.270622929 +0000 UTC m=+0.179841043 container attach 9309b7e8160b83aa1aba81bde70700272c91c7a92c0a6879457b22379f7af507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khayyam, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:28:15 compute-0 keen_khayyam[105667]: 167 167
Nov 22 03:28:15 compute-0 systemd[1]: libpod-9309b7e8160b83aa1aba81bde70700272c91c7a92c0a6879457b22379f7af507.scope: Deactivated successfully.
Nov 22 03:28:15 compute-0 podman[105651]: 2025-11-22 03:28:15.275188449 +0000 UTC m=+0.184406563 container died 9309b7e8160b83aa1aba81bde70700272c91c7a92c0a6879457b22379f7af507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khayyam, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:28:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cd2ba626764c0bde52d8a9daa642e21d783c2541795618833ee7115e9f90c9f-merged.mount: Deactivated successfully.
Nov 22 03:28:15 compute-0 podman[105651]: 2025-11-22 03:28:15.324791642 +0000 UTC m=+0.234009766 container remove 9309b7e8160b83aa1aba81bde70700272c91c7a92c0a6879457b22379f7af507 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:28:15 compute-0 systemd[1]: libpod-conmon-9309b7e8160b83aa1aba81bde70700272c91c7a92c0a6879457b22379f7af507.scope: Deactivated successfully.
Nov 22 03:28:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 1 active+recovery_wait+remapped, 3 active+remapped, 301 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 4/246 objects misplaced (1.626%); 136 B/s, 4 objects/s recovering
Nov 22 03:28:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Nov 22 03:28:15 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 22 03:28:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Nov 22 03:28:15 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 22 03:28:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Nov 22 03:28:15 compute-0 ceph-mon[75011]: osdmap e86: 3 total, 3 up, 3 in
Nov 22 03:28:15 compute-0 ceph-mon[75011]: 7.15 scrub starts
Nov 22 03:28:15 compute-0 ceph-mon[75011]: 7.15 scrub ok
Nov 22 03:28:15 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Nov 22 03:28:15 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Nov 22 03:28:15 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 87 pg[9.8( v 65'583 (0'0,65'583] local-lis/les=85/86 n=7 ec=68/33 lis/c=85/68 les/c/f=86/69/0 sis=87 pruub=14.994482040s) [2] async=[2] r=-1 lpr=87 pi=[68,87)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 169.576385498s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:15 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 87 pg[9.8( v 65'583 (0'0,65'583] local-lis/les=85/86 n=7 ec=68/33 lis/c=85/68 les/c/f=86/69/0 sis=87 pruub=14.994410515s) [2] r=-1 lpr=87 pi=[68,87)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.576385498s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:15 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 87 pg[9.18( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/68 les/c/f=86/69/0 sis=87 pruub=14.993615150s) [2] async=[2] r=-1 lpr=87 pi=[68,87)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 169.576370239s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:15 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 87 pg[9.18( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/68 les/c/f=86/69/0 sis=87 pruub=14.993231773s) [2] r=-1 lpr=87 pi=[68,87)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 169.576370239s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:15 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 87 pg[9.8( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=85/68 les/c/f=86/69/0 sis=87) [2] r=0 lpr=87 pi=[68,87)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:15 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 87 pg[9.8( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=85/68 les/c/f=86/69/0 sis=87) [2] r=0 lpr=87 pi=[68,87)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:15 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 87 pg[9.18( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=85/68 les/c/f=86/69/0 sis=87) [2] r=0 lpr=87 pi=[68,87)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:15 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 87 pg[9.18( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=85/68 les/c/f=86/69/0 sis=87) [2] r=0 lpr=87 pi=[68,87)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:15 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Nov 22 03:28:15 compute-0 podman[105715]: 2025-11-22 03:28:15.533032996 +0000 UTC m=+0.056949328 container create 66d4076caea821b34e03c762d56270908b5a3284b037509d01231a6b6c436e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banach, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:28:15 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Nov 22 03:28:15 compute-0 systemd[1]: Started libpod-conmon-66d4076caea821b34e03c762d56270908b5a3284b037509d01231a6b6c436e6d.scope.
Nov 22 03:28:15 compute-0 podman[105715]: 2025-11-22 03:28:15.500517975 +0000 UTC m=+0.024434247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:28:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05f01079a2d5202a2d9b7988d4e553defd354ba5cec63a91fefff2357cd1bd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05f01079a2d5202a2d9b7988d4e553defd354ba5cec63a91fefff2357cd1bd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05f01079a2d5202a2d9b7988d4e553defd354ba5cec63a91fefff2357cd1bd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05f01079a2d5202a2d9b7988d4e553defd354ba5cec63a91fefff2357cd1bd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:28:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f05f01079a2d5202a2d9b7988d4e553defd354ba5cec63a91fefff2357cd1bd2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:28:15 compute-0 podman[105715]: 2025-11-22 03:28:15.669724945 +0000 UTC m=+0.193641167 container init 66d4076caea821b34e03c762d56270908b5a3284b037509d01231a6b6c436e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:28:15 compute-0 podman[105715]: 2025-11-22 03:28:15.687066674 +0000 UTC m=+0.210982886 container start 66d4076caea821b34e03c762d56270908b5a3284b037509d01231a6b6c436e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:28:15 compute-0 podman[105715]: 2025-11-22 03:28:15.691401559 +0000 UTC m=+0.215317731 container attach 66d4076caea821b34e03c762d56270908b5a3284b037509d01231a6b6c436e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:28:16 compute-0 python3.9[105809]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:28:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Nov 22 03:28:16 compute-0 ceph-mon[75011]: pgmap v186: 305 pgs: 1 active+recovery_wait+remapped, 3 active+remapped, 301 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 4/246 objects misplaced (1.626%); 136 B/s, 4 objects/s recovering
Nov 22 03:28:16 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Nov 22 03:28:16 compute-0 ceph-mon[75011]: osdmap e87: 3 total, 3 up, 3 in
Nov 22 03:28:16 compute-0 ceph-mon[75011]: 5.15 scrub starts
Nov 22 03:28:16 compute-0 ceph-mon[75011]: 5.15 scrub ok
Nov 22 03:28:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Nov 22 03:28:16 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Nov 22 03:28:16 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 88 pg[9.8( v 65'583 (0'0,65'583] local-lis/les=87/88 n=7 ec=68/33 lis/c=85/68 les/c/f=86/69/0 sis=87) [2] r=0 lpr=87 pi=[68,87)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:16 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 88 pg[9.18( v 65'583 (0'0,65'583] local-lis/les=87/88 n=6 ec=68/33 lis/c=85/68 les/c/f=86/69/0 sis=87) [2] r=0 lpr=87 pi=[68,87)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:16 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Nov 22 03:28:16 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Nov 22 03:28:16 compute-0 brave_banach[105754]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:28:16 compute-0 brave_banach[105754]: --> relative data size: 1.0
Nov 22 03:28:16 compute-0 brave_banach[105754]: --> All data devices are unavailable
Nov 22 03:28:16 compute-0 systemd[1]: libpod-66d4076caea821b34e03c762d56270908b5a3284b037509d01231a6b6c436e6d.scope: Deactivated successfully.
Nov 22 03:28:16 compute-0 podman[105715]: 2025-11-22 03:28:16.788865346 +0000 UTC m=+1.312781538 container died 66d4076caea821b34e03c762d56270908b5a3284b037509d01231a6b6c436e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banach, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:28:16 compute-0 systemd[1]: libpod-66d4076caea821b34e03c762d56270908b5a3284b037509d01231a6b6c436e6d.scope: Consumed 1.038s CPU time.
Nov 22 03:28:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f05f01079a2d5202a2d9b7988d4e553defd354ba5cec63a91fefff2357cd1bd2-merged.mount: Deactivated successfully.
Nov 22 03:28:16 compute-0 podman[105715]: 2025-11-22 03:28:16.865526626 +0000 UTC m=+1.389442818 container remove 66d4076caea821b34e03c762d56270908b5a3284b037509d01231a6b6c436e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_banach, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 03:28:16 compute-0 systemd[1]: libpod-conmon-66d4076caea821b34e03c762d56270908b5a3284b037509d01231a6b6c436e6d.scope: Deactivated successfully.
Nov 22 03:28:16 compute-0 sudo[105534]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:16 compute-0 sudo[105926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:28:16 compute-0 sudo[105926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:16 compute-0 sudo[105926]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:17 compute-0 sudo[105974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:28:17 compute-0 sudo[105974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:17 compute-0 sudo[105974]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:28:17 compute-0 sudo[105999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:28:17 compute-0 sudo[105999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:17 compute-0 sudo[105999]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:17 compute-0 sudo[106044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:28:17 compute-0 sudo[106044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:17 compute-0 sudo[106099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgzsrnjaliiqyghyyqxjfhrrwxjztgzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782096.7075129-45-96045475233888/AnsiballZ_command.py'
Nov 22 03:28:17 compute-0 sudo[106099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:28:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 10 objects/s recovering
Nov 22 03:28:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Nov 22 03:28:17 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 22 03:28:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Nov 22 03:28:17 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 22 03:28:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Nov 22 03:28:17 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Nov 22 03:28:17 compute-0 ceph-mon[75011]: osdmap e88: 3 total, 3 up, 3 in
Nov 22 03:28:17 compute-0 ceph-mon[75011]: 5.14 scrub starts
Nov 22 03:28:17 compute-0 ceph-mon[75011]: 5.14 scrub ok
Nov 22 03:28:17 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Nov 22 03:28:17 compute-0 python3.9[106101]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:28:17 compute-0 sudo[106099]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:17 compute-0 podman[106143]: 2025-11-22 03:28:17.562611643 +0000 UTC m=+0.073251081 container create bff6bc3445cc3662ad6ec8644cef5b0de1a67f38c54043b72281b10e374b442c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:28:17 compute-0 systemd[1]: Started libpod-conmon-bff6bc3445cc3662ad6ec8644cef5b0de1a67f38c54043b72281b10e374b442c.scope.
Nov 22 03:28:17 compute-0 podman[106143]: 2025-11-22 03:28:17.530687077 +0000 UTC m=+0.041326565 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:28:17 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:28:17 compute-0 podman[106143]: 2025-11-22 03:28:17.698011827 +0000 UTC m=+0.208651255 container init bff6bc3445cc3662ad6ec8644cef5b0de1a67f38c54043b72281b10e374b442c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_cohen, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Nov 22 03:28:17 compute-0 podman[106143]: 2025-11-22 03:28:17.709627205 +0000 UTC m=+0.220266603 container start bff6bc3445cc3662ad6ec8644cef5b0de1a67f38c54043b72281b10e374b442c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:28:17 compute-0 adoring_cohen[106181]: 167 167
Nov 22 03:28:17 compute-0 systemd[1]: libpod-bff6bc3445cc3662ad6ec8644cef5b0de1a67f38c54043b72281b10e374b442c.scope: Deactivated successfully.
Nov 22 03:28:17 compute-0 podman[106143]: 2025-11-22 03:28:17.746372858 +0000 UTC m=+0.257012256 container attach bff6bc3445cc3662ad6ec8644cef5b0de1a67f38c54043b72281b10e374b442c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_cohen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:28:17 compute-0 podman[106143]: 2025-11-22 03:28:17.747656272 +0000 UTC m=+0.258295700 container died bff6bc3445cc3662ad6ec8644cef5b0de1a67f38c54043b72281b10e374b442c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_cohen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:28:17 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Nov 22 03:28:17 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Nov 22 03:28:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-780cfb1011a9d52f50f14135d35b8348731b3fa7c4e993307a1b3c38020f668f-merged.mount: Deactivated successfully.
Nov 22 03:28:17 compute-0 podman[106143]: 2025-11-22 03:28:17.963648681 +0000 UTC m=+0.474288079 container remove bff6bc3445cc3662ad6ec8644cef5b0de1a67f38c54043b72281b10e374b442c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:28:17 compute-0 systemd[1]: libpod-conmon-bff6bc3445cc3662ad6ec8644cef5b0de1a67f38c54043b72281b10e374b442c.scope: Deactivated successfully.
Nov 22 03:28:18 compute-0 podman[106262]: 2025-11-22 03:28:18.169962764 +0000 UTC m=+0.078690806 container create 1c7c221c24fecc670cdb19f99837515930d5155bb5b4e9fd163cd7756cb6c3fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shaw, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:28:18 compute-0 podman[106262]: 2025-11-22 03:28:18.11580794 +0000 UTC m=+0.024535962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:28:18 compute-0 systemd[1]: Started libpod-conmon-1c7c221c24fecc670cdb19f99837515930d5155bb5b4e9fd163cd7756cb6c3fd.scope.
Nov 22 03:28:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:28:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e1082619c46f8d9e3de318c23093594f75127d9215336ed80faae77d43cca2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:28:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e1082619c46f8d9e3de318c23093594f75127d9215336ed80faae77d43cca2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:28:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e1082619c46f8d9e3de318c23093594f75127d9215336ed80faae77d43cca2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:28:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82e1082619c46f8d9e3de318c23093594f75127d9215336ed80faae77d43cca2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:28:18 compute-0 sudo[106355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xykkprgvisjbuuzpxegutaovbexdqbxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782098.0584197-57-114457408151066/AnsiballZ_stat.py'
Nov 22 03:28:18 compute-0 sudo[106355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:28:18 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Nov 22 03:28:18 compute-0 python3.9[106357]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:28:18 compute-0 sudo[106355]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:18 compute-0 podman[106262]: 2025-11-22 03:28:18.738875196 +0000 UTC m=+0.647603238 container init 1c7c221c24fecc670cdb19f99837515930d5155bb5b4e9fd163cd7756cb6c3fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shaw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:28:18 compute-0 podman[106262]: 2025-11-22 03:28:18.754326245 +0000 UTC m=+0.663054267 container start 1c7c221c24fecc670cdb19f99837515930d5155bb5b4e9fd163cd7756cb6c3fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shaw, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:28:18 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Nov 22 03:28:18 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Nov 22 03:28:18 compute-0 ceph-mon[75011]: pgmap v189: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 10 objects/s recovering
Nov 22 03:28:18 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Nov 22 03:28:18 compute-0 ceph-mon[75011]: osdmap e89: 3 total, 3 up, 3 in
Nov 22 03:28:18 compute-0 ceph-mon[75011]: 7.1d scrub starts
Nov 22 03:28:18 compute-0 ceph-mon[75011]: 7.1d scrub ok
Nov 22 03:28:18 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Nov 22 03:28:19 compute-0 podman[106262]: 2025-11-22 03:28:19.000208985 +0000 UTC m=+0.908937047 container attach 1c7c221c24fecc670cdb19f99837515930d5155bb5b4e9fd163cd7756cb6c3fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shaw, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:28:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 88 B/s, 8 objects/s recovering
Nov 22 03:28:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Nov 22 03:28:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]: {
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:     "0": [
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:         {
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "devices": [
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "/dev/loop3"
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             ],
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_name": "ceph_lv0",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_size": "21470642176",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "name": "ceph_lv0",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "tags": {
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.cluster_name": "ceph",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.crush_device_class": "",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.encrypted": "0",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.osd_id": "0",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.type": "block",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.vdo": "0"
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             },
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "type": "block",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "vg_name": "ceph_vg0"
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:         }
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:     ],
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:     "1": [
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:         {
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "devices": [
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "/dev/loop4"
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             ],
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_name": "ceph_lv1",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_size": "21470642176",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "name": "ceph_lv1",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "tags": {
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.cluster_name": "ceph",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.crush_device_class": "",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.encrypted": "0",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.osd_id": "1",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.type": "block",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.vdo": "0"
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             },
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "type": "block",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "vg_name": "ceph_vg1"
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:         }
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:     ],
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:     "2": [
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:         {
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "devices": [
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "/dev/loop5"
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             ],
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_name": "ceph_lv2",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_size": "21470642176",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "name": "ceph_lv2",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "tags": {
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.cluster_name": "ceph",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.crush_device_class": "",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.encrypted": "0",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.osd_id": "2",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.type": "block",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:                 "ceph.vdo": "0"
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             },
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "type": "block",
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:             "vg_name": "ceph_vg2"
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:         }
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]:     ]
Nov 22 03:28:19 compute-0 inspiring_shaw[106326]: }
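The JSON block emitted by inspiring_shaw above matches the shape of ceph-volume lvm list --format json: a dict keyed by OSD id, each value a list of logical volumes with their backing devices and ceph.* LV tags. A minimal sketch of consuming it; raw is a hypothetical stand-in for the captured container stdout:

import json

raw = "..."  # hypothetical: the container stdout captured above
lvm_list = json.loads(raw)

for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        # e.g. osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (...)
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"on {','.join(lv['devices'])} "
              f"(osd_fsid={tags['ceph.osd_fsid']}, "
              f"encrypted={tags['ceph.encrypted']})")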
Nov 22 03:28:19 compute-0 systemd[1]: libpod-1c7c221c24fecc670cdb19f99837515930d5155bb5b4e9fd163cd7756cb6c3fd.scope: Deactivated successfully.
Nov 22 03:28:19 compute-0 podman[106262]: 2025-11-22 03:28:19.561691972 +0000 UTC m=+1.470419984 container died 1c7c221c24fecc670cdb19f99837515930d5155bb5b4e9fd163cd7756cb6c3fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shaw, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:28:19 compute-0 sudo[106527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwbvknzloktvvdjvjtdftaabsgwqnidc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782099.0879989-68-43179613321817/AnsiballZ_file.py'
Nov 22 03:28:19 compute-0 sudo[106527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:28:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-82e1082619c46f8d9e3de318c23093594f75127d9215336ed80faae77d43cca2-merged.mount: Deactivated successfully.
Nov 22 03:28:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Nov 22 03:28:19 compute-0 podman[106262]: 2025-11-22 03:28:19.862317011 +0000 UTC m=+1.771045023 container remove 1c7c221c24fecc670cdb19f99837515930d5155bb5b4e9fd163cd7756cb6c3fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_shaw, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:28:19 compute-0 systemd[1]: libpod-conmon-1c7c221c24fecc670cdb19f99837515930d5155bb5b4e9fd163cd7756cb6c3fd.scope: Deactivated successfully.
Nov 22 03:28:19 compute-0 sudo[106044]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:19 compute-0 python3.9[106529]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:28:19 compute-0 sudo[106527]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:19 compute-0 sudo[106531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:28:19 compute-0 sudo[106531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:19 compute-0 sudo[106531]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:19 compute-0 sudo[106556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:28:19 compute-0 sudo[106556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:19 compute-0 sudo[106556]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:20 compute-0 sudo[106605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:28:20 compute-0 sudo[106605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:20 compute-0 sudo[106605]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:20 compute-0 sudo[106630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:28:20 compute-0 sudo[106630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
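The sudo COMMAND two lines up shows the full call shape the orchestrator uses for host-side disk inventory: a cephadm binary staged under /var/lib/ceph/<fsid>/, pinned to the image digest, with everything after -- handed to ceph-volume inside a one-shot container. Re-issuing the same call by hand, as a sketch (the fsid, paths, and image digest are copied verbatim from the logged command line):

import subprocess

FSID = "7adcc38b-6484-5de6-b879-33a0309153df"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
           "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

# Same invocation as the logged COMMAND= line; its output is the raw-list
# JSON that the sad_gagarin container prints further down in this log.
out = subprocess.run(
    ["sudo", "python3", CEPHADM,
     "--image", IMAGE, "--timeout", "895",
     "ceph-volume", "--fsid", FSID, "--",
     "raw", "list", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
print(out)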
Nov 22 03:28:20 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 22 03:28:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Nov 22 03:28:20 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Nov 22 03:28:20 compute-0 ceph-mon[75011]: 2.13 scrub starts
Nov 22 03:28:20 compute-0 ceph-mon[75011]: 8.4 scrub starts
Nov 22 03:28:20 compute-0 ceph-mon[75011]: 2.13 scrub ok
Nov 22 03:28:20 compute-0 ceph-mon[75011]: 8.4 scrub ok
Nov 22 03:28:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Nov 22 03:28:20 compute-0 sudo[106819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmrsgwcadpmjhclzsnxfulwsvxusnrdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782100.1904454-77-15321020632550/AnsiballZ_file.py'
Nov 22 03:28:20 compute-0 sudo[106819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:28:20 compute-0 podman[106820]: 2025-11-22 03:28:20.522233484 +0000 UTC m=+0.044369846 container create 98aa0d247a3171bb13ffdbd00ee2b2c9d2d28673586f23d1d087da23a119554c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pare, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:28:20 compute-0 systemd[1]: Started libpod-conmon-98aa0d247a3171bb13ffdbd00ee2b2c9d2d28673586f23d1d087da23a119554c.scope.
Nov 22 03:28:20 compute-0 podman[106820]: 2025-11-22 03:28:20.506476826 +0000 UTC m=+0.028613188 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:28:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:28:20 compute-0 podman[106820]: 2025-11-22 03:28:20.640518775 +0000 UTC m=+0.162655207 container init 98aa0d247a3171bb13ffdbd00ee2b2c9d2d28673586f23d1d087da23a119554c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pare, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:28:20 compute-0 podman[106820]: 2025-11-22 03:28:20.649095182 +0000 UTC m=+0.171231534 container start 98aa0d247a3171bb13ffdbd00ee2b2c9d2d28673586f23d1d087da23a119554c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:28:20 compute-0 podman[106820]: 2025-11-22 03:28:20.653643162 +0000 UTC m=+0.175779544 container attach 98aa0d247a3171bb13ffdbd00ee2b2c9d2d28673586f23d1d087da23a119554c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:28:20 compute-0 elastic_pare[106838]: 167 167
Nov 22 03:28:20 compute-0 podman[106820]: 2025-11-22 03:28:20.654710841 +0000 UTC m=+0.176847193 container died 98aa0d247a3171bb13ffdbd00ee2b2c9d2d28673586f23d1d087da23a119554c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:28:20 compute-0 systemd[1]: libpod-98aa0d247a3171bb13ffdbd00ee2b2c9d2d28673586f23d1d087da23a119554c.scope: Deactivated successfully.
Nov 22 03:28:20 compute-0 python3.9[106823]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:28:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-c157d95155060cf6e10867ada6752dd28fc1fdd0f8eb2d1e8159d1e157fa4b03-merged.mount: Deactivated successfully.
Nov 22 03:28:20 compute-0 sudo[106819]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:20 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Nov 22 03:28:20 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Nov 22 03:28:20 compute-0 podman[106820]: 2025-11-22 03:28:20.744376965 +0000 UTC m=+0.266513327 container remove 98aa0d247a3171bb13ffdbd00ee2b2c9d2d28673586f23d1d087da23a119554c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 03:28:20 compute-0 systemd[1]: libpod-conmon-98aa0d247a3171bb13ffdbd00ee2b2c9d2d28673586f23d1d087da23a119554c.scope: Deactivated successfully.
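The elastic_pare container above lives for a few milliseconds and its only output is the pair "167 167", which matches the uid/gid of the ceph user baked into the Ceph images; cephadm runs a probe of this kind to learn which IDs to chown daemon directories to. A sketch of such a probe, with the caveat that the entrypoint and the stat target path are assumptions, not shown in this log:

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# Hypothetical reconstruction of the uid/gid probe: stat a path owned by
# the ceph user inside the image; /var/lib/ceph is an assumed target.
uid, gid = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    check=True, capture_output=True, text=True,
).stdout.split()
print(uid, gid)  # expected to print: 167 167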
Nov 22 03:28:20 compute-0 podman[106913]: 2025-11-22 03:28:20.923256521 +0000 UTC m=+0.035459160 container create 46a074249b41c24696ea65bc0ac8dc615b3946c2d113dd7a29e59c98ae4648e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gagarin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:28:20 compute-0 systemd[1]: Started libpod-conmon-46a074249b41c24696ea65bc0ac8dc615b3946c2d113dd7a29e59c98ae4648e5.scope.
Nov 22 03:28:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:28:21 compute-0 podman[106913]: 2025-11-22 03:28:20.909158388 +0000 UTC m=+0.021361047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a7ea1bf65e97c655b2ca6f65298b2971e523bb52a1af751b7245e73a7b39c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a7ea1bf65e97c655b2ca6f65298b2971e523bb52a1af751b7245e73a7b39c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a7ea1bf65e97c655b2ca6f65298b2971e523bb52a1af751b7245e73a7b39c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:28:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a7ea1bf65e97c655b2ca6f65298b2971e523bb52a1af751b7245e73a7b39c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:28:21 compute-0 podman[106913]: 2025-11-22 03:28:21.022871368 +0000 UTC m=+0.135074017 container init 46a074249b41c24696ea65bc0ac8dc615b3946c2d113dd7a29e59c98ae4648e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:28:21 compute-0 podman[106913]: 2025-11-22 03:28:21.030555222 +0000 UTC m=+0.142757851 container start 46a074249b41c24696ea65bc0ac8dc615b3946c2d113dd7a29e59c98ae4648e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gagarin, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:28:21 compute-0 podman[106913]: 2025-11-22 03:28:21.033681445 +0000 UTC m=+0.145884134 container attach 46a074249b41c24696ea65bc0ac8dc615b3946c2d113dd7a29e59c98ae4648e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gagarin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:28:21 compute-0 ceph-mon[75011]: pgmap v191: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 88 B/s, 8 objects/s recovering
Nov 22 03:28:21 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Nov 22 03:28:21 compute-0 ceph-mon[75011]: osdmap e90: 3 total, 3 up, 3 in
Nov 22 03:28:21 compute-0 ceph-mon[75011]: 3.11 scrub starts
Nov 22 03:28:21 compute-0 ceph-mon[75011]: 3.11 scrub ok
Nov 22 03:28:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 7 objects/s recovering
Nov 22 03:28:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Nov 22 03:28:21 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 22 03:28:21 compute-0 python3.9[107033]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:28:21 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Nov 22 03:28:21 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Nov 22 03:28:21 compute-0 network[107050]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:28:21 compute-0 network[107051]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:28:21 compute-0 network[107052]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:28:21 compute-0 sad_gagarin[106955]: {
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "osd_id": 1,
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "type": "bluestore"
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:     },
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "osd_id": 0,
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "type": "bluestore"
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:     },
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "osd_id": 2,
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:         "type": "bluestore"
Nov 22 03:28:21 compute-0 sad_gagarin[106955]:     }
Nov 22 03:28:21 compute-0 sad_gagarin[106955]: }
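sad_gagarin's JSON above is the raw list output requested by the earlier cephadm call: a dict keyed by OSD fsid, each entry naming the device-mapper path, osd_id, and objectstore type. A sketch of turning it into an osd_id-to-device map; raw is again a hypothetical stand-in for the captured stdout:

import json

raw = "..."  # hypothetical: the container stdout captured above
raw_list = json.loads(raw)

by_osd_id = {
    entry["osd_id"]: entry["device"]
    for entry in raw_list.values()
    if entry["type"] == "bluestore"
}
# e.g. {1: '/dev/mapper/ceph_vg1-ceph_lv1', 0: '/dev/mapper/ceph_vg0-ceph_lv0', ...}
print(by_osd_id)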
Nov 22 03:28:22 compute-0 podman[106913]: 2025-11-22 03:28:22.011129584 +0000 UTC m=+1.123332243 container died 46a074249b41c24696ea65bc0ac8dc615b3946c2d113dd7a29e59c98ae4648e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:28:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:28:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Nov 22 03:28:22 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Nov 22 03:28:22 compute-0 ceph-mon[75011]: 2.11 scrub starts
Nov 22 03:28:22 compute-0 ceph-mon[75011]: 2.11 scrub ok
Nov 22 03:28:22 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 22 03:28:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Nov 22 03:28:22 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Nov 22 03:28:22 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 90 pg[9.c( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=90 pruub=12.631612778s) [2] r=-1 lpr=90 pi=[68,90)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 174.081665039s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:22 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 91 pg[9.c( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=90 pruub=12.631557465s) [2] r=-1 lpr=90 pi=[68,90)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.081665039s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:22 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 91 pg[9.c( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=90) [2] r=0 lpr=91 pi=[68,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:22 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 90 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=90 pruub=12.630196571s) [2] r=-1 lpr=90 pi=[68,90)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 174.083312988s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:22 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 91 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=90 pruub=12.629769325s) [2] r=-1 lpr=90 pi=[68,90)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 174.083312988s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:22 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 91 pg[9.1c( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=90) [2] r=0 lpr=91 pi=[68,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
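The osd.1/osd.2 bursts above are the visible cost of each pgp_num_actual step: PGs 9.c and 9.1c get new placements, the old primary transitions to Stray and the new one to Primary, and the same dance repeats at epochs 92 and 94 below. A sketch for watching those transitions from the CLI; it assumes pool 9 is default.rgw.log, which is consistent with the ongoing ramp though the log never states the pool-id mapping:

import json
import subprocess
from collections import Counter

out = subprocess.run(
    ["ceph", "pg", "ls-by-pool", "default.rgw.log", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout
pgs = json.loads(out)["pg_stats"]

# Tally PG states and list the acting sets of anything still remapped.
print(Counter(pg["state"] for pg in pgs))
print({pg["pgid"]: pg["acting"] for pg in pgs if "remapped" in pg["state"]})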
Nov 22 03:28:22 compute-0 systemd[1]: libpod-46a074249b41c24696ea65bc0ac8dc615b3946c2d113dd7a29e59c98ae4648e5.scope: Deactivated successfully.
Nov 22 03:28:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-34a7ea1bf65e97c655b2ca6f65298b2971e523bb52a1af751b7245e73a7b39c1-merged.mount: Deactivated successfully.
Nov 22 03:28:22 compute-0 podman[106913]: 2025-11-22 03:28:22.421350846 +0000 UTC m=+1.533553515 container remove 46a074249b41c24696ea65bc0ac8dc615b3946c2d113dd7a29e59c98ae4648e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_gagarin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:28:22 compute-0 systemd[1]: libpod-conmon-46a074249b41c24696ea65bc0ac8dc615b3946c2d113dd7a29e59c98ae4648e5.scope: Deactivated successfully.
Nov 22 03:28:22 compute-0 sudo[106630]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:28:22 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:28:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:28:22 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:28:22 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev b9b639f1-0388-4425-a357-f9ed0d6f8a32 does not exist
Nov 22 03:28:22 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev e36eb1aa-1aba-4830-8623-b46bdb9e406b does not exist
Nov 22 03:28:22 compute-0 sudo[107106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:28:22 compute-0 sudo[107106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:22 compute-0 sudo[107106]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:22 compute-0 sudo[107134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:28:22 compute-0 sudo[107134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:28:22 compute-0 sudo[107134]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:22 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Nov 22 03:28:22 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Nov 22 03:28:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Nov 22 03:28:23 compute-0 ceph-mon[75011]: pgmap v193: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 7 objects/s recovering
Nov 22 03:28:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Nov 22 03:28:23 compute-0 ceph-mon[75011]: osdmap e91: 3 total, 3 up, 3 in
Nov 22 03:28:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:28:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:28:23 compute-0 ceph-mon[75011]: 8.17 scrub starts
Nov 22 03:28:23 compute-0 ceph-mon[75011]: 8.17 scrub ok
Nov 22 03:28:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Nov 22 03:28:23 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Nov 22 03:28:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Nov 22 03:28:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 22 03:28:23 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 92 pg[9.1c( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=92) [2]/[1] r=-1 lpr=92 pi=[68,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:23 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 92 pg[9.1c( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=92) [2]/[1] r=-1 lpr=92 pi=[68,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:23 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 92 pg[9.c( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=92) [2]/[1] r=-1 lpr=92 pi=[68,92)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:23 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 92 pg[9.c( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=92) [2]/[1] r=-1 lpr=92 pi=[68,92)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:23 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 92 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=92) [2]/[1] r=0 lpr=92 pi=[68,92)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:23 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 92 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=68/69 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=92) [2]/[1] r=0 lpr=92 pi=[68,92)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:23 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 92 pg[9.c( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=92) [2]/[1] r=0 lpr=92 pi=[68,92)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:23 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 92 pg[9.c( v 65'583 (0'0,65'583] local-lis/les=68/69 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=92) [2]/[1] r=0 lpr=92 pi=[68,92)/1 crt=65'583 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:23 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.1e deep-scrub starts
Nov 22 03:28:23 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.e scrub starts
Nov 22 03:28:23 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 7.1e deep-scrub ok
Nov 22 03:28:23 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.e scrub ok
Nov 22 03:28:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Nov 22 03:28:24 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 22 03:28:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Nov 22 03:28:24 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Nov 22 03:28:24 compute-0 ceph-mon[75011]: osdmap e92: 3 total, 3 up, 3 in
Nov 22 03:28:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Nov 22 03:28:24 compute-0 ceph-mon[75011]: 7.1e deep-scrub starts
Nov 22 03:28:24 compute-0 ceph-mon[75011]: 3.e scrub starts
Nov 22 03:28:24 compute-0 ceph-mon[75011]: 7.1e deep-scrub ok
Nov 22 03:28:24 compute-0 ceph-mon[75011]: 3.e scrub ok
Nov 22 03:28:24 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 93 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=92/93 n=6 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=92) [2]/[1] async=[2] r=0 lpr=92 pi=[68,92)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:24 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 93 pg[9.c( v 65'583 (0'0,65'583] local-lis/les=92/93 n=7 ec=68/33 lis/c=68/68 les/c/f=69/69/0 sis=92) [2]/[1] async=[2] r=0 lpr=92 pi=[68,92)/1 crt=65'583 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:24 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Nov 22 03:28:24 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Nov 22 03:28:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Nov 22 03:28:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v198: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Nov 22 03:28:25 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 22 03:28:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Nov 22 03:28:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Nov 22 03:28:25 compute-0 ceph-mon[75011]: pgmap v196: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:25 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Nov 22 03:28:25 compute-0 ceph-mon[75011]: osdmap e93: 3 total, 3 up, 3 in
Nov 22 03:28:25 compute-0 ceph-mon[75011]: 3.17 scrub starts
Nov 22 03:28:25 compute-0 ceph-mon[75011]: 3.17 scrub ok
Nov 22 03:28:25 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 94 pg[9.c( v 65'583 (0'0,65'583] local-lis/les=92/93 n=7 ec=68/33 lis/c=92/68 les/c/f=93/69/0 sis=94 pruub=14.991993904s) [2] async=[2] r=-1 lpr=94 pi=[68,94)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 179.527038574s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:25 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 94 pg[9.c( v 65'583 (0'0,65'583] local-lis/les=92/93 n=7 ec=68/33 lis/c=92/68 les/c/f=93/69/0 sis=94 pruub=14.991873741s) [2] r=-1 lpr=94 pi=[68,94)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.527038574s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:25 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 94 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=92/93 n=6 ec=68/33 lis/c=92/68 les/c/f=93/69/0 sis=94 pruub=14.990732193s) [2] async=[2] r=-1 lpr=94 pi=[68,94)/1 crt=65'583 lcod 0'0 mlcod 0'0 active pruub 179.527038574s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:25 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 94 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=92/93 n=6 ec=68/33 lis/c=92/68 les/c/f=93/69/0 sis=94 pruub=14.990468025s) [2] r=-1 lpr=94 pi=[68,94)/1 crt=65'583 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 179.527038574s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:25 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 94 pg[9.c( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=92/68 les/c/f=93/69/0 sis=94) [2] r=0 lpr=94 pi=[68,94)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:25 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 94 pg[9.c( v 65'583 (0'0,65'583] local-lis/les=0/0 n=7 ec=68/33 lis/c=92/68 les/c/f=93/69/0 sis=94) [2] r=0 lpr=94 pi=[68,94)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:25 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 94 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=92/68 les/c/f=93/69/0 sis=94) [2] r=0 lpr=94 pi=[68,94)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:25 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 94 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=92/68 les/c/f=93/69/0 sis=94) [2] r=0 lpr=94 pi=[68,94)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:25 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Nov 22 03:28:25 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Nov 22 03:28:25 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Nov 22 03:28:25 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Nov 22 03:28:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Nov 22 03:28:26 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 22 03:28:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Nov 22 03:28:26 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Nov 22 03:28:26 compute-0 ceph-mon[75011]: pgmap v198: 305 pgs: 305 active+clean; 457 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:26 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Nov 22 03:28:26 compute-0 ceph-mon[75011]: osdmap e94: 3 total, 3 up, 3 in
Nov 22 03:28:26 compute-0 ceph-mon[75011]: 8.1d scrub starts
Nov 22 03:28:26 compute-0 ceph-mon[75011]: 8.1d scrub ok
Nov 22 03:28:26 compute-0 ceph-mon[75011]: 8.19 scrub starts
Nov 22 03:28:26 compute-0 ceph-mon[75011]: 8.19 scrub ok
Nov 22 03:28:26 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Nov 22 03:28:26 compute-0 ceph-mon[75011]: osdmap e95: 3 total, 3 up, 3 in
Nov 22 03:28:26 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 95 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=94/95 n=6 ec=68/33 lis/c=92/68 les/c/f=93/69/0 sis=94) [2] r=0 lpr=94 pi=[68,94)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:26 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 95 pg[9.c( v 65'583 (0'0,65'583] local-lis/les=94/95 n=7 ec=68/33 lis/c=92/68 les/c/f=93/69/0 sis=94) [2] r=0 lpr=94 pi=[68,94)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:26 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.a scrub starts
Nov 22 03:28:26 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.a scrub ok
Nov 22 03:28:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:28:27 compute-0 python3.9[107404]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:28:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 22 03:28:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Nov 22 03:28:27 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 22 03:28:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Nov 22 03:28:27 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 22 03:28:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Nov 22 03:28:27 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Nov 22 03:28:27 compute-0 ceph-mon[75011]: 7.a scrub starts
Nov 22 03:28:27 compute-0 ceph-mon[75011]: 7.a scrub ok
Nov 22 03:28:27 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Nov 22 03:28:27 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Nov 22 03:28:27 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Nov 22 03:28:28 compute-0 python3.9[107554]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:28:28 compute-0 ceph-mon[75011]: pgmap v201: 305 pgs: 305 active+clean; 457 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 2 objects/s recovering
Nov 22 03:28:28 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Nov 22 03:28:28 compute-0 ceph-mon[75011]: osdmap e96: 3 total, 3 up, 3 in
Nov 22 03:28:28 compute-0 ceph-mon[75011]: 8.1e scrub starts
Nov 22 03:28:28 compute-0 ceph-mon[75011]: 8.1e scrub ok
Nov 22 03:28:28 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Nov 22 03:28:28 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Nov 22 03:28:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v203: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 2 objects/s recovering
Nov 22 03:28:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Nov 22 03:28:29 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 22 03:28:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Nov 22 03:28:29 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 22 03:28:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Nov 22 03:28:29 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Nov 22 03:28:29 compute-0 ceph-mon[75011]: 5.19 deep-scrub starts
Nov 22 03:28:29 compute-0 ceph-mon[75011]: 5.19 deep-scrub ok
Nov 22 03:28:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Nov 22 03:28:29 compute-0 python3.9[107708]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:28:29 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Nov 22 03:28:29 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Nov 22 03:28:29 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Nov 22 03:28:29 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Nov 22 03:28:30 compute-0 sudo[107864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcxcuezrskuamqiqfxrpwcbasqtqztxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782110.0568316-125-59451244564706/AnsiballZ_setup.py'
Nov 22 03:28:30 compute-0 sudo[107864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:28:30 compute-0 ceph-mon[75011]: pgmap v203: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 2 objects/s recovering
Nov 22 03:28:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Nov 22 03:28:30 compute-0 ceph-mon[75011]: osdmap e97: 3 total, 3 up, 3 in
Nov 22 03:28:30 compute-0 ceph-mon[75011]: 7.13 scrub starts
Nov 22 03:28:30 compute-0 ceph-mon[75011]: 7.13 scrub ok
Nov 22 03:28:30 compute-0 ceph-mon[75011]: 4.12 scrub starts
Nov 22 03:28:30 compute-0 ceph-mon[75011]: 4.12 scrub ok
Nov 22 03:28:30 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.15 deep-scrub starts
Nov 22 03:28:30 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.15 deep-scrub ok
Nov 22 03:28:30 compute-0 python3.9[107866]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:28:30 compute-0 sudo[107864]: pam_unix(sudo:session): session closed for user root
Nov 22 03:28:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Nov 22 03:28:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Nov 22 03:28:31 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 22 03:28:31 compute-0 sudo[107948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijdeztmfbebkamjriiiujxrayyeiisuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782110.0568316-125-59451244564706/AnsiballZ_dnf.py'
Nov 22 03:28:31 compute-0 sudo[107948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:28:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Nov 22 03:28:31 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 22 03:28:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Nov 22 03:28:31 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Nov 22 03:28:31 compute-0 ceph-mon[75011]: 3.15 deep-scrub starts
Nov 22 03:28:31 compute-0 ceph-mon[75011]: 3.15 deep-scrub ok
Nov 22 03:28:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Nov 22 03:28:31 compute-0 python3.9[107950]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:28:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:28:32 compute-0 ceph-mon[75011]: pgmap v205: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 2 objects/s recovering
Nov 22 03:28:32 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Nov 22 03:28:32 compute-0 ceph-mon[75011]: osdmap e98: 3 total, 3 up, 3 in
Nov 22 03:28:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Nov 22 03:28:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 22 03:28:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Nov 22 03:28:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Nov 22 03:28:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 22 03:28:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Nov 22 03:28:33 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Nov 22 03:28:33 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 99 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=76/77 n=6 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=99 pruub=12.645100594s) [2] r=-1 lpr=99 pi=[76,99)/1 crt=65'583 mlcod 0'0 active pruub 190.372985840s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:33 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 99 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=76/77 n=6 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=99 pruub=12.644308090s) [2] r=-1 lpr=99 pi=[76,99)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 190.372985840s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:33 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 99 pg[9.13( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=99) [2] r=0 lpr=99 pi=[76,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Nov 22 03:28:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Nov 22 03:28:34 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Nov 22 03:28:34 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 100 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=76/77 n=6 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=100) [2]/[0] r=0 lpr=100 pi=[76,100)/1 crt=65'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:34 compute-0 ceph-mon[75011]: pgmap v207: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:34 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Nov 22 03:28:34 compute-0 ceph-mon[75011]: osdmap e99: 3 total, 3 up, 3 in
Nov 22 03:28:34 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 100 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=76/77 n=6 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=100) [2]/[0] r=0 lpr=100 pi=[76,100)/1 crt=65'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:34 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 100 pg[9.13( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[76,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:34 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 100 pg[9.13( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[76,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:34 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Nov 22 03:28:34 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Nov 22 03:28:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Nov 22 03:28:35 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 22 03:28:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Nov 22 03:28:35 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 22 03:28:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Nov 22 03:28:35 compute-0 ceph-mon[75011]: osdmap e100: 3 total, 3 up, 3 in
Nov 22 03:28:35 compute-0 ceph-mon[75011]: 7.8 deep-scrub starts
Nov 22 03:28:35 compute-0 ceph-mon[75011]: 7.8 deep-scrub ok
Nov 22 03:28:35 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Nov 22 03:28:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Nov 22 03:28:35 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 101 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=100/101 n=6 ec=68/33 lis/c=76/76 les/c/f=77/77/0 sis=100) [2]/[0] async=[2] r=0 lpr=100 pi=[76,100)/1 crt=65'583 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:28:36
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['images', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'volumes', '.mgr']
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:28:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:28:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Nov 22 03:28:36 compute-0 ceph-mon[75011]: pgmap v210: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Nov 22 03:28:36 compute-0 ceph-mon[75011]: osdmap e101: 3 total, 3 up, 3 in
Nov 22 03:28:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Nov 22 03:28:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Nov 22 03:28:36 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 102 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=100/101 n=6 ec=68/33 lis/c=100/76 les/c/f=101/77/0 sis=102 pruub=15.117643356s) [2] async=[2] r=-1 lpr=102 pi=[76,102)/1 crt=65'583 mlcod 65'583 active pruub 195.948760986s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:36 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 102 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=100/101 n=6 ec=68/33 lis/c=100/76 les/c/f=101/77/0 sis=102 pruub=15.117566109s) [2] r=-1 lpr=102 pi=[76,102)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 195.948760986s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:36 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 102 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=100/76 les/c/f=101/77/0 sis=102) [2] r=0 lpr=102 pi=[76,102)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:36 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 102 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=100/76 les/c/f=101/77/0 sis=102) [2] r=0 lpr=102 pi=[76,102)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:28:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 03:28:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Nov 22 03:28:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Nov 22 03:28:37 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Nov 22 03:28:37 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Nov 22 03:28:37 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Nov 22 03:28:37 compute-0 ceph-mon[75011]: osdmap e102: 3 total, 3 up, 3 in
Nov 22 03:28:37 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 103 pg[9.13( v 65'583 (0'0,65'583] local-lis/les=102/103 n=6 ec=68/33 lis/c=100/76 les/c/f=101/77/0 sis=102) [2] r=0 lpr=102 pi=[76,102)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:37 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Nov 22 03:28:37 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Nov 22 03:28:38 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.18 deep-scrub starts
Nov 22 03:28:38 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.18 deep-scrub ok
Nov 22 03:28:38 compute-0 ceph-mon[75011]: pgmap v213: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Nov 22 03:28:38 compute-0 ceph-mon[75011]: osdmap e103: 3 total, 3 up, 3 in
Nov 22 03:28:38 compute-0 ceph-mon[75011]: 4.14 scrub starts
Nov 22 03:28:38 compute-0 ceph-mon[75011]: 4.14 scrub ok
Nov 22 03:28:38 compute-0 ceph-mon[75011]: 3.8 scrub starts
Nov 22 03:28:38 compute-0 ceph-mon[75011]: 3.8 scrub ok
Nov 22 03:28:38 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.e scrub starts
Nov 22 03:28:38 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.e scrub ok
Nov 22 03:28:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Nov 22 03:28:39 compute-0 ceph-mon[75011]: 8.18 deep-scrub starts
Nov 22 03:28:39 compute-0 ceph-mon[75011]: 8.18 deep-scrub ok
Nov 22 03:28:39 compute-0 ceph-mon[75011]: 7.e scrub starts
Nov 22 03:28:39 compute-0 ceph-mon[75011]: 7.e scrub ok
Nov 22 03:28:40 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.f deep-scrub starts
Nov 22 03:28:40 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.f deep-scrub ok
Nov 22 03:28:40 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.5 deep-scrub starts
Nov 22 03:28:40 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.5 deep-scrub ok
Nov 22 03:28:40 compute-0 ceph-mon[75011]: pgmap v215: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 1 objects/s recovering
Nov 22 03:28:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 22 03:28:41 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.f scrub starts
Nov 22 03:28:41 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.f scrub ok
Nov 22 03:28:41 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.d scrub starts
Nov 22 03:28:41 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.d scrub ok
Nov 22 03:28:41 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Nov 22 03:28:41 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Nov 22 03:28:41 compute-0 ceph-mon[75011]: 4.f deep-scrub starts
Nov 22 03:28:41 compute-0 ceph-mon[75011]: 4.f deep-scrub ok
Nov 22 03:28:41 compute-0 ceph-mon[75011]: 7.5 deep-scrub starts
Nov 22 03:28:41 compute-0 ceph-mon[75011]: 7.5 deep-scrub ok
Nov 22 03:28:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:28:42 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Nov 22 03:28:42 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Nov 22 03:28:42 compute-0 ceph-mon[75011]: pgmap v216: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Nov 22 03:28:42 compute-0 ceph-mon[75011]: 3.f scrub starts
Nov 22 03:28:42 compute-0 ceph-mon[75011]: 3.f scrub ok
Nov 22 03:28:42 compute-0 ceph-mon[75011]: 4.d scrub starts
Nov 22 03:28:42 compute-0 ceph-mon[75011]: 4.d scrub ok
Nov 22 03:28:42 compute-0 ceph-mon[75011]: 7.1 scrub starts
Nov 22 03:28:42 compute-0 ceph-mon[75011]: 7.1 scrub ok
Nov 22 03:28:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 22 03:28:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Nov 22 03:28:43 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 22 03:28:43 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Nov 22 03:28:43 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Nov 22 03:28:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Nov 22 03:28:43 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 22 03:28:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Nov 22 03:28:43 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Nov 22 03:28:43 compute-0 ceph-mon[75011]: 4.10 scrub starts
Nov 22 03:28:43 compute-0 ceph-mon[75011]: 4.10 scrub ok
Nov 22 03:28:43 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Nov 22 03:28:43 compute-0 ceph-mon[75011]: 8.1f scrub starts
Nov 22 03:28:43 compute-0 ceph-mon[75011]: 8.1f scrub ok
Nov 22 03:28:44 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 104 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=104 pruub=10.642375946s) [1] r=-1 lpr=104 pi=[77,104)/1 crt=65'583 mlcod 0'0 active pruub 199.385269165s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:44 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 104 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=104 pruub=10.641713142s) [1] r=-1 lpr=104 pi=[77,104)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 199.385269165s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:44 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 104 pg[9.15( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=104) [1] r=0 lpr=104 pi=[77,104)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:44 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Nov 22 03:28:44 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Nov 22 03:28:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Nov 22 03:28:44 compute-0 ceph-mon[75011]: pgmap v217: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Nov 22 03:28:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Nov 22 03:28:44 compute-0 ceph-mon[75011]: osdmap e104: 3 total, 3 up, 3 in
Nov 22 03:28:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Nov 22 03:28:44 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Nov 22 03:28:44 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 105 pg[9.15( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=105) [1]/[0] r=-1 lpr=105 pi=[77,105)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:44 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 105 pg[9.15( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=105) [1]/[0] r=-1 lpr=105 pi=[77,105)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:44 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 105 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=105) [1]/[0] r=0 lpr=105 pi=[77,105)/1 crt=65'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:44 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 105 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=105) [1]/[0] r=0 lpr=105 pi=[77,105)/1 crt=65'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Nov 22 03:28:45 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 22 03:28:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Nov 22 03:28:45 compute-0 ceph-mon[75011]: 8.2 scrub starts
Nov 22 03:28:45 compute-0 ceph-mon[75011]: 8.2 scrub ok
Nov 22 03:28:45 compute-0 ceph-mon[75011]: osdmap e105: 3 total, 3 up, 3 in
Nov 22 03:28:45 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Nov 22 03:28:45 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 22 03:28:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Nov 22 03:28:45 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:28:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:28:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 106 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=106 pruub=8.272363663s) [0] r=-1 lpr=106 pi=[85,106)/1 crt=65'583 mlcod 0'0 active pruub 188.411682129s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 106 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=106 pruub=8.272042274s) [0] r=-1 lpr=106 pi=[85,106)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 188.411682129s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:46 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 106 pg[9.16( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=106) [0] r=0 lpr=106 pi=[85,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:46 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 106 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=105/106 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=105) [1]/[0] async=[1] r=0 lpr=105 pi=[77,105)/1 crt=65'583 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Nov 22 03:28:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Nov 22 03:28:46 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Nov 22 03:28:46 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 107 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=105/106 n=6 ec=68/33 lis/c=105/77 les/c/f=106/78/0 sis=107 pruub=15.836413383s) [1] async=[1] r=-1 lpr=107 pi=[77,107)/1 crt=65'583 mlcod 65'583 active pruub 206.949203491s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:46 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 107 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=105/106 n=6 ec=68/33 lis/c=105/77 les/c/f=106/78/0 sis=107 pruub=15.836302757s) [1] r=-1 lpr=107 pi=[77,107)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 206.949203491s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:46 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 107 pg[9.16( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[85,107)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:46 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 107 pg[9.16( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=107) [0]/[2] r=-1 lpr=107 pi=[85,107)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:46 compute-0 ceph-mon[75011]: pgmap v220: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:46 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Nov 22 03:28:46 compute-0 ceph-mon[75011]: osdmap e106: 3 total, 3 up, 3 in
Nov 22 03:28:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 107 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=105/77 les/c/f=106/78/0 sis=107) [1] r=0 lpr=107 pi=[77,107)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:46 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 107 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=105/77 les/c/f=106/78/0 sis=107) [1] r=0 lpr=107 pi=[77,107)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 107 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=107) [0]/[2] r=0 lpr=107 pi=[85,107)/1 crt=65'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:46 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 107 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=107) [0]/[2] r=0 lpr=107 pi=[85,107)/1 crt=65'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e107 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:28:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:47 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Nov 22 03:28:47 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Nov 22 03:28:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Nov 22 03:28:47 compute-0 ceph-mon[75011]: osdmap e107: 3 total, 3 up, 3 in
Nov 22 03:28:47 compute-0 ceph-mon[75011]: 8.1a scrub starts
Nov 22 03:28:47 compute-0 ceph-mon[75011]: 8.1a scrub ok
Nov 22 03:28:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Nov 22 03:28:47 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Nov 22 03:28:47 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 108 pg[9.15( v 65'583 (0'0,65'583] local-lis/les=107/108 n=6 ec=68/33 lis/c=105/77 les/c/f=106/78/0 sis=107) [1] r=0 lpr=107 pi=[77,107)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:48 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 108 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=107/108 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=107) [0]/[2] async=[0] r=0 lpr=107 pi=[85,107)/1 crt=65'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:48 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Nov 22 03:28:48 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Nov 22 03:28:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Nov 22 03:28:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Nov 22 03:28:48 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Nov 22 03:28:48 compute-0 ceph-mon[75011]: pgmap v223: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:48 compute-0 ceph-mon[75011]: osdmap e108: 3 total, 3 up, 3 in
Nov 22 03:28:48 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 109 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=107/85 les/c/f=108/86/0 sis=109) [0] r=0 lpr=109 pi=[85,109)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:48 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 109 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=107/85 les/c/f=108/86/0 sis=109) [0] r=0 lpr=109 pi=[85,109)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:48 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 109 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=107/108 n=6 ec=68/33 lis/c=107/85 les/c/f=108/86/0 sis=109 pruub=15.561989784s) [0] async=[0] r=-1 lpr=109 pi=[85,109)/1 crt=65'583 mlcod 65'583 active pruub 198.453536987s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:48 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 109 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=107/108 n=6 ec=68/33 lis/c=107/85 les/c/f=108/86/0 sis=109 pruub=15.561827660s) [0] r=-1 lpr=109 pi=[85,109)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 198.453536987s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:49 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.9 deep-scrub starts
Nov 22 03:28:49 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.9 deep-scrub ok
Nov 22 03:28:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Nov 22 03:28:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Nov 22 03:28:49 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Nov 22 03:28:49 compute-0 ceph-mon[75011]: 4.9 scrub starts
Nov 22 03:28:49 compute-0 ceph-mon[75011]: 4.9 scrub ok
Nov 22 03:28:49 compute-0 ceph-mon[75011]: osdmap e109: 3 total, 3 up, 3 in
Nov 22 03:28:49 compute-0 ceph-mon[75011]: 7.9 deep-scrub starts
Nov 22 03:28:49 compute-0 ceph-mon[75011]: 7.9 deep-scrub ok
Nov 22 03:28:49 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 110 pg[9.16( v 65'583 (0'0,65'583] local-lis/les=109/110 n=6 ec=68/33 lis/c=107/85 les/c/f=108/86/0 sis=109) [0] r=0 lpr=109 pi=[85,109)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:28:50 compute-0 ceph-mon[75011]: pgmap v226: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:50 compute-0 ceph-mon[75011]: osdmap e110: 3 total, 3 up, 3 in
Nov 22 03:28:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Nov 22 03:28:51 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.c scrub starts
Nov 22 03:28:51 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.c scrub ok
Nov 22 03:28:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e110 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:28:52 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Nov 22 03:28:52 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Nov 22 03:28:52 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Nov 22 03:28:52 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Nov 22 03:28:52 compute-0 ceph-mon[75011]: pgmap v228: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 24 B/s, 1 objects/s recovering
Nov 22 03:28:52 compute-0 ceph-mon[75011]: 7.c scrub starts
Nov 22 03:28:52 compute-0 ceph-mon[75011]: 7.c scrub ok
Nov 22 03:28:52 compute-0 ceph-mon[75011]: 3.12 scrub starts
Nov 22 03:28:52 compute-0 ceph-mon[75011]: 3.12 scrub ok
Nov 22 03:28:52 compute-0 ceph-mon[75011]: 4.7 scrub starts
Nov 22 03:28:52 compute-0 ceph-mon[75011]: 4.7 scrub ok
Nov 22 03:28:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 22 03:28:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Nov 22 03:28:53 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 22 03:28:53 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Nov 22 03:28:53 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Nov 22 03:28:53 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Nov 22 03:28:53 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Nov 22 03:28:53 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.d deep-scrub starts
Nov 22 03:28:53 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.d deep-scrub ok
Nov 22 03:28:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Nov 22 03:28:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Nov 22 03:28:53 compute-0 ceph-mon[75011]: 7.6 scrub starts
Nov 22 03:28:53 compute-0 ceph-mon[75011]: 7.6 scrub ok
Nov 22 03:28:53 compute-0 ceph-mon[75011]: 6.1 scrub starts
Nov 22 03:28:53 compute-0 ceph-mon[75011]: 6.1 scrub ok
Nov 22 03:28:53 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 22 03:28:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Nov 22 03:28:53 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Nov 22 03:28:54 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Nov 22 03:28:54 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Nov 22 03:28:54 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Nov 22 03:28:54 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Nov 22 03:28:54 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Nov 22 03:28:54 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Nov 22 03:28:54 compute-0 ceph-mon[75011]: pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Nov 22 03:28:54 compute-0 ceph-mon[75011]: 8.d deep-scrub starts
Nov 22 03:28:54 compute-0 ceph-mon[75011]: 8.d deep-scrub ok
Nov 22 03:28:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Nov 22 03:28:54 compute-0 ceph-mon[75011]: osdmap e111: 3 total, 3 up, 3 in
Nov 22 03:28:54 compute-0 ceph-mon[75011]: 8.9 scrub starts
Nov 22 03:28:54 compute-0 ceph-mon[75011]: 8.9 scrub ok
Nov 22 03:28:54 compute-0 ceph-mon[75011]: 4.8 scrub starts
Nov 22 03:28:54 compute-0 ceph-mon[75011]: 4.8 scrub ok
Nov 22 03:28:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 1 objects/s recovering
Nov 22 03:28:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Nov 22 03:28:55 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 22 03:28:55 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.c scrub starts
Nov 22 03:28:55 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.c scrub ok
Nov 22 03:28:55 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Nov 22 03:28:55 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Nov 22 03:28:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Nov 22 03:28:55 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 22 03:28:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Nov 22 03:28:56 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Nov 22 03:28:56 compute-0 ceph-mon[75011]: 3.18 scrub starts
Nov 22 03:28:56 compute-0 ceph-mon[75011]: 3.18 scrub ok
Nov 22 03:28:56 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Nov 22 03:28:56 compute-0 ceph-mon[75011]: 3.c scrub starts
Nov 22 03:28:56 compute-0 ceph-mon[75011]: 3.c scrub ok
Nov 22 03:28:56 compute-0 ceph-mon[75011]: 5.18 scrub starts
Nov 22 03:28:56 compute-0 ceph-mon[75011]: 5.18 scrub ok
Nov 22 03:28:56 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Nov 22 03:28:56 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Nov 22 03:28:56 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Nov 22 03:28:56 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Nov 22 03:28:57 compute-0 ceph-mon[75011]: pgmap v231: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 1 objects/s recovering
Nov 22 03:28:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Nov 22 03:28:57 compute-0 ceph-mon[75011]: osdmap e112: 3 total, 3 up, 3 in
Nov 22 03:28:57 compute-0 ceph-mon[75011]: 5.1d scrub starts
Nov 22 03:28:57 compute-0 ceph-mon[75011]: 5.1d scrub ok
Nov 22 03:28:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:28:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Nov 22 03:28:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Nov 22 03:28:57 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 22 03:28:57 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.4 deep-scrub starts
Nov 22 03:28:57 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.4 deep-scrub ok
Nov 22 03:28:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Nov 22 03:28:58 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 22 03:28:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Nov 22 03:28:58 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Nov 22 03:28:58 compute-0 ceph-mon[75011]: 8.12 scrub starts
Nov 22 03:28:58 compute-0 ceph-mon[75011]: 8.12 scrub ok
Nov 22 03:28:58 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Nov 22 03:28:58 compute-0 ceph-mon[75011]: 7.4 deep-scrub starts
Nov 22 03:28:58 compute-0 ceph-mon[75011]: 7.4 deep-scrub ok
Nov 22 03:28:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 113 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=113 pruub=12.792016983s) [2] r=-1 lpr=113 pi=[77,113)/1 crt=65'583 mlcod 0'0 active pruub 215.389266968s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:58 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 113 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=113 pruub=12.791762352s) [2] r=-1 lpr=113 pi=[77,113)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 215.389266968s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:58 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 113 pg[9.19( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=113) [2] r=0 lpr=113 pi=[77,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:58 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.f scrub starts
Nov 22 03:28:58 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.f scrub ok
Nov 22 03:28:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Nov 22 03:28:59 compute-0 ceph-mon[75011]: pgmap v233: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Nov 22 03:28:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Nov 22 03:28:59 compute-0 ceph-mon[75011]: osdmap e113: 3 total, 3 up, 3 in
Nov 22 03:28:59 compute-0 ceph-mon[75011]: 7.f scrub starts
Nov 22 03:28:59 compute-0 ceph-mon[75011]: 7.f scrub ok
Nov 22 03:28:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Nov 22 03:28:59 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Nov 22 03:28:59 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 114 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[0] r=0 lpr=114 pi=[77,114)/1 crt=65'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:59 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 114 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=77/78 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[0] r=0 lpr=114 pi=[77,114)/1 crt=65'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:28:59 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[0] r=-1 lpr=114 pi=[77,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:28:59 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[0] r=-1 lpr=114 pi=[77,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:28:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:28:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Nov 22 03:28:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 22 03:28:59 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.f scrub starts
Nov 22 03:28:59 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.f scrub ok
Nov 22 03:29:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Nov 22 03:29:00 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 22 03:29:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Nov 22 03:29:00 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Nov 22 03:29:00 compute-0 ceph-mon[75011]: osdmap e114: 3 total, 3 up, 3 in
Nov 22 03:29:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Nov 22 03:29:00 compute-0 ceph-mon[75011]: 8.f scrub starts
Nov 22 03:29:00 compute-0 ceph-mon[75011]: 8.f scrub ok
Nov 22 03:29:00 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 115 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=114/115 n=6 ec=68/33 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[0] async=[2] r=0 lpr=114 pi=[77,114)/1 crt=65'583 mlcod 0'0 active+remapped mbc={255={(0+1)=11}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:29:00 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Nov 22 03:29:00 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Nov 22 03:29:00 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Nov 22 03:29:00 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Nov 22 03:29:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Nov 22 03:29:01 compute-0 ceph-mon[75011]: pgmap v236: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Nov 22 03:29:01 compute-0 ceph-mon[75011]: osdmap e115: 3 total, 3 up, 3 in
Nov 22 03:29:01 compute-0 ceph-mon[75011]: 3.9 scrub starts
Nov 22 03:29:01 compute-0 ceph-mon[75011]: 3.9 scrub ok
Nov 22 03:29:01 compute-0 ceph-mon[75011]: 7.1c scrub starts
Nov 22 03:29:01 compute-0 ceph-mon[75011]: 7.1c scrub ok
Nov 22 03:29:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Nov 22 03:29:01 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Nov 22 03:29:01 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 116 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=114/77 les/c/f=115/78/0 sis=116) [2] r=0 lpr=116 pi=[77,116)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:01 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 116 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=114/77 les/c/f=115/78/0 sis=116) [2] r=0 lpr=116 pi=[77,116)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:29:01 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 116 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=114/115 n=6 ec=68/33 lis/c=114/77 les/c/f=115/78/0 sis=116 pruub=14.981416702s) [2] async=[2] r=-1 lpr=116 pi=[77,116)/1 crt=65'583 mlcod 65'583 active pruub 220.355102539s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:01 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 116 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=114/115 n=6 ec=68/33 lis/c=114/77 les/c/f=115/78/0 sis=116 pruub=14.981292725s) [2] r=-1 lpr=116 pi=[77,116)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 220.355102539s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:29:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Nov 22 03:29:01 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 22 03:29:01 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Nov 22 03:29:01 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Nov 22 03:29:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:29:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Nov 22 03:29:02 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 22 03:29:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Nov 22 03:29:02 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Nov 22 03:29:02 compute-0 ceph-mon[75011]: osdmap e116: 3 total, 3 up, 3 in
Nov 22 03:29:02 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Nov 22 03:29:02 compute-0 ceph-mon[75011]: 3.1d scrub starts
Nov 22 03:29:02 compute-0 ceph-mon[75011]: 3.1d scrub ok
Nov 22 03:29:02 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 117 pg[9.19( v 65'583 (0'0,65'583] local-lis/les=116/117 n=6 ec=68/33 lis/c=114/77 les/c/f=115/78/0 sis=116) [2] r=0 lpr=116 pi=[77,116)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:29:02 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.a scrub starts
Nov 22 03:29:02 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.a scrub ok
Nov 22 03:29:02 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Nov 22 03:29:02 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Nov 22 03:29:03 compute-0 ceph-mon[75011]: pgmap v239: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:03 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Nov 22 03:29:03 compute-0 ceph-mon[75011]: osdmap e117: 3 total, 3 up, 3 in
Nov 22 03:29:03 compute-0 ceph-mon[75011]: 3.a scrub starts
Nov 22 03:29:03 compute-0 ceph-mon[75011]: 3.a scrub ok
Nov 22 03:29:03 compute-0 ceph-mon[75011]: 7.2 scrub starts
Nov 22 03:29:03 compute-0 ceph-mon[75011]: 7.2 scrub ok
Nov 22 03:29:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 4 B/s, 2 objects/s recovering
Nov 22 03:29:03 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.e scrub starts
Nov 22 03:29:03 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.e scrub ok
Nov 22 03:29:04 compute-0 ceph-mon[75011]: pgmap v241: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 4 B/s, 2 objects/s recovering
Nov 22 03:29:04 compute-0 ceph-mon[75011]: 8.e scrub starts
Nov 22 03:29:04 compute-0 ceph-mon[75011]: 8.e scrub ok
Nov 22 03:29:04 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Nov 22 03:29:04 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Nov 22 03:29:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2 B/s, 1 objects/s recovering
Nov 22 03:29:05 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Nov 22 03:29:05 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Nov 22 03:29:05 compute-0 ceph-mon[75011]: 8.11 scrub starts
Nov 22 03:29:05 compute-0 ceph-mon[75011]: 8.11 scrub ok
Nov 22 03:29:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:29:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:29:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:29:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:29:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:29:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:29:06 compute-0 ceph-mon[75011]: pgmap v242: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2 B/s, 1 objects/s recovering
Nov 22 03:29:06 compute-0 ceph-mon[75011]: 8.6 scrub starts
Nov 22 03:29:06 compute-0 ceph-mon[75011]: 8.6 scrub ok
Nov 22 03:29:06 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Nov 22 03:29:06 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Nov 22 03:29:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:29:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2 B/s, 1 objects/s recovering
Nov 22 03:29:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Nov 22 03:29:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 22 03:29:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Nov 22 03:29:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 22 03:29:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Nov 22 03:29:07 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Nov 22 03:29:07 compute-0 ceph-mon[75011]: 5.1a scrub starts
Nov 22 03:29:07 compute-0 ceph-mon[75011]: 5.1a scrub ok
Nov 22 03:29:07 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Nov 22 03:29:07 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Nov 22 03:29:07 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Nov 22 03:29:08 compute-0 ceph-mon[75011]: pgmap v243: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2 B/s, 1 objects/s recovering
Nov 22 03:29:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Nov 22 03:29:08 compute-0 ceph-mon[75011]: osdmap e118: 3 total, 3 up, 3 in
Nov 22 03:29:08 compute-0 ceph-mon[75011]: 2.1b scrub starts
Nov 22 03:29:08 compute-0 ceph-mon[75011]: 2.1b scrub ok
Nov 22 03:29:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2 B/s, 1 objects/s recovering
Nov 22 03:29:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Nov 22 03:29:09 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 22 03:29:09 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Nov 22 03:29:09 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Nov 22 03:29:09 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 118 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=94/95 n=6 ec=68/33 lis/c=94/94 les/c/f=95/95/0 sis=118 pruub=12.954506874s) [0] r=-1 lpr=118 pi=[94,118)/1 crt=65'583 mlcod 0'0 active pruub 216.383575439s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:09 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 118 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=94/95 n=6 ec=68/33 lis/c=94/94 les/c/f=95/95/0 sis=118 pruub=12.954405785s) [0] r=-1 lpr=118 pi=[94,118)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 216.383575439s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:29:09 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 118 pg[9.1c( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=94/94 les/c/f=95/95/0 sis=118) [0] r=0 lpr=118 pi=[94,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:29:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Nov 22 03:29:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Nov 22 03:29:09 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 22 03:29:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Nov 22 03:29:09 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Nov 22 03:29:09 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 119 pg[9.1c( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=94/94 les/c/f=95/95/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[94,119)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:09 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 119 pg[9.1c( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=94/94 les/c/f=95/95/0 sis=119) [0]/[2] r=-1 lpr=119 pi=[94,119)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:29:09 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 119 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=94/95 n=6 ec=68/33 lis/c=94/94 les/c/f=95/95/0 sis=119) [0]/[2] r=0 lpr=119 pi=[94,119)/1 crt=65'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:09 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 119 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=94/95 n=6 ec=68/33 lis/c=94/94 les/c/f=95/95/0 sis=119) [0]/[2] r=0 lpr=119 pi=[94,119)/1 crt=65'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:29:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Nov 22 03:29:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Nov 22 03:29:10 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Nov 22 03:29:10 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Nov 22 03:29:10 compute-0 ceph-mon[75011]: pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 2 B/s, 1 objects/s recovering
Nov 22 03:29:10 compute-0 ceph-mon[75011]: 3.6 scrub starts
Nov 22 03:29:10 compute-0 ceph-mon[75011]: 3.6 scrub ok
Nov 22 03:29:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Nov 22 03:29:10 compute-0 ceph-mon[75011]: osdmap e119: 3 total, 3 up, 3 in
Nov 22 03:29:10 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Nov 22 03:29:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Nov 22 03:29:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 22 03:29:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 120 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=119/120 n=6 ec=68/33 lis/c=94/94 les/c/f=95/95/0 sis=119) [0]/[2] async=[0] r=0 lpr=119 pi=[94,119)/1 crt=65'583 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:29:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Nov 22 03:29:11 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 22 03:29:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Nov 22 03:29:11 compute-0 ceph-mon[75011]: osdmap e120: 3 total, 3 up, 3 in
Nov 22 03:29:11 compute-0 ceph-mon[75011]: 7.1a scrub starts
Nov 22 03:29:11 compute-0 ceph-mon[75011]: 7.1a scrub ok
Nov 22 03:29:11 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Nov 22 03:29:11 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Nov 22 03:29:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 121 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=121 pruub=14.601915359s) [0] r=-1 lpr=121 pi=[85,121)/1 crt=65'583 mlcod 0'0 active pruub 220.406661987s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 121 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=121 pruub=14.601821899s) [0] r=-1 lpr=121 pi=[85,121)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 220.406661987s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:29:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 121 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=119/120 n=6 ec=68/33 lis/c=119/94 les/c/f=120/95/0 sis=121 pruub=15.641610146s) [0] async=[0] r=-1 lpr=121 pi=[94,121)/1 crt=65'583 mlcod 65'583 active pruub 221.446624756s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:11 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 121 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=119/120 n=6 ec=68/33 lis/c=119/94 les/c/f=120/95/0 sis=121 pruub=15.641337395s) [0] r=-1 lpr=121 pi=[94,121)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 221.446624756s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:29:11 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 121 pg[9.1e( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=121) [0] r=0 lpr=121 pi=[85,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:29:11 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 121 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=119/94 les/c/f=120/95/0 sis=121) [0] r=0 lpr=121 pi=[94,121)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:11 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 121 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=119/94 les/c/f=120/95/0 sis=121) [0] r=0 lpr=121 pi=[94,121)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:29:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:29:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Nov 22 03:29:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Nov 22 03:29:12 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Nov 22 03:29:12 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 122 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=122) [0]/[2] r=0 lpr=122 pi=[85,122)/1 crt=65'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:12 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 122 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=122) [0]/[2] r=0 lpr=122 pi=[85,122)/1 crt=65'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:29:12 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 122 pg[9.1e( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[85,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:12 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 122 pg[9.1e( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[85,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:29:12 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 122 pg[9.1c( v 65'583 (0'0,65'583] local-lis/les=121/122 n=6 ec=68/33 lis/c=119/94 les/c/f=120/95/0 sis=121) [0] r=0 lpr=121 pi=[94,121)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:29:12 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Nov 22 03:29:12 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Nov 22 03:29:12 compute-0 ceph-mon[75011]: pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:12 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Nov 22 03:29:12 compute-0 ceph-mon[75011]: osdmap e121: 3 total, 3 up, 3 in
Nov 22 03:29:12 compute-0 ceph-mon[75011]: osdmap e122: 3 total, 3 up, 3 in
Nov 22 03:29:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Nov 22 03:29:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Nov 22 03:29:13 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Nov 22 03:29:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Nov 22 03:29:13 compute-0 rsyslogd[1007]: imjournal from <np0005531666:ceph-mgr>: begin to drop messages due to rate-limiting
Nov 22 03:29:13 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 123 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=122/123 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=122) [0]/[2] async=[0] r=0 lpr=122 pi=[85,122)/1 crt=65'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:29:13 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Nov 22 03:29:13 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Nov 22 03:29:13 compute-0 ceph-mon[75011]: 8.15 scrub starts
Nov 22 03:29:13 compute-0 ceph-mon[75011]: 8.15 scrub ok
Nov 22 03:29:13 compute-0 ceph-mon[75011]: osdmap e123: 3 total, 3 up, 3 in
Nov 22 03:29:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Nov 22 03:29:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Nov 22 03:29:14 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Nov 22 03:29:14 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 124 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=122/123 n=6 ec=68/33 lis/c=122/85 les/c/f=123/86/0 sis=124 pruub=15.261218071s) [0] async=[0] r=-1 lpr=124 pi=[85,124)/1 crt=65'583 mlcod 65'583 active pruub 223.388717651s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:14 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 124 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=122/123 n=6 ec=68/33 lis/c=122/85 les/c/f=123/86/0 sis=124 pruub=15.260874748s) [0] r=-1 lpr=124 pi=[85,124)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 223.388717651s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:29:14 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 124 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=122/85 les/c/f=123/86/0 sis=124) [0] r=0 lpr=124 pi=[85,124)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:14 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 124 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=122/85 les/c/f=123/86/0 sis=124) [0] r=0 lpr=124 pi=[85,124)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:29:14 compute-0 ceph-mon[75011]: pgmap v252: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 29 B/s, 1 objects/s recovering
Nov 22 03:29:14 compute-0 ceph-mon[75011]: 3.7 scrub starts
Nov 22 03:29:14 compute-0 ceph-mon[75011]: 3.7 scrub ok
Nov 22 03:29:14 compute-0 ceph-mon[75011]: osdmap e124: 3 total, 3 up, 3 in
Nov 22 03:29:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Nov 22 03:29:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Nov 22 03:29:15 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Nov 22 03:29:15 compute-0 ceph-osd[88575]: osd.0 pg_epoch: 125 pg[9.1e( v 65'583 (0'0,65'583] local-lis/les=124/125 n=6 ec=68/33 lis/c=122/85 les/c/f=123/86/0 sis=124) [0] r=0 lpr=124 pi=[85,124)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:29:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 30 B/s, 1 objects/s recovering
Nov 22 03:29:15 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Nov 22 03:29:15 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Nov 22 03:29:16 compute-0 ceph-mon[75011]: osdmap e125: 3 total, 3 up, 3 in
Nov 22 03:29:16 compute-0 ceph-mon[75011]: 2.7 scrub starts
Nov 22 03:29:16 compute-0 ceph-mon[75011]: 2.7 scrub ok
Nov 22 03:29:16 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Nov 22 03:29:16 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Nov 22 03:29:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:29:17 compute-0 ceph-mon[75011]: pgmap v255: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 30 B/s, 1 objects/s recovering
Nov 22 03:29:17 compute-0 ceph-mon[75011]: 3.1e scrub starts
Nov 22 03:29:17 compute-0 ceph-mon[75011]: 3.1e scrub ok
Nov 22 03:29:17 compute-0 sudo[107948]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 1 objects/s recovering
Nov 22 03:29:17 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.c scrub starts
Nov 22 03:29:17 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.c scrub ok
Nov 22 03:29:17 compute-0 sudo[108242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otpqsayrniznvphnxyrjwbshyrhofxjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782157.6496327-137-76299293923316/AnsiballZ_command.py'
Nov 22 03:29:17 compute-0 sudo[108242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:18 compute-0 python3.9[108244]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:29:18 compute-0 ceph-mon[75011]: 5.c scrub starts
Nov 22 03:29:18 compute-0 ceph-mon[75011]: 5.c scrub ok
Nov 22 03:29:18 compute-0 sudo[108242]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:19 compute-0 ceph-mon[75011]: pgmap v256: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 20 B/s, 1 objects/s recovering
Nov 22 03:29:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 22 03:29:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Nov 22 03:29:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:29:19 compute-0 sudo[108529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkehoaihbuhhaxyzkokxzsykfpqpytei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782159.2155013-145-56135097907878/AnsiballZ_selinux.py'
Nov 22 03:29:19 compute-0 sudo[108529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:20 compute-0 python3.9[108531]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 22 03:29:20 compute-0 sudo[108529]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Nov 22 03:29:20 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:29:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Nov 22 03:29:20 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Nov 22 03:29:20 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 126 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=126 pruub=14.217867851s) [1] r=-1 lpr=126 pi=[85,126)/1 crt=65'583 mlcod 0'0 active pruub 228.412857056s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:20 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 126 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=126 pruub=14.217772484s) [1] r=-1 lpr=126 pi=[85,126)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 228.412857056s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:29:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Nov 22 03:29:20 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 126 pg[9.1f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=126) [1] r=0 lpr=126 pi=[85,126)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:29:20 compute-0 sudo[108681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrwwodikrnvdshpqjnrpmuflcmonudaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782160.5552094-156-140019100382937/AnsiballZ_command.py'
Nov 22 03:29:20 compute-0 sudo[108681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:20 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Nov 22 03:29:20 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Nov 22 03:29:20 compute-0 python3.9[108683]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 22 03:29:20 compute-0 sudo[108681]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Nov 22 03:29:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Nov 22 03:29:21 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Nov 22 03:29:21 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 127 pg[9.1f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=127) [1]/[2] r=-1 lpr=127 pi=[85,127)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:21 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 127 pg[9.1f( empty local-lis/les=0/0 n=0 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=127) [1]/[2] r=-1 lpr=127 pi=[85,127)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Nov 22 03:29:21 compute-0 ceph-mon[75011]: pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Nov 22 03:29:21 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Nov 22 03:29:21 compute-0 ceph-mon[75011]: osdmap e126: 3 total, 3 up, 3 in
Nov 22 03:29:21 compute-0 ceph-mon[75011]: 4.11 scrub starts
Nov 22 03:29:21 compute-0 ceph-mon[75011]: 4.11 scrub ok
Nov 22 03:29:21 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 127 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=127) [1]/[2] r=0 lpr=127 pi=[85,127)/1 crt=65'583 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:21 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 127 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=85/86 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=127) [1]/[2] r=0 lpr=127 pi=[85,127)/1 crt=65'583 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Nov 22 03:29:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:21 compute-0 sudo[108833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddgmjxljydnsoqzdfacevhkpxfmwpxkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782161.2783546-164-41310338910892/AnsiballZ_file.py'
Nov 22 03:29:21 compute-0 sudo[108833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:21 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Nov 22 03:29:21 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Nov 22 03:29:21 compute-0 python3.9[108835]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:29:21 compute-0 sudo[108833]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:29:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Nov 22 03:29:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Nov 22 03:29:22 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Nov 22 03:29:22 compute-0 ceph-mon[75011]: osdmap e127: 3 total, 3 up, 3 in
Nov 22 03:29:22 compute-0 ceph-mon[75011]: 5.1 scrub starts
Nov 22 03:29:22 compute-0 ceph-mon[75011]: 5.1 scrub ok
Nov 22 03:29:22 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 128 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=127/128 n=6 ec=68/33 lis/c=85/85 les/c/f=86/86/0 sis=127) [1]/[2] async=[1] r=0 lpr=127 pi=[85,127)/1 crt=65'583 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:29:22 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.c scrub starts
Nov 22 03:29:22 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.c scrub ok
Nov 22 03:29:22 compute-0 sudo[108986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfkfswnentznvddxgqdulmapbvbbdlws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782162.1270697-172-32501505795708/AnsiballZ_mount.py'
Nov 22 03:29:22 compute-0 sudo[108986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:22 compute-0 sudo[108989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:29:22 compute-0 sudo[108989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:22 compute-0 sudo[108989]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:22 compute-0 python3.9[108988]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 22 03:29:22 compute-0 sudo[108986]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:22 compute-0 sudo[109014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:29:22 compute-0 sudo[109014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:22 compute-0 sudo[109014]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:22 compute-0 sudo[109045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:29:22 compute-0 sudo[109045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:22 compute-0 sudo[109045]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:23 compute-0 sudo[109088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:29:23 compute-0 sudo[109088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Nov 22 03:29:23 compute-0 ceph-mon[75011]: pgmap v260: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:23 compute-0 ceph-mon[75011]: osdmap e128: 3 total, 3 up, 3 in
Nov 22 03:29:23 compute-0 ceph-mon[75011]: 8.c scrub starts
Nov 22 03:29:23 compute-0 ceph-mon[75011]: 8.c scrub ok
Nov 22 03:29:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Nov 22 03:29:23 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Nov 22 03:29:23 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 129 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=127/85 les/c/f=128/86/0 sis=129) [1] r=0 lpr=129 pi=[85,129)/1 luod=0'0 crt=65'583 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:23 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 129 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=0/0 n=6 ec=68/33 lis/c=127/85 les/c/f=128/86/0 sis=129) [1] r=0 lpr=129 pi=[85,129)/1 crt=65'583 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Nov 22 03:29:23 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 129 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=127/128 n=6 ec=68/33 lis/c=127/85 les/c/f=128/86/0 sis=129 pruub=15.058409691s) [1] async=[1] r=-1 lpr=129 pi=[85,129)/1 crt=65'583 mlcod 65'583 active pruub 232.344818115s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Nov 22 03:29:23 compute-0 ceph-osd[90752]: osd.2 pg_epoch: 129 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=127/128 n=6 ec=68/33 lis/c=127/85 les/c/f=128/86/0 sis=129 pruub=15.057771683s) [1] r=-1 lpr=129 pi=[85,129)/1 crt=65'583 mlcod 0'0 unknown NOTIFY pruub 232.344818115s@ mbc={}] state<Start>: transitioning to Stray
Nov 22 03:29:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 1 activating+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6/244 objects misplaced (2.459%)
Nov 22 03:29:23 compute-0 sudo[109088]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:29:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:29:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:29:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:29:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:29:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:29:23 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 0b998336-268e-4b91-bf46-c7e0881eddc8 does not exist
Nov 22 03:29:23 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev d885a11f-ce18-4989-a716-ff6270248bae does not exist
Nov 22 03:29:23 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 8a63b19a-b722-464b-8832-f71f99d5955d does not exist
Nov 22 03:29:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:29:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:29:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:29:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:29:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:29:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:29:23 compute-0 sudo[109145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:29:23 compute-0 sudo[109145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:23 compute-0 sudo[109145]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:23 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.a scrub starts
Nov 22 03:29:23 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.a scrub ok
Nov 22 03:29:23 compute-0 sudo[109196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:29:23 compute-0 sudo[109196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:23 compute-0 sudo[109196]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:23 compute-0 sudo[109248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:29:23 compute-0 sudo[109248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:23 compute-0 sudo[109248]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:23 compute-0 sudo[109299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:29:23 compute-0 sudo[109299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:23 compute-0 sudo[109370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aetqgyugabpgdfyxsorivfibthsrqcrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782163.7670774-200-207996001470887/AnsiballZ_file.py'
Nov 22 03:29:23 compute-0 sudo[109370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Nov 22 03:29:24 compute-0 podman[109413]: 2025-11-22 03:29:24.303796089 +0000 UTC m=+0.066685104 container create 91ae096772eba530e44ca28b79b858ad59ff63e931341c91b95efbf349dfcc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:29:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Nov 22 03:29:24 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Nov 22 03:29:24 compute-0 python3.9[109372]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:29:24 compute-0 systemd[1]: Started libpod-conmon-91ae096772eba530e44ca28b79b858ad59ff63e931341c91b95efbf349dfcc18.scope.
Nov 22 03:29:24 compute-0 ceph-mon[75011]: osdmap e129: 3 total, 3 up, 3 in
Nov 22 03:29:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:29:24 compute-0 sudo[109370]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:29:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:29:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:29:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:29:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:29:24 compute-0 ceph-mon[75011]: 2.a scrub starts
Nov 22 03:29:24 compute-0 ceph-mon[75011]: 2.a scrub ok
Nov 22 03:29:24 compute-0 ceph-osd[89585]: osd.1 pg_epoch: 130 pg[9.1f( v 65'583 (0'0,65'583] local-lis/les=129/130 n=6 ec=68/33 lis/c=127/85 les/c/f=128/86/0 sis=129) [1] r=0 lpr=129 pi=[85,129)/1 crt=65'583 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Nov 22 03:29:24 compute-0 podman[109413]: 2025-11-22 03:29:24.27576073 +0000 UTC m=+0.038649765 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:29:24 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:29:24 compute-0 podman[109413]: 2025-11-22 03:29:24.390209158 +0000 UTC m=+0.153098183 container init 91ae096772eba530e44ca28b79b858ad59ff63e931341c91b95efbf349dfcc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:29:24 compute-0 podman[109413]: 2025-11-22 03:29:24.396740128 +0000 UTC m=+0.159629133 container start 91ae096772eba530e44ca28b79b858ad59ff63e931341c91b95efbf349dfcc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jackson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:29:24 compute-0 vigilant_jackson[109430]: 167 167
Nov 22 03:29:24 compute-0 systemd[1]: libpod-91ae096772eba530e44ca28b79b858ad59ff63e931341c91b95efbf349dfcc18.scope: Deactivated successfully.
Nov 22 03:29:24 compute-0 podman[109413]: 2025-11-22 03:29:24.486044057 +0000 UTC m=+0.248933152 container attach 91ae096772eba530e44ca28b79b858ad59ff63e931341c91b95efbf349dfcc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jackson, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:29:24 compute-0 podman[109413]: 2025-11-22 03:29:24.486728239 +0000 UTC m=+0.249617284 container died 91ae096772eba530e44ca28b79b858ad59ff63e931341c91b95efbf349dfcc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jackson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 03:29:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a94f5283050835a7a64b33888a85dded3adbb6054d239fcdc3cd81f4043a96de-merged.mount: Deactivated successfully.
Nov 22 03:29:24 compute-0 podman[109413]: 2025-11-22 03:29:24.721554888 +0000 UTC m=+0.484443923 container remove 91ae096772eba530e44ca28b79b858ad59ff63e931341c91b95efbf349dfcc18 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:29:24 compute-0 systemd[1]: libpod-conmon-91ae096772eba530e44ca28b79b858ad59ff63e931341c91b95efbf349dfcc18.scope: Deactivated successfully.
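The block above is one complete lifecycle of a short-lived cephadm helper container: image pull, container create, the systemd transient scopes (libpod-conmon-<id>.scope for the conmon monitor, libpod-<id>.scope for the container itself), then init, start, attach, died, and remove once the entrypoint exits. The "167 167" printed by vigilant_jackson is the ceph uid/gid inside the image; the log does not show the actual entrypoint, so the probe command in the sketch below is an assumption. A minimal way to reproduce the same event sequence with podman (image digest taken from the log):

import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

# Run a throwaway container; --rm removes it on exit, which is what yields
# the create/init/start/attach/died/remove event sequence seen above.
# The uid/gid probe of /var/lib/ceph is an assumed stand-in entrypoint.
result = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=False,
)
print(result.stdout.strip())  # the log shows "167 167" (ceph uid/gid)

The matching lifecycle events can be watched live with podman events while the container runs.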
Nov 22 03:29:24 compute-0 podman[109556]: 2025-11-22 03:29:24.945648416 +0000 UTC m=+0.062510247 container create 5ac26e12d074aa81fe240f2c69f3f6186555b4e0e5d7d455a363258296c9b041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_poitras, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 03:29:24 compute-0 systemd[1]: Started libpod-conmon-5ac26e12d074aa81fe240f2c69f3f6186555b4e0e5d7d455a363258296c9b041.scope.
Nov 22 03:29:25 compute-0 podman[109556]: 2025-11-22 03:29:24.926638172 +0000 UTC m=+0.043500053 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:29:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:29:25 compute-0 sudo[109624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsfyxsvbvrzwqyhjaptcqyztfwaqyxpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782164.6382532-208-113073665720926/AnsiballZ_stat.py'
Nov 22 03:29:25 compute-0 sudo[109624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ebb14c13f6d7f236cbe76e5cdf8b58cf3a0dfb1cd549d97e5d6299f6ee9a02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ebb14c13f6d7f236cbe76e5cdf8b58cf3a0dfb1cd549d97e5d6299f6ee9a02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ebb14c13f6d7f236cbe76e5cdf8b58cf3a0dfb1cd549d97e5d6299f6ee9a02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ebb14c13f6d7f236cbe76e5cdf8b58cf3a0dfb1cd549d97e5d6299f6ee9a02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:29:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63ebb14c13f6d7f236cbe76e5cdf8b58cf3a0dfb1cd549d97e5d6299f6ee9a02/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
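These kernel lines are informational, not errors: an XFS filesystem created without the bigtime feature stores timestamps that run out in 2038 (0x7fffffff), and the kernel notes this on each remount, here once for every path podman bind-mounts into the container. A quick check using standard XFS tooling, as a sketch (the mountpoint is an example, not taken from the log):

import subprocess

# Report whether an XFS filesystem has big timestamps (y2038-safe) enabled.
# "bigtime=1" in xfs_info output means timestamps extend beyond 2038.
def has_bigtime(mountpoint="/var/lib/containers"):
    out = subprocess.run(["xfs_info", mountpoint],
                         capture_output=True, text=True, check=True).stdout
    return "bigtime=1" in out

print(has_bigtime())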
Nov 22 03:29:25 compute-0 podman[109556]: 2025-11-22 03:29:25.044905955 +0000 UTC m=+0.161767806 container init 5ac26e12d074aa81fe240f2c69f3f6186555b4e0e5d7d455a363258296c9b041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_poitras, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:29:25 compute-0 podman[109556]: 2025-11-22 03:29:25.061152848 +0000 UTC m=+0.178014719 container start 5ac26e12d074aa81fe240f2c69f3f6186555b4e0e5d7d455a363258296c9b041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_poitras, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:29:25 compute-0 podman[109556]: 2025-11-22 03:29:25.078558338 +0000 UTC m=+0.195420209 container attach 5ac26e12d074aa81fe240f2c69f3f6186555b4e0e5d7d455a363258296c9b041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:29:25 compute-0 python3.9[109626]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:29:25 compute-0 rsyslogd[1007]: imjournal: 182 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 03:29:25 compute-0 sudo[109624]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:25 compute-0 ceph-mon[75011]: pgmap v263: 305 pgs: 1 activating+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6/244 objects misplaced (2.459%)
Nov 22 03:29:25 compute-0 ceph-mon[75011]: osdmap e130: 3 total, 3 up, 3 in
Nov 22 03:29:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 1 activating+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 245 B/s wr, 11 op/s; 6/244 objects misplaced (2.459%); 52 B/s, 2 objects/s recovering
Nov 22 03:29:25 compute-0 sudo[109704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arsswbkhjhpqqeohcqjzxjzszrhohrfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782164.6382532-208-113073665720926/AnsiballZ_file.py'
Nov 22 03:29:25 compute-0 sudo[109704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:25 compute-0 python3.9[109706]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:29:25 compute-0 sudo[109704]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:26 compute-0 loving_poitras[109619]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:29:26 compute-0 loving_poitras[109619]: --> relative data size: 1.0
Nov 22 03:29:26 compute-0 loving_poitras[109619]: --> All data devices are unavailable
Nov 22 03:29:26 compute-0 systemd[1]: libpod-5ac26e12d074aa81fe240f2c69f3f6186555b4e0e5d7d455a363258296c9b041.scope: Deactivated successfully.
Nov 22 03:29:26 compute-0 systemd[1]: libpod-5ac26e12d074aa81fe240f2c69f3f6186555b4e0e5d7d455a363258296c9b041.scope: Consumed 1.104s CPU time.
Nov 22 03:29:26 compute-0 podman[109556]: 2025-11-22 03:29:26.230131802 +0000 UTC m=+1.346993672 container died 5ac26e12d074aa81fe240f2c69f3f6186555b4e0e5d7d455a363258296c9b041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_poitras, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:29:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-63ebb14c13f6d7f236cbe76e5cdf8b58cf3a0dfb1cd549d97e5d6299f6ee9a02-merged.mount: Deactivated successfully.
Nov 22 03:29:26 compute-0 podman[109556]: 2025-11-22 03:29:26.324787016 +0000 UTC m=+1.441648877 container remove 5ac26e12d074aa81fe240f2c69f3f6186555b4e0e5d7d455a363258296c9b041 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:29:26 compute-0 systemd[1]: libpod-conmon-5ac26e12d074aa81fe240f2c69f3f6186555b4e0e5d7d455a363258296c9b041.scope: Deactivated successfully.
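The "--> All data devices are unavailable" report from loving_poitras is ceph-volume telling cephadm that the three LVM data devices in the drive group are already consumed by existing OSDs, so there is nothing new to prepare, and the container exits normally. Device availability can be inspected directly with ceph-volume's inventory command; the sketch below assumes a host where ceph-volume is installed (typically inside the cephadm shell), with field names following ceph-volume's JSON inventory output:

import json
import subprocess

# `ceph-volume inventory --format json` reports, per device, whether it is
# available for a new OSD and the reasons it was rejected otherwise.
out = subprocess.run(
    ["ceph-volume", "inventory", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout

for dev in json.loads(out):
    status = ("available" if dev["available"]
              else "rejected: " + ", ".join(dev["rejected_reasons"]))
    print(dev["path"], status)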
Nov 22 03:29:26 compute-0 sudo[109299]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:26 compute-0 sudo[109809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:29:26 compute-0 sudo[109809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:26 compute-0 sudo[109809]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:26 compute-0 sudo[109854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:29:26 compute-0 sudo[109854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:26 compute-0 sudo[109854]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:26 compute-0 sudo[109910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:29:26 compute-0 sudo[109910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:26 compute-0 sudo[109910]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:26 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.f scrub starts
Nov 22 03:29:26 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.f scrub ok
Nov 22 03:29:26 compute-0 sudo[109939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:29:26 compute-0 sudo[109939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:26 compute-0 sudo[109990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnuwwlkkxeunpdvwhcyjmgrxtxmhslia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782166.4443128-229-174568867925124/AnsiballZ_stat.py'
Nov 22 03:29:26 compute-0 sudo[109990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:26 compute-0 python3.9[109992]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:29:26 compute-0 sudo[109990]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:29:27 compute-0 podman[110059]: 2025-11-22 03:29:27.044601299 +0000 UTC m=+0.021973787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:29:27 compute-0 podman[110059]: 2025-11-22 03:29:27.172544621 +0000 UTC m=+0.149917009 container create aaac5bb2f178654f49b2a14c43904c0876df27d1a5767cd64435832ea9155a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:29:27 compute-0 systemd[1]: Started libpod-conmon-aaac5bb2f178654f49b2a14c43904c0876df27d1a5767cd64435832ea9155a62.scope.
Nov 22 03:29:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:29:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 8 op/s; 54 B/s, 2 objects/s recovering
Nov 22 03:29:27 compute-0 podman[110059]: 2025-11-22 03:29:27.441645665 +0000 UTC m=+0.419018073 container init aaac5bb2f178654f49b2a14c43904c0876df27d1a5767cd64435832ea9155a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Nov 22 03:29:27 compute-0 podman[110059]: 2025-11-22 03:29:27.453700501 +0000 UTC m=+0.431072919 container start aaac5bb2f178654f49b2a14c43904c0876df27d1a5767cd64435832ea9155a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hawking, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:29:27 compute-0 ceph-mon[75011]: pgmap v265: 305 pgs: 1 activating+remapped, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 245 B/s wr, 11 op/s; 6/244 objects misplaced (2.459%); 52 B/s, 2 objects/s recovering
Nov 22 03:29:27 compute-0 ceph-mon[75011]: 5.f scrub starts
Nov 22 03:29:27 compute-0 ceph-mon[75011]: 5.f scrub ok
Nov 22 03:29:27 compute-0 zen_hawking[110076]: 167 167
Nov 22 03:29:27 compute-0 systemd[1]: libpod-aaac5bb2f178654f49b2a14c43904c0876df27d1a5767cd64435832ea9155a62.scope: Deactivated successfully.
Nov 22 03:29:27 compute-0 podman[110059]: 2025-11-22 03:29:27.538748842 +0000 UTC m=+0.516121240 container attach aaac5bb2f178654f49b2a14c43904c0876df27d1a5767cd64435832ea9155a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:29:27 compute-0 podman[110059]: 2025-11-22 03:29:27.539962396 +0000 UTC m=+0.517334824 container died aaac5bb2f178654f49b2a14c43904c0876df27d1a5767cd64435832ea9155a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hawking, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:29:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-69096cac3b9e2c6d0ddee6951552d6b489a529ba246bb26f9cb91ae768c1eb30-merged.mount: Deactivated successfully.
Nov 22 03:29:27 compute-0 podman[110059]: 2025-11-22 03:29:27.777633325 +0000 UTC m=+0.755005753 container remove aaac5bb2f178654f49b2a14c43904c0876df27d1a5767cd64435832ea9155a62 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_hawking, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:29:27 compute-0 systemd[1]: libpod-conmon-aaac5bb2f178654f49b2a14c43904c0876df27d1a5767cd64435832ea9155a62.scope: Deactivated successfully.
Nov 22 03:29:27 compute-0 podman[110198]: 2025-11-22 03:29:27.959086591 +0000 UTC m=+0.041155503 container create 6019a4471d07d7310cbcf0d946648f0dc45d74625e2200e161623b5c96c1da1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cerf, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:29:27 compute-0 systemd[1]: Started libpod-conmon-6019a4471d07d7310cbcf0d946648f0dc45d74625e2200e161623b5c96c1da1b.scope.
Nov 22 03:29:27 compute-0 sudo[110241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocygvgjffiqepqoykornrdmvwzdzoprn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782167.5957355-242-278667824172905/AnsiballZ_getent.py'
Nov 22 03:29:27 compute-0 sudo[110241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94604d66d8663d532e4797e427154819ce0777b911c47d2380170a5c7d47abe3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94604d66d8663d532e4797e427154819ce0777b911c47d2380170a5c7d47abe3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94604d66d8663d532e4797e427154819ce0777b911c47d2380170a5c7d47abe3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:29:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94604d66d8663d532e4797e427154819ce0777b911c47d2380170a5c7d47abe3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:29:28 compute-0 podman[110198]: 2025-11-22 03:29:28.025995312 +0000 UTC m=+0.108064254 container init 6019a4471d07d7310cbcf0d946648f0dc45d74625e2200e161623b5c96c1da1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cerf, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:29:28 compute-0 podman[110198]: 2025-11-22 03:29:28.032718829 +0000 UTC m=+0.114787761 container start 6019a4471d07d7310cbcf0d946648f0dc45d74625e2200e161623b5c96c1da1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cerf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:29:28 compute-0 podman[110198]: 2025-11-22 03:29:28.036292212 +0000 UTC m=+0.118361134 container attach 6019a4471d07d7310cbcf0d946648f0dc45d74625e2200e161623b5c96c1da1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cerf, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:29:28 compute-0 podman[110198]: 2025-11-22 03:29:27.943983866 +0000 UTC m=+0.026052798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:29:28 compute-0 python3.9[110247]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 22 03:29:28 compute-0 sudo[110241]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:28 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Nov 22 03:29:28 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Nov 22 03:29:28 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Nov 22 03:29:28 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Nov 22 03:29:28 compute-0 ceph-mon[75011]: pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 170 B/s wr, 8 op/s; 54 B/s, 2 objects/s recovering
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]: {
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:     "0": [
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:         {
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "devices": [
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "/dev/loop3"
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             ],
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_name": "ceph_lv0",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_size": "21470642176",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "name": "ceph_lv0",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "tags": {
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.cluster_name": "ceph",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.crush_device_class": "",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.encrypted": "0",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.osd_id": "0",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.type": "block",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.vdo": "0"
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             },
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "type": "block",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "vg_name": "ceph_vg0"
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:         }
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:     ],
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:     "1": [
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:         {
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "devices": [
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "/dev/loop4"
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             ],
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_name": "ceph_lv1",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_size": "21470642176",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "name": "ceph_lv1",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "tags": {
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.cluster_name": "ceph",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.crush_device_class": "",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.encrypted": "0",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.osd_id": "1",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.type": "block",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.vdo": "0"
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             },
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "type": "block",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "vg_name": "ceph_vg1"
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:         }
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:     ],
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:     "2": [
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:         {
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "devices": [
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "/dev/loop5"
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             ],
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_name": "ceph_lv2",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_size": "21470642176",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "name": "ceph_lv2",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "tags": {
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.cluster_name": "ceph",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.crush_device_class": "",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.encrypted": "0",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.osd_id": "2",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.type": "block",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:                 "ceph.vdo": "0"
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             },
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "type": "block",
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:             "vg_name": "ceph_vg2"
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:         }
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]:     ]
Nov 22 03:29:28 compute-0 pedantic_cerf[110245]: }
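The JSON emitted by pedantic_cerf is the output of the `ceph-volume ... lvm list --format json` call issued at 03:29:26: a map from OSD id to its logical volumes, each carrying the backing devices and the ceph.* LV tags that bind the LV to the cluster fsid and the OSD fsid. A minimal parsing sketch, assuming the output has been saved to a file (the key names are taken verbatim from the listing above):

import json

# Parse `ceph-volume lvm list --format json` output saved to a file.
with open("lvm_list.json") as f:
    lvm = json.load(f)

for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: lv={lv['lv_path']} "
              f"devices={','.join(lv['devices'])} "
              f"osd_fsid={tags['ceph.osd_fsid']} "
              f"cluster_fsid={tags['ceph.cluster_fsid']}")

For the cluster above this yields three lines, one per OSD, mapping osd.0/1/2 to /dev/ceph_vg0/ceph_lv0 on /dev/loop3, /dev/ceph_vg1/ceph_lv1 on /dev/loop4, and /dev/ceph_vg2/ceph_lv2 on /dev/loop5.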
Nov 22 03:29:28 compute-0 systemd[1]: libpod-6019a4471d07d7310cbcf0d946648f0dc45d74625e2200e161623b5c96c1da1b.scope: Deactivated successfully.
Nov 22 03:29:28 compute-0 podman[110198]: 2025-11-22 03:29:28.771073198 +0000 UTC m=+0.853142150 container died 6019a4471d07d7310cbcf0d946648f0dc45d74625e2200e161623b5c96c1da1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:29:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-94604d66d8663d532e4797e427154819ce0777b911c47d2380170a5c7d47abe3-merged.mount: Deactivated successfully.
Nov 22 03:29:28 compute-0 sudo[110415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyotngafveltqhfltrfuullxqxkyvbsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782168.589684-252-80356401173791/AnsiballZ_getent.py'
Nov 22 03:29:28 compute-0 sudo[110415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:28 compute-0 podman[110198]: 2025-11-22 03:29:28.885144407 +0000 UTC m=+0.967213329 container remove 6019a4471d07d7310cbcf0d946648f0dc45d74625e2200e161623b5c96c1da1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:29:28 compute-0 systemd[1]: libpod-conmon-6019a4471d07d7310cbcf0d946648f0dc45d74625e2200e161623b5c96c1da1b.scope: Deactivated successfully.
Nov 22 03:29:28 compute-0 sudo[109939]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:29 compute-0 sudo[110419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:29:29 compute-0 sudo[110419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:29 compute-0 sudo[110419]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:29 compute-0 python3.9[110418]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 22 03:29:29 compute-0 sudo[110415]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:29 compute-0 sudo[110444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:29:29 compute-0 sudo[110444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:29 compute-0 sudo[110444]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:29 compute-0 sudo[110470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:29:29 compute-0 sudo[110470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:29 compute-0 sudo[110470]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:29 compute-0 sudo[110519]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:29:29 compute-0 sudo[110519]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 143 B/s wr, 6 op/s; 46 B/s, 2 objects/s recovering
Nov 22 03:29:29 compute-0 ceph-mon[75011]: 8.10 scrub starts
Nov 22 03:29:29 compute-0 ceph-mon[75011]: 8.10 scrub ok
Nov 22 03:29:29 compute-0 ceph-mon[75011]: 2.9 scrub starts
Nov 22 03:29:29 compute-0 ceph-mon[75011]: 2.9 scrub ok
Nov 22 03:29:29 compute-0 podman[110637]: 2025-11-22 03:29:29.748019716 +0000 UTC m=+0.090037940 container create addb36123996ca805b9d88a97c37549c4ae966a7e7656f9ca5227b3962364d5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ramanujan, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:29:29 compute-0 podman[110637]: 2025-11-22 03:29:29.677376925 +0000 UTC m=+0.019395119 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:29:29 compute-0 systemd[1]: Started libpod-conmon-addb36123996ca805b9d88a97c37549c4ae966a7e7656f9ca5227b3962364d5e.scope.
Nov 22 03:29:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:29:29 compute-0 podman[110637]: 2025-11-22 03:29:29.874535281 +0000 UTC m=+0.216553515 container init addb36123996ca805b9d88a97c37549c4ae966a7e7656f9ca5227b3962364d5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:29:29 compute-0 podman[110637]: 2025-11-22 03:29:29.884124644 +0000 UTC m=+0.226142878 container start addb36123996ca805b9d88a97c37549c4ae966a7e7656f9ca5227b3962364d5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 03:29:29 compute-0 sudo[110729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfxuvlcsphhydsovglszxuvimczyhlfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782169.465607-260-177871571182972/AnsiballZ_group.py'
Nov 22 03:29:29 compute-0 fervent_ramanujan[110700]: 167 167
Nov 22 03:29:29 compute-0 systemd[1]: libpod-addb36123996ca805b9d88a97c37549c4ae966a7e7656f9ca5227b3962364d5e.scope: Deactivated successfully.
Nov 22 03:29:29 compute-0 sudo[110729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:30 compute-0 podman[110637]: 2025-11-22 03:29:30.120306216 +0000 UTC m=+0.462324450 container attach addb36123996ca805b9d88a97c37549c4ae966a7e7656f9ca5227b3962364d5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ramanujan, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:29:30 compute-0 podman[110637]: 2025-11-22 03:29:30.121480614 +0000 UTC m=+0.463498838 container died addb36123996ca805b9d88a97c37549c4ae966a7e7656f9ca5227b3962364d5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:29:30 compute-0 python3.9[110733]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 03:29:30 compute-0 sudo[110729]: pam_unix(sudo:session): session closed for user root
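The getent and group invocations above verify the qemu account and ensure a hugetlbfs group with gid 42477 exists. The same lookups from Python's standard library, as a sketch that assumes the host already has both entries:

import grp
import pwd

# Mirror the getent lookups: resolve the qemu user and confirm the
# hugetlbfs group the playbook just ensured exists with gid 42477.
print(pwd.getpwnam("qemu").pw_uid)
print(grp.getgrnam("hugetlbfs").gr_gid)  # expected: 42477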
Nov 22 03:29:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8ae74dce38e19153da88ee492de7bd3b28d1cdacd087cba52fe4c5569bdb81f-merged.mount: Deactivated successfully.
Nov 22 03:29:30 compute-0 sudo[110894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzvbdzwksctxehjlrhtxrlemzsxiayvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782170.620766-269-117415316967594/AnsiballZ_file.py'
Nov 22 03:29:30 compute-0 sudo[110894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:30 compute-0 ceph-mon[75011]: pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 143 B/s wr, 6 op/s; 46 B/s, 2 objects/s recovering
Nov 22 03:29:31 compute-0 podman[110637]: 2025-11-22 03:29:31.101309599 +0000 UTC m=+1.443327833 container remove addb36123996ca805b9d88a97c37549c4ae966a7e7656f9ca5227b3962364d5e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:29:31 compute-0 systemd[1]: libpod-conmon-addb36123996ca805b9d88a97c37549c4ae966a7e7656f9ca5227b3962364d5e.scope: Deactivated successfully.
Nov 22 03:29:31 compute-0 python3.9[110896]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 22 03:29:31 compute-0 sudo[110894]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:31 compute-0 podman[110904]: 2025-11-22 03:29:31.308886306 +0000 UTC m=+0.029585047 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:29:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 22 03:29:31 compute-0 podman[110904]: 2025-11-22 03:29:31.689220826 +0000 UTC m=+0.409919567 container create 4fc2629f102cebec82b29a24a5277fed1c279ed11275ee94359880d6e29cc3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:29:31 compute-0 systemd[1]: Started libpod-conmon-4fc2629f102cebec82b29a24a5277fed1c279ed11275ee94359880d6e29cc3a2.scope.
Nov 22 03:29:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8b94ebd6a41f7471570c579d8a3d7aff8dd8b27cca08d111562260ca91d6c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8b94ebd6a41f7471570c579d8a3d7aff8dd8b27cca08d111562260ca91d6c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8b94ebd6a41f7471570c579d8a3d7aff8dd8b27cca08d111562260ca91d6c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:29:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef8b94ebd6a41f7471570c579d8a3d7aff8dd8b27cca08d111562260ca91d6c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:29:32 compute-0 podman[110904]: 2025-11-22 03:29:32.102941826 +0000 UTC m=+0.823640616 container init 4fc2629f102cebec82b29a24a5277fed1c279ed11275ee94359880d6e29cc3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:29:32 compute-0 podman[110904]: 2025-11-22 03:29:32.118631788 +0000 UTC m=+0.839330529 container start 4fc2629f102cebec82b29a24a5277fed1c279ed11275ee94359880d6e29cc3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:29:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:29:32 compute-0 sudo[111075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aupagpavyuoviflgyycjctqzuonzhsym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782171.8546102-280-106581284194000/AnsiballZ_dnf.py'
Nov 22 03:29:32 compute-0 sudo[111075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:32 compute-0 podman[110904]: 2025-11-22 03:29:32.294872025 +0000 UTC m=+1.015570726 container attach 4fc2629f102cebec82b29a24a5277fed1c279ed11275ee94359880d6e29cc3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:29:32 compute-0 python3.9[111077]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:29:32 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.d scrub starts
Nov 22 03:29:32 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.d scrub ok
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]: {
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "osd_id": 1,
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "type": "bluestore"
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:     },
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "osd_id": 0,
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "type": "bluestore"
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:     },
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "osd_id": 2,
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:         "type": "bluestore"
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]:     }
Nov 22 03:29:33 compute-0 compassionate_mccarthy[111005]: }
Nov 22 03:29:33 compute-0 systemd[1]: libpod-4fc2629f102cebec82b29a24a5277fed1c279ed11275ee94359880d6e29cc3a2.scope: Deactivated successfully.
Nov 22 03:29:33 compute-0 systemd[1]: libpod-4fc2629f102cebec82b29a24a5277fed1c279ed11275ee94359880d6e29cc3a2.scope: Consumed 1.079s CPU time.
Nov 22 03:29:33 compute-0 podman[110904]: 2025-11-22 03:29:33.207532017 +0000 UTC m=+1.928230768 container died 4fc2629f102cebec82b29a24a5277fed1c279ed11275ee94359880d6e29cc3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:29:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 22 03:29:33 compute-0 ceph-mon[75011]: pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Nov 22 03:29:33 compute-0 ceph-mon[75011]: 2.d scrub starts
Nov 22 03:29:33 compute-0 ceph-mon[75011]: 2.d scrub ok
Nov 22 03:29:34 compute-0 sudo[111075]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:34 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Nov 22 03:29:34 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Nov 22 03:29:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef8b94ebd6a41f7471570c579d8a3d7aff8dd8b27cca08d111562260ca91d6c4-merged.mount: Deactivated successfully.
Nov 22 03:29:34 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Nov 22 03:29:34 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Nov 22 03:29:34 compute-0 sudo[111269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohowprvviktklcdmizkhekfsnlyqthcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782174.6453226-288-84431516484696/AnsiballZ_file.py'
Nov 22 03:29:34 compute-0 sudo[111269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:35 compute-0 ceph-mon[75011]: pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Nov 22 03:29:35 compute-0 python3.9[111271]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:29:35 compute-0 sudo[111269]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:35 compute-0 podman[110904]: 2025-11-22 03:29:35.373804967 +0000 UTC m=+4.094503668 container remove 4fc2629f102cebec82b29a24a5277fed1c279ed11275ee94359880d6e29cc3a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:29:35 compute-0 systemd[1]: libpod-conmon-4fc2629f102cebec82b29a24a5277fed1c279ed11275ee94359880d6e29cc3a2.scope: Deactivated successfully.
Nov 22 03:29:35 compute-0 sudo[110519]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:35 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Nov 22 03:29:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:29:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 22 03:29:35 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Nov 22 03:29:35 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:29:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:29:35 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:29:35 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ad213031-5cf5-43e2-bc3a-432ae0cbc229 does not exist
Nov 22 03:29:35 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 7a453b01-89ea-42df-b0e4-082e8667eae5 does not exist
Nov 22 03:29:35 compute-0 sudo[111391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:29:35 compute-0 sudo[111391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:35 compute-0 sudo[111391]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:35 compute-0 sudo[111451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opveqalajzerewfjbotenhsjeeuafjeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782175.6011984-296-69893341990857/AnsiballZ_stat.py'
Nov 22 03:29:35 compute-0 sudo[111451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:35 compute-0 sudo[111444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:29:35 compute-0 sudo[111444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:29:35 compute-0 sudo[111444]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:36 compute-0 python3.9[111464]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:29:36 compute-0 sudo[111451]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:29:36
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'volumes', '.mgr', 'backups', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:29:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:29:36 compute-0 sudo[111549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjsakebmtofpxcktxkvbcpehtazyghth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782175.6011984-296-69893341990857/AnsiballZ_file.py'
Nov 22 03:29:36 compute-0 sudo[111549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:36 compute-0 ceph-mon[75011]: 3.1b scrub starts
Nov 22 03:29:36 compute-0 ceph-mon[75011]: 3.1b scrub ok
Nov 22 03:29:36 compute-0 ceph-mon[75011]: 6.8 scrub starts
Nov 22 03:29:36 compute-0 ceph-mon[75011]: 6.8 scrub ok
Nov 22 03:29:36 compute-0 ceph-mon[75011]: 7.3 scrub starts
Nov 22 03:29:36 compute-0 ceph-mon[75011]: 7.3 scrub ok
Nov 22 03:29:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:29:36 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:29:36 compute-0 python3.9[111551]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:29:36 compute-0 sudo[111549]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:36 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 6.f scrub starts
Nov 22 03:29:36 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 6.f scrub ok
Nov 22 03:29:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:29:37 compute-0 sudo[111701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhljgrbzhvgleylzkvjmotzncsafucjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782176.9514878-309-23325244496573/AnsiballZ_stat.py'
Nov 22 03:29:37 compute-0 sudo[111701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:37 compute-0 python3.9[111703]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:29:37 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Nov 22 03:29:37 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Nov 22 03:29:37 compute-0 sudo[111701]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:37 compute-0 ceph-mon[75011]: pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 9 B/s, 0 objects/s recovering
Nov 22 03:29:37 compute-0 ceph-mon[75011]: 6.f scrub starts
Nov 22 03:29:37 compute-0 ceph-mon[75011]: 6.f scrub ok
Nov 22 03:29:37 compute-0 sudo[111779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcbyibvpuazvncnpxbsctojvudhyxjin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782176.9514878-309-23325244496573/AnsiballZ_file.py'
Nov 22 03:29:37 compute-0 sudo[111779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:38 compute-0 python3.9[111781]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:29:38 compute-0 sudo[111779]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:38 compute-0 sudo[111931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmnxbswirhteegrcbzajxqqxpxvnbuxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782178.5429616-324-149382856702006/AnsiballZ_dnf.py'
Nov 22 03:29:38 compute-0 sudo[111931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:38 compute-0 ceph-mon[75011]: pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:38 compute-0 ceph-mon[75011]: 7.18 scrub starts
Nov 22 03:29:38 compute-0 ceph-mon[75011]: 7.18 scrub ok
Nov 22 03:29:39 compute-0 python3.9[111933]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:29:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:39 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Nov 22 03:29:39 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Nov 22 03:29:39 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Nov 22 03:29:39 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Nov 22 03:29:39 compute-0 ceph-mon[75011]: 3.1f scrub starts
Nov 22 03:29:39 compute-0 ceph-mon[75011]: 3.1f scrub ok
Nov 22 03:29:40 compute-0 sudo[111931]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:40 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.b scrub starts
Nov 22 03:29:40 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.b scrub ok
Nov 22 03:29:40 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Nov 22 03:29:40 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Nov 22 03:29:41 compute-0 ceph-mon[75011]: pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:41 compute-0 ceph-mon[75011]: 10.3 scrub starts
Nov 22 03:29:41 compute-0 ceph-mon[75011]: 10.3 scrub ok
Nov 22 03:29:41 compute-0 ceph-mon[75011]: 8.b scrub starts
Nov 22 03:29:41 compute-0 ceph-mon[75011]: 8.b scrub ok
Nov 22 03:29:41 compute-0 python3.9[112084]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:29:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:41 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Nov 22 03:29:41 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Nov 22 03:29:41 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Nov 22 03:29:41 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Nov 22 03:29:42 compute-0 ceph-mon[75011]: 10.5 scrub starts
Nov 22 03:29:42 compute-0 ceph-mon[75011]: 10.5 scrub ok
Nov 22 03:29:42 compute-0 ceph-mon[75011]: 7.1b scrub starts
Nov 22 03:29:42 compute-0 ceph-mon[75011]: 7.1b scrub ok
Nov 22 03:29:42 compute-0 ceph-mon[75011]: 5.9 scrub starts
Nov 22 03:29:42 compute-0 ceph-mon[75011]: 5.9 scrub ok
Nov 22 03:29:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:29:42 compute-0 python3.9[112236]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 22 03:29:43 compute-0 python3.9[112386]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:29:43 compute-0 ceph-mon[75011]: pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:44 compute-0 sudo[112536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahvfxaliwqxwnmfqedokderatbnukldf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782183.5349772-365-78110716154553/AnsiballZ_systemd.py'
Nov 22 03:29:44 compute-0 sudo[112536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:44 compute-0 python3.9[112538]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:29:44 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 22 03:29:44 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Nov 22 03:29:44 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 22 03:29:44 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 22 03:29:44 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 22 03:29:44 compute-0 sudo[112536]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:45 compute-0 ceph-mon[75011]: pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:45 compute-0 python3.9[112699]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:29:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:29:46 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Nov 22 03:29:46 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Nov 22 03:29:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:29:47 compute-0 ceph-mon[75011]: pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:47 compute-0 ceph-mon[75011]: 5.16 scrub starts
Nov 22 03:29:47 compute-0 ceph-mon[75011]: 5.16 scrub ok
Nov 22 03:29:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:47 compute-0 sudo[112849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyhojqnpyyivlaugiiryzbjgpokdzuyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782187.7313113-422-67183708098119/AnsiballZ_systemd.py'
Nov 22 03:29:47 compute-0 sudo[112849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:48 compute-0 python3.9[112851]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:29:48 compute-0 sudo[112849]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:48 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.1f deep-scrub starts
Nov 22 03:29:48 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 7.1f deep-scrub ok
Nov 22 03:29:48 compute-0 sudo[113003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbhcvqsbcimjsuugarphnnhxkrcqtanr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782188.6341436-422-210604222681316/AnsiballZ_systemd.py'
Nov 22 03:29:48 compute-0 sudo[113003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:49 compute-0 ceph-mon[75011]: pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:49 compute-0 ceph-mon[75011]: 7.1f deep-scrub starts
Nov 22 03:29:49 compute-0 ceph-mon[75011]: 7.1f deep-scrub ok
Nov 22 03:29:49 compute-0 python3.9[113005]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:29:49 compute-0 sudo[113003]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:49 compute-0 sshd-session[105228]: Connection closed by 192.168.122.30 port 48398
Nov 22 03:29:49 compute-0 sshd-session[105180]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:29:49 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Nov 22 03:29:49 compute-0 systemd[1]: session-35.scope: Consumed 1min 6.684s CPU time.
Nov 22 03:29:49 compute-0 systemd-logind[799]: Session 35 logged out. Waiting for processes to exit.
Nov 22 03:29:49 compute-0 systemd-logind[799]: Removed session 35.
Nov 22 03:29:50 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Nov 22 03:29:50 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Nov 22 03:29:50 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.a scrub starts
Nov 22 03:29:50 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.a scrub ok
Nov 22 03:29:51 compute-0 ceph-mon[75011]: pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:51 compute-0 ceph-mon[75011]: 5.13 scrub starts
Nov 22 03:29:51 compute-0 ceph-mon[75011]: 5.13 scrub ok
Nov 22 03:29:51 compute-0 ceph-mon[75011]: 10.a scrub starts
Nov 22 03:29:51 compute-0 ceph-mon[75011]: 10.a scrub ok
Nov 22 03:29:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:51 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.c scrub starts
Nov 22 03:29:51 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.c scrub ok
Nov 22 03:29:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:29:52 compute-0 ceph-mon[75011]: 10.c scrub starts
Nov 22 03:29:52 compute-0 ceph-mon[75011]: 10.c scrub ok
Nov 22 03:29:52 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.15 deep-scrub starts
Nov 22 03:29:52 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.15 deep-scrub ok
Nov 22 03:29:52 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Nov 22 03:29:52 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Nov 22 03:29:53 compute-0 ceph-mon[75011]: pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:53 compute-0 ceph-mon[75011]: 2.15 deep-scrub starts
Nov 22 03:29:53 compute-0 ceph-mon[75011]: 2.15 deep-scrub ok
Nov 22 03:29:53 compute-0 ceph-mon[75011]: 10.18 scrub starts
Nov 22 03:29:53 compute-0 ceph-mon[75011]: 10.18 scrub ok
Nov 22 03:29:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:54 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.14 deep-scrub starts
Nov 22 03:29:54 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 8.14 deep-scrub ok
Nov 22 03:29:55 compute-0 ceph-mon[75011]: pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:55 compute-0 ceph-mon[75011]: 8.14 deep-scrub starts
Nov 22 03:29:55 compute-0 ceph-mon[75011]: 8.14 deep-scrub ok
Nov 22 03:29:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:55 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Nov 22 03:29:55 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Nov 22 03:29:55 compute-0 sshd-session[113032]: Accepted publickey for zuul from 192.168.122.30 port 40200 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:29:55 compute-0 systemd-logind[799]: New session 36 of user zuul.
Nov 22 03:29:55 compute-0 systemd[1]: Started Session 36 of User zuul.
Nov 22 03:29:55 compute-0 sshd-session[113032]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:29:56 compute-0 ceph-mon[75011]: 6.7 scrub starts
Nov 22 03:29:56 compute-0 ceph-mon[75011]: 6.7 scrub ok
Nov 22 03:29:56 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 6.3 deep-scrub starts
Nov 22 03:29:56 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 6.3 deep-scrub ok
Nov 22 03:29:56 compute-0 python3.9[113185]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:29:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:29:57 compute-0 ceph-mon[75011]: pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:57 compute-0 ceph-mon[75011]: 6.3 deep-scrub starts
Nov 22 03:29:57 compute-0 ceph-mon[75011]: 6.3 deep-scrub ok
Nov 22 03:29:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:57 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Nov 22 03:29:57 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Nov 22 03:29:57 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Nov 22 03:29:57 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Nov 22 03:29:58 compute-0 sudo[113339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xygisfsdparcdkxdfjblyzcnsiqbknfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782197.8365166-36-222142039735542/AnsiballZ_getent.py'
Nov 22 03:29:58 compute-0 sudo[113339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:58 compute-0 ceph-mon[75011]: 2.17 scrub starts
Nov 22 03:29:58 compute-0 ceph-mon[75011]: 2.17 scrub ok
Nov 22 03:29:58 compute-0 ceph-mon[75011]: 10.1b scrub starts
Nov 22 03:29:58 compute-0 ceph-mon[75011]: 10.1b scrub ok
Nov 22 03:29:58 compute-0 python3.9[113341]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 22 03:29:58 compute-0 sudo[113339]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:58 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Nov 22 03:29:58 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Nov 22 03:29:59 compute-0 sudo[113492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bclqcyfrfjoptdicxxdnfmwmiyibourn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782198.955365-48-39614978196270/AnsiballZ_setup.py'
Nov 22 03:29:59 compute-0 sudo[113492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:29:59 compute-0 ceph-mon[75011]: pgmap v281: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:59 compute-0 ceph-mon[75011]: 5.11 scrub starts
Nov 22 03:29:59 compute-0 ceph-mon[75011]: 5.11 scrub ok
Nov 22 03:29:59 compute-0 python3.9[113494]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:29:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:29:59 compute-0 sudo[113492]: pam_unix(sudo:session): session closed for user root
Nov 22 03:29:59 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Nov 22 03:29:59 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Nov 22 03:30:00 compute-0 sudo[113576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofuibwjdaxxhedllstwvpeeznmhnhofx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782198.955365-48-39614978196270/AnsiballZ_dnf.py'
Nov 22 03:30:00 compute-0 sudo[113576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:00 compute-0 ceph-mon[75011]: 5.12 scrub starts
Nov 22 03:30:00 compute-0 ceph-mon[75011]: 5.12 scrub ok
Nov 22 03:30:00 compute-0 python3.9[113578]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 03:30:00 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.e scrub starts
Nov 22 03:30:00 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.e scrub ok
Nov 22 03:30:01 compute-0 ceph-mon[75011]: pgmap v282: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:01 compute-0 ceph-mon[75011]: 6.e scrub starts
Nov 22 03:30:01 compute-0 ceph-mon[75011]: 6.e scrub ok
Nov 22 03:30:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:01 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Nov 22 03:30:01 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Nov 22 03:30:01 compute-0 sudo[113576]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:01 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Nov 22 03:30:01 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Nov 22 03:30:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:30:02 compute-0 sudo[113729]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keluapgpzbitwbacmtcielhpcckplzit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782201.993014-62-236642646943706/AnsiballZ_dnf.py'
Nov 22 03:30:02 compute-0 sudo[113729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:02 compute-0 ceph-mon[75011]: 6.5 scrub starts
Nov 22 03:30:02 compute-0 ceph-mon[75011]: 6.5 scrub ok
Nov 22 03:30:02 compute-0 ceph-mon[75011]: 10.1c scrub starts
Nov 22 03:30:02 compute-0 ceph-mon[75011]: 10.1c scrub ok
Nov 22 03:30:02 compute-0 python3.9[113731]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:30:02 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Nov 22 03:30:02 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Nov 22 03:30:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:03 compute-0 ceph-mon[75011]: pgmap v283: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:03 compute-0 ceph-mon[75011]: 6.2 deep-scrub starts
Nov 22 03:30:03 compute-0 ceph-mon[75011]: 6.2 deep-scrub ok
Nov 22 03:30:03 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Nov 22 03:30:03 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Nov 22 03:30:04 compute-0 sudo[113729]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:04 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 6.a deep-scrub starts
Nov 22 03:30:04 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 6.a deep-scrub ok
Nov 22 03:30:04 compute-0 ceph-mon[75011]: pgmap v284: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:04 compute-0 ceph-mon[75011]: 6.9 scrub starts
Nov 22 03:30:04 compute-0 ceph-mon[75011]: 6.9 scrub ok
Nov 22 03:30:04 compute-0 sudo[113882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofvvimyfpgxxqjwlrnzobcdevtpsalpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782204.3568034-70-28679896489426/AnsiballZ_systemd.py'
Nov 22 03:30:04 compute-0 sudo[113882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:05 compute-0 python3.9[113884]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:30:05 compute-0 sudo[113882]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:05 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Nov 22 03:30:05 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Nov 22 03:30:05 compute-0 ceph-mon[75011]: 6.a deep-scrub starts
Nov 22 03:30:05 compute-0 ceph-mon[75011]: 6.a deep-scrub ok
Nov 22 03:30:06 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.1d deep-scrub starts
Nov 22 03:30:06 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.1d deep-scrub ok
Nov 22 03:30:06 compute-0 python3.9[114037]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:30:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:30:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:30:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:30:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:30:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:30:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:30:06 compute-0 sudo[114187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyitjhhcwpkiimtmmmtgwdlcrahtafkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782206.439286-88-111316802941015/AnsiballZ_sefcontext.py'
Nov 22 03:30:06 compute-0 sudo[114187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:06 compute-0 ceph-mon[75011]: pgmap v285: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:06 compute-0 ceph-mon[75011]: 11.10 scrub starts
Nov 22 03:30:06 compute-0 ceph-mon[75011]: 11.10 scrub ok
Nov 22 03:30:06 compute-0 ceph-mon[75011]: 10.1d deep-scrub starts
Nov 22 03:30:06 compute-0 ceph-mon[75011]: 10.1d deep-scrub ok
Nov 22 03:30:07 compute-0 python3.9[114189]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 22 03:30:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:30:07 compute-0 sudo[114187]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:07 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Nov 22 03:30:07 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Nov 22 03:30:07 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Nov 22 03:30:07 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Nov 22 03:30:08 compute-0 python3.9[114339]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:30:08 compute-0 ceph-mon[75011]: pgmap v286: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:08 compute-0 ceph-mon[75011]: 11.4 scrub starts
Nov 22 03:30:08 compute-0 ceph-mon[75011]: 11.4 scrub ok
Nov 22 03:30:08 compute-0 ceph-mon[75011]: 6.6 scrub starts
Nov 22 03:30:08 compute-0 ceph-mon[75011]: 6.6 scrub ok
Nov 22 03:30:08 compute-0 sudo[114495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eblokqbndsfnnryzrzhdtdwhrngkrfvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782208.7369735-106-210130753813318/AnsiballZ_dnf.py'
Nov 22 03:30:08 compute-0 sudo[114495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:09 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Nov 22 03:30:09 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Nov 22 03:30:09 compute-0 python3.9[114497]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:30:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:10 compute-0 ceph-mon[75011]: 10.1f deep-scrub starts
Nov 22 03:30:10 compute-0 ceph-mon[75011]: 10.1f deep-scrub ok
Nov 22 03:30:10 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.e scrub starts
Nov 22 03:30:10 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.e scrub ok
Nov 22 03:30:10 compute-0 sudo[114495]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:10 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.c scrub starts
Nov 22 03:30:10 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.c scrub ok
Nov 22 03:30:11 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Nov 22 03:30:11 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Nov 22 03:30:11 compute-0 ceph-mon[75011]: pgmap v287: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:11 compute-0 ceph-mon[75011]: 11.e scrub starts
Nov 22 03:30:11 compute-0 ceph-mon[75011]: 11.e scrub ok
Nov 22 03:30:11 compute-0 ceph-mon[75011]: 6.c scrub starts
Nov 22 03:30:11 compute-0 ceph-mon[75011]: 6.c scrub ok
Nov 22 03:30:11 compute-0 ceph-mon[75011]: 11.12 scrub starts
Nov 22 03:30:11 compute-0 ceph-mon[75011]: 11.12 scrub ok
Nov 22 03:30:11 compute-0 sudo[114648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wskewxyyodxguhlkvrqecatbjkozejwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782210.9299448-114-13173887623206/AnsiballZ_command.py'
Nov 22 03:30:11 compute-0 sudo[114648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:11 compute-0 python3.9[114650]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:30:11 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Nov 22 03:30:11 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Nov 22 03:30:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:30:12 compute-0 sudo[114648]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:12 compute-0 ceph-mon[75011]: 6.4 scrub starts
Nov 22 03:30:12 compute-0 ceph-mon[75011]: 6.4 scrub ok
Nov 22 03:30:12 compute-0 sudo[114935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tctomvuarxtllaeuchzflmhskuwgkktf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782212.5738704-122-107867595118981/AnsiballZ_file.py'
Nov 22 03:30:12 compute-0 sudo[114935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:13 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Nov 22 03:30:13 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Nov 22 03:30:13 compute-0 python3.9[114937]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 03:30:13 compute-0 sudo[114935]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:13 compute-0 ceph-mon[75011]: pgmap v288: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:13 compute-0 ceph-mon[75011]: 11.11 scrub starts
Nov 22 03:30:13 compute-0 ceph-mon[75011]: 11.11 scrub ok
Nov 22 03:30:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:13 compute-0 python3.9[115087]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:30:14 compute-0 sudo[115239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goxzlukgokcnqecpunfegeuecbelfrxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782214.3284988-138-264730783355733/AnsiballZ_dnf.py'
Nov 22 03:30:14 compute-0 sudo[115239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:14 compute-0 python3.9[115241]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:30:15 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.d scrub starts
Nov 22 03:30:15 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.d scrub ok
Nov 22 03:30:15 compute-0 ceph-mon[75011]: pgmap v289: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:15 compute-0 ceph-mon[75011]: 11.d scrub starts
Nov 22 03:30:15 compute-0 ceph-mon[75011]: 11.d scrub ok
Nov 22 03:30:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:16 compute-0 sudo[115239]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:16 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.b scrub starts
Nov 22 03:30:16 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.b scrub ok
Nov 22 03:30:16 compute-0 ceph-mon[75011]: 11.b scrub starts
Nov 22 03:30:16 compute-0 ceph-mon[75011]: 11.b scrub ok
Nov 22 03:30:16 compute-0 sudo[115392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afwrqcvvzgqwthocemrhqaiycvnkantq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782216.376417-147-242967018278320/AnsiballZ_dnf.py'
Nov 22 03:30:16 compute-0 sudo[115392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:16 compute-0 python3.9[115394]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:30:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:30:17 compute-0 ceph-mon[75011]: pgmap v290: 305 pgs: 305 active+clean; 455 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:18 compute-0 sudo[115392]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:18 compute-0 sudo[115545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gskmfswdiosquyycuwhxftiahpeuuhqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782218.5843997-159-260215287783807/AnsiballZ_stat.py'
Nov 22 03:30:18 compute-0 sudo[115545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:18 compute-0 python3.9[115547]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:30:18 compute-0 sudo[115545]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:19 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Nov 22 03:30:19 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Nov 22 03:30:19 compute-0 ceph-mon[75011]: pgmap v291: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:19 compute-0 ceph-mon[75011]: 11.9 scrub starts
Nov 22 03:30:19 compute-0 ceph-mon[75011]: 11.9 scrub ok
Nov 22 03:30:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:19 compute-0 sudo[115699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqjlpnhmdzhycgtmngydlvzrxqslzwkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782219.1906378-167-4058366837835/AnsiballZ_slurp.py'
Nov 22 03:30:19 compute-0 sudo[115699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:19 compute-0 python3.9[115701]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Nov 22 03:30:19 compute-0 sudo[115699]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:20 compute-0 sshd-session[113035]: Connection closed by 192.168.122.30 port 40200
Nov 22 03:30:20 compute-0 sshd-session[113032]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:30:20 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Nov 22 03:30:20 compute-0 systemd[1]: session-36.scope: Consumed 18.790s CPU time.
Nov 22 03:30:20 compute-0 systemd-logind[799]: Session 36 logged out. Waiting for processes to exit.
Nov 22 03:30:20 compute-0 systemd-logind[799]: Removed session 36.
Nov 22 03:30:20 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.b scrub starts
Nov 22 03:30:20 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.b scrub ok
Nov 22 03:30:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:21 compute-0 ceph-mon[75011]: pgmap v292: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:21 compute-0 ceph-mon[75011]: 6.b scrub starts
Nov 22 03:30:21 compute-0 ceph-mon[75011]: 6.b scrub ok
Nov 22 03:30:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:30:22 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.f scrub starts
Nov 22 03:30:22 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.f scrub ok
Nov 22 03:30:22 compute-0 ceph-mon[75011]: pgmap v293: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:23 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Nov 22 03:30:23 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Nov 22 03:30:23 compute-0 ceph-mon[75011]: 11.f scrub starts
Nov 22 03:30:23 compute-0 ceph-mon[75011]: 11.f scrub ok
Nov 22 03:30:24 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Nov 22 03:30:24 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Nov 22 03:30:24 compute-0 ceph-mon[75011]: pgmap v294: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:24 compute-0 ceph-mon[75011]: 11.14 scrub starts
Nov 22 03:30:24 compute-0 ceph-mon[75011]: 11.14 scrub ok
Nov 22 03:30:25 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Nov 22 03:30:25 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Nov 22 03:30:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:25 compute-0 ceph-mon[75011]: 11.1 scrub starts
Nov 22 03:30:25 compute-0 ceph-mon[75011]: 11.1 scrub ok
Nov 22 03:30:26 compute-0 sshd-session[115726]: Accepted publickey for zuul from 192.168.122.30 port 35384 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:30:26 compute-0 systemd-logind[799]: New session 37 of user zuul.
Nov 22 03:30:26 compute-0 systemd[1]: Started Session 37 of User zuul.
Nov 22 03:30:26 compute-0 sshd-session[115726]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:30:26 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Nov 22 03:30:26 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Nov 22 03:30:26 compute-0 ceph-mon[75011]: 11.19 scrub starts
Nov 22 03:30:26 compute-0 ceph-mon[75011]: 11.19 scrub ok
Nov 22 03:30:26 compute-0 ceph-mon[75011]: pgmap v295: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:27 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Nov 22 03:30:27 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Nov 22 03:30:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:30:27 compute-0 python3.9[115879]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:30:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:27 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.d scrub starts
Nov 22 03:30:27 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 6.d scrub ok
Nov 22 03:30:28 compute-0 ceph-mon[75011]: 11.15 scrub starts
Nov 22 03:30:28 compute-0 ceph-mon[75011]: 11.15 scrub ok
Nov 22 03:30:28 compute-0 python3.9[116033]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:30:29 compute-0 ceph-mon[75011]: 11.3 scrub starts
Nov 22 03:30:29 compute-0 ceph-mon[75011]: 11.3 scrub ok
Nov 22 03:30:29 compute-0 ceph-mon[75011]: pgmap v296: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:29 compute-0 ceph-mon[75011]: 6.d scrub starts
Nov 22 03:30:29 compute-0 ceph-mon[75011]: 6.d scrub ok
Nov 22 03:30:29 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Nov 22 03:30:29 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Nov 22 03:30:29 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Nov 22 03:30:29 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Nov 22 03:30:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:29 compute-0 python3.9[116226]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:30:30 compute-0 sshd-session[115729]: Connection closed by 192.168.122.30 port 35384
Nov 22 03:30:30 compute-0 sshd-session[115726]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:30:30 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Nov 22 03:30:30 compute-0 systemd[1]: session-37.scope: Consumed 2.760s CPU time.
Nov 22 03:30:30 compute-0 systemd-logind[799]: Session 37 logged out. Waiting for processes to exit.
Nov 22 03:30:30 compute-0 systemd-logind[799]: Removed session 37.
Nov 22 03:30:30 compute-0 ceph-mon[75011]: 11.8 scrub starts
Nov 22 03:30:30 compute-0 ceph-mon[75011]: 11.8 scrub ok
Nov 22 03:30:30 compute-0 ceph-mon[75011]: 11.17 scrub starts
Nov 22 03:30:30 compute-0 ceph-mon[75011]: 11.17 scrub ok
Nov 22 03:30:31 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Nov 22 03:30:31 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Nov 22 03:30:31 compute-0 ceph-mon[75011]: pgmap v297: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:31 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Nov 22 03:30:31 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Nov 22 03:30:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:30:32 compute-0 ceph-mon[75011]: 11.1b scrub starts
Nov 22 03:30:32 compute-0 ceph-mon[75011]: 11.1b scrub ok
Nov 22 03:30:32 compute-0 ceph-mon[75011]: 9.2 scrub starts
Nov 22 03:30:33 compute-0 ceph-mon[75011]: pgmap v298: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:33 compute-0 ceph-mon[75011]: 9.2 scrub ok
Nov 22 03:30:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:34 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.1a deep-scrub starts
Nov 22 03:30:34 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.1a deep-scrub ok
Nov 22 03:30:35 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.6 deep-scrub starts
Nov 22 03:30:35 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 11.6 deep-scrub ok
Nov 22 03:30:35 compute-0 ceph-mon[75011]: pgmap v299: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:35 compute-0 ceph-mon[75011]: 11.1a deep-scrub starts
Nov 22 03:30:35 compute-0 ceph-mon[75011]: 11.1a deep-scrub ok
Nov 22 03:30:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:35 compute-0 sudo[116254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:30:35 compute-0 sudo[116254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:35 compute-0 sudo[116254]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:35 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Nov 22 03:30:35 compute-0 sudo[116279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:30:35 compute-0 sudo[116279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:35 compute-0 sudo[116279]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:36 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Nov 22 03:30:36 compute-0 sudo[116304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:30:36 compute-0 sudo[116304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:36 compute-0 sudo[116304]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:30:36
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'images', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.control', '.mgr', 'default.rgw.meta']
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:30:36 compute-0 sudo[116329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:30:36 compute-0 sudo[116329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:36 compute-0 sshd-session[116330]: Accepted publickey for zuul from 192.168.122.30 port 40096 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:30:36 compute-0 systemd-logind[799]: New session 38 of user zuul.
Nov 22 03:30:36 compute-0 systemd[1]: Started Session 38 of User zuul.
Nov 22 03:30:36 compute-0 sshd-session[116330]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:30:36 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Nov 22 03:30:36 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:30:36 compute-0 ceph-mon[75011]: 11.6 deep-scrub starts
Nov 22 03:30:36 compute-0 ceph-mon[75011]: 11.6 deep-scrub ok
Nov 22 03:30:36 compute-0 sudo[116329]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:30:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:30:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:30:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:30:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:30:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev f1ea2d53-4085-4c52-b460-fe27dbe2e36c does not exist
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev da6bc2e8-8649-4d36-b8cd-043303798161 does not exist
Nov 22 03:30:36 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 9e6ed320-3ac6-4dfb-a5d3-2966c6c8228a does not exist
Nov 22 03:30:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:30:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:30:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:30:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:30:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:30:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:30:36 compute-0 sudo[116440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:30:36 compute-0 sudo[116440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:36 compute-0 sudo[116440]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:36 compute-0 sudo[116472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:30:36 compute-0 sudo[116472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:36 compute-0 sudo[116472]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:36 compute-0 sudo[116518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:30:36 compute-0 sudo[116518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:36 compute-0 sudo[116518]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:36 compute-0 sudo[116562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:30:36 compute-0 sudo[116562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:30:37 compute-0 python3.9[116637]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:30:37 compute-0 podman[116676]: 2025-11-22 03:30:37.353837966 +0000 UTC m=+0.053708121 container create f24f3f7cbd3dadaac30ed494131aa6ffa07b74ada019d94e0b275b04b08e4ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_almeida, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:30:37 compute-0 systemd[76613]: Created slice User Background Tasks Slice.
Nov 22 03:30:37 compute-0 systemd[76613]: Starting Cleanup of User's Temporary Files and Directories...
Nov 22 03:30:37 compute-0 systemd[1]: Started libpod-conmon-f24f3f7cbd3dadaac30ed494131aa6ffa07b74ada019d94e0b275b04b08e4ce6.scope.
Nov 22 03:30:37 compute-0 systemd[76613]: Finished Cleanup of User's Temporary Files and Directories.
Nov 22 03:30:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:30:37 compute-0 ceph-mon[75011]: pgmap v300: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:37 compute-0 ceph-mon[75011]: 9.4 scrub starts
Nov 22 03:30:37 compute-0 ceph-mon[75011]: 9.4 scrub ok
Nov 22 03:30:37 compute-0 ceph-mon[75011]: 11.2 scrub starts
Nov 22 03:30:37 compute-0 ceph-mon[75011]: 11.2 scrub ok
Nov 22 03:30:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:30:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:30:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:30:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:30:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:30:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:30:37 compute-0 podman[116676]: 2025-11-22 03:30:37.329730215 +0000 UTC m=+0.029600370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:30:37 compute-0 podman[116676]: 2025-11-22 03:30:37.437191777 +0000 UTC m=+0.137061932 container init f24f3f7cbd3dadaac30ed494131aa6ffa07b74ada019d94e0b275b04b08e4ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:30:37 compute-0 podman[116676]: 2025-11-22 03:30:37.443696732 +0000 UTC m=+0.143566887 container start f24f3f7cbd3dadaac30ed494131aa6ffa07b74ada019d94e0b275b04b08e4ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:30:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:37 compute-0 podman[116676]: 2025-11-22 03:30:37.448748739 +0000 UTC m=+0.148618904 container attach f24f3f7cbd3dadaac30ed494131aa6ffa07b74ada019d94e0b275b04b08e4ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:30:37 compute-0 beautiful_almeida[116698]: 167 167
Nov 22 03:30:37 compute-0 systemd[1]: libpod-f24f3f7cbd3dadaac30ed494131aa6ffa07b74ada019d94e0b275b04b08e4ce6.scope: Deactivated successfully.
Nov 22 03:30:37 compute-0 podman[116676]: 2025-11-22 03:30:37.451608746 +0000 UTC m=+0.151478861 container died f24f3f7cbd3dadaac30ed494131aa6ffa07b74ada019d94e0b275b04b08e4ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_almeida, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:30:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fcf77e0677746ae4ab9433c868001c8f296b09f27c41cedf229015b05eaf81e-merged.mount: Deactivated successfully.
Nov 22 03:30:37 compute-0 podman[116676]: 2025-11-22 03:30:37.493462306 +0000 UTC m=+0.193332431 container remove f24f3f7cbd3dadaac30ed494131aa6ffa07b74ada019d94e0b275b04b08e4ce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_almeida, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:30:37 compute-0 systemd[1]: libpod-conmon-f24f3f7cbd3dadaac30ed494131aa6ffa07b74ada019d94e0b275b04b08e4ce6.scope: Deactivated successfully.
Nov 22 03:30:37 compute-0 podman[116746]: 2025-11-22 03:30:37.697528106 +0000 UTC m=+0.067629868 container create 5103c4206143e105a5291d6d575d799f7c2259ad8ce636beb1a0562bf0ca2efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hoover, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:30:37 compute-0 systemd[1]: Started libpod-conmon-5103c4206143e105a5291d6d575d799f7c2259ad8ce636beb1a0562bf0ca2efd.scope.
Nov 22 03:30:37 compute-0 podman[116746]: 2025-11-22 03:30:37.672327495 +0000 UTC m=+0.042429267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:30:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:30:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07e27f05d09baa525c7ad912b63e2295f3b293fbefb351d5a867c87b20c54a5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07e27f05d09baa525c7ad912b63e2295f3b293fbefb351d5a867c87b20c54a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07e27f05d09baa525c7ad912b63e2295f3b293fbefb351d5a867c87b20c54a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07e27f05d09baa525c7ad912b63e2295f3b293fbefb351d5a867c87b20c54a5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e07e27f05d09baa525c7ad912b63e2295f3b293fbefb351d5a867c87b20c54a5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:37 compute-0 podman[116746]: 2025-11-22 03:30:37.792582321 +0000 UTC m=+0.162684053 container init 5103c4206143e105a5291d6d575d799f7c2259ad8ce636beb1a0562bf0ca2efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:30:37 compute-0 podman[116746]: 2025-11-22 03:30:37.799993162 +0000 UTC m=+0.170094874 container start 5103c4206143e105a5291d6d575d799f7c2259ad8ce636beb1a0562bf0ca2efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hoover, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:30:37 compute-0 podman[116746]: 2025-11-22 03:30:37.803209099 +0000 UTC m=+0.173310841 container attach 5103c4206143e105a5291d6d575d799f7c2259ad8ce636beb1a0562bf0ca2efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hoover, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:30:38 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Nov 22 03:30:38 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Nov 22 03:30:38 compute-0 python3.9[116893]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:30:38 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Nov 22 03:30:38 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Nov 22 03:30:38 compute-0 ceph-mon[75011]: pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:38 compute-0 determined_hoover[116813]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:30:38 compute-0 determined_hoover[116813]: --> relative data size: 1.0
Nov 22 03:30:38 compute-0 determined_hoover[116813]: --> All data devices are unavailable
Nov 22 03:30:38 compute-0 systemd[1]: libpod-5103c4206143e105a5291d6d575d799f7c2259ad8ce636beb1a0562bf0ca2efd.scope: Deactivated successfully.
Nov 22 03:30:38 compute-0 systemd[1]: libpod-5103c4206143e105a5291d6d575d799f7c2259ad8ce636beb1a0562bf0ca2efd.scope: Consumed 1.061s CPU time.
Nov 22 03:30:38 compute-0 podman[116746]: 2025-11-22 03:30:38.939702993 +0000 UTC m=+1.309804775 container died 5103c4206143e105a5291d6d575d799f7c2259ad8ce636beb1a0562bf0ca2efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:30:38 compute-0 sudo[117071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jygmryhcpufxsallyfwjylsemvyzzslr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782238.7482004-40-28487310862143/AnsiballZ_setup.py'
Nov 22 03:30:38 compute-0 sudo[117071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:39 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.a scrub starts
Nov 22 03:30:39 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.a scrub ok
Nov 22 03:30:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e07e27f05d09baa525c7ad912b63e2295f3b293fbefb351d5a867c87b20c54a5-merged.mount: Deactivated successfully.
Nov 22 03:30:39 compute-0 python3.9[117079]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:30:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:39 compute-0 ceph-mon[75011]: 11.18 scrub starts
Nov 22 03:30:39 compute-0 ceph-mon[75011]: 11.18 scrub ok
Nov 22 03:30:39 compute-0 ceph-mon[75011]: 10.8 scrub starts
Nov 22 03:30:39 compute-0 ceph-mon[75011]: 10.8 scrub ok
Nov 22 03:30:39 compute-0 sudo[117071]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:40 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.10 deep-scrub starts
Nov 22 03:30:40 compute-0 podman[116746]: 2025-11-22 03:30:40.067594014 +0000 UTC m=+2.437695756 container remove 5103c4206143e105a5291d6d575d799f7c2259ad8ce636beb1a0562bf0ca2efd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_hoover, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Nov 22 03:30:40 compute-0 systemd[1]: libpod-conmon-5103c4206143e105a5291d6d575d799f7c2259ad8ce636beb1a0562bf0ca2efd.scope: Deactivated successfully.
Nov 22 03:30:40 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.10 deep-scrub ok
Nov 22 03:30:40 compute-0 sudo[116562]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:40 compute-0 sudo[117170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woavfuiktpvyyjladaggptteabpydqyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782238.7482004-40-28487310862143/AnsiballZ_dnf.py'
Nov 22 03:30:40 compute-0 sudo[117170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:40 compute-0 sudo[117172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:30:40 compute-0 sudo[117172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:40 compute-0 sudo[117172]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:40 compute-0 sudo[117198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:30:40 compute-0 sudo[117198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:40 compute-0 sudo[117198]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:40 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.1f deep-scrub starts
Nov 22 03:30:40 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.1f deep-scrub ok
Nov 22 03:30:40 compute-0 sudo[117223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:30:40 compute-0 sudo[117223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:40 compute-0 sudo[117223]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:40 compute-0 sudo[117248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:30:40 compute-0 sudo[117248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:40 compute-0 python3.9[117175]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:30:40 compute-0 ceph-mon[75011]: 9.a scrub starts
Nov 22 03:30:40 compute-0 ceph-mon[75011]: 9.a scrub ok
Nov 22 03:30:40 compute-0 ceph-mon[75011]: pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:40 compute-0 podman[117315]: 2025-11-22 03:30:40.707945793 +0000 UTC m=+0.023287230 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:30:40 compute-0 podman[117315]: 2025-11-22 03:30:40.85154059 +0000 UTC m=+0.166881967 container create f2571f81236bd780e997354f4be5c3f86a42143e1231fd161bc146959b887d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:30:40 compute-0 systemd[1]: Started libpod-conmon-f2571f81236bd780e997354f4be5c3f86a42143e1231fd161bc146959b887d92.scope.
Nov 22 03:30:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:30:41 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Nov 22 03:30:41 compute-0 podman[117315]: 2025-11-22 03:30:41.087770688 +0000 UTC m=+0.403112085 container init f2571f81236bd780e997354f4be5c3f86a42143e1231fd161bc146959b887d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:30:41 compute-0 podman[117315]: 2025-11-22 03:30:41.10006796 +0000 UTC m=+0.415409317 container start f2571f81236bd780e997354f4be5c3f86a42143e1231fd161bc146959b887d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:30:41 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Nov 22 03:30:41 compute-0 dreamy_montalcini[117332]: 167 167
Nov 22 03:30:41 compute-0 systemd[1]: libpod-f2571f81236bd780e997354f4be5c3f86a42143e1231fd161bc146959b887d92.scope: Deactivated successfully.
Nov 22 03:30:41 compute-0 podman[117315]: 2025-11-22 03:30:41.133579085 +0000 UTC m=+0.448920492 container attach f2571f81236bd780e997354f4be5c3f86a42143e1231fd161bc146959b887d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:30:41 compute-0 podman[117315]: 2025-11-22 03:30:41.134787377 +0000 UTC m=+0.450128744 container died f2571f81236bd780e997354f4be5c3f86a42143e1231fd161bc146959b887d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_montalcini, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:30:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-00fb9379762e7c7de67178a201bf25f8dcd137847a2354d78d9c88165274ff12-merged.mount: Deactivated successfully.
Nov 22 03:30:41 compute-0 podman[117315]: 2025-11-22 03:30:41.245061484 +0000 UTC m=+0.560402881 container remove f2571f81236bd780e997354f4be5c3f86a42143e1231fd161bc146959b887d92 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_montalcini, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:30:41 compute-0 systemd[1]: libpod-conmon-f2571f81236bd780e997354f4be5c3f86a42143e1231fd161bc146959b887d92.scope: Deactivated successfully.
Nov 22 03:30:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:41 compute-0 podman[117358]: 2025-11-22 03:30:41.492562947 +0000 UTC m=+0.074902164 container create 80f81466d1bdff6ac5aeeb7364d38797c793b20208cb4665f59d23d8deb9c4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:30:41 compute-0 systemd[1]: Started libpod-conmon-80f81466d1bdff6ac5aeeb7364d38797c793b20208cb4665f59d23d8deb9c4e9.scope.
Nov 22 03:30:41 compute-0 podman[117358]: 2025-11-22 03:30:41.4589686 +0000 UTC m=+0.041307867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:30:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601bebbee47e5eadc1438a12efb6370d2fd34dd0632ace3e776f86fe1cf0b19e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601bebbee47e5eadc1438a12efb6370d2fd34dd0632ace3e776f86fe1cf0b19e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601bebbee47e5eadc1438a12efb6370d2fd34dd0632ace3e776f86fe1cf0b19e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/601bebbee47e5eadc1438a12efb6370d2fd34dd0632ace3e776f86fe1cf0b19e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:41 compute-0 podman[117358]: 2025-11-22 03:30:41.588063945 +0000 UTC m=+0.170403212 container init 80f81466d1bdff6ac5aeeb7364d38797c793b20208cb4665f59d23d8deb9c4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:30:41 compute-0 podman[117358]: 2025-11-22 03:30:41.601152529 +0000 UTC m=+0.183491726 container start 80f81466d1bdff6ac5aeeb7364d38797c793b20208cb4665f59d23d8deb9c4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:30:41 compute-0 podman[117358]: 2025-11-22 03:30:41.604861429 +0000 UTC m=+0.187200706 container attach 80f81466d1bdff6ac5aeeb7364d38797c793b20208cb4665f59d23d8deb9c4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:30:41 compute-0 ceph-mon[75011]: 9.10 deep-scrub starts
Nov 22 03:30:41 compute-0 ceph-mon[75011]: 9.10 deep-scrub ok
Nov 22 03:30:41 compute-0 ceph-mon[75011]: 11.1f deep-scrub starts
Nov 22 03:30:41 compute-0 ceph-mon[75011]: 11.1f deep-scrub ok
Nov 22 03:30:41 compute-0 sudo[117170]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:42 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Nov 22 03:30:42 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Nov 22 03:30:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:30:42 compute-0 sudo[117528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbefzyhfdaawkzjnmonsswijduusjouu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782242.0647733-52-255667798278490/AnsiballZ_setup.py'
Nov 22 03:30:42 compute-0 sudo[117528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:42 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Nov 22 03:30:42 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Nov 22 03:30:42 compute-0 cool_leakey[117374]: {
Nov 22 03:30:42 compute-0 cool_leakey[117374]:     "0": [
Nov 22 03:30:42 compute-0 cool_leakey[117374]:         {
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "devices": [
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "/dev/loop3"
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             ],
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_name": "ceph_lv0",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_size": "21470642176",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "name": "ceph_lv0",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "tags": {
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.cluster_name": "ceph",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.crush_device_class": "",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.encrypted": "0",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.osd_id": "0",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.type": "block",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.vdo": "0"
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             },
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "type": "block",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "vg_name": "ceph_vg0"
Nov 22 03:30:42 compute-0 cool_leakey[117374]:         }
Nov 22 03:30:42 compute-0 cool_leakey[117374]:     ],
Nov 22 03:30:42 compute-0 cool_leakey[117374]:     "1": [
Nov 22 03:30:42 compute-0 cool_leakey[117374]:         {
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "devices": [
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "/dev/loop4"
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             ],
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_name": "ceph_lv1",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_size": "21470642176",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "name": "ceph_lv1",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "tags": {
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.cluster_name": "ceph",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.crush_device_class": "",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.encrypted": "0",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.osd_id": "1",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.type": "block",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.vdo": "0"
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             },
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "type": "block",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "vg_name": "ceph_vg1"
Nov 22 03:30:42 compute-0 cool_leakey[117374]:         }
Nov 22 03:30:42 compute-0 cool_leakey[117374]:     ],
Nov 22 03:30:42 compute-0 cool_leakey[117374]:     "2": [
Nov 22 03:30:42 compute-0 cool_leakey[117374]:         {
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "devices": [
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "/dev/loop5"
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             ],
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_name": "ceph_lv2",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_size": "21470642176",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "name": "ceph_lv2",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "tags": {
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.cluster_name": "ceph",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.crush_device_class": "",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.encrypted": "0",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.osd_id": "2",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.type": "block",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:                 "ceph.vdo": "0"
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             },
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "type": "block",
Nov 22 03:30:42 compute-0 cool_leakey[117374]:             "vg_name": "ceph_vg2"
Nov 22 03:30:42 compute-0 cool_leakey[117374]:         }
Nov 22 03:30:42 compute-0 cool_leakey[117374]:     ]
Nov 22 03:30:42 compute-0 cool_leakey[117374]: }
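
The JSON block above, printed by the cool_leakey container, is consistent with `ceph-volume lvm list --format json` (the command itself is not shown in the log): one top-level key per OSD id, each holding the LVM logical volume backing that OSD's bluestore block device. A minimal parsing sketch, assuming the JSON text has been captured into a string `raw`; the helper name `lvm_list_by_osd` is hypothetical, not part of ceph-volume or cephadm:

    # Minimal sketch (not cephadm's own code): index the listing above by OSD id.
    import json

    def lvm_list_by_osd(raw: str) -> dict:
        out = {}
        for osd_id, lvs in json.loads(raw).items():
            lv = lvs[0]  # this deployment has one block LV per OSD
            out[int(osd_id)] = {
                "lv_path": lv["lv_path"],                 # /dev/ceph_vgN/ceph_lvN
                "devices": lv["devices"],                 # backing PVs, here loop devices
                "osd_fsid": lv["tags"]["ceph.osd_fsid"],  # matches the raw-list osd_uuid
            }
        return out

For the listing above this yields 0 -> /dev/ceph_vg0/ceph_lv0 (on /dev/loop3), 1 -> /dev/ceph_vg1/ceph_lv1 (/dev/loop4), and 2 -> /dev/ceph_vg2/ceph_lv2 (/dev/loop5).
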
Nov 22 03:30:42 compute-0 systemd[1]: libpod-80f81466d1bdff6ac5aeeb7364d38797c793b20208cb4665f59d23d8deb9c4e9.scope: Deactivated successfully.
Nov 22 03:30:42 compute-0 podman[117535]: 2025-11-22 03:30:42.408605779 +0000 UTC m=+0.051471431 container died 80f81466d1bdff6ac5aeeb7364d38797c793b20208cb4665f59d23d8deb9c4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:30:42 compute-0 python3.9[117532]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:30:42 compute-0 ceph-mon[75011]: 9.12 scrub starts
Nov 22 03:30:42 compute-0 ceph-mon[75011]: 9.12 scrub ok
Nov 22 03:30:42 compute-0 ceph-mon[75011]: pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-601bebbee47e5eadc1438a12efb6370d2fd34dd0632ace3e776f86fe1cf0b19e-merged.mount: Deactivated successfully.
Nov 22 03:30:42 compute-0 sudo[117528]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:43 compute-0 podman[117535]: 2025-11-22 03:30:43.043342427 +0000 UTC m=+0.686208019 container remove 80f81466d1bdff6ac5aeeb7364d38797c793b20208cb4665f59d23d8deb9c4e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_leakey, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:30:43 compute-0 systemd[1]: libpod-conmon-80f81466d1bdff6ac5aeeb7364d38797c793b20208cb4665f59d23d8deb9c4e9.scope: Deactivated successfully.
Nov 22 03:30:43 compute-0 sudo[117248]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:43 compute-0 sudo[117618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:30:43 compute-0 sudo[117618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:43 compute-0 sudo[117618]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:43 compute-0 sudo[117643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:30:43 compute-0 sudo[117643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:43 compute-0 sudo[117643]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:43 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Nov 22 03:30:43 compute-0 sudo[117672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:30:43 compute-0 sudo[117672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:43 compute-0 sudo[117672]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:43 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Nov 22 03:30:43 compute-0 sudo[117733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:30:43 compute-0 sudo[117733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:43 compute-0 podman[117833]: 2025-11-22 03:30:43.753674585 +0000 UTC m=+0.082908849 container create e7931f64c1eab037498100c6e250f67997d0dbdfacf9c050b7aa90bd6cdcf7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:30:43 compute-0 podman[117833]: 2025-11-22 03:30:43.696372248 +0000 UTC m=+0.025606602 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:30:43 compute-0 sudo[117897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qufslmmmjkpfazfzvdczwhxhzeuoizxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782243.456315-63-244467860757135/AnsiballZ_file.py'
Nov 22 03:30:43 compute-0 sudo[117897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:43 compute-0 systemd[1]: Started libpod-conmon-e7931f64c1eab037498100c6e250f67997d0dbdfacf9c050b7aa90bd6cdcf7fc.scope.
Nov 22 03:30:43 compute-0 ceph-mon[75011]: 9.14 scrub starts
Nov 22 03:30:43 compute-0 ceph-mon[75011]: 9.14 scrub ok
Nov 22 03:30:43 compute-0 ceph-mon[75011]: 10.15 scrub starts
Nov 22 03:30:43 compute-0 ceph-mon[75011]: 10.15 scrub ok
Nov 22 03:30:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:30:44 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Nov 22 03:30:44 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Nov 22 03:30:44 compute-0 python3.9[117899]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:30:44 compute-0 podman[117833]: 2025-11-22 03:30:44.080357065 +0000 UTC m=+0.409591429 container init e7931f64c1eab037498100c6e250f67997d0dbdfacf9c050b7aa90bd6cdcf7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:30:44 compute-0 sudo[117897]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:44 compute-0 podman[117833]: 2025-11-22 03:30:44.09277553 +0000 UTC m=+0.422009834 container start e7931f64c1eab037498100c6e250f67997d0dbdfacf9c050b7aa90bd6cdcf7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:30:44 compute-0 happy_swanson[117902]: 167 167
Nov 22 03:30:44 compute-0 systemd[1]: libpod-e7931f64c1eab037498100c6e250f67997d0dbdfacf9c050b7aa90bd6cdcf7fc.scope: Deactivated successfully.
Nov 22 03:30:44 compute-0 podman[117833]: 2025-11-22 03:30:44.282696858 +0000 UTC m=+0.611931172 container attach e7931f64c1eab037498100c6e250f67997d0dbdfacf9c050b7aa90bd6cdcf7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:30:44 compute-0 podman[117833]: 2025-11-22 03:30:44.283256173 +0000 UTC m=+0.612490517 container died e7931f64c1eab037498100c6e250f67997d0dbdfacf9c050b7aa90bd6cdcf7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-7895a030053ba02dcd33d8c5dfe2413212272113e9c1ad37f455114418abc3d3-merged.mount: Deactivated successfully.
Nov 22 03:30:44 compute-0 podman[117833]: 2025-11-22 03:30:44.602326697 +0000 UTC m=+0.931561001 container remove e7931f64c1eab037498100c6e250f67997d0dbdfacf9c050b7aa90bd6cdcf7fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:30:44 compute-0 systemd[1]: libpod-conmon-e7931f64c1eab037498100c6e250f67997d0dbdfacf9c050b7aa90bd6cdcf7fc.scope: Deactivated successfully.
Nov 22 03:30:44 compute-0 sudo[118088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xygnlkjehovdphghyfytxbsucocdstem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782244.495298-71-240214639974502/AnsiballZ_command.py'
Nov 22 03:30:44 compute-0 sudo[118088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:44 compute-0 podman[118041]: 2025-11-22 03:30:44.7765296 +0000 UTC m=+0.038391597 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:30:45 compute-0 podman[118041]: 2025-11-22 03:30:45.003075457 +0000 UTC m=+0.264937404 container create a8accda2cb8641367a05689b106ae4d8eb8699ea6436bb0ab0c4c0a5d417a024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:30:45 compute-0 ceph-mon[75011]: 11.1e scrub starts
Nov 22 03:30:45 compute-0 ceph-mon[75011]: 11.1e scrub ok
Nov 22 03:30:45 compute-0 ceph-mon[75011]: pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:45 compute-0 python3.9[118090]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:30:45 compute-0 systemd[1]: Started libpod-conmon-a8accda2cb8641367a05689b106ae4d8eb8699ea6436bb0ab0c4c0a5d417a024.scope.
Nov 22 03:30:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccf02f80ae3d3661839be5f5f6387e9960de7f821674dad0f6e9bd569f28d99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccf02f80ae3d3661839be5f5f6387e9960de7f821674dad0f6e9bd569f28d99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccf02f80ae3d3661839be5f5f6387e9960de7f821674dad0f6e9bd569f28d99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ccf02f80ae3d3661839be5f5f6387e9960de7f821674dad0f6e9bd569f28d99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:30:45 compute-0 podman[118041]: 2025-11-22 03:30:45.232260025 +0000 UTC m=+0.494121932 container init a8accda2cb8641367a05689b106ae4d8eb8699ea6436bb0ab0c4c0a5d417a024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_poincare, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:30:45 compute-0 podman[118041]: 2025-11-22 03:30:45.240943269 +0000 UTC m=+0.502805226 container start a8accda2cb8641367a05689b106ae4d8eb8699ea6436bb0ab0c4c0a5d417a024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:30:45 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.4 deep-scrub starts
Nov 22 03:30:45 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.4 deep-scrub ok
Nov 22 03:30:45 compute-0 podman[118041]: 2025-11-22 03:30:45.319404758 +0000 UTC m=+0.581266745 container attach a8accda2cb8641367a05689b106ae4d8eb8699ea6436bb0ab0c4c0a5d417a024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:30:45 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Nov 22 03:30:45 compute-0 sudo[118088]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:45 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:30:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
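
Each pg_autoscaler "pg target" above is the pool's share of raw capacity times its bias times a cluster-wide PG budget, which is then quantized. The logged ratios are consistent with a budget of 300, i.e. 3 OSDs at the default mon_target_pg_per_osd=100; that budget is an inference from the numbers, not a value printed in the log. A worked check:

    # Worked check of the pg_autoscaler lines above. PG_BUDGET = 300 is an
    # assumption inferred from the logged ratios, not a quoted Ceph constant.
    PG_BUDGET = 300
    for pool, ratio, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0),
    ]:
        print(pool, ratio * bias * PG_BUDGET)
    # .mgr               -> 0.0021557249951162337, quantized to 1
    # cephfs.cephfs.meta -> 0.0006104707950771635, quantized to 16 (current)
    # default.rgw.log    -> 0.0006486252197694863, quantized to 32 (current)

The products reproduce the logged pg targets exactly; since every target is far below the pool's current PG count, the autoscaler leaves all 305 PGs unchanged.
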
Nov 22 03:30:46 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Nov 22 03:30:46 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Nov 22 03:30:46 compute-0 sudo[118271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgdxlkjjcmnmdxnyumzlaypktwayfyzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782245.739726-79-239995337068417/AnsiballZ_stat.py'
Nov 22 03:30:46 compute-0 sudo[118271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:46 compute-0 ceph-mon[75011]: 9.1a scrub starts
Nov 22 03:30:46 compute-0 ceph-mon[75011]: 9.1a scrub ok
Nov 22 03:30:46 compute-0 ceph-mon[75011]: 10.4 deep-scrub starts
Nov 22 03:30:46 compute-0 ceph-mon[75011]: 10.4 deep-scrub ok
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]: {
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "osd_id": 1,
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "type": "bluestore"
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:     },
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "osd_id": 0,
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "type": "bluestore"
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:     },
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "osd_id": 2,
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:         "type": "bluestore"
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]:     }
Nov 22 03:30:46 compute-0 heuristic_poincare[118100]: }
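
This second JSON block is the output of the `ceph-volume ... raw list --format json` call visible in the sudo line at 03:30:43: keyed by OSD fsid, each entry naming the device-mapper path of one bluestore OSD. A sketch cross-checking it against the lvm listing printed at 03:30:42; `lvm_by_osd` is the hypothetical index from the earlier sketch, and the dm-name rule assumes no hyphens in VG/LV names (true here):

    # Sketch: assert the raw listing agrees with the lvm listing above.
    import json

    def check_raw_vs_lvm(raw_json: str, lvm_by_osd: dict) -> None:
        for osd_uuid, info in json.loads(raw_json).items():
            assert info["type"] == "bluestore"
            assert info["osd_uuid"] == osd_uuid
            lv = lvm_by_osd[info["osd_id"]]
            assert lv["osd_fsid"] == osd_uuid
            # /dev/ceph_vg1/ceph_lv1 appears here as /dev/mapper/ceph_vg1-ceph_lv1
            dm_name = lv["lv_path"].removeprefix("/dev/").replace("/", "-")
            assert info["device"] == "/dev/mapper/" + dm_name
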
Nov 22 03:30:46 compute-0 systemd[1]: libpod-a8accda2cb8641367a05689b106ae4d8eb8699ea6436bb0ab0c4c0a5d417a024.scope: Deactivated successfully.
Nov 22 03:30:46 compute-0 podman[118041]: 2025-11-22 03:30:46.237012033 +0000 UTC m=+1.498873960 container died a8accda2cb8641367a05689b106ae4d8eb8699ea6436bb0ab0c4c0a5d417a024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_poincare, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:30:46 compute-0 python3.9[118276]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:30:46 compute-0 sudo[118271]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ccf02f80ae3d3661839be5f5f6387e9960de7f821674dad0f6e9bd569f28d99-merged.mount: Deactivated successfully.
Nov 22 03:30:46 compute-0 podman[118041]: 2025-11-22 03:30:46.534831243 +0000 UTC m=+1.796693180 container remove a8accda2cb8641367a05689b106ae4d8eb8699ea6436bb0ab0c4c0a5d417a024 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:30:46 compute-0 systemd[1]: libpod-conmon-a8accda2cb8641367a05689b106ae4d8eb8699ea6436bb0ab0c4c0a5d417a024.scope: Deactivated successfully.
Nov 22 03:30:46 compute-0 sudo[117733]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:30:46 compute-0 sudo[118380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwrqvgazhcaclpoeiwmgwdmekcvjrrrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782245.739726-79-239995337068417/AnsiballZ_file.py'
Nov 22 03:30:46 compute-0 sudo[118380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:46 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:30:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:30:46 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:30:46 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 7a834af9-824b-4cdc-bc4d-79aab7708731 does not exist
Nov 22 03:30:46 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 86025a45-dbf8-4fab-99f2-f8a7e994ae93 does not exist
Nov 22 03:30:46 compute-0 python3.9[118382]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:30:46 compute-0 sudo[118380]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:46 compute-0 sudo[118383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:30:46 compute-0 sudo[118383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:46 compute-0 sudo[118383]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:46 compute-0 sudo[118408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:30:46 compute-0 sudo[118408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:30:46 compute-0 sudo[118408]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:30:47 compute-0 ceph-mon[75011]: 11.1c scrub starts
Nov 22 03:30:47 compute-0 ceph-mon[75011]: 11.1c scrub ok
Nov 22 03:30:47 compute-0 ceph-mon[75011]: pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:47 compute-0 ceph-mon[75011]: 11.5 scrub starts
Nov 22 03:30:47 compute-0 ceph-mon[75011]: 11.5 scrub ok
Nov 22 03:30:47 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:30:47 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:30:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:47 compute-0 sudo[118582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyeagmiulfkbwsqyzxefijukyywezkei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782247.3174386-91-259432187053885/AnsiballZ_stat.py'
Nov 22 03:30:47 compute-0 sudo[118582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:47 compute-0 python3.9[118584]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:30:47 compute-0 sudo[118582]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:48 compute-0 sudo[118660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnuszmjscvkgviftwjnhmbjsjrqqfqaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782247.3174386-91-259432187053885/AnsiballZ_file.py'
Nov 22 03:30:48 compute-0 sudo[118660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:48 compute-0 python3.9[118662]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:30:48 compute-0 sudo[118660]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:48 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Nov 22 03:30:49 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Nov 22 03:30:49 compute-0 sudo[118812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uazgsykhvuucwaxyxgdgretwpkvfdubu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782248.7868283-104-178402769751561/AnsiballZ_ini_file.py'
Nov 22 03:30:49 compute-0 sudo[118812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:49 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Nov 22 03:30:49 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Nov 22 03:30:49 compute-0 python3.9[118814]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:30:49 compute-0 ceph-mon[75011]: pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:49 compute-0 sudo[118812]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:49 compute-0 sudo[118964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vltysdlgpfaigudidrecyhpokzumqbjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782249.741286-104-9279858140067/AnsiballZ_ini_file.py'
Nov 22 03:30:49 compute-0 sudo[118964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:49 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.a scrub starts
Nov 22 03:30:49 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.a scrub ok
Nov 22 03:30:50 compute-0 python3.9[118966]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:30:50 compute-0 sudo[118964]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:50 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Nov 22 03:30:50 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Nov 22 03:30:50 compute-0 ceph-mon[75011]: 11.7 scrub starts
Nov 22 03:30:50 compute-0 ceph-mon[75011]: 11.7 scrub ok
Nov 22 03:30:50 compute-0 ceph-mon[75011]: 10.7 scrub starts
Nov 22 03:30:50 compute-0 ceph-mon[75011]: 10.7 scrub ok
Nov 22 03:30:50 compute-0 ceph-mon[75011]: pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:50 compute-0 sudo[119116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jratodgwmjnlnfsyopezizilssrgwsnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782250.5807772-104-147535170273450/AnsiballZ_ini_file.py'
Nov 22 03:30:50 compute-0 sudo[119116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:50 compute-0 python3.9[119118]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:30:51 compute-0 sudo[119116]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:51 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.d scrub starts
Nov 22 03:30:51 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.d scrub ok
Nov 22 03:30:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:51 compute-0 sudo[119268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaruhzjujqnpfdxdrnkekuceakowgaby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782251.2917676-104-233103357431886/AnsiballZ_ini_file.py'
Nov 22 03:30:51 compute-0 sudo[119268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:51 compute-0 python3.9[119270]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:30:51 compute-0 sudo[119268]: pam_unix(sudo:session): session closed for user root
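
The four community.general.ini_file tasks logged between 03:30:49 and 03:30:51 pin podman's runtime configuration. A sketch of their net effect, assuming /etc/containers/containers.conf held none of these options beforehand; containers.conf is TOML, which is why the string values carry explicit quotes in the task arguments:

    # Sketch of what the four ini_file invocations above converge on.
    # Resulting file (assuming the options were absent before):
    #   [containers]
    #   pids_limit = 4096
    #   [engine]
    #   events_logger = "journald"
    #   runtime = "crun"
    #   [network]
    #   network_backend = "netavark"
    import configparser

    cfg = configparser.ConfigParser()
    cfg.read("/etc/containers/containers.conf")
    for section, option, value in [
        ("containers", "pids_limit", "4096"),
        ("engine", "events_logger", '"journald"'),
        ("engine", "runtime", '"crun"'),
        ("network", "network_backend", '"netavark"'),
    ]:
        if not cfg.has_section(section):
            cfg.add_section(section)
        cfg.set(section, option, value)
    with open("/etc/containers/containers.conf", "w") as fh:
        cfg.write(fh)

pids_limit raises the per-container PID cap, events_logger=journald routes container events into this same journal, and the runtime/network entries make the OCI runtime (crun) and network backend (netavark) explicit.
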
Nov 22 03:30:51 compute-0 ceph-mon[75011]: 11.a scrub starts
Nov 22 03:30:51 compute-0 ceph-mon[75011]: 11.a scrub ok
Nov 22 03:30:51 compute-0 ceph-mon[75011]: 10.9 scrub starts
Nov 22 03:30:51 compute-0 ceph-mon[75011]: 10.9 scrub ok
Nov 22 03:30:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:30:52 compute-0 sudo[119420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjvixxnuzakaotefxynbphpcpydhjvaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782252.259306-135-146376186911530/AnsiballZ_dnf.py'
Nov 22 03:30:52 compute-0 sudo[119420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:52 compute-0 ceph-mon[75011]: 10.d scrub starts
Nov 22 03:30:52 compute-0 ceph-mon[75011]: 10.d scrub ok
Nov 22 03:30:52 compute-0 ceph-mon[75011]: pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:52 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.c scrub starts
Nov 22 03:30:52 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.c scrub ok
Nov 22 03:30:53 compute-0 python3.9[119422]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:30:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:54 compute-0 ceph-mon[75011]: 11.c scrub starts
Nov 22 03:30:54 compute-0 ceph-mon[75011]: 11.c scrub ok
Nov 22 03:30:54 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Nov 22 03:30:54 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Nov 22 03:30:54 compute-0 sudo[119420]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:54 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.13 deep-scrub starts
Nov 22 03:30:54 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.13 deep-scrub ok
Nov 22 03:30:55 compute-0 ceph-mon[75011]: pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:55 compute-0 ceph-mon[75011]: 10.17 scrub starts
Nov 22 03:30:55 compute-0 ceph-mon[75011]: 10.17 scrub ok
Nov 22 03:30:55 compute-0 ceph-mon[75011]: 11.13 deep-scrub starts
Nov 22 03:30:55 compute-0 ceph-mon[75011]: 11.13 deep-scrub ok
Nov 22 03:30:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:55 compute-0 sudo[119573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuvljwgmqyqkelfjnchpupfnzhwivylz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782255.3363454-146-25738725739041/AnsiballZ_setup.py'
Nov 22 03:30:55 compute-0 sudo[119573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:55 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Nov 22 03:30:55 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Nov 22 03:30:55 compute-0 python3.9[119575]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:30:56 compute-0 sudo[119573]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:56 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.e scrub starts
Nov 22 03:30:56 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.e scrub ok
Nov 22 03:30:56 compute-0 ceph-mon[75011]: 11.16 scrub starts
Nov 22 03:30:56 compute-0 ceph-mon[75011]: 11.16 scrub ok
Nov 22 03:30:56 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Nov 22 03:30:56 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Nov 22 03:30:56 compute-0 sudo[119727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kligcqhkionmopxiqgzyfwuwgvfksrav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782256.3656204-154-167724060200601/AnsiballZ_stat.py'
Nov 22 03:30:56 compute-0 sudo[119727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:56 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Nov 22 03:30:56 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Nov 22 03:30:56 compute-0 python3.9[119729]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:30:56 compute-0 sudo[119727]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:30:57 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.e scrub starts
Nov 22 03:30:57 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.e scrub ok
Nov 22 03:30:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:57 compute-0 ceph-mon[75011]: pgmap v310: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:57 compute-0 ceph-mon[75011]: 10.e scrub starts
Nov 22 03:30:57 compute-0 ceph-mon[75011]: 10.e scrub ok
Nov 22 03:30:57 compute-0 ceph-mon[75011]: 9.6 scrub starts
Nov 22 03:30:57 compute-0 ceph-mon[75011]: 9.6 scrub ok
Nov 22 03:30:57 compute-0 ceph-mon[75011]: 11.1d scrub starts
Nov 22 03:30:57 compute-0 ceph-mon[75011]: 11.1d scrub ok
Nov 22 03:30:57 compute-0 sudo[119879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znizxtldsechvrjynfgywhtvpxcwjjvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782257.3495927-163-38382699257578/AnsiballZ_stat.py'
Nov 22 03:30:57 compute-0 sudo[119879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:57 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Nov 22 03:30:57 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Nov 22 03:30:57 compute-0 python3.9[119881]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:30:57 compute-0 sudo[119879]: pam_unix(sudo:session): session closed for user root
Nov 22 03:30:58 compute-0 sudo[120031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osxdfxrkftvzfgquvbtmvrtcaruzhmbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782258.3750937-173-79630642340726/AnsiballZ_command.py'
Nov 22 03:30:58 compute-0 sudo[120031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:30:58 compute-0 python3.9[120033]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:30:58 compute-0 ceph-mon[75011]: 9.e scrub starts
Nov 22 03:30:58 compute-0 ceph-mon[75011]: 9.e scrub ok
Nov 22 03:30:58 compute-0 ceph-mon[75011]: pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:58 compute-0 ceph-mon[75011]: 10.1a scrub starts
Nov 22 03:30:58 compute-0 ceph-mon[75011]: 10.1a scrub ok
Nov 22 03:30:59 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Nov 22 03:30:59 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Nov 22 03:30:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:30:59 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Nov 22 03:30:59 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Nov 22 03:31:00 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Nov 22 03:31:00 compute-0 sudo[120031]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:00 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Nov 22 03:31:00 compute-0 ceph-mon[75011]: 10.16 scrub starts
Nov 22 03:31:00 compute-0 ceph-mon[75011]: 10.16 scrub ok
Nov 22 03:31:00 compute-0 ceph-mon[75011]: 10.6 scrub starts
Nov 22 03:31:00 compute-0 ceph-mon[75011]: 10.6 scrub ok
Nov 22 03:31:01 compute-0 sudo[120184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wncynpysymylrghfxsrhzqoosktfkpkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782260.6347654-183-260157065569377/AnsiballZ_service_facts.py'
Nov 22 03:31:01 compute-0 sudo[120184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:01 compute-0 python3.9[120186]: ansible-service_facts Invoked
Nov 22 03:31:01 compute-0 network[120203]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:31:01 compute-0 network[120204]: 'network-scripts' will be removed from the distribution in the near future.
Nov 22 03:31:01 compute-0 network[120205]: It is advised to switch to 'NetworkManager' for network management instead.
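This deprecation notice fires because the ansible-service_facts call at 03:31:01 enumerated the legacy SysV 'network' initscript (the systemd-sysv-generator warning at 03:31:44 below is the same unit). A hedged sketch for checking where such a host stands in the migration; both units may coexist during a transition:

    import subprocess
    # Report enablement of the legacy initscript vs. NetworkManager.
    for unit in ("network", "NetworkManager"):
        out = subprocess.run(["systemctl", "is-enabled", unit],
                             capture_output=True, text=True).stdout.strip()
        print(unit, "->", out or "unknown")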
Nov 22 03:31:02 compute-0 ceph-mon[75011]: pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:02 compute-0 ceph-mon[75011]: 10.1 scrub starts
Nov 22 03:31:02 compute-0 ceph-mon[75011]: 10.1 scrub ok
Nov 22 03:31:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:31:02 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Nov 22 03:31:02 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Nov 22 03:31:03 compute-0 ceph-mon[75011]: pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:03 compute-0 ceph-mon[75011]: 9.7 scrub starts
Nov 22 03:31:03 compute-0 ceph-mon[75011]: 9.7 scrub ok
Nov 22 03:31:03 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Nov 22 03:31:03 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Nov 22 03:31:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:03 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.b deep-scrub starts
Nov 22 03:31:03 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.b deep-scrub ok
Nov 22 03:31:04 compute-0 ceph-mon[75011]: 10.1e scrub starts
Nov 22 03:31:04 compute-0 ceph-mon[75011]: 10.1e scrub ok
Nov 22 03:31:04 compute-0 ceph-mon[75011]: 10.b deep-scrub starts
Nov 22 03:31:04 compute-0 ceph-mon[75011]: 10.b deep-scrub ok
Nov 22 03:31:05 compute-0 sudo[120184]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:05 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Nov 22 03:31:05 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Nov 22 03:31:05 compute-0 ceph-mon[75011]: pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:05 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.17 deep-scrub starts
Nov 22 03:31:05 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.17 deep-scrub ok
Nov 22 03:31:05 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.f scrub starts
Nov 22 03:31:05 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.f scrub ok
Nov 22 03:31:06 compute-0 sudo[120488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmxiqbbvglessgufcirxtndayqjaeyte ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1763782265.932425-198-161301304451531/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1763782265.932425-198-161301304451531/args'
Nov 22 03:31:06 compute-0 sudo[120488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:31:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:31:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:31:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:31:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:31:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:31:06 compute-0 sudo[120488]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:06 compute-0 ceph-mon[75011]: 9.9 scrub starts
Nov 22 03:31:06 compute-0 ceph-mon[75011]: 9.9 scrub ok
Nov 22 03:31:06 compute-0 ceph-mon[75011]: 9.17 deep-scrub starts
Nov 22 03:31:06 compute-0 ceph-mon[75011]: 9.17 deep-scrub ok
Nov 22 03:31:06 compute-0 sudo[120655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyjhlladzghtkwltcxqerkjqvkgqbxzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782266.6754422-209-266102010510937/AnsiballZ_dnf.py'
Nov 22 03:31:06 compute-0 sudo[120655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:07 compute-0 python3.9[120657]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
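The dnf task above uses state=present, i.e. it is idempotent: chrony is installed only if it is missing, and an already-installed package reports no change. A rough equivalent outside Ansible, sketched with standard RHEL tooling (the wrapper function itself is illustrative):

    import subprocess
    def ensure_installed(pkg: str) -> bool:
        """Return True if a change was made, mimicking state=present."""
        if subprocess.run(["rpm", "-q", pkg], capture_output=True).returncode == 0:
            return False  # already installed, nothing to do
        subprocess.run(["dnf", "-y", "install", pkg], check=True)
        return True
    if __name__ == "__main__":
        print("changed:", ensure_installed("chrony"))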
Nov 22 03:31:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:31:07 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Nov 22 03:31:07 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Nov 22 03:31:07 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.f scrub starts
Nov 22 03:31:07 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.f scrub ok
Nov 22 03:31:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:07 compute-0 ceph-mon[75011]: pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:07 compute-0 ceph-mon[75011]: 10.f scrub starts
Nov 22 03:31:07 compute-0 ceph-mon[75011]: 10.f scrub ok
Nov 22 03:31:08 compute-0 sudo[120655]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:08 compute-0 ceph-mon[75011]: 9.5 scrub starts
Nov 22 03:31:08 compute-0 ceph-mon[75011]: 9.5 scrub ok
Nov 22 03:31:08 compute-0 ceph-mon[75011]: 9.f scrub starts
Nov 22 03:31:08 compute-0 ceph-mon[75011]: 9.f scrub ok
Nov 22 03:31:08 compute-0 ceph-mon[75011]: pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:09 compute-0 sudo[120808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spyhkcyjrjqedkpfksdoxftaakqyfwdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782268.8616383-222-49976586199494/AnsiballZ_package_facts.py'
Nov 22 03:31:09 compute-0 sudo[120808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:09 compute-0 python3.9[120810]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 22 03:31:09 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Nov 22 03:31:10 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Nov 22 03:31:10 compute-0 sudo[120808]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:10 compute-0 ceph-mon[75011]: pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:10 compute-0 sudo[120960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyjdscrwsethcbufhnrhxrxlsuhhvhhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782270.7299035-232-12052910523289/AnsiballZ_stat.py'
Nov 22 03:31:10 compute-0 sudo[120960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:11 compute-0 python3.9[120962]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:31:11 compute-0 sudo[120960]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:11 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.18 deep-scrub starts
Nov 22 03:31:11 compute-0 sudo[121038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxwxfkrgrrsxjjnjdscbualetieosumk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782270.7299035-232-12052910523289/AnsiballZ_file.py'
Nov 22 03:31:11 compute-0 sudo[121038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:11 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.18 deep-scrub ok
Nov 22 03:31:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:11 compute-0 python3.9[121040]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:11 compute-0 sudo[121038]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:11 compute-0 ceph-mon[75011]: 10.19 scrub starts
Nov 22 03:31:11 compute-0 ceph-mon[75011]: 10.19 scrub ok
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:31:11.675296) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782271675372, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7366, "num_deletes": 251, "total_data_size": 9659494, "memory_usage": 9842944, "flush_reason": "Manual Compaction"}
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782271746572, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7838380, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 7504, "table_properties": {"data_size": 7810500, "index_size": 18357, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 78458, "raw_average_key_size": 23, "raw_value_size": 7745391, "raw_average_value_size": 2305, "num_data_blocks": 804, "num_entries": 3359, "num_filter_entries": 3359, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781830, "oldest_key_time": 1763781830, "file_creation_time": 1763782271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 71407 microseconds, and 25227 cpu microseconds.
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:31:11.746706) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7838380 bytes OK
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:31:11.746751) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:31:11.748100) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:31:11.748113) EVENT_LOG_v1 {"time_micros": 1763782271748109, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:31:11.748130) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9627133, prev total WAL file size 9627133, number of live WAL files 2.
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:31:11.750210) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7654KB) 13(53KB) 8(1944B)]
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782271750271, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7895585, "oldest_snapshot_seqno": -1}
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3175 keys, 7850954 bytes, temperature: kUnknown
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782271815541, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7850954, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7823482, "index_size": 18402, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8005, "raw_key_size": 76530, "raw_average_key_size": 24, "raw_value_size": 7759941, "raw_average_value_size": 2444, "num_data_blocks": 808, "num_entries": 3175, "num_filter_entries": 3175, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763782271, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:31:11.815885) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7850954 bytes
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:31:11.817588) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.7 rd, 120.1 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3465, records dropped: 290 output_compression: NoCompression
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:31:11.817643) EVENT_LOG_v1 {"time_micros": 1763782271817620, "job": 4, "event": "compaction_finished", "compaction_time_micros": 65397, "compaction_time_cpu_micros": 15247, "output_level": 6, "num_output_files": 1, "total_output_size": 7850954, "num_input_records": 3465, "num_output_records": 3175, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782271820504, "job": 4, "event": "table_file_deletion", "file_number": 19}
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782271820604, "job": 4, "event": "table_file_deletion", "file_number": 13}
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782271820661, "job": 4, "event": "table_file_deletion", "file_number": 8}
Nov 22 03:31:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:31:11.750128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
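The rocksdb flush/compaction burst above is internally consistent: job 4 read 7,895,585 bytes (input_data_size) in 65,397 µs (compaction_time_micros) and wrote 7,850,954 bytes, which reproduces the reported "120.7 rd, 120.1 wr" MB/sec and the amplification figures. A quick check that recomputes them from the EVENT_LOG_v1 values quoted in the lines above:

    # Recompute the rocksdb job-4 summary figures from the event log above.
    input_bytes  = 7_895_585   # "input_data_size"
    output_bytes = 7_850_954   # "total_output_size"
    micros       = 65_397      # "compaction_time_micros"
    # bytes per microsecond is numerically MB/s (decimal).
    print(f"{input_bytes / micros:.1f} MB/s rd")    # -> 120.7
    print(f"{output_bytes / micros:.1f} MB/s wr")   # -> 120.1
    print(f"write-amplify {output_bytes / input_bytes:.1f}")                     # -> 1.0
    print(f"read-write-amplify {(input_bytes + output_bytes) / input_bytes:.1f}") # -> 2.0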
Nov 22 03:31:12 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.10 deep-scrub starts
Nov 22 03:31:12 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.10 deep-scrub ok
Nov 22 03:31:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:31:12 compute-0 sudo[121191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqiljqtghvjmnyenlmhcndpnttlxtxxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782272.0675673-244-63387532855471/AnsiballZ_stat.py'
Nov 22 03:31:12 compute-0 sudo[121191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:12 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Nov 22 03:31:12 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Nov 22 03:31:12 compute-0 python3.9[121193]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:31:12 compute-0 sudo[121191]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:12 compute-0 ceph-mon[75011]: 9.18 deep-scrub starts
Nov 22 03:31:12 compute-0 ceph-mon[75011]: 9.18 deep-scrub ok
Nov 22 03:31:12 compute-0 ceph-mon[75011]: pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:12 compute-0 sudo[121269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkqlopmlcqhtgynzqcdxkfgxxbvdugru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782272.0675673-244-63387532855471/AnsiballZ_file.py'
Nov 22 03:31:12 compute-0 sudo[121269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:12 compute-0 python3.9[121271]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:12 compute-0 sudo[121269]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:13 compute-0 ceph-mon[75011]: 10.10 deep-scrub starts
Nov 22 03:31:13 compute-0 ceph-mon[75011]: 10.10 deep-scrub ok
Nov 22 03:31:13 compute-0 ceph-mon[75011]: 9.11 scrub starts
Nov 22 03:31:13 compute-0 ceph-mon[75011]: 9.11 scrub ok
Nov 22 03:31:14 compute-0 sudo[121421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjdxvimwncemgbkdvjxsohfrgillltje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782273.6515343-262-235318892006162/AnsiballZ_lineinfile.py'
Nov 22 03:31:14 compute-0 sudo[121421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:14 compute-0 python3.9[121423]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:14 compute-0 sudo[121421]: pam_unix(sudo:session): session closed for user root
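The ansible-lineinfile task above guarantees exactly one PEERNTP=no line in /etc/sysconfig/network, replacing an existing PEERNTP= setting or appending one, so the legacy network scripts do not start NTP peers behind chrony's back. The core lineinfile semantics, sketched (path, regexp, and line taken from the invocation above; the helper itself is illustrative, and by default lineinfile replaces the last matching line):

    import re
    from pathlib import Path
    def line_in_file(path: str, regexp: str, line: str) -> None:
        """Replace the last line matching regexp, else append line."""
        p = Path(path)
        lines = p.read_text().splitlines() if p.exists() else []
        pat = re.compile(regexp)
        matches = [i for i, text in enumerate(lines) if pat.search(text)]
        if matches:
            lines[matches[-1]] = line
        else:
            lines.append(line)
        p.write_text("\n".join(lines) + "\n")
    line_in_file("/etc/sysconfig/network", r"^PEERNTP=", "PEERNTP=no")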
Nov 22 03:31:14 compute-0 ceph-mon[75011]: pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:15 compute-0 sudo[121573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejhbfsnpiemgdhfrdalsutvrvoczkquk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782275.0560138-277-106578450955911/AnsiballZ_setup.py'
Nov 22 03:31:15 compute-0 sudo[121573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:15 compute-0 python3.9[121575]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:31:15 compute-0 sudo[121573]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:16 compute-0 sudo[121657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdnihxelpdjbtvevxqjwwnqqnllzbikn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782275.0560138-277-106578450955911/AnsiballZ_systemd.py'
Nov 22 03:31:16 compute-0 sudo[121657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:16 compute-0 python3.9[121659]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:31:16 compute-0 sudo[121657]: pam_unix(sudo:session): session closed for user root
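The systemd task just above (enabled=True, state=started) is the final step of the chrony rollout that began with the dnf install at 03:31:07 and the chrony.conf/sysconfig templating in between. As plain systemctl verbs it amounts to roughly the following sketch (not what AnsiballZ literally runs):

    import subprocess
    # enabled=True + state=started for chronyd, as imperative systemctl calls.
    subprocess.run(["systemctl", "enable", "chronyd"], check=True)
    subprocess.run(["systemctl", "start", "chronyd"], check=True)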
Nov 22 03:31:16 compute-0 ceph-mon[75011]: pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:31:17 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.b scrub starts
Nov 22 03:31:17 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.b scrub ok
Nov 22 03:31:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:17 compute-0 sshd-session[116357]: Connection closed by 192.168.122.30 port 40096
Nov 22 03:31:17 compute-0 sshd-session[116330]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:31:17 compute-0 systemd-logind[799]: Session 38 logged out. Waiting for processes to exit.
Nov 22 03:31:17 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Nov 22 03:31:17 compute-0 systemd[1]: session-38.scope: Consumed 27.163s CPU time.
Nov 22 03:31:17 compute-0 systemd-logind[799]: Removed session 38.
Nov 22 03:31:17 compute-0 ceph-mon[75011]: 9.b scrub starts
Nov 22 03:31:17 compute-0 ceph-mon[75011]: 9.b scrub ok
Nov 22 03:31:18 compute-0 ceph-mon[75011]: pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:19 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Nov 22 03:31:19 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Nov 22 03:31:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:20 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Nov 22 03:31:20 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Nov 22 03:31:20 compute-0 ceph-mon[75011]: 10.13 scrub starts
Nov 22 03:31:20 compute-0 ceph-mon[75011]: 10.13 scrub ok
Nov 22 03:31:21 compute-0 ceph-mon[75011]: pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:21 compute-0 ceph-mon[75011]: 10.12 scrub starts
Nov 22 03:31:21 compute-0 ceph-mon[75011]: 10.12 scrub ok
Nov 22 03:31:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:31:22 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Nov 22 03:31:22 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Nov 22 03:31:22 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Nov 22 03:31:22 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Nov 22 03:31:22 compute-0 sshd-session[121686]: Accepted publickey for zuul from 192.168.122.30 port 40954 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:31:22 compute-0 systemd-logind[799]: New session 39 of user zuul.
Nov 22 03:31:22 compute-0 systemd[1]: Started Session 39 of User zuul.
Nov 22 03:31:22 compute-0 sshd-session[121686]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:31:22 compute-0 ceph-mon[75011]: pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:23 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.d scrub starts
Nov 22 03:31:23 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.d scrub ok
Nov 22 03:31:23 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.c scrub starts
Nov 22 03:31:23 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.c scrub ok
Nov 22 03:31:23 compute-0 sudo[121839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogtecubrxoqjpoqyhdjkkcoqezowzvzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782283.0209885-22-110495847593693/AnsiballZ_file.py'
Nov 22 03:31:23 compute-0 sudo[121839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:23 compute-0 python3.9[121841]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:23 compute-0 sudo[121839]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:23 compute-0 ceph-mon[75011]: 9.1b scrub starts
Nov 22 03:31:23 compute-0 ceph-mon[75011]: 9.1b scrub ok
Nov 22 03:31:23 compute-0 ceph-mon[75011]: 9.8 scrub starts
Nov 22 03:31:23 compute-0 ceph-mon[75011]: 9.8 scrub ok
Nov 22 03:31:24 compute-0 sudo[121991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwfcthivmumnggfuuzcqgvsobvaddump ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782283.9799843-34-223061510398928/AnsiballZ_stat.py'
Nov 22 03:31:24 compute-0 sudo[121991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:24 compute-0 python3.9[121993]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:31:24 compute-0 sudo[121991]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:24 compute-0 ceph-mon[75011]: 9.d scrub starts
Nov 22 03:31:24 compute-0 ceph-mon[75011]: 9.d scrub ok
Nov 22 03:31:24 compute-0 ceph-mon[75011]: 9.c scrub starts
Nov 22 03:31:24 compute-0 ceph-mon[75011]: 9.c scrub ok
Nov 22 03:31:24 compute-0 ceph-mon[75011]: pgmap v324: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:24 compute-0 sudo[122069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqprntivygeettjvhvznxnjgrikgnuot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782283.9799843-34-223061510398928/AnsiballZ_file.py'
Nov 22 03:31:24 compute-0 sudo[122069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:25 compute-0 python3.9[122071]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:25 compute-0 sudo[122069]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:25 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.13 deep-scrub starts
Nov 22 03:31:25 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.13 deep-scrub ok
Nov 22 03:31:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:25 compute-0 sshd-session[121689]: Connection closed by 192.168.122.30 port 40954
Nov 22 03:31:25 compute-0 sshd-session[121686]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:31:25 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Nov 22 03:31:25 compute-0 systemd[1]: session-39.scope: Consumed 1.792s CPU time.
Nov 22 03:31:25 compute-0 systemd-logind[799]: Session 39 logged out. Waiting for processes to exit.
Nov 22 03:31:25 compute-0 systemd-logind[799]: Removed session 39.
Nov 22 03:31:25 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Nov 22 03:31:26 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Nov 22 03:31:26 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.1 deep-scrub starts
Nov 22 03:31:26 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.1 deep-scrub ok
Nov 22 03:31:26 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Nov 22 03:31:26 compute-0 ceph-osd[90752]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Nov 22 03:31:26 compute-0 ceph-mon[75011]: 9.13 deep-scrub starts
Nov 22 03:31:26 compute-0 ceph-mon[75011]: 9.13 deep-scrub ok
Nov 22 03:31:26 compute-0 ceph-mon[75011]: pgmap v325: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:26 compute-0 ceph-mon[75011]: 10.11 scrub starts
Nov 22 03:31:26 compute-0 ceph-mon[75011]: 10.11 scrub ok
Nov 22 03:31:26 compute-0 ceph-mon[75011]: 9.1 deep-scrub starts
Nov 22 03:31:26 compute-0 ceph-mon[75011]: 9.1 deep-scrub ok
Nov 22 03:31:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:31:27 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Nov 22 03:31:27 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Nov 22 03:31:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:28 compute-0 ceph-mon[75011]: 9.19 scrub starts
Nov 22 03:31:28 compute-0 ceph-mon[75011]: 9.19 scrub ok
Nov 22 03:31:28 compute-0 ceph-mon[75011]: 9.3 scrub starts
Nov 22 03:31:28 compute-0 ceph-mon[75011]: 9.3 scrub ok
Nov 22 03:31:28 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Nov 22 03:31:28 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Nov 22 03:31:29 compute-0 ceph-mon[75011]: pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:29 compute-0 ceph-mon[75011]: 9.1d scrub starts
Nov 22 03:31:29 compute-0 ceph-mon[75011]: 9.1d scrub ok
Nov 22 03:31:29 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.16 deep-scrub starts
Nov 22 03:31:29 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.16 deep-scrub ok
Nov 22 03:31:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:30 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Nov 22 03:31:30 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Nov 22 03:31:30 compute-0 ceph-mon[75011]: 9.16 deep-scrub starts
Nov 22 03:31:30 compute-0 ceph-mon[75011]: 9.16 deep-scrub ok
Nov 22 03:31:30 compute-0 sshd-session[122096]: Accepted publickey for zuul from 192.168.122.30 port 40958 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:31:30 compute-0 systemd-logind[799]: New session 40 of user zuul.
Nov 22 03:31:30 compute-0 systemd[1]: Started Session 40 of User zuul.
Nov 22 03:31:30 compute-0 sshd-session[122096]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:31:31 compute-0 ceph-mon[75011]: pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:31 compute-0 ceph-mon[75011]: 10.14 scrub starts
Nov 22 03:31:31 compute-0 ceph-mon[75011]: 10.14 scrub ok
Nov 22 03:31:31 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Nov 22 03:31:31 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Nov 22 03:31:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:31 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Nov 22 03:31:31 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Nov 22 03:31:32 compute-0 ceph-mon[75011]: 9.1c scrub starts
Nov 22 03:31:32 compute-0 ceph-mon[75011]: 9.1c scrub ok
Nov 22 03:31:32 compute-0 ceph-mon[75011]: 10.2 scrub starts
Nov 22 03:31:32 compute-0 ceph-mon[75011]: 10.2 scrub ok
Nov 22 03:31:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:31:32 compute-0 python3.9[122249]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:31:33 compute-0 ceph-mon[75011]: pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:33 compute-0 sudo[122403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bseednycumsboohbxxdxinfsivlsvkjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782292.9052978-33-175421944180225/AnsiballZ_file.py'
Nov 22 03:31:33 compute-0 sudo[122403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:33 compute-0 python3.9[122405]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:33 compute-0 sudo[122403]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:33 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Nov 22 03:31:33 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Nov 22 03:31:34 compute-0 ceph-mon[75011]: 9.15 scrub starts
Nov 22 03:31:34 compute-0 ceph-mon[75011]: 9.15 scrub ok
Nov 22 03:31:34 compute-0 sudo[122580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scgpivpuhsgrmyhqfhnxpatlqpgydswc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782293.8336077-41-114524530014571/AnsiballZ_stat.py'
Nov 22 03:31:34 compute-0 sudo[122580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:34 compute-0 python3.9[122582]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:31:34 compute-0 sudo[122580]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:34 compute-0 sshd-session[122482]: Invalid user  from 43.163.97.137 port 20817
Nov 22 03:31:34 compute-0 sudo[122658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmcapsqqjeohkldwjsxzaqymghvytlce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782293.8336077-41-114524530014571/AnsiballZ_file.py'
Nov 22 03:31:34 compute-0 sudo[122658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:34 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.1f deep-scrub starts
Nov 22 03:31:34 compute-0 python3.9[122660]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.clm6woga recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:34 compute-0 sudo[122658]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:35 compute-0 ceph-osd[89585]: log_channel(cluster) log [DBG] : 9.1f deep-scrub ok
Nov 22 03:31:35 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Nov 22 03:31:35 compute-0 ceph-osd[88575]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Nov 22 03:31:35 compute-0 ceph-mon[75011]: pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:35 compute-0 ceph-mon[75011]: 9.1f deep-scrub starts
Nov 22 03:31:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:35 compute-0 sudo[122810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdesgydhhyxbrflijephovxaqzvqwhec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782295.4463627-61-13617204412805/AnsiballZ_stat.py'
Nov 22 03:31:35 compute-0 sudo[122810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:35 compute-0 python3.9[122812]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:31:35 compute-0 sudo[122810]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:31:36
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'backups']
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
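In the balancer pass above, the mgr wakes in upmap mode, walks the listed pools, and prepares 0 of its per-pass budget of 10 upmap changes because all 305 PGs are already active+clean and nothing is misplaced. Its "max misplaced 0.050000" cap bounds how much data it would ever set in motion at once; back-of-the-envelope, for this cluster:

    # Headroom implied by the balancer settings logged above.
    pgs, max_misplaced = 305, 0.05
    print(int(pgs * max_misplaced))  # -> 15 PGs may be misplaced at once
    # "prepared 0/10 changes": 0 upmap optimizations out of a 10-change budget.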
Nov 22 03:31:36 compute-0 sudo[122888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxofkwfffspjawtxkhsoreflnmephnhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782295.4463627-61-13617204412805/AnsiballZ_file.py'
Nov 22 03:31:36 compute-0 sudo[122888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:31:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:31:36 compute-0 ceph-mon[75011]: 9.1f deep-scrub ok
Nov 22 03:31:36 compute-0 ceph-mon[75011]: 9.1e scrub starts
Nov 22 03:31:36 compute-0 ceph-mon[75011]: 9.1e scrub ok
Nov 22 03:31:36 compute-0 python3.9[122890]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.e8fgp249 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:36 compute-0 sudo[122888]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:37 compute-0 sudo[123040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obscvhymwohklbhmutmssgwlfbflnwtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782296.81088-74-280410583409175/AnsiballZ_file.py'
Nov 22 03:31:37 compute-0 sudo[123040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:31:37 compute-0 python3.9[123042]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:31:37 compute-0 sudo[123040]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:37 compute-0 ceph-mon[75011]: pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:37 compute-0 sudo[123192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okzuaqwopmleeujxxcbfapqnnvcbpgeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782297.5768793-82-58408919538316/AnsiballZ_stat.py'
Nov 22 03:31:37 compute-0 sudo[123192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:37 compute-0 python3.9[123194]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:31:38 compute-0 sudo[123192]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:38 compute-0 sudo[123270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubhpzqnjmajsbtcezsdujfhubpjffbmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782297.5768793-82-58408919538316/AnsiballZ_file.py'
Nov 22 03:31:38 compute-0 sudo[123270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:38 compute-0 python3.9[123272]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:31:38 compute-0 sudo[123270]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:39 compute-0 sudo[123422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpjyyozdjkcwfhkbokcnwnqqssyrrrlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782298.8320546-82-227470088098553/AnsiballZ_stat.py'
Nov 22 03:31:39 compute-0 sudo[123422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:39 compute-0 python3.9[123424]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:31:39 compute-0 sudo[123422]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:39 compute-0 ceph-mon[75011]: pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:39 compute-0 sudo[123500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtxyencufjseyiubhoekgqckrfylbaae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782298.8320546-82-227470088098553/AnsiballZ_file.py'
Nov 22 03:31:39 compute-0 sudo[123500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:39 compute-0 python3.9[123502]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:31:39 compute-0 sudo[123500]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:40 compute-0 sudo[123652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kckksldeuodhqjqlyxovxqzxiaziggsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782299.9963796-105-158512045729858/AnsiballZ_file.py'
Nov 22 03:31:40 compute-0 sudo[123652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:40 compute-0 python3.9[123654]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:40 compute-0 sudo[123652]: pam_unix(sudo:session): session closed for user root
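Note the mode=420 in the file task above, in contrast to mode=0644 elsewhere in this run: most likely the playbook passed the mode as an unquoted YAML integer, so octal 0644 arrived as decimal 420. The permission bits are identical either way, since 0o644 == 420, so the directory still ends up rw-r--r-- plus execute for the owner-traversal semantics of a directory; quoting the mode ('0644') avoids relying on that coincidence. A one-line check:

    # 420 decimal is exactly 0o644, so this task sets the intended bits.
    assert 0o644 == 420
    print(oct(420))  # -> 0o644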
Nov 22 03:31:41 compute-0 sudo[123804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sryoocmfstyjushcuaqyabocbvanvscr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782300.8239207-113-263338497883392/AnsiballZ_stat.py'
Nov 22 03:31:41 compute-0 sudo[123804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:41 compute-0 python3.9[123806]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:31:41 compute-0 sudo[123804]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:41 compute-0 ceph-mon[75011]: pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:41 compute-0 sudo[123882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlkcdsqcbjesbfhpghzqqpcuacyrnmfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782300.8239207-113-263338497883392/AnsiballZ_file.py'
Nov 22 03:31:41 compute-0 sudo[123882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:41 compute-0 python3.9[123884]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:41 compute-0 sudo[123882]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:41 compute-0 sshd-session[122482]: Connection closed by invalid user  43.163.97.137 port 20817 [preauth]
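This line closes the probe that began at 03:31:34 ("Invalid user  from 43.163.97.137"): an unrelated Internet scanner hitting sshd with an empty username while the CI run proceeds (the double space is where the username would be), disconnecting before authentication ([preauth]). A small hedged triage sketch, assuming journal text on stdin, that counts such probes per source IP:

    import re, sys
    from collections import Counter
    # Count sshd "Invalid user" probes per source IP; the username may be empty.
    probe = re.compile(r"Invalid user \S* ?from (\d+\.\d+\.\d+\.\d+)")
    hits = Counter(m.group(1) for line in sys.stdin if (m := probe.search(line)))
    for ip, n in hits.most_common():
        print(ip, n)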
Nov 22 03:31:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:31:42 compute-0 sudo[124034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqqehimnawvoxapyfhpdbciqnduommum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782302.1069155-125-31881067137687/AnsiballZ_stat.py'
Nov 22 03:31:42 compute-0 sudo[124034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:42 compute-0 python3.9[124036]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:31:42 compute-0 sudo[124034]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:42 compute-0 sudo[124112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzlqqqlyqvzgtorletpvojudvnyquslf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782302.1069155-125-31881067137687/AnsiballZ_file.py'
Nov 22 03:31:42 compute-0 sudo[124112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:43 compute-0 python3.9[124114]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:43 compute-0 sudo[124112]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:43 compute-0 ceph-mon[75011]: pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:43 compute-0 sudo[124264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frbndflxkquyovwzrtcqnwklpojybxix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782303.3901355-137-69017253994/AnsiballZ_systemd.py'
Nov 22 03:31:43 compute-0 sudo[124264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:44 compute-0 python3.9[124266]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:31:44 compute-0 systemd[1]: Reloading.
Nov 22 03:31:44 compute-0 systemd-sysv-generator[124291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:31:44 compute-0 systemd-rc-local-generator[124288]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:31:44 compute-0 sudo[124264]: pam_unix(sudo:session): session closed for user root
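
[annotation] The ansible.builtin.systemd invocation above (daemon_reload=True, enabled=True, state=started) accounts for the systemd "Reloading." line and the sysv/rc-local generator warnings that follow it: the module reloads unit files before enabling and starting the service. In rough shell terms the module performs the following sequence; this is a sketch under that reading, not the module's real code:

    import subprocess

    # Approximate equivalent of the logged module call for the
    # edpm-container-shutdown unit (requires the unit file installed,
    # as the preceding tasks just did).
    for cmd in (
        ["systemctl", "daemon-reload"],                      # the "Reloading." line
        ["systemctl", "enable", "edpm-container-shutdown"],
        ["systemctl", "start", "edpm-container-shutdown"],
    ):
        subprocess.run(cmd, check=True)
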
Nov 22 03:31:45 compute-0 sudo[124453]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmjxzzfseititzvtgxvdbjboqqruahos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782304.9105678-145-7172916778636/AnsiballZ_stat.py'
Nov 22 03:31:45 compute-0 sudo[124453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:45 compute-0 python3.9[124455]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:31:45 compute-0 sudo[124453]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:45 compute-0 ceph-mon[75011]: pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:45 compute-0 sudo[124531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlomamnonwweihuuafcatsiorhqrikyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782304.9105678-145-7172916778636/AnsiballZ_file.py'
Nov 22 03:31:45 compute-0 sudo[124531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:45 compute-0 python3.9[124533]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:45 compute-0 sudo[124531]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:31:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
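
[annotation] The pg_autoscaler figures above are internally consistent: each raw "pg target" is the pool's usage ratio times its bias times mon_target_pg_per_osd times the OSD count, before quantization against the pool's current pg_num. Assuming the Ceph default of 100 PGs per OSD (the setting itself is not logged) and the three OSDs backing this 60 GiB cluster, the logged values reproduce exactly:

    # Recompute the autoscaler's raw pg targets from the usage ratios
    # and biases logged above.
    MON_TARGET_PG_PER_OSD = 100   # Ceph default; assumption, not logged
    NUM_OSDS = 3                  # ceph_lv0..ceph_lv2, ~20 GiB each

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0),
        "default.rgw.meta":   (1.2718141564107572e-07, 4.0),
    }
    for name, (usage_ratio, bias) in pools.items():
        target = usage_ratio * bias * MON_TARGET_PG_PER_OSD * NUM_OSDS
        print(f"{name}: pg target {target}")
    # .mgr               -> 0.0021557249951162337  (quantized to 1)
    # cephfs.cephfs.meta -> 0.0006104707950771635  (quantized to 16)
    # .rgw.root          -> 7.630884938464544e-05  (quantized to 32)
    # ... matching the ceph-mgr lines above.
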
Nov 22 03:31:46 compute-0 sudo[124683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxexpatzaobsezqmcgqtpqmynvobxptt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782306.178957-157-241786648668086/AnsiballZ_stat.py'
Nov 22 03:31:46 compute-0 sudo[124683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:46 compute-0 ceph-mon[75011]: pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:46 compute-0 python3.9[124685]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:31:46 compute-0 sudo[124683]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:46 compute-0 sudo[124761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtskheoiyiirmwbigellnfwihhmgrsmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782306.178957-157-241786648668086/AnsiballZ_file.py'
Nov 22 03:31:46 compute-0 sudo[124761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:47 compute-0 sudo[124764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:31:47 compute-0 sudo[124764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:47 compute-0 sudo[124764]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:47 compute-0 python3.9[124763]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:31:47 compute-0 sudo[124761]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:47 compute-0 sudo[124789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:31:47 compute-0 sudo[124789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:47 compute-0 sudo[124789]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:47 compute-0 sudo[124814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:31:47 compute-0 sudo[124814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:47 compute-0 sudo[124814]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:47 compute-0 sudo[124863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:31:47 compute-0 sudo[124863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:47 compute-0 sudo[125032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwxlobpnciccicwryvcvycdswetqqcqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782307.4835052-169-240260328944385/AnsiballZ_systemd.py'
Nov 22 03:31:47 compute-0 sudo[125032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:47 compute-0 sudo[124863]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:31:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:31:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:31:47 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:31:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:31:47 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:31:47 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev e5983aea-be6c-4278-bcf1-e20c87f2e247 does not exist
Nov 22 03:31:47 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev b38f8309-5f44-4508-a440-3da6ac38f3d8 does not exist
Nov 22 03:31:47 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 5af2c64e-1d22-492b-a35c-54bbffbb17cc does not exist
Nov 22 03:31:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:31:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:31:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:31:47 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:31:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:31:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:31:47 compute-0 sudo[125047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:31:47 compute-0 sudo[125047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:47 compute-0 sudo[125047]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:48 compute-0 sudo[125072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:31:48 compute-0 sudo[125072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:48 compute-0 sudo[125072]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:48 compute-0 python3.9[125034]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:31:48 compute-0 systemd[1]: Reloading.
Nov 22 03:31:48 compute-0 systemd-rc-local-generator[125144]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:31:48 compute-0 systemd-sysv-generator[125148]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:31:48 compute-0 sudo[125098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:31:48 compute-0 sudo[125098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:48 compute-0 sudo[125098]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:48 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 03:31:48 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 03:31:48 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 03:31:48 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 03:31:48 compute-0 sudo[125158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:31:48 compute-0 sudo[125158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:48 compute-0 ceph-mon[75011]: pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:31:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:31:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:31:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:31:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:31:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:31:48 compute-0 sudo[125032]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:48 compute-0 podman[125304]: 2025-11-22 03:31:48.932333367 +0000 UTC m=+0.065672205 container create 85499e99d7332ae7dfa013792304cb76c607b71b929a420d707dfecd207144e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_satoshi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:31:48 compute-0 podman[125304]: 2025-11-22 03:31:48.892255858 +0000 UTC m=+0.025594716 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:31:48 compute-0 systemd[1]: Started libpod-conmon-85499e99d7332ae7dfa013792304cb76c607b71b929a420d707dfecd207144e7.scope.
Nov 22 03:31:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:31:49 compute-0 podman[125304]: 2025-11-22 03:31:49.15862283 +0000 UTC m=+0.291961668 container init 85499e99d7332ae7dfa013792304cb76c607b71b929a420d707dfecd207144e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:31:49 compute-0 podman[125304]: 2025-11-22 03:31:49.167705209 +0000 UTC m=+0.301044037 container start 85499e99d7332ae7dfa013792304cb76c607b71b929a420d707dfecd207144e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_satoshi, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:31:49 compute-0 affectionate_satoshi[125324]: 167 167
Nov 22 03:31:49 compute-0 systemd[1]: libpod-85499e99d7332ae7dfa013792304cb76c607b71b929a420d707dfecd207144e7.scope: Deactivated successfully.
Nov 22 03:31:49 compute-0 conmon[125324]: conmon 85499e99d7332ae7dfa0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-85499e99d7332ae7dfa013792304cb76c607b71b929a420d707dfecd207144e7.scope/container/memory.events
Nov 22 03:31:49 compute-0 podman[125304]: 2025-11-22 03:31:49.370860119 +0000 UTC m=+0.504198977 container attach 85499e99d7332ae7dfa013792304cb76c607b71b929a420d707dfecd207144e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 03:31:49 compute-0 podman[125304]: 2025-11-22 03:31:49.372683635 +0000 UTC m=+0.506022513 container died 85499e99d7332ae7dfa013792304cb76c607b71b929a420d707dfecd207144e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_satoshi, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:31:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:49 compute-0 python3.9[125400]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:31:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2f75defc39250155030a5290e7623dae39ca54ec6b42d4e36c37dd19f83536f-merged.mount: Deactivated successfully.
Nov 22 03:31:49 compute-0 network[125426]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:31:49 compute-0 network[125427]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:31:49 compute-0 network[125428]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:31:49 compute-0 podman[125304]: 2025-11-22 03:31:49.645221718 +0000 UTC m=+0.778560536 container remove 85499e99d7332ae7dfa013792304cb76c607b71b929a420d707dfecd207144e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_satoshi, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:31:49 compute-0 systemd[1]: libpod-conmon-85499e99d7332ae7dfa013792304cb76c607b71b929a420d707dfecd207144e7.scope: Deactivated successfully.
Nov 22 03:31:49 compute-0 podman[125441]: 2025-11-22 03:31:49.816606543 +0000 UTC m=+0.049973143 container create 878feac7f7a709be4b65d41ed8a0fc5e484a160d8413e54f10b1a521a0ccfd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_engelbart, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:31:49 compute-0 podman[125441]: 2025-11-22 03:31:49.795399652 +0000 UTC m=+0.028766292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:31:50 compute-0 systemd[1]: Started libpod-conmon-878feac7f7a709be4b65d41ed8a0fc5e484a160d8413e54f10b1a521a0ccfd66.scope.
Nov 22 03:31:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61054981c60fb7e56eb9a0653e7eb01c80fa803ee0d8d0a0ce15921819ed109/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61054981c60fb7e56eb9a0653e7eb01c80fa803ee0d8d0a0ce15921819ed109/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61054981c60fb7e56eb9a0653e7eb01c80fa803ee0d8d0a0ce15921819ed109/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61054981c60fb7e56eb9a0653e7eb01c80fa803ee0d8d0a0ce15921819ed109/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e61054981c60fb7e56eb9a0653e7eb01c80fa803ee0d8d0a0ce15921819ed109/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
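
[annotation] The repeated xfs "supports timestamps until 2038 (0x7fffffff)" kernel lines mean the underlying xfs filesystem was created without the bigtime feature, so its inode timestamps are 32-bit signed seconds; the warning is informational, not a failure. The cutoff the kernel prints is straightforward to verify:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the limit the kernel
    # is warning about.
    limit = 0x7FFFFFFF   # 2147483647, as logged
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00
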
Nov 22 03:31:50 compute-0 podman[125441]: 2025-11-22 03:31:50.403744672 +0000 UTC m=+0.637111282 container init 878feac7f7a709be4b65d41ed8a0fc5e484a160d8413e54f10b1a521a0ccfd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 03:31:50 compute-0 podman[125441]: 2025-11-22 03:31:50.425314757 +0000 UTC m=+0.658681367 container start 878feac7f7a709be4b65d41ed8a0fc5e484a160d8413e54f10b1a521a0ccfd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_engelbart, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:31:50 compute-0 podman[125441]: 2025-11-22 03:31:50.430352318 +0000 UTC m=+0.663718968 container attach 878feac7f7a709be4b65d41ed8a0fc5e484a160d8413e54f10b1a521a0ccfd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:31:50 compute-0 ceph-mon[75011]: pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:51 compute-0 recursing_engelbart[125458]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:31:51 compute-0 recursing_engelbart[125458]: --> relative data size: 1.0
Nov 22 03:31:51 compute-0 recursing_engelbart[125458]: --> All data devices are unavailable
Nov 22 03:31:51 compute-0 systemd[1]: libpod-878feac7f7a709be4b65d41ed8a0fc5e484a160d8413e54f10b1a521a0ccfd66.scope: Deactivated successfully.
Nov 22 03:31:51 compute-0 systemd[1]: libpod-878feac7f7a709be4b65d41ed8a0fc5e484a160d8413e54f10b1a521a0ccfd66.scope: Consumed 1.060s CPU time.
Nov 22 03:31:51 compute-0 podman[125441]: 2025-11-22 03:31:51.558149713 +0000 UTC m=+1.791516383 container died 878feac7f7a709be4b65d41ed8a0fc5e484a160d8413e54f10b1a521a0ccfd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:31:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e61054981c60fb7e56eb9a0653e7eb01c80fa803ee0d8d0a0ce15921819ed109-merged.mount: Deactivated successfully.
Nov 22 03:31:51 compute-0 podman[125441]: 2025-11-22 03:31:51.663082674 +0000 UTC m=+1.896449314 container remove 878feac7f7a709be4b65d41ed8a0fc5e484a160d8413e54f10b1a521a0ccfd66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_engelbart, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:31:51 compute-0 systemd[1]: libpod-conmon-878feac7f7a709be4b65d41ed8a0fc5e484a160d8413e54f10b1a521a0ccfd66.scope: Deactivated successfully.
Nov 22 03:31:51 compute-0 sudo[125158]: pam_unix(sudo:session): session closed for user root
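
[annotation] The "--> All data devices are unavailable" output from the ceph-volume lvm batch run above appears to be expected here rather than an error: each of the three LVs passed to the batch already carries ceph lv_tags for an existing OSD (see the lvm list output at 03:31:53 below), and ceph-volume does not re-consume an already-prepared device, so the batch becomes a no-op. A sketch of that availability test, as an illustration of the logic rather than ceph-volume's actual code:

    # An LV whose tags already identify a ceph OSD is treated as
    # unavailable for a new batch (hypothetical check; tag values taken
    # from the lvm list output below).
    lv_tags = {
        "ceph.osd_id": "0",
        "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
    }
    available = "ceph.osd_id" not in lv_tags
    print(available)   # -> False: ceph_lv0 already belongs to OSD 0
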
Nov 22 03:31:51 compute-0 sudo[125552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:31:51 compute-0 sudo[125552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:51 compute-0 sudo[125552]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:51 compute-0 sudo[125581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:31:51 compute-0 sudo[125581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:51 compute-0 sudo[125581]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:51 compute-0 sudo[125610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:31:51 compute-0 sudo[125610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:51 compute-0 sudo[125610]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:52 compute-0 sudo[125637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:31:52 compute-0 sudo[125637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:31:52 compute-0 podman[125713]: 2025-11-22 03:31:52.450344003 +0000 UTC m=+0.065196768 container create 96b15857ad63ba8afe7abd4ea1680a07dd84569b4bf479cb11958b0eaf682cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:31:52 compute-0 systemd[1]: Started libpod-conmon-96b15857ad63ba8afe7abd4ea1680a07dd84569b4bf479cb11958b0eaf682cb7.scope.
Nov 22 03:31:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:31:52 compute-0 podman[125713]: 2025-11-22 03:31:52.431239384 +0000 UTC m=+0.046092179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:31:52 compute-0 podman[125713]: 2025-11-22 03:31:52.533654486 +0000 UTC m=+0.148507281 container init 96b15857ad63ba8afe7abd4ea1680a07dd84569b4bf479cb11958b0eaf682cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:31:52 compute-0 podman[125713]: 2025-11-22 03:31:52.541531032 +0000 UTC m=+0.156383797 container start 96b15857ad63ba8afe7abd4ea1680a07dd84569b4bf479cb11958b0eaf682cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wilson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:31:52 compute-0 podman[125713]: 2025-11-22 03:31:52.546566844 +0000 UTC m=+0.161419609 container attach 96b15857ad63ba8afe7abd4ea1680a07dd84569b4bf479cb11958b0eaf682cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wilson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:31:52 compute-0 youthful_wilson[125734]: 167 167
Nov 22 03:31:52 compute-0 podman[125713]: 2025-11-22 03:31:52.548387761 +0000 UTC m=+0.163240526 container died 96b15857ad63ba8afe7abd4ea1680a07dd84569b4bf479cb11958b0eaf682cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:31:52 compute-0 systemd[1]: libpod-96b15857ad63ba8afe7abd4ea1680a07dd84569b4bf479cb11958b0eaf682cb7.scope: Deactivated successfully.
Nov 22 03:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-15d904866acabfdbba1fe330ee1348f59465427ac6a1036a979faadf6f039d8e-merged.mount: Deactivated successfully.
Nov 22 03:31:52 compute-0 podman[125713]: 2025-11-22 03:31:52.59899286 +0000 UTC m=+0.213845655 container remove 96b15857ad63ba8afe7abd4ea1680a07dd84569b4bf479cb11958b0eaf682cb7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wilson, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:31:52 compute-0 systemd[1]: libpod-conmon-96b15857ad63ba8afe7abd4ea1680a07dd84569b4bf479cb11958b0eaf682cb7.scope: Deactivated successfully.
Nov 22 03:31:52 compute-0 ceph-mon[75011]: pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:52 compute-0 podman[125767]: 2025-11-22 03:31:52.838413889 +0000 UTC m=+0.106083252 container create b88fcc6f2344383b9b31ed658f6e6145b02fbdfdd916f556a6b45fe85f698075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:31:52 compute-0 podman[125767]: 2025-11-22 03:31:52.770748916 +0000 UTC m=+0.038418319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:31:52 compute-0 systemd[1]: Started libpod-conmon-b88fcc6f2344383b9b31ed658f6e6145b02fbdfdd916f556a6b45fe85f698075.scope.
Nov 22 03:31:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968a9141cba8029e7ecbadb6caeef7ba35e205d91dc951db254dab425a4ced2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968a9141cba8029e7ecbadb6caeef7ba35e205d91dc951db254dab425a4ced2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968a9141cba8029e7ecbadb6caeef7ba35e205d91dc951db254dab425a4ced2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/968a9141cba8029e7ecbadb6caeef7ba35e205d91dc951db254dab425a4ced2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:53 compute-0 podman[125767]: 2025-11-22 03:31:53.016327483 +0000 UTC m=+0.283996856 container init b88fcc6f2344383b9b31ed658f6e6145b02fbdfdd916f556a6b45fe85f698075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_austin, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:31:53 compute-0 podman[125767]: 2025-11-22 03:31:53.035265956 +0000 UTC m=+0.302935319 container start b88fcc6f2344383b9b31ed658f6e6145b02fbdfdd916f556a6b45fe85f698075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_austin, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:31:53 compute-0 podman[125767]: 2025-11-22 03:31:53.064241819 +0000 UTC m=+0.331911172 container attach b88fcc6f2344383b9b31ed658f6e6145b02fbdfdd916f556a6b45fe85f698075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_austin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:31:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:53 compute-0 wizardly_austin[125792]: {
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:     "0": [
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:         {
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "devices": [
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "/dev/loop3"
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             ],
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_name": "ceph_lv0",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_size": "21470642176",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "name": "ceph_lv0",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "tags": {
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.cluster_name": "ceph",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.crush_device_class": "",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.encrypted": "0",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.osd_id": "0",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.type": "block",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.vdo": "0"
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             },
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "type": "block",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "vg_name": "ceph_vg0"
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:         }
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:     ],
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:     "1": [
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:         {
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "devices": [
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "/dev/loop4"
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             ],
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_name": "ceph_lv1",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_size": "21470642176",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "name": "ceph_lv1",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "tags": {
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.cluster_name": "ceph",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.crush_device_class": "",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.encrypted": "0",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.osd_id": "1",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.type": "block",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.vdo": "0"
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             },
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "type": "block",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "vg_name": "ceph_vg1"
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:         }
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:     ],
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:     "2": [
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:         {
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "devices": [
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "/dev/loop5"
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             ],
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_name": "ceph_lv2",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_size": "21470642176",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "name": "ceph_lv2",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "tags": {
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.cluster_name": "ceph",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.crush_device_class": "",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.encrypted": "0",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.osd_id": "2",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.type": "block",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:                 "ceph.vdo": "0"
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             },
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "type": "block",
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:             "vg_name": "ceph_vg2"
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:         }
Nov 22 03:31:53 compute-0 wizardly_austin[125792]:     ]
Nov 22 03:31:53 compute-0 wizardly_austin[125792]: }
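[annotation] The JSON above is the tail of a `ceph-volume lvm list --format json` report produced inside the short-lived ceph container `wizardly_austin`: one entry per OSD logical volume, with the `ceph.*` lv_tags (cluster_fsid, osd_fsid, osd_id, osdspec_affinity) that cephadm uses to re-associate LVs with OSDs. A minimal sketch of re-running the same inventory by hand, assuming the cephadm wrapper and the fsid that appear later in this log:

    # sketch: same inventory as the container run above (root required)
    sudo cephadm ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json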
Nov 22 03:31:53 compute-0 podman[125767]: 2025-11-22 03:31:53.840086554 +0000 UTC m=+1.107755907 container died b88fcc6f2344383b9b31ed658f6e6145b02fbdfdd916f556a6b45fe85f698075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_austin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 03:31:53 compute-0 systemd[1]: libpod-b88fcc6f2344383b9b31ed658f6e6145b02fbdfdd916f556a6b45fe85f698075.scope: Deactivated successfully.
Nov 22 03:31:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-968a9141cba8029e7ecbadb6caeef7ba35e205d91dc951db254dab425a4ced2b-merged.mount: Deactivated successfully.
Nov 22 03:31:53 compute-0 podman[125767]: 2025-11-22 03:31:53.936734527 +0000 UTC m=+1.204403860 container remove b88fcc6f2344383b9b31ed658f6e6145b02fbdfdd916f556a6b45fe85f698075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_austin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 03:31:53 compute-0 systemd[1]: libpod-conmon-b88fcc6f2344383b9b31ed658f6e6145b02fbdfdd916f556a6b45fe85f698075.scope: Deactivated successfully.
Nov 22 03:31:53 compute-0 sudo[125637]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:54 compute-0 sudo[125895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:31:54 compute-0 sudo[125895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:54 compute-0 sudo[125895]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:54 compute-0 sudo[125943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:31:54 compute-0 sudo[125943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:54 compute-0 sudo[125943]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:54 compute-0 sudo[125992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:31:54 compute-0 sudo[125992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:54 compute-0 sudo[125992]: pam_unix(sudo:session): session closed for user root
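[annotation] The repeating ceph-admin sudo entries above (`/bin/true`, `/bin/which python3`) appear to be the cephadm orchestrator's per-host connection preflight: confirm passwordless root and locate a python3 interpreter before dispatching real work. The equivalent manual check, as a sketch:

    # -n: fail instead of prompting, which is what the preflight relies on
    sudo -n /bin/true && sudo -n /bin/which python3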
Nov 22 03:31:54 compute-0 sudo[126043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaviidssxrxanquxdyslkygqwvlrymzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782313.9800847-195-210297943951809/AnsiballZ_stat.py'
Nov 22 03:31:54 compute-0 sudo[126043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:54 compute-0 sudo[126044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:31:54 compute-0 sudo[126044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:54 compute-0 python3.9[126053]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:31:54 compute-0 sudo[126043]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:54 compute-0 podman[126137]: 2025-11-22 03:31:54.647997621 +0000 UTC m=+0.048613110 container create 6599979ea3d8fefd3b17d29b4dff459101ddcee02ebde498afa836704f5b6411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wescoff, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:31:54 compute-0 systemd[1]: Started libpod-conmon-6599979ea3d8fefd3b17d29b4dff459101ddcee02ebde498afa836704f5b6411.scope.
Nov 22 03:31:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:31:54 compute-0 podman[126137]: 2025-11-22 03:31:54.626708331 +0000 UTC m=+0.027323859 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:31:54 compute-0 ceph-mon[75011]: pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
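[annotation] The pgmap lines report a healthy cluster: 305 PGs, all active+clean. The 60 GiB raw capacity is consistent with the three ~20 GiB OSD logical volumes inventoried above (lv_size 21470642176 bytes ≈ 20 GiB each). Quick arithmetic check:

    # 3 OSD LVs of 21470642176 bytes, floor-divided into GiB -> 59 (~60 GiB raw)
    echo $(( 3 * 21470642176 / 1024 / 1024 / 1024 ))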
Nov 22 03:31:54 compute-0 sudo[126205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khidouwlqcpqozcxkwaqaeklztkmpzrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782313.9800847-195-210297943951809/AnsiballZ_file.py'
Nov 22 03:31:54 compute-0 sudo[126205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:54 compute-0 podman[126137]: 2025-11-22 03:31:54.751542942 +0000 UTC m=+0.152158451 container init 6599979ea3d8fefd3b17d29b4dff459101ddcee02ebde498afa836704f5b6411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:31:54 compute-0 podman[126137]: 2025-11-22 03:31:54.763835048 +0000 UTC m=+0.164450527 container start 6599979ea3d8fefd3b17d29b4dff459101ddcee02ebde498afa836704f5b6411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wescoff, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:31:54 compute-0 podman[126137]: 2025-11-22 03:31:54.767130451 +0000 UTC m=+0.167745960 container attach 6599979ea3d8fefd3b17d29b4dff459101ddcee02ebde498afa836704f5b6411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wescoff, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:31:54 compute-0 quizzical_wescoff[126187]: 167 167
Nov 22 03:31:54 compute-0 systemd[1]: libpod-6599979ea3d8fefd3b17d29b4dff459101ddcee02ebde498afa836704f5b6411.scope: Deactivated successfully.
Nov 22 03:31:54 compute-0 podman[126137]: 2025-11-22 03:31:54.770831385 +0000 UTC m=+0.171446894 container died 6599979ea3d8fefd3b17d29b4dff459101ddcee02ebde498afa836704f5b6411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wescoff, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fb676c73c59e3a7be7ae0e542fc76643f26e75db0f02dd437775f53793bbbbc-merged.mount: Deactivated successfully.
Nov 22 03:31:54 compute-0 podman[126137]: 2025-11-22 03:31:54.815700281 +0000 UTC m=+0.216315780 container remove 6599979ea3d8fefd3b17d29b4dff459101ddcee02ebde498afa836704f5b6411 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_wescoff, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:31:54 compute-0 systemd[1]: libpod-conmon-6599979ea3d8fefd3b17d29b4dff459101ddcee02ebde498afa836704f5b6411.scope: Deactivated successfully.
Nov 22 03:31:54 compute-0 python3.9[126207]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:54 compute-0 sudo[126205]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:55 compute-0 podman[126230]: 2025-11-22 03:31:55.001623833 +0000 UTC m=+0.059209994 container create 443f08fb6fdd20d49012d269b8a89b4b7154a91f741026979ec6e9d5a8fc6ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 03:31:55 compute-0 systemd[1]: Started libpod-conmon-443f08fb6fdd20d49012d269b8a89b4b7154a91f741026979ec6e9d5a8fc6ca7.scope.
Nov 22 03:31:55 compute-0 podman[126230]: 2025-11-22 03:31:54.971747161 +0000 UTC m=+0.029333362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:31:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635aa46d4e853752488f6ca1490eebef4b107fdcda0c39be227fa392dde59abd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635aa46d4e853752488f6ca1490eebef4b107fdcda0c39be227fa392dde59abd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635aa46d4e853752488f6ca1490eebef4b107fdcda0c39be227fa392dde59abd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/635aa46d4e853752488f6ca1490eebef4b107fdcda0c39be227fa392dde59abd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
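[annotation] The four "timestamps until 2038" warnings fire as podman bind-mounts host paths (ceph.conf, /var/log/ceph, /var/lib/ceph/crash) into the container rootfs; they mean the backing xfs filesystems were made without the bigtime feature, so inode timestamps cap at 2038-01-19 (0x7fffffff). Informational only. A way to confirm, assuming the host root is the xfs filesystem in question:

    # bigtime=0 means classic 32-bit second timestamps (y2038 limit)
    xfs_info / | grep -o 'bigtime=[01]'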
Nov 22 03:31:55 compute-0 podman[126230]: 2025-11-22 03:31:55.100576297 +0000 UTC m=+0.158162538 container init 443f08fb6fdd20d49012d269b8a89b4b7154a91f741026979ec6e9d5a8fc6ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:31:55 compute-0 podman[126230]: 2025-11-22 03:31:55.113114527 +0000 UTC m=+0.170700688 container start 443f08fb6fdd20d49012d269b8a89b4b7154a91f741026979ec6e9d5a8fc6ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tu, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:31:55 compute-0 podman[126230]: 2025-11-22 03:31:55.161853151 +0000 UTC m=+0.219439352 container attach 443f08fb6fdd20d49012d269b8a89b4b7154a91f741026979ec6e9d5a8fc6ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Nov 22 03:31:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:55 compute-0 sudo[126401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zydmaphyrptohcwzoedfiacmatexddaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782315.360527-208-180254126865187/AnsiballZ_file.py'
Nov 22 03:31:55 compute-0 sudo[126401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:55 compute-0 python3.9[126403]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:55 compute-0 sudo[126401]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:56 compute-0 modest_tu[126271]: {
Nov 22 03:31:56 compute-0 modest_tu[126271]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "osd_id": 1,
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "type": "bluestore"
Nov 22 03:31:56 compute-0 modest_tu[126271]:     },
Nov 22 03:31:56 compute-0 modest_tu[126271]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "osd_id": 0,
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "type": "bluestore"
Nov 22 03:31:56 compute-0 modest_tu[126271]:     },
Nov 22 03:31:56 compute-0 modest_tu[126271]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "osd_id": 2,
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:31:56 compute-0 modest_tu[126271]:         "type": "bluestore"
Nov 22 03:31:56 compute-0 modest_tu[126271]:     }
Nov 22 03:31:56 compute-0 modest_tu[126271]: }
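[annotation] This second JSON document is the output of the `ceph-volume ... -- raw list --format json` invocation logged at sudo[126044] above, emitted by container `modest_tu`: a map keyed by osd_uuid covering all three bluestore OSDs (0-2) and the dm devices backing them. Each `device` can be cross-checked against its on-disk bluestore label, e.g. (sketch, run as root):

    # prints the bluestore label: osd_uuid, ceph_fsid, whoami, ...
    ceph-bluestore-tool show-label --dev /dev/mapper/ceph_vg1-ceph_lv1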
Nov 22 03:31:56 compute-0 systemd[1]: libpod-443f08fb6fdd20d49012d269b8a89b4b7154a91f741026979ec6e9d5a8fc6ca7.scope: Deactivated successfully.
Nov 22 03:31:56 compute-0 systemd[1]: libpod-443f08fb6fdd20d49012d269b8a89b4b7154a91f741026979ec6e9d5a8fc6ca7.scope: Consumed 1.028s CPU time.
Nov 22 03:31:56 compute-0 podman[126230]: 2025-11-22 03:31:56.139720605 +0000 UTC m=+1.197306776 container died 443f08fb6fdd20d49012d269b8a89b4b7154a91f741026979ec6e9d5a8fc6ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:31:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-635aa46d4e853752488f6ca1490eebef4b107fdcda0c39be227fa392dde59abd-merged.mount: Deactivated successfully.
Nov 22 03:31:56 compute-0 podman[126230]: 2025-11-22 03:31:56.212079265 +0000 UTC m=+1.269665465 container remove 443f08fb6fdd20d49012d269b8a89b4b7154a91f741026979ec6e9d5a8fc6ca7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_tu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:31:56 compute-0 systemd[1]: libpod-conmon-443f08fb6fdd20d49012d269b8a89b4b7154a91f741026979ec6e9d5a8fc6ca7.scope: Deactivated successfully.
Nov 22 03:31:56 compute-0 sudo[126044]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:31:56 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:31:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:31:56 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:31:56 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev c2ab839b-5dcf-416f-9028-d9d212431bb0 does not exist
Nov 22 03:31:56 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 6750f194-2d60-4dad-8790-13878a9c89fe does not exist
Nov 22 03:31:56 compute-0 sudo[126554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:31:56 compute-0 sudo[126554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:56 compute-0 sudo[126554]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:56 compute-0 sudo[126637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfdsycmmsxigwewzcuctocldpanpoqgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782316.195066-216-83035042015103/AnsiballZ_stat.py'
Nov 22 03:31:56 compute-0 sudo[126601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:31:56 compute-0 sudo[126637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:56 compute-0 sudo[126601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:31:56 compute-0 sudo[126601]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:56 compute-0 python3.9[126644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:31:56 compute-0 sudo[126637]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:56 compute-0 sudo[126720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dulmfvgsrgefhvpktxjifjoqpktxaimr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782316.195066-216-83035042015103/AnsiballZ_file.py'
Nov 22 03:31:56 compute-0 sudo[126720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:57 compute-0 python3.9[126722]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:57 compute-0 sudo[126720]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
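[annotation] The recurring `_set_new_cache_sizes` lines come from the monitor's memory autotuner, which periodically re-splits its budget between the RocksDB key/value cache and other caches (here cache_size ≈ 0.95 GiB, kv_alloc ≈ 308 MiB); the values repeat unchanged, so nothing is actually being resized. The governing target can be inspected with, as a sketch:

    # mon memory autotuning target, in bytes
    ceph config get mon mon_memory_target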
Nov 22 03:31:57 compute-0 ceph-mon[75011]: pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:31:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:31:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:57 compute-0 sudo[126872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sowpxgxbksanydepzhybxbqjcmloqbxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782317.5915008-231-107547624215717/AnsiballZ_timezone.py'
Nov 22 03:31:57 compute-0 sudo[126872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:58 compute-0 python3.9[126874]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 22 03:31:58 compute-0 systemd[1]: Starting Time & Date Service...
Nov 22 03:31:58 compute-0 systemd[1]: Started Time & Date Service.
Nov 22 03:31:58 compute-0 sudo[126872]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:59 compute-0 sudo[127028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrowvadzvickanhgfqwufsedcvvismdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782318.8083577-240-30592280110990/AnsiballZ_file.py'
Nov 22 03:31:59 compute-0 sudo[127028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:31:59 compute-0 python3.9[127030]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:31:59 compute-0 sudo[127028]: pam_unix(sudo:session): session closed for user root
Nov 22 03:31:59 compute-0 ceph-mon[75011]: pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:31:59 compute-0 sudo[127180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjthzzykotlrrjixubnknynhjjrgondi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782319.6071713-248-197267863253975/AnsiballZ_stat.py'
Nov 22 03:31:59 compute-0 sudo[127180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:00 compute-0 python3.9[127182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:32:00 compute-0 sudo[127180]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:00 compute-0 sudo[127258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qozmlkclcccapdosguasjucpfswmyhlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782319.6071713-248-197267863253975/AnsiballZ_file.py'
Nov 22 03:32:00 compute-0 sudo[127258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:00 compute-0 python3.9[127260]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:00 compute-0 sudo[127258]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:01 compute-0 sudo[127411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qocjrpxwimqhwwhntxnlazahsigftdgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782320.9565415-260-265903717313596/AnsiballZ_stat.py'
Nov 22 03:32:01 compute-0 sudo[127411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:01 compute-0 ceph-mon[75011]: pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:01 compute-0 python3.9[127413]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:32:01 compute-0 sudo[127411]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:01 compute-0 sudo[127489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcwoiabcxwxapcesrlepfwhgcmhboyqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782320.9565415-260-265903717313596/AnsiballZ_file.py'
Nov 22 03:32:01 compute-0 sudo[127489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:01 compute-0 python3.9[127491]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.urovn479 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:01 compute-0 sudo[127489]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:02 compute-0 sudo[127641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zafdkxyazfmsohaewavxfjfmqtphjxij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782322.2454963-272-13371528704901/AnsiballZ_stat.py'
Nov 22 03:32:02 compute-0 sudo[127641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:02 compute-0 python3.9[127643]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:32:02 compute-0 sudo[127641]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:02 compute-0 sudo[127719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewxrqoezxorsuyorosyshpguimfcukvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782322.2454963-272-13371528704901/AnsiballZ_file.py'
Nov 22 03:32:03 compute-0 sudo[127719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:03 compute-0 python3.9[127721]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:03 compute-0 sudo[127719]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:03 compute-0 ceph-mon[75011]: pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:03 compute-0 sudo[127871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwyzvgjfonuwmzrwwdyxsxeirhiaumlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782323.572977-285-20783711091225/AnsiballZ_command.py'
Nov 22 03:32:03 compute-0 sudo[127871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:04 compute-0 python3.9[127873]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:32:04 compute-0 sudo[127871]: pam_unix(sudo:session): session closed for user root
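[annotation] Before writing any firewall files, the playbook snapshots the live ruleset as JSON (`nft -j list ruleset`), giving it a machine-readable baseline of existing tables and chains. The same snapshot can be taken manually:

    # JSON dump of the current nftables ruleset (root required)
    nft -j list ruleset | python3 -m json.tool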
Nov 22 03:32:04 compute-0 sudo[128024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwthnjyuozdfvqysglkdgwgxzsakpmdd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763782324.489091-293-279775043692433/AnsiballZ_edpm_nftables_from_files.py'
Nov 22 03:32:04 compute-0 sudo[128024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:05 compute-0 python3[128026]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 03:32:05 compute-0 sudo[128024]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:05 compute-0 ceph-mon[75011]: pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:05 compute-0 sudo[128176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sobhbtdumjefoibjyxjcrienhtvqxeyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782325.4654982-301-56345112111521/AnsiballZ_stat.py'
Nov 22 03:32:05 compute-0 sudo[128176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:05 compute-0 python3.9[128178]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:32:05 compute-0 sudo[128176]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:06 compute-0 sudo[128254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzjmsfrbjdswwskxpsanarshontdqghm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782325.4654982-301-56345112111521/AnsiballZ_file.py'
Nov 22 03:32:06 compute-0 sudo[128254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:32:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:32:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:32:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:32:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:32:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:32:06 compute-0 python3.9[128256]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:06 compute-0 sudo[128254]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:06 compute-0 sudo[128406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghzvhqroclfsgnkervbjxwjuqtjzrkjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782326.7407157-313-119271094293345/AnsiballZ_stat.py'
Nov 22 03:32:06 compute-0 sudo[128406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:07 compute-0 python3.9[128408]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:32:07 compute-0 sudo[128406]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:07 compute-0 ceph-mon[75011]: pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:07 compute-0 sudo[128484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tftpjmrynvcpvvobsoisnbokiiuzsuut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782326.7407157-313-119271094293345/AnsiballZ_file.py'
Nov 22 03:32:07 compute-0 sudo[128484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:07 compute-0 python3.9[128486]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:07 compute-0 sudo[128484]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:08 compute-0 sudo[128636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdppjbdzgpjmnbwkvxvzadigmzwhcmsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782328.079686-325-34916209679667/AnsiballZ_stat.py'
Nov 22 03:32:08 compute-0 sudo[128636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:08 compute-0 python3.9[128638]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:32:08 compute-0 sudo[128636]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:08 compute-0 sudo[128714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouqsuwchrkbyzblofbtgqnrefswkopzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782328.079686-325-34916209679667/AnsiballZ_file.py'
Nov 22 03:32:08 compute-0 sudo[128714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:08 compute-0 python3.9[128716]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:08 compute-0 sudo[128714]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:09 compute-0 ceph-mon[75011]: pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:09 compute-0 sudo[128866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdbvbulfjgydwsjrdoqyztdexddulfit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782329.2793446-337-199814977707754/AnsiballZ_stat.py'
Nov 22 03:32:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:09 compute-0 sudo[128866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:09 compute-0 python3.9[128868]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:32:09 compute-0 sudo[128866]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:10 compute-0 sudo[128944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvdwiwbkvtcbcctqxdvyqkzpisuantfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782329.2793446-337-199814977707754/AnsiballZ_file.py'
Nov 22 03:32:10 compute-0 sudo[128944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:10 compute-0 python3.9[128946]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:10 compute-0 sudo[128944]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:10 compute-0 sudo[129096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpgbdqqugjtolsfskrixuifmappfffdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782330.6420784-349-8640378923822/AnsiballZ_stat.py'
Nov 22 03:32:10 compute-0 sudo[129096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:10 compute-0 python3.9[129098]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:32:11 compute-0 sudo[129096]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:11 compute-0 sudo[129174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvlurxpfbvqogwzydjwfwsflsquzilth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782330.6420784-349-8640378923822/AnsiballZ_file.py'
Nov 22 03:32:11 compute-0 sudo[129174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:11 compute-0 ceph-mon[75011]: pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:11 compute-0 python3.9[129176]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:11 compute-0 sudo[129174]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:11 compute-0 sudo[129326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bteoyqkdwkpylefhijckwjhwqbgsrurm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782331.8844602-362-177657430905873/AnsiballZ_command.py'
Nov 22 03:32:11 compute-0 sudo[129326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:12 compute-0 python3.9[129328]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:32:12 compute-0 sudo[129326]: pam_unix(sudo:session): session closed for user root
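[annotation] The command above is a dry run: the five edpm-*.nft fragments are concatenated in load order (chains, flushes, rules, update-jumps, jumps) and piped to `nft -c -f -`, which parses and validates the combined ruleset without committing it. Sketch of the same check:

    # -c: check only; -f -: read ruleset from stdin
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -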
Nov 22 03:32:12 compute-0 sudo[129481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikcxqrvnhomdhjhxojhjllwxcbrnwzut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782332.5791497-370-108064389363792/AnsiballZ_blockinfile.py'
Nov 22 03:32:12 compute-0 sudo[129481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:13 compute-0 python3.9[129483]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:13 compute-0 sudo[129481]: pam_unix(sudo:session): session closed for user root
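[annotation] The blockinfile task above pins the boot-time include order into /etc/sysconfig/nftables.conf, validating the file with `nft -c -f %s` before writing. Given the block= and marker= parameters in the log, the managed section should come out roughly as below; the flush and update-jump fragments are deliberately left out of the persistent config:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK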
Nov 22 03:32:13 compute-0 ceph-mon[75011]: pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:13 compute-0 sudo[129633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxcadgbetmetfmrsmdfogsijipkkvave ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782333.507985-379-18557026558854/AnsiballZ_file.py'
Nov 22 03:32:13 compute-0 sudo[129633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:13 compute-0 python3.9[129635]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:13 compute-0 sudo[129633]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:14 compute-0 sudo[129785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhvybruckrpfgjtluhntyoajdqssvjqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782334.2693567-379-225940147045169/AnsiballZ_file.py'
Nov 22 03:32:14 compute-0 sudo[129785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:14 compute-0 python3.9[129787]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:14 compute-0 sudo[129785]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:15 compute-0 ceph-mon[75011]: pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:15 compute-0 sudo[129937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hboqzjdmtvayovveaubdkzejtysrgqjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782335.0424464-394-2358308473711/AnsiballZ_mount.py'
Nov 22 03:32:15 compute-0 sudo[129937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:15 compute-0 python3.9[129939]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 03:32:15 compute-0 sudo[129937]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:16 compute-0 sudo[130089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twsynmgqdygxozdlxksgevbojyrfoimx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782336.0599194-394-163016551616727/AnsiballZ_mount.py'
Nov 22 03:32:16 compute-0 sudo[130089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:16 compute-0 python3.9[130091]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 22 03:32:16 compute-0 sudo[130089]: pam_unix(sudo:session): session closed for user root
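[annotation] The two ansible.posix.mount tasks mount separate hugetlbfs instances for 1 GiB and 2 MiB pages on the directories created just before (owner zuul, group hugetlbfs); with state=mounted and boot=True they also persist fstab entries. The manual equivalent would be roughly:

    # one hugetlbfs instance per page size
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # expected /etc/fstab entries (sketch):
    # none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    # none /dev/hugepages2M hugetlbfs pagesize=2M 0 0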
Nov 22 03:32:16 compute-0 sshd-session[122099]: Connection closed by 192.168.122.30 port 40958
Nov 22 03:32:16 compute-0 sshd-session[122096]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:32:16 compute-0 systemd-logind[799]: Session 40 logged out. Waiting for processes to exit.
Nov 22 03:32:16 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Nov 22 03:32:16 compute-0 systemd[1]: session-40.scope: Consumed 34.427s CPU time.
Nov 22 03:32:16 compute-0 systemd-logind[799]: Removed session 40.
Nov 22 03:32:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:17 compute-0 ceph-mon[75011]: pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:18 compute-0 ceph-mon[75011]: pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:20 compute-0 ceph-mon[75011]: pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:22 compute-0 sshd-session[130116]: Accepted publickey for zuul from 192.168.122.30 port 34364 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:32:22 compute-0 systemd-logind[799]: New session 41 of user zuul.
Nov 22 03:32:22 compute-0 systemd[1]: Started Session 41 of User zuul.
Nov 22 03:32:22 compute-0 sshd-session[130116]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:32:22 compute-0 ceph-mon[75011]: pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:22 compute-0 sudo[130269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kipzttkhlgwhkdndepwdrtdanqtrymmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782342.5076802-16-25355364939783/AnsiballZ_tempfile.py'
Nov 22 03:32:22 compute-0 sudo[130269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:23 compute-0 python3.9[130271]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 22 03:32:23 compute-0 sudo[130269]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:23 compute-0 sudo[130421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecfccgdhhdnwtjfbwiffderelmktfmqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782343.4755065-28-231161125546525/AnsiballZ_stat.py'
Nov 22 03:32:23 compute-0 sudo[130421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:23 compute-0 python3.9[130423]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:32:24 compute-0 sudo[130421]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:24 compute-0 ceph-mon[75011]: pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:24 compute-0 sudo[130575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dylxkjfcljlncgotmidbhblvsvmkpksv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782344.3188004-36-22724177462826/AnsiballZ_slurp.py'
Nov 22 03:32:24 compute-0 sudo[130575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:24 compute-0 python3.9[130577]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Nov 22 03:32:24 compute-0 sudo[130575]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:25 compute-0 sudo[130727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-toicejiolrmmxunbylfeiaagareoeeda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782345.1643245-44-89650815970653/AnsiballZ_stat.py'
Nov 22 03:32:25 compute-0 sudo[130727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:25 compute-0 python3.9[130729]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.9jgvy8nm follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:32:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:25 compute-0 sudo[130727]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:26 compute-0 sudo[130852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiowenfncxhfvqaqvbdqipzorharakdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782345.1643245-44-89650815970653/AnsiballZ_copy.py'
Nov 22 03:32:26 compute-0 sudo[130852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:26 compute-0 python3.9[130854]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.9jgvy8nm mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782345.1643245-44-89650815970653/.source.9jgvy8nm _original_basename=.2j4l_jyj follow=False checksum=3af3e3faf43e921b0dda76e8564359f8d62fd424 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:26 compute-0 sudo[130852]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:26 compute-0 ceph-mon[75011]: pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:27 compute-0 sudo[131004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trgnljqxxhalpfzlxjmmwzxydwnhadfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782346.7006834-59-9547040157445/AnsiballZ_setup.py'
Nov 22 03:32:27 compute-0 sudo[131004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:27 compute-0 python3.9[131006]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:32:27 compute-0 sudo[131004]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:28 compute-0 sudo[131156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omlraliwhflyjohjekgjfgrppxgckqaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782347.8649948-68-54849621212208/AnsiballZ_blockinfile.py'
Nov 22 03:32:28 compute-0 sudo[131156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:28 compute-0 python3.9[131158]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0IeFJb7kQxb2DeQqcdzq1GSLJDpNy68eOR3ZDXuWh33pI4eZUi0JHU3XECYjd3u5pQXNDYzizaDrOCFS2XxTlU6DDLuyibyynXnzR2ly7eQ8G1/oGdqUyc4BszdHthFoULrL5JRW0J8TELiUiV6bMj50x3rMa7zIxoC97SunNaUnpWEj+Ubw1Nu0xbcBsYLa44UTaQEAZlVquM6SowLCqvgeMllgv23QNftiAsZrfPsc1rZ5eJb3MQZkGIWnqC3DFNLh9g9KpGd9E8tgyGWgEU+Xen3UQgKmWy1i6xF89YHD2VdaFIozAhwSf0kt9jGXAwuZ3Q21accnFB94mFTcEGqeP/Zlo7G4XB7fgSQN1kbhOsJUm+7JuHZeSK1WUbhqFog/8SNnQgjnth1o9uesTnW/dJ1306v4DPzUqx3gH8S1pU7LJsZo+KeTsUhfNYaskZlo6XnKQvbxALvPdXjoSHcvCQ0k5NFFrxXZ8jX6v9sSCN4hzWqUcM+NdgrOVAdc=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICQJxsece1Cz2OzI46uQPE380Q0ilq7yAVzYoL0Elw+/
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKjZC6F/EUiZDzyFyU/vKwZTwRGbAqbS375B1AM5JcXnV0pA/6kr/noDVECTxeGpQEDmFInFRHuDu1kYtCCJmE8=
                                              create=True mode=0644 path=/tmp/ansible.9jgvy8nm state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:28 compute-0 sudo[131156]: pam_unix(sudo:session): session closed for user root
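Note: the blockinfile task above fences the three host-key lines between marker comments so that repeated runs replace the block idempotently. With the marker string from the invocation ("# {mark} ANSIBLE MANAGED BLOCK", marker_begin=BEGIN, marker_end=END), /tmp/ansible.9jgvy8nm ends up containing a block like the following (keys truncated here for readability):

    # BEGIN ANSIBLE MANAGED BLOCK
    compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2E...
    compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5...
    compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNh...
    # END ANSIBLE MANAGED BLOCK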
Nov 22 03:32:28 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 22 03:32:28 compute-0 ceph-mon[75011]: pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:29 compute-0 sudo[131310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmpllbyluoilgdfhmddeomnaexpwxmmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782348.7221084-76-44675323442854/AnsiballZ_command.py'
Nov 22 03:32:29 compute-0 sudo[131310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:29 compute-0 python3.9[131312]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.9jgvy8nm' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:32:29 compute-0 sudo[131310]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:30 compute-0 sudo[131464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brmkbbfshbakcjeqkpmptnescfinansm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782349.6899147-84-218185115921211/AnsiballZ_file.py'
Nov 22 03:32:30 compute-0 sudo[131464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:30 compute-0 python3.9[131466]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.9jgvy8nm state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:30 compute-0 sudo[131464]: pam_unix(sudo:session): session closed for user root
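Note: taken together, the tasks in this session implement a stage-and-swap update of the system-wide known_hosts: create a temp file, read the existing /etc/ssh/ssh_known_hosts (stat + slurp), write the managed block into the temp file, overwrite the real file with a single shell redirect (the cat ... > task), and delete the staging file. A rough Python equivalent, assuming the host-key text is already assembled in the hypothetical variable block:

    import os
    import shutil
    import tempfile

    block = "..."  # the three host-key lines from the blockinfile call above
    with tempfile.NamedTemporaryFile("w", prefix="ansible.", delete=False) as tmp:
        tmp.write("# BEGIN ANSIBLE MANAGED BLOCK\n")
        tmp.write(block + "\n")
        tmp.write("# END ANSIBLE MANAGED BLOCK\n")
    os.chmod(tmp.name, 0o644)                               # mode=0644, as logged
    shutil.copyfile(tmp.name, "/etc/ssh/ssh_known_hosts")   # the `cat ... >` step
    os.unlink(tmp.name)                                     # the file state=absent step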
Nov 22 03:32:30 compute-0 ceph-mon[75011]: pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:30 compute-0 sshd-session[130119]: Connection closed by 192.168.122.30 port 34364
Nov 22 03:32:30 compute-0 sshd-session[130116]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:32:30 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Nov 22 03:32:30 compute-0 systemd[1]: session-41.scope: Consumed 5.723s CPU time.
Nov 22 03:32:30 compute-0 systemd-logind[799]: Session 41 logged out. Waiting for processes to exit.
Nov 22 03:32:30 compute-0 systemd-logind[799]: Removed session 41.
Nov 22 03:32:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:32 compute-0 ceph-mon[75011]: pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:33 compute-0 sshd-session[71267]: Received disconnect from 38.102.83.143 port 60052:11: disconnected by user
Nov 22 03:32:33 compute-0 sshd-session[71267]: Disconnected from user zuul 38.102.83.143 port 60052
Nov 22 03:32:33 compute-0 sshd-session[71264]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:32:33 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Nov 22 03:32:33 compute-0 systemd[1]: session-18.scope: Consumed 1min 32.250s CPU time.
Nov 22 03:32:33 compute-0 systemd-logind[799]: Session 18 logged out. Waiting for processes to exit.
Nov 22 03:32:33 compute-0 systemd-logind[799]: Removed session 18.
Nov 22 03:32:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:34 compute-0 ceph-mon[75011]: pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:32:36
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'volumes', 'images', 'cephfs.cephfs.meta']
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
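Note: the balancer pass above ran in upmap mode with a 5% max-misplaced budget across the eleven listed pools and prepared 0 of at most 10 changes, i.e. there was nothing to optimize. One way to inspect the same state from the CLI, assuming a working admin ceph client on the host:

    import json
    import subprocess

    # `ceph balancer status` reports the active flag, the mode, and the
    # result of the last optimize pass.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    status = json.loads(out)
    print(status["mode"], status["active"])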
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:32:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:32:36 compute-0 sshd-session[131491]: Accepted publickey for zuul from 192.168.122.30 port 50972 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:32:36 compute-0 systemd-logind[799]: New session 42 of user zuul.
Nov 22 03:32:36 compute-0 systemd[1]: Started Session 42 of User zuul.
Nov 22 03:32:36 compute-0 sshd-session[131491]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:32:36 compute-0 ceph-mon[75011]: pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:37 compute-0 python3.9[131644]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:32:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:38 compute-0 sudo[131798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhtkizgryasezfsovpevssqyvvzpwlhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782358.0362117-32-235066241344366/AnsiballZ_systemd.py'
Nov 22 03:32:38 compute-0 sudo[131798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:38 compute-0 python3.9[131800]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 22 03:32:38 compute-0 ceph-mon[75011]: pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:38 compute-0 sudo[131798]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:39 compute-0 sudo[131952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmxgcdgmbaqumyufdyrydobixjgjfexo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782359.2643533-40-56954505085494/AnsiballZ_systemd.py'
Nov 22 03:32:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:39 compute-0 sudo[131952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:39 compute-0 python3.9[131954]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:32:39 compute-0 sudo[131952]: pam_unix(sudo:session): session closed for user root
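Note: the two ansible.builtin.systemd calls above (enabled=True, then state=started) are the module equivalents of enabling the unit at boot and making sure it is running now; expressed directly:

    import subprocess

    # Enable sshd at boot, then ensure it is currently running.
    subprocess.run(["systemctl", "enable", "sshd"], check=True)
    subprocess.run(["systemctl", "start", "sshd"], check=True)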
Nov 22 03:32:40 compute-0 sudo[132105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asbvrqohipxzlksyuvklcfxcbqujnpob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782360.3480976-49-99311878257037/AnsiballZ_command.py'
Nov 22 03:32:40 compute-0 sudo[132105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:40 compute-0 python3.9[132107]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:32:40 compute-0 ceph-mon[75011]: pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:40 compute-0 sudo[132105]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:41 compute-0 sudo[132258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyhmvvlglkrctsuwmulbhhegeujqvund ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782361.2985373-57-225050496535625/AnsiballZ_stat.py'
Nov 22 03:32:41 compute-0 sudo[132258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:41 compute-0 python3.9[132260]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:32:41 compute-0 sudo[132258]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:42 compute-0 sudo[132410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkanzkuyqygwptrqxilucvmmcnmnqgjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782362.303396-66-47714176017731/AnsiballZ_file.py'
Nov 22 03:32:42 compute-0 sudo[132410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:42 compute-0 python3.9[132412]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:32:42 compute-0 sudo[132410]: pam_unix(sudo:session): session closed for user root
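Note: the three tasks above follow an apply-then-clear pattern: load the chain definitions with nft -f, stat a sentinel file that marks pending rule changes, and remove it once the rules are applied. A minimal sketch of that logic; the sentinel semantics are inferred from the task sequence, not stated in the log:

    import os
    import subprocess

    # Apply the EDPM chain definitions.
    subprocess.run(["nft", "-f", "/etc/nftables/edpm-chains.nft"], check=True)

    # Clear the "rules changed" marker now that the ruleset is loaded.
    sentinel = "/etc/nftables/edpm-rules.nft.changed"
    if os.path.exists(sentinel):    # the stat task
        os.unlink(sentinel)         # the file state=absent task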
Nov 22 03:32:42 compute-0 ceph-mon[75011]: pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:43 compute-0 sshd-session[131494]: Connection closed by 192.168.122.30 port 50972
Nov 22 03:32:43 compute-0 sshd-session[131491]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:32:43 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Nov 22 03:32:43 compute-0 systemd[1]: session-42.scope: Consumed 4.717s CPU time.
Nov 22 03:32:43 compute-0 systemd-logind[799]: Session 42 logged out. Waiting for processes to exit.
Nov 22 03:32:43 compute-0 systemd-logind[799]: Removed session 42.
Nov 22 03:32:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:44 compute-0 ceph-mon[75011]: pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:32:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
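Note: every pg_autoscaler line above satisfies pg_target = capacity_ratio x bias x 300. The ratios and biases are taken from the log; the factor 300 is an assumption, consistent with 3 OSDs at the default mon_target_pg_per_osd=100 (the earlier lvm batch ran against three LVs). The tiny targets are then quantized to the pool's current or minimum pg_num, which is why nothing changes:

    # Reproduce the logged pg targets (hypothetical helper; the 300 budget
    # is assumed, the ratios and biases are copied from the log lines).
    def pg_target(capacity_ratio, bias, root_pg_budget=300):
        return capacity_ratio * bias * root_pg_budget

    print(pg_target(7.185749983720779e-06, 1.0))   # .mgr             -> ~0.0021557
    print(pg_target(5.087256625643029e-07, 4.0))   # cephfs meta      -> ~0.00061047
    print(pg_target(2.1620840658982875e-06, 1.0))  # default.rgw.log  -> ~0.00064863
    print(pg_target(1.2718141564107572e-07, 4.0))  # default.rgw.meta -> ~0.00015262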
Nov 22 03:32:46 compute-0 ceph-mon[75011]: pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:48 compute-0 ceph-mon[75011]: pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:48 compute-0 sshd-session[132438]: Accepted publickey for zuul from 192.168.122.30 port 47378 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:32:48 compute-0 systemd-logind[799]: New session 43 of user zuul.
Nov 22 03:32:48 compute-0 systemd[1]: Started Session 43 of User zuul.
Nov 22 03:32:48 compute-0 sshd-session[132438]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:32:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:50 compute-0 python3.9[132591]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:32:50 compute-0 ceph-mon[75011]: pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:50 compute-0 sudo[132745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gywkaloezmubbdjwitzyvxekhzhzsnyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782370.7097716-34-234501203133794/AnsiballZ_setup.py'
Nov 22 03:32:50 compute-0 sudo[132745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:51 compute-0 python3.9[132747]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:32:51 compute-0 sudo[132745]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:51 compute-0 sudo[132829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hevkcvyycdpmpwczlvuavhndptprihcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782370.7097716-34-234501203133794/AnsiballZ_dnf.py'
Nov 22 03:32:51 compute-0 sudo[132829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:32:52 compute-0 python3.9[132831]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 22 03:32:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:52 compute-0 ceph-mon[75011]: pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:53 compute-0 sudo[132829]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:54 compute-0 python3.9[132982]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
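Note: yum-utils was installed two tasks earlier precisely to provide needs-restarting; with -r it answers "does this host need a reboot" through its exit status rather than its output, which is why it is run via the command module and the return code is what matters:

    import subprocess

    # rc 0: no reboot needed; rc 1: core packages (kernel, glibc, systemd,
    # ...) were updated and a reboot is required.
    rc = subprocess.run(["needs-restarting", "-r"]).returncode
    reboot_required = (rc == 1)
    print(reboot_required)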
Nov 22 03:32:54 compute-0 ceph-mon[75011]: pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:55 compute-0 python3.9[133133]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 03:32:56 compute-0 python3.9[133283]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:32:56 compute-0 sudo[133284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:32:56 compute-0 sudo[133284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:32:56 compute-0 sudo[133284]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:56 compute-0 sudo[133333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:32:56 compute-0 sudo[133333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:32:56 compute-0 sudo[133333]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:56 compute-0 sudo[133366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:32:56 compute-0 sudo[133366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:32:56 compute-0 sudo[133366]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:56 compute-0 sudo[133419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:32:56 compute-0 sudo[133419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:32:56 compute-0 ceph-mon[75011]: pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:57 compute-0 python3.9[133547]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:32:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:32:57 compute-0 sudo[133419]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:32:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:32:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:32:57 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:32:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:32:57 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:32:57 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 3e1703bf-0a55-4d0c-a7ab-7bcbf066fb86 does not exist
Nov 22 03:32:57 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev b1357020-23d6-4a72-8747-7cc5c83ae3d6 does not exist
Nov 22 03:32:57 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 9e60e119-f2c8-4f47-a563-44250aaee61e does not exist
Nov 22 03:32:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:32:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:32:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:32:57 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:32:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:32:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
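Note: the audit lines above are the monitor's view of cephadm refreshing client config and keyrings; each cmd=[...] payload is an ordinary CLI command in its JSON form. For example, {"prefix": "config generate-minimal-conf"} corresponds to:

    import subprocess

    # Same mon_command as the dispatch above, issued via the ceph CLI.
    out = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)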
Nov 22 03:32:57 compute-0 sudo[133589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:32:57 compute-0 sudo[133589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:32:57 compute-0 sudo[133589]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:57 compute-0 sudo[133614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:32:57 compute-0 sudo[133614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:32:57 compute-0 sudo[133614]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:57 compute-0 sudo[133639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:32:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:57 compute-0 sudo[133639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:32:57 compute-0 sudo[133639]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:57 compute-0 sshd-session[132441]: Connection closed by 192.168.122.30 port 47378
Nov 22 03:32:57 compute-0 sshd-session[132438]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:32:57 compute-0 sudo[133664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:32:57 compute-0 sudo[133664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:32:57 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Nov 22 03:32:57 compute-0 systemd[1]: session-43.scope: Consumed 6.407s CPU time.
Nov 22 03:32:57 compute-0 systemd-logind[799]: Session 43 logged out. Waiting for processes to exit.
Nov 22 03:32:57 compute-0 systemd-logind[799]: Removed session 43.
Nov 22 03:32:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:32:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:32:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:32:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:32:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:32:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:32:58 compute-0 podman[133728]: 2025-11-22 03:32:58.026567803 +0000 UTC m=+0.057874861 container create 83098302fe1595bf0dea368f6a8e315e450256d83e57db182e5334909a9ca314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:32:58 compute-0 systemd[1]: Started libpod-conmon-83098302fe1595bf0dea368f6a8e315e450256d83e57db182e5334909a9ca314.scope.
Nov 22 03:32:58 compute-0 podman[133728]: 2025-11-22 03:32:58.00610608 +0000 UTC m=+0.037413148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:32:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:32:58 compute-0 podman[133728]: 2025-11-22 03:32:58.140662929 +0000 UTC m=+0.171970067 container init 83098302fe1595bf0dea368f6a8e315e450256d83e57db182e5334909a9ca314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wing, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:32:58 compute-0 podman[133728]: 2025-11-22 03:32:58.153750526 +0000 UTC m=+0.185057604 container start 83098302fe1595bf0dea368f6a8e315e450256d83e57db182e5334909a9ca314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:32:58 compute-0 podman[133728]: 2025-11-22 03:32:58.157930911 +0000 UTC m=+0.189238009 container attach 83098302fe1595bf0dea368f6a8e315e450256d83e57db182e5334909a9ca314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wing, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:32:58 compute-0 determined_wing[133745]: 167 167
Nov 22 03:32:58 compute-0 systemd[1]: libpod-83098302fe1595bf0dea368f6a8e315e450256d83e57db182e5334909a9ca314.scope: Deactivated successfully.
Nov 22 03:32:58 compute-0 conmon[133745]: conmon 83098302fe1595bf0dea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83098302fe1595bf0dea368f6a8e315e450256d83e57db182e5334909a9ca314.scope/container/memory.events
Nov 22 03:32:58 compute-0 podman[133728]: 2025-11-22 03:32:58.164154097 +0000 UTC m=+0.195461175 container died 83098302fe1595bf0dea368f6a8e315e450256d83e57db182e5334909a9ca314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wing, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:32:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-87af6674be7a13e15724a4e7fa5312062cbb8c406ee7c6d6f7201f9d746bd50d-merged.mount: Deactivated successfully.
Nov 22 03:32:58 compute-0 podman[133728]: 2025-11-22 03:32:58.219166755 +0000 UTC m=+0.250473813 container remove 83098302fe1595bf0dea368f6a8e315e450256d83e57db182e5334909a9ca314 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:32:58 compute-0 systemd[1]: libpod-conmon-83098302fe1595bf0dea368f6a8e315e450256d83e57db182e5334909a9ca314.scope: Deactivated successfully.
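Note: the create -> init -> start -> attach -> died -> remove sequence above is the journal trace of one short-lived foreground container, and the "167 167" it printed is consistent with cephadm probing the ceph uid/gid baked into the image before running ceph-volume. The same lifecycle reduced to a single call; the image digest is from the log, while the probed stat command is an assumption about what produced that output:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # A foreground `podman run --rm` emits exactly the container create/init/
    # start/attach/died/remove events journaled above.
    subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True,
    )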
Nov 22 03:32:58 compute-0 podman[133770]: 2025-11-22 03:32:58.461921088 +0000 UTC m=+0.064556927 container create 24bd0cdedb0c95178830bca2adfbc2599d7ccdec1969e07e53125ada496e88bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:32:58 compute-0 systemd[1]: Started libpod-conmon-24bd0cdedb0c95178830bca2adfbc2599d7ccdec1969e07e53125ada496e88bf.scope.
Nov 22 03:32:58 compute-0 podman[133770]: 2025-11-22 03:32:58.433962043 +0000 UTC m=+0.036597941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:32:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965f41c0e9c4de5452885ec39e96822eea7bc5211a5ecd01b574a0fec7c65fbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965f41c0e9c4de5452885ec39e96822eea7bc5211a5ecd01b574a0fec7c65fbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965f41c0e9c4de5452885ec39e96822eea7bc5211a5ecd01b574a0fec7c65fbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965f41c0e9c4de5452885ec39e96822eea7bc5211a5ecd01b574a0fec7c65fbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/965f41c0e9c4de5452885ec39e96822eea7bc5211a5ecd01b574a0fec7c65fbf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:32:58 compute-0 podman[133770]: 2025-11-22 03:32:58.568799004 +0000 UTC m=+0.171434913 container init 24bd0cdedb0c95178830bca2adfbc2599d7ccdec1969e07e53125ada496e88bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:32:58 compute-0 podman[133770]: 2025-11-22 03:32:58.584008802 +0000 UTC m=+0.186644641 container start 24bd0cdedb0c95178830bca2adfbc2599d7ccdec1969e07e53125ada496e88bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 03:32:58 compute-0 podman[133770]: 2025-11-22 03:32:58.588355631 +0000 UTC m=+0.190991540 container attach 24bd0cdedb0c95178830bca2adfbc2599d7ccdec1969e07e53125ada496e88bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:32:59 compute-0 ceph-mon[75011]: pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:32:59 compute-0 heuristic_goldwasser[133786]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:32:59 compute-0 heuristic_goldwasser[133786]: --> relative data size: 1.0
Nov 22 03:32:59 compute-0 heuristic_goldwasser[133786]: --> All data devices are unavailable
Nov 22 03:32:59 compute-0 systemd[1]: libpod-24bd0cdedb0c95178830bca2adfbc2599d7ccdec1969e07e53125ada496e88bf.scope: Deactivated successfully.
Nov 22 03:32:59 compute-0 podman[133770]: 2025-11-22 03:32:59.720500597 +0000 UTC m=+1.323136446 container died 24bd0cdedb0c95178830bca2adfbc2599d7ccdec1969e07e53125ada496e88bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goldwasser, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Nov 22 03:32:59 compute-0 systemd[1]: libpod-24bd0cdedb0c95178830bca2adfbc2599d7ccdec1969e07e53125ada496e88bf.scope: Consumed 1.082s CPU time.
Nov 22 03:32:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-965f41c0e9c4de5452885ec39e96822eea7bc5211a5ecd01b574a0fec7c65fbf-merged.mount: Deactivated successfully.
Nov 22 03:32:59 compute-0 podman[133770]: 2025-11-22 03:32:59.801221847 +0000 UTC m=+1.403857665 container remove 24bd0cdedb0c95178830bca2adfbc2599d7ccdec1969e07e53125ada496e88bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:32:59 compute-0 systemd[1]: libpod-conmon-24bd0cdedb0c95178830bca2adfbc2599d7ccdec1969e07e53125ada496e88bf.scope: Deactivated successfully.
Nov 22 03:32:59 compute-0 sudo[133664]: pam_unix(sudo:session): session closed for user root
Nov 22 03:32:59 compute-0 sudo[133829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:32:59 compute-0 sudo[133829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:32:59 compute-0 sudo[133829]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:00 compute-0 sudo[133854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:33:00 compute-0 sudo[133854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:33:00 compute-0 sudo[133854]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:00 compute-0 sudo[133879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:33:00 compute-0 sudo[133879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:33:00 compute-0 sudo[133879]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:00 compute-0 sudo[133904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:33:00 compute-0 sudo[133904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:33:00 compute-0 podman[133970]: 2025-11-22 03:33:00.624369248 +0000 UTC m=+0.080760601 container create 58557897ef01c5457256f796e1b010c8c3f529b600234efb864771414af98ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dijkstra, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:33:00 compute-0 systemd[1]: Started libpod-conmon-58557897ef01c5457256f796e1b010c8c3f529b600234efb864771414af98ca2.scope.
Nov 22 03:33:00 compute-0 podman[133970]: 2025-11-22 03:33:00.578508388 +0000 UTC m=+0.034899791 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:33:00 compute-0 podman[133970]: 2025-11-22 03:33:00.724930202 +0000 UTC m=+0.181321625 container init 58557897ef01c5457256f796e1b010c8c3f529b600234efb864771414af98ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:33:00 compute-0 podman[133970]: 2025-11-22 03:33:00.734141985 +0000 UTC m=+0.190533338 container start 58557897ef01c5457256f796e1b010c8c3f529b600234efb864771414af98ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:00 compute-0 relaxed_dijkstra[133987]: 167 167
Nov 22 03:33:00 compute-0 systemd[1]: libpod-58557897ef01c5457256f796e1b010c8c3f529b600234efb864771414af98ca2.scope: Deactivated successfully.
Nov 22 03:33:00 compute-0 podman[133970]: 2025-11-22 03:33:00.742379489 +0000 UTC m=+0.198770852 container attach 58557897ef01c5457256f796e1b010c8c3f529b600234efb864771414af98ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dijkstra, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:00 compute-0 podman[133970]: 2025-11-22 03:33:00.742826004 +0000 UTC m=+0.199217367 container died 58557897ef01c5457256f796e1b010c8c3f529b600234efb864771414af98ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dijkstra, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:33:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c92a9517fc052cd4e876dcd600964dc13d8b2dc7c414cc64fa51b8b2af5c14c-merged.mount: Deactivated successfully.
Nov 22 03:33:00 compute-0 podman[133970]: 2025-11-22 03:33:00.792290346 +0000 UTC m=+0.248681699 container remove 58557897ef01c5457256f796e1b010c8c3f529b600234efb864771414af98ca2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:33:00 compute-0 systemd[1]: libpod-conmon-58557897ef01c5457256f796e1b010c8c3f529b600234efb864771414af98ca2.scope: Deactivated successfully.
Nov 22 03:33:01 compute-0 podman[134010]: 2025-11-22 03:33:01.037372877 +0000 UTC m=+0.070279506 container create 72247c4dab3d546b6aa17b54495be77829f07c5cbcf7ddabc97d5b666e7560d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:33:01 compute-0 systemd[1]: Started libpod-conmon-72247c4dab3d546b6aa17b54495be77829f07c5cbcf7ddabc97d5b666e7560d2.scope.
Nov 22 03:33:01 compute-0 podman[134010]: 2025-11-22 03:33:01.007018337 +0000 UTC m=+0.039925016 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f65077dcf51a14856401b364a9805697b9b90d5431296c619a48bdd3a29a64ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f65077dcf51a14856401b364a9805697b9b90d5431296c619a48bdd3a29a64ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f65077dcf51a14856401b364a9805697b9b90d5431296c619a48bdd3a29a64ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f65077dcf51a14856401b364a9805697b9b90d5431296c619a48bdd3a29a64ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:01 compute-0 podman[134010]: 2025-11-22 03:33:01.13990658 +0000 UTC m=+0.172813229 container init 72247c4dab3d546b6aa17b54495be77829f07c5cbcf7ddabc97d5b666e7560d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_engelbart, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:33:01 compute-0 podman[134010]: 2025-11-22 03:33:01.151145573 +0000 UTC m=+0.184052162 container start 72247c4dab3d546b6aa17b54495be77829f07c5cbcf7ddabc97d5b666e7560d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_engelbart, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:33:01 compute-0 podman[134010]: 2025-11-22 03:33:01.154649519 +0000 UTC m=+0.187556148 container attach 72247c4dab3d546b6aa17b54495be77829f07c5cbcf7ddabc97d5b666e7560d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_engelbart, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:01 compute-0 ceph-mon[75011]: pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]: {
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:     "0": [
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:         {
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "devices": [
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "/dev/loop3"
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             ],
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_name": "ceph_lv0",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_size": "21470642176",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "name": "ceph_lv0",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "tags": {
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.cluster_name": "ceph",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.crush_device_class": "",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.encrypted": "0",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.osd_id": "0",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.type": "block",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.vdo": "0"
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             },
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "type": "block",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "vg_name": "ceph_vg0"
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:         }
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:     ],
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:     "1": [
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:         {
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "devices": [
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "/dev/loop4"
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             ],
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_name": "ceph_lv1",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_size": "21470642176",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "name": "ceph_lv1",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "tags": {
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.cluster_name": "ceph",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.crush_device_class": "",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.encrypted": "0",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.osd_id": "1",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.type": "block",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.vdo": "0"
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             },
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "type": "block",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "vg_name": "ceph_vg1"
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:         }
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:     ],
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:     "2": [
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:         {
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "devices": [
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "/dev/loop5"
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             ],
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_name": "ceph_lv2",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_size": "21470642176",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "name": "ceph_lv2",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "tags": {
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.cluster_name": "ceph",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.crush_device_class": "",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.encrypted": "0",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.osd_id": "2",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.type": "block",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:                 "ceph.vdo": "0"
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             },
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "type": "block",
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:             "vg_name": "ceph_vg2"
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:         }
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]:     ]
Nov 22 03:33:01 compute-0 vigilant_engelbart[134026]: }
Nov 22 03:33:01 compute-0 systemd[1]: libpod-72247c4dab3d546b6aa17b54495be77829f07c5cbcf7ddabc97d5b666e7560d2.scope: Deactivated successfully.
Nov 22 03:33:01 compute-0 podman[134010]: 2025-11-22 03:33:01.945855084 +0000 UTC m=+0.978761703 container died 72247c4dab3d546b6aa17b54495be77829f07c5cbcf7ddabc97d5b666e7560d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-f65077dcf51a14856401b364a9805697b9b90d5431296c619a48bdd3a29a64ab-merged.mount: Deactivated successfully.
Nov 22 03:33:02 compute-0 podman[134010]: 2025-11-22 03:33:02.027811598 +0000 UTC m=+1.060718217 container remove 72247c4dab3d546b6aa17b54495be77829f07c5cbcf7ddabc97d5b666e7560d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_engelbart, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:33:02 compute-0 systemd[1]: libpod-conmon-72247c4dab3d546b6aa17b54495be77829f07c5cbcf7ddabc97d5b666e7560d2.scope: Deactivated successfully.
Nov 22 03:33:02 compute-0 sudo[133904]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:02 compute-0 sudo[134047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:33:02 compute-0 sudo[134047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:33:02 compute-0 sudo[134047]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:02 compute-0 sudo[134072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:33:02 compute-0 sudo[134072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:33:02 compute-0 sudo[134072]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:02 compute-0 sudo[134097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:33:02 compute-0 sudo[134097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:33:02 compute-0 sudo[134097]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:02 compute-0 sudo[134122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:33:02 compute-0 sudo[134122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:33:02 compute-0 sshd-session[134147]: Accepted publickey for zuul from 192.168.122.30 port 56134 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:33:02 compute-0 systemd-logind[799]: New session 44 of user zuul.
Nov 22 03:33:02 compute-0 systemd[1]: Started Session 44 of User zuul.
Nov 22 03:33:02 compute-0 sshd-session[134147]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:33:02 compute-0 podman[134234]: 2025-11-22 03:33:02.905395932 +0000 UTC m=+0.052020316 container create 2a7785dc2afceb6c8f1db21626897cd34724b93fa761ef1193f1ab6d69f8f41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:33:02 compute-0 systemd[1]: Started libpod-conmon-2a7785dc2afceb6c8f1db21626897cd34724b93fa761ef1193f1ab6d69f8f41e.scope.
Nov 22 03:33:02 compute-0 podman[134234]: 2025-11-22 03:33:02.889668835 +0000 UTC m=+0.036293229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:33:03 compute-0 podman[134234]: 2025-11-22 03:33:03.011992746 +0000 UTC m=+0.158617210 container init 2a7785dc2afceb6c8f1db21626897cd34724b93fa761ef1193f1ab6d69f8f41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_shannon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:33:03 compute-0 podman[134234]: 2025-11-22 03:33:03.024291463 +0000 UTC m=+0.170915887 container start 2a7785dc2afceb6c8f1db21626897cd34724b93fa761ef1193f1ab6d69f8f41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_shannon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:33:03 compute-0 podman[134234]: 2025-11-22 03:33:03.028484313 +0000 UTC m=+0.175108727 container attach 2a7785dc2afceb6c8f1db21626897cd34724b93fa761ef1193f1ab6d69f8f41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_shannon, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:33:03 compute-0 distracted_shannon[134260]: 167 167
Nov 22 03:33:03 compute-0 systemd[1]: libpod-2a7785dc2afceb6c8f1db21626897cd34724b93fa761ef1193f1ab6d69f8f41e.scope: Deactivated successfully.
Nov 22 03:33:03 compute-0 conmon[134260]: conmon 2a7785dc2afceb6c8f1d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a7785dc2afceb6c8f1db21626897cd34724b93fa761ef1193f1ab6d69f8f41e.scope/container/memory.events
Nov 22 03:33:03 compute-0 podman[134234]: 2025-11-22 03:33:03.033583575 +0000 UTC m=+0.180207999 container died 2a7785dc2afceb6c8f1db21626897cd34724b93fa761ef1193f1ab6d69f8f41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_shannon, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:33:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-10320da4407e15a652ee0f18f987cddb7ac88fb180ec9f7c0fbfccd632b584cb-merged.mount: Deactivated successfully.
Nov 22 03:33:03 compute-0 podman[134234]: 2025-11-22 03:33:03.085473703 +0000 UTC m=+0.232098097 container remove 2a7785dc2afceb6c8f1db21626897cd34724b93fa761ef1193f1ab6d69f8f41e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_shannon, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:33:03 compute-0 systemd[1]: libpod-conmon-2a7785dc2afceb6c8f1db21626897cd34724b93fa761ef1193f1ab6d69f8f41e.scope: Deactivated successfully.
Nov 22 03:33:03 compute-0 podman[134307]: 2025-11-22 03:33:03.281891638 +0000 UTC m=+0.054628165 container create 613fba6037ee8a2bc279d0f3a19c250891e71e2a2ff6d76571bc45e11ffeaceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_panini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:33:03 compute-0 ceph-mon[75011]: pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:03 compute-0 systemd[1]: Started libpod-conmon-613fba6037ee8a2bc279d0f3a19c250891e71e2a2ff6d76571bc45e11ffeaceb.scope.
Nov 22 03:33:03 compute-0 podman[134307]: 2025-11-22 03:33:03.25280785 +0000 UTC m=+0.025544437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:33:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b655b25f80093efe29ad8f37b117b0833f765df8ff4e66aec6378f64c9895b71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b655b25f80093efe29ad8f37b117b0833f765df8ff4e66aec6378f64c9895b71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b655b25f80093efe29ad8f37b117b0833f765df8ff4e66aec6378f64c9895b71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b655b25f80093efe29ad8f37b117b0833f765df8ff4e66aec6378f64c9895b71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:33:03 compute-0 podman[134307]: 2025-11-22 03:33:03.383091659 +0000 UTC m=+0.155828226 container init 613fba6037ee8a2bc279d0f3a19c250891e71e2a2ff6d76571bc45e11ffeaceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:33:03 compute-0 podman[134307]: 2025-11-22 03:33:03.403085943 +0000 UTC m=+0.175822470 container start 613fba6037ee8a2bc279d0f3a19c250891e71e2a2ff6d76571bc45e11ffeaceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_panini, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:33:03 compute-0 podman[134307]: 2025-11-22 03:33:03.407319292 +0000 UTC m=+0.180055819 container attach 613fba6037ee8a2bc279d0f3a19c250891e71e2a2ff6d76571bc45e11ffeaceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_panini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:33:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:03 compute-0 python3.9[134403]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:33:04 compute-0 condescending_panini[134348]: {
Nov 22 03:33:04 compute-0 condescending_panini[134348]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "osd_id": 1,
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "type": "bluestore"
Nov 22 03:33:04 compute-0 condescending_panini[134348]:     },
Nov 22 03:33:04 compute-0 condescending_panini[134348]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "osd_id": 0,
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "type": "bluestore"
Nov 22 03:33:04 compute-0 condescending_panini[134348]:     },
Nov 22 03:33:04 compute-0 condescending_panini[134348]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "osd_id": 2,
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:33:04 compute-0 condescending_panini[134348]:         "type": "bluestore"
Nov 22 03:33:04 compute-0 condescending_panini[134348]:     }
Nov 22 03:33:04 compute-0 condescending_panini[134348]: }
Nov 22 03:33:04 compute-0 systemd[1]: libpod-613fba6037ee8a2bc279d0f3a19c250891e71e2a2ff6d76571bc45e11ffeaceb.scope: Deactivated successfully.
Nov 22 03:33:04 compute-0 systemd[1]: libpod-613fba6037ee8a2bc279d0f3a19c250891e71e2a2ff6d76571bc45e11ffeaceb.scope: Consumed 1.151s CPU time.
Nov 22 03:33:04 compute-0 podman[134307]: 2025-11-22 03:33:04.549220594 +0000 UTC m=+1.321957111 container died 613fba6037ee8a2bc279d0f3a19c250891e71e2a2ff6d76571bc45e11ffeaceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_panini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:33:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b655b25f80093efe29ad8f37b117b0833f765df8ff4e66aec6378f64c9895b71-merged.mount: Deactivated successfully.
Nov 22 03:33:04 compute-0 podman[134307]: 2025-11-22 03:33:04.609379815 +0000 UTC m=+1.382116302 container remove 613fba6037ee8a2bc279d0f3a19c250891e71e2a2ff6d76571bc45e11ffeaceb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_panini, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:33:04 compute-0 systemd[1]: libpod-conmon-613fba6037ee8a2bc279d0f3a19c250891e71e2a2ff6d76571bc45e11ffeaceb.scope: Deactivated successfully.
Nov 22 03:33:04 compute-0 sudo[134122]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:33:04 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:33:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:33:04 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:33:04 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev f5997798-ad1b-4d23-8c19-58472fb9b91c does not exist
Nov 22 03:33:04 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 6771be4b-b367-4bae-b640-deba5a19a7f5 does not exist
Nov 22 03:33:04 compute-0 sudo[134473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:33:04 compute-0 sudo[134473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:33:04 compute-0 sudo[134473]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:04 compute-0 sudo[134498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:33:04 compute-0 sudo[134498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:33:04 compute-0 sudo[134498]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:05 compute-0 ceph-mon[75011]: pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:33:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:33:05 compute-0 sudo[134648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utsvsuxfrcsfachhfwavyyftadxttwbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782385.1296766-50-141175129900236/AnsiballZ_file.py'
Nov 22 03:33:05 compute-0 sudo[134648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:05 compute-0 python3.9[134650]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:05 compute-0 sudo[134648]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:06 compute-0 sudo[134800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxgzkzycdifhithtvprdkapicbathzzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782385.9106827-50-15329533637734/AnsiballZ_file.py'
Nov 22 03:33:06 compute-0 sudo[134800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:33:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:33:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:33:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:33:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:33:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:33:06 compute-0 python3.9[134802]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:06 compute-0 sudo[134800]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:07 compute-0 sudo[134952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipejamwrbedcilqkgmcncrrskplgncba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782386.7351956-65-58804091907719/AnsiballZ_stat.py'
Nov 22 03:33:07 compute-0 sudo[134952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:07 compute-0 python3.9[134954]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:07 compute-0 ceph-mon[75011]: pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:07 compute-0 sudo[134952]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:07 compute-0 sudo[135075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsejwlgahxblactbhmgrhbtoekszfucv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782386.7351956-65-58804091907719/AnsiballZ_copy.py'
Nov 22 03:33:07 compute-0 sudo[135075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:08 compute-0 python3.9[135077]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782386.7351956-65-58804091907719/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=1510a68c7477151a60471a214ddc181db949d65e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:08 compute-0 sudo[135075]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:08 compute-0 sudo[135227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulmakfdpqtkcluxocsjctvlsbrlbaxqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782388.4730158-65-149598603518193/AnsiballZ_stat.py'
Nov 22 03:33:08 compute-0 sudo[135227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:08 compute-0 python3.9[135229]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:08 compute-0 sudo[135227]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:09 compute-0 sudo[135350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lilrkdyjyaiklufxjpilgiowhhtpvcem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782388.4730158-65-149598603518193/AnsiballZ_copy.py'
Nov 22 03:33:09 compute-0 sudo[135350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:09 compute-0 ceph-mon[75011]: pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:09 compute-0 python3.9[135352]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782388.4730158-65-149598603518193/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=6db76b0f1c872db680fd75062dda3b7582a14427 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:09 compute-0 sudo[135350]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:10 compute-0 sudo[135502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofjweoslbyzielbsfrhdowtdqvuiwgjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782389.8808105-65-95011641802741/AnsiballZ_stat.py'
Nov 22 03:33:10 compute-0 sudo[135502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:10 compute-0 python3.9[135504]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:10 compute-0 sudo[135502]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:10 compute-0 sudo[135625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgtalrtnfxekaoqjexsjysgaoroupohi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782389.8808105-65-95011641802741/AnsiballZ_copy.py'
Nov 22 03:33:10 compute-0 sudo[135625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:10 compute-0 python3.9[135627]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782389.8808105-65-95011641802741/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=9c860f724df502c093ac7636692714584a869c23 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:10 compute-0 sudo[135625]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:11 compute-0 ceph-mon[75011]: pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:11 compute-0 sudo[135777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drgmkhfunlquxzysjjfjequtdshlzdzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782391.2360172-109-12216735721321/AnsiballZ_file.py'
Nov 22 03:33:11 compute-0 sudo[135777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:11 compute-0 python3.9[135779]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:11 compute-0 sudo[135777]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:12 compute-0 sudo[135929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnflxvhzfervrhuemmjebibztypyppux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782391.9399514-109-158861183026518/AnsiballZ_file.py'
Nov 22 03:33:12 compute-0 sudo[135929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:12 compute-0 python3.9[135931]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:12 compute-0 sudo[135929]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:12 compute-0 sudo[136081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raootkraithxqvnshpumgnjhwhjwsduj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782392.6700673-124-41986461348888/AnsiballZ_stat.py'
Nov 22 03:33:12 compute-0 sudo[136081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:12 compute-0 python3.9[136083]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:13 compute-0 sudo[136081]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:13 compute-0 sudo[136204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckwelzurstpbrnsdltlwprdbefkvypni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782392.6700673-124-41986461348888/AnsiballZ_copy.py'
Nov 22 03:33:13 compute-0 sudo[136204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:13 compute-0 ceph-mon[75011]: pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:13 compute-0 python3.9[136206]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782392.6700673-124-41986461348888/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c0d3fe1a4fb2fa0c5c4c74a3ebf7d22655667d67 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:13 compute-0 sudo[136204]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:14 compute-0 sudo[136356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovnwurbqkatjwsgnzuvolbbfonwefnlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782394.001827-124-209985131617430/AnsiballZ_stat.py'
Nov 22 03:33:14 compute-0 sudo[136356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:14 compute-0 python3.9[136358]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:14 compute-0 sudo[136356]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:14 compute-0 sudo[136479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izdygpxtzltwzcsetsqxuiafpicrfkte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782394.001827-124-209985131617430/AnsiballZ_copy.py'
Nov 22 03:33:14 compute-0 sudo[136479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:14 compute-0 python3.9[136481]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782394.001827-124-209985131617430/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a1a490b31b4e9b76bd2baecd5864d520f58e1cd1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:14 compute-0 sudo[136479]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:15 compute-0 ceph-mon[75011]: pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:15 compute-0 sudo[136631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyeoncdvaobyqbkubecaqucevaxovvna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782395.3366446-124-12420526792462/AnsiballZ_stat.py'
Nov 22 03:33:15 compute-0 sudo[136631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:15 compute-0 python3.9[136633]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:15 compute-0 sudo[136631]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:16 compute-0 sudo[136754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrmhwvlgbasoyxwdippzcltopfcxjpjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782395.3366446-124-12420526792462/AnsiballZ_copy.py'
Nov 22 03:33:16 compute-0 sudo[136754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:16 compute-0 python3.9[136756]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782395.3366446-124-12420526792462/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=f13f5581c3e15b768863e3bd164fa2ca1c767ede backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:16 compute-0 sudo[136754]: pam_unix(sudo:session): session closed for user root
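(Each certificate above lands through the same three-step Ansible cycle: ansible.builtin.file ensures the target directory (mode 0755, setype container_file_t), ansible.legacy.stat fetches the destination's SHA-1, and ansible.legacy.copy rewrites the file (mode 0600) only when the checksums differ. A small self-contained sketch of that stat-then-copy idempotency check; function names are illustrative:

    import hashlib, os, shutil

    def sha1sum(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def copy_if_changed(src, dest, mode=0o600):
        # Skip the write when the destination already holds the desired content,
        # mirroring the stat -> copy pairs in the log above.
        if not os.path.exists(dest) or sha1sum(dest) != sha1sum(src):
            shutil.copy2(src, dest)
        os.chmod(dest, mode)
)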
Nov 22 03:33:16 compute-0 ceph-mon[75011]: pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:16 compute-0 sudo[136906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzbiytscvdgkbqagfrvdzwtgxzhonxnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782396.792679-168-27633997688732/AnsiballZ_file.py'
Nov 22 03:33:16 compute-0 sudo[136906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:17 compute-0 python3.9[136908]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:17 compute-0 sudo[136906]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:17 compute-0 sudo[137058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apgabcodrcqzxvolppvfifpcmvlyqxpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782397.5687432-168-209065757777253/AnsiballZ_file.py'
Nov 22 03:33:17 compute-0 sudo[137058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:17 compute-0 python3.9[137060]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:17 compute-0 sudo[137058]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:18 compute-0 sudo[137210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vckhdocqoupalofknviuxzlxhofwpfoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782398.3893054-183-64291950294854/AnsiballZ_stat.py'
Nov 22 03:33:18 compute-0 sudo[137210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:18 compute-0 ceph-mon[75011]: pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:18 compute-0 python3.9[137212]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:18 compute-0 sudo[137210]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:19 compute-0 sudo[137333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciigntctwnauqzmmjlzksxbqyzzzaiap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782398.3893054-183-64291950294854/AnsiballZ_copy.py'
Nov 22 03:33:19 compute-0 sudo[137333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:19 compute-0 python3.9[137335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782398.3893054-183-64291950294854/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=8d96b8ffcd77ad8d9e05f012ea035c9a47655d78 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:19 compute-0 sudo[137333]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:19 compute-0 sudo[137485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcuyjrswwyjpnclbuiyzizzekozlnsuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782399.7862659-183-182947865955220/AnsiballZ_stat.py'
Nov 22 03:33:19 compute-0 sudo[137485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:20 compute-0 python3.9[137487]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:20 compute-0 sudo[137485]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:20 compute-0 sudo[137608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afoyzctwijvsnhrdwfpibtwthhfmyuer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782399.7862659-183-182947865955220/AnsiballZ_copy.py'
Nov 22 03:33:20 compute-0 sudo[137608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:20 compute-0 ceph-mon[75011]: pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:20 compute-0 python3.9[137610]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782399.7862659-183-182947865955220/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a1a490b31b4e9b76bd2baecd5864d520f58e1cd1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:20 compute-0 sudo[137608]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:21 compute-0 sudo[137760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exudsnwhkpiomslnupapsslxaosdhjrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782401.0706422-183-127256778233474/AnsiballZ_stat.py'
Nov 22 03:33:21 compute-0 sudo[137760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:21 compute-0 python3.9[137762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:21 compute-0 sudo[137760]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:21 compute-0 sudo[137883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jldcduzzsplgqkarnccdqyfjhrxayhmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782401.0706422-183-127256778233474/AnsiballZ_copy.py'
Nov 22 03:33:21 compute-0 sudo[137883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:21 compute-0 python3.9[137885]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782401.0706422-183-127256778233474/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=500f31f26a7f9998aaaa2855dd8afac091340439 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:21 compute-0 sudo[137883]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:22 compute-0 ceph-mon[75011]: pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:22 compute-0 sudo[138035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctivakryifeusakvgtnatwbuktgbhsjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782402.841045-243-64058533529215/AnsiballZ_file.py'
Nov 22 03:33:22 compute-0 sudo[138035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:23 compute-0 python3.9[138037]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:23 compute-0 sudo[138035]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:23 compute-0 sudo[138187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgjqiyavmoqtxokpvkuoiiqujuubpylk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782403.6112993-251-39647984029054/AnsiballZ_stat.py'
Nov 22 03:33:23 compute-0 sudo[138187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:23 compute-0 python3.9[138189]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:23 compute-0 sudo[138187]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:24 compute-0 sudo[138310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbnmthottlhhhncgqyijudlbzqarmddu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782403.6112993-251-39647984029054/AnsiballZ_copy.py'
Nov 22 03:33:24 compute-0 sudo[138310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:24 compute-0 python3.9[138312]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782403.6112993-251-39647984029054/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=45ba49572078f7d059c2266cdeaaa0793c1b0c16 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:24 compute-0 sudo[138310]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:24 compute-0 ceph-mon[75011]: pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:25 compute-0 sudo[138462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvxfddihusfxhgxfbaabdmowamjceglc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782404.9845066-267-234094229614789/AnsiballZ_file.py'
Nov 22 03:33:25 compute-0 sudo[138462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:25 compute-0 python3.9[138464]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:25 compute-0 sudo[138462]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:25 compute-0 sudo[138614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsdobpwqfazdotldctudoqbpeqbzafpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782405.7317436-275-151810719709075/AnsiballZ_stat.py'
Nov 22 03:33:25 compute-0 sudo[138614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:26 compute-0 python3.9[138616]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:26 compute-0 sudo[138614]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:26 compute-0 sudo[138737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brombonigndmknyrxwbiuaayszgbflcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782405.7317436-275-151810719709075/AnsiballZ_copy.py'
Nov 22 03:33:26 compute-0 sudo[138737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:26 compute-0 ceph-mon[75011]: pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:26 compute-0 python3.9[138739]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782405.7317436-275-151810719709075/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=45ba49572078f7d059c2266cdeaaa0793c1b0c16 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:26 compute-0 sudo[138737]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:27 compute-0 sudo[138889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocytfqgcrlxhumtjygerekeemmtlrqhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782407.277062-291-107484139870972/AnsiballZ_file.py'
Nov 22 03:33:27 compute-0 sudo[138889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:27 compute-0 python3.9[138891]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:27 compute-0 sudo[138889]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:28 compute-0 sudo[139041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccqqaydlpmrmoqsdcnyaicjospqefczo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782407.981475-299-81929863096046/AnsiballZ_stat.py'
Nov 22 03:33:28 compute-0 sudo[139041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:28 compute-0 python3.9[139043]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:28 compute-0 sudo[139041]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:28 compute-0 sudo[139164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zohfcciyzrkqlvlecrwpnrhvurovaczd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782407.981475-299-81929863096046/AnsiballZ_copy.py'
Nov 22 03:33:28 compute-0 sudo[139164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:28 compute-0 ceph-mon[75011]: pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:28 compute-0 python3.9[139166]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782407.981475-299-81929863096046/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=45ba49572078f7d059c2266cdeaaa0793c1b0c16 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:28 compute-0 sudo[139164]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:29 compute-0 sudo[139316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqbkuhnhabvhfjtzoiaslavxjozhxzwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782409.3830469-315-55788148040528/AnsiballZ_file.py'
Nov 22 03:33:29 compute-0 sudo[139316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:29 compute-0 python3.9[139318]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:29 compute-0 sudo[139316]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:30 compute-0 sudo[139468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rluaxkrzlxwblyrqfpwhcwyoykihluif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782410.2064965-323-20703444402780/AnsiballZ_stat.py'
Nov 22 03:33:30 compute-0 sudo[139468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:30 compute-0 python3.9[139470]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:30 compute-0 sudo[139468]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:30 compute-0 ceph-mon[75011]: pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:31 compute-0 sudo[139591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoqutlsvpqzcvyatynfddhowntrwchjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782410.2064965-323-20703444402780/AnsiballZ_copy.py'
Nov 22 03:33:31 compute-0 sudo[139591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:31 compute-0 python3.9[139593]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782410.2064965-323-20703444402780/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=45ba49572078f7d059c2266cdeaaa0793c1b0c16 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:31 compute-0 sudo[139591]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:31 compute-0 sudo[139743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tucchixujmanjxrjdytrbucdbndwnjpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782411.77386-339-13988880469473/AnsiballZ_file.py'
Nov 22 03:33:31 compute-0 sudo[139743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:32 compute-0 python3.9[139745]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:32 compute-0 sudo[139743]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:32 compute-0 sudo[139895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajegslaasxbvtpzjrsigrzuyjehtarfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782412.5353463-347-52109876561614/AnsiballZ_stat.py'
Nov 22 03:33:32 compute-0 sudo[139895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:32 compute-0 ceph-mon[75011]: pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:32 compute-0 python3.9[139897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:32 compute-0 sudo[139895]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:33 compute-0 sudo[140018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yacakcxxphamwsyjvqkgwnhoockyodfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782412.5353463-347-52109876561614/AnsiballZ_copy.py'
Nov 22 03:33:33 compute-0 sudo[140018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:33 compute-0 python3.9[140020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782412.5353463-347-52109876561614/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=45ba49572078f7d059c2266cdeaaa0793c1b0c16 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:33 compute-0 sudo[140018]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:34 compute-0 sudo[140170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnfkcnuedusfwocwgxrmgridpakoccoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782414.0703487-363-9247696040459/AnsiballZ_file.py'
Nov 22 03:33:34 compute-0 sudo[140170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:34 compute-0 python3.9[140172]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:34 compute-0 sudo[140170]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:34 compute-0 ceph-mon[75011]: pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:34 compute-0 sudo[140322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfoscprwlwikqkbxdqoeygqaaesjjlbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782414.8185942-371-193138646173430/AnsiballZ_stat.py'
Nov 22 03:33:34 compute-0 sudo[140322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:35 compute-0 python3.9[140324]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:35 compute-0 sudo[140322]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:35 compute-0 sudo[140445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtpdpacnediwcucjlvmwxxentfgwoveu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782414.8185942-371-193138646173430/AnsiballZ_copy.py'
Nov 22 03:33:35 compute-0 sudo[140445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:35 compute-0 python3.9[140447]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782414.8185942-371-193138646173430/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=45ba49572078f7d059c2266cdeaaa0793c1b0c16 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:35 compute-0 sudo[140445]: pam_unix(sudo:session): session closed for user root
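(All six tls-ca-bundle.pem copies above, for ovn, libvirt, neutron-metadata, bootstrap, repo-setup and nova, carry the same SHA-1 (45ba49572078f7d059c2266cdeaaa0793c1b0c16): one CA bundle fanned out under /var/lib/openstack/cacerts/<service>/ with mode 0644, in contrast to the per-service 0600 keys and certificates. A loop-shaped sketch of that fan-out, with the service list taken from the log and the source path illustrative:

    import os, shutil

    for service in ["ovn", "libvirt", "neutron-metadata", "bootstrap", "repo-setup", "nova"]:
        dest = f"/var/lib/openstack/cacerts/{service}/tls-ca-bundle.pem"
        os.makedirs(os.path.dirname(dest), exist_ok=True)   # the file tasks above do this step
        shutil.copy2("/tmp/tls-ca-bundle.pem", dest)        # illustrative source path
        os.chmod(dest, 0o644)
)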
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:33:36
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'default.rgw.control']
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:33:36 compute-0 sshd-session[134161]: Connection closed by 192.168.122.30 port 56134
Nov 22 03:33:36 compute-0 sshd-session[134147]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:33:36 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Nov 22 03:33:36 compute-0 systemd[1]: session-44.scope: Consumed 26.436s CPU time.
Nov 22 03:33:36 compute-0 systemd-logind[799]: Session 44 logged out. Waiting for processes to exit.
Nov 22 03:33:36 compute-0 systemd-logind[799]: Removed session 44.
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:33:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:33:36 compute-0 ceph-mon[75011]: pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.177346) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782417177503, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1439, "num_deletes": 251, "total_data_size": 2153052, "memory_usage": 2187304, "flush_reason": "Manual Compaction"}
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782417202285, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1254134, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7505, "largest_seqno": 8943, "table_properties": {"data_size": 1249249, "index_size": 2153, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13259, "raw_average_key_size": 20, "raw_value_size": 1238123, "raw_average_value_size": 1884, "num_data_blocks": 102, "num_entries": 657, "num_filter_entries": 657, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763782272, "oldest_key_time": 1763782272, "file_creation_time": 1763782417, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 24976 microseconds, and 7728 cpu microseconds.
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.202359) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1254134 bytes OK
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.202385) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.204045) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.204072) EVENT_LOG_v1 {"time_micros": 1763782417204063, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.204096) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2146616, prev total WAL file size 2146616, number of live WAL files 2.
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.205180) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1224KB)], [20(7666KB)]
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782417205230, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9105088, "oldest_snapshot_seqno": -1}
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3384 keys, 7097161 bytes, temperature: kUnknown
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782417267158, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7097161, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7070723, "index_size": 16856, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 81328, "raw_average_key_size": 24, "raw_value_size": 7005768, "raw_average_value_size": 2070, "num_data_blocks": 746, "num_entries": 3384, "num_filter_entries": 3384, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763782417, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.267601) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7097161 bytes
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.270568) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.8 rd, 114.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 7.5 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(12.9) write-amplify(5.7) OK, records in: 3832, records dropped: 448 output_compression: NoCompression
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.270613) EVENT_LOG_v1 {"time_micros": 1763782417270587, "job": 6, "event": "compaction_finished", "compaction_time_micros": 62044, "compaction_time_cpu_micros": 22917, "output_level": 6, "num_output_files": 1, "total_output_size": 7097161, "num_input_records": 3832, "num_output_records": 3384, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
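(The amplification figures in the JOB 6 summary follow directly from the byte counts in the event log: the job read one 1,254,134-byte L0 file plus one L6 file (input_data_size 9,105,088 bytes total) and wrote a single 7,097,161-byte L6 table in 62,044 microseconds. A quick arithmetic check; note that bytes per microsecond equals decimal MB/s:

    l0_in    = 1_254_134         # file 22, produced by the flush above
    total_in = 9_105_088         # input_data_size in compaction_started
    out      = 7_097_161         # total_output_size in compaction_finished
    micros   = 62_044            # compaction_time_micros
    print(total_in / micros)             # ~146.8 -> "146.8 rd" MB/s
    print(out / micros)                  # ~114.4 -> "114.4 wr" MB/s
    print(out / l0_in)                   # ~5.66  -> "write-amplify(5.7)"
    print((total_in + out) / l0_in)      # ~12.9  -> "read-write-amplify(12.9)"
)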
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782417271200, "job": 6, "event": "table_file_deletion", "file_number": 22}
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782417274209, "job": 6, "event": "table_file_deletion", "file_number": 20}
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.205073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.274298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.274304) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.274305) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.274307) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:33:37 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:33:37.274308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:33:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:39 compute-0 ceph-mon[75011]: pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:41 compute-0 ceph-mon[75011]: pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:42 compute-0 sshd-session[140472]: Accepted publickey for zuul from 192.168.122.30 port 33876 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:33:42 compute-0 systemd-logind[799]: New session 45 of user zuul.
Nov 22 03:33:42 compute-0 systemd[1]: Started Session 45 of User zuul.
Nov 22 03:33:42 compute-0 sshd-session[140472]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:33:43 compute-0 sudo[140625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdepqsdbwoeudbkqwylbesynajphcjbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782422.61004-22-50339439711605/AnsiballZ_file.py'
Nov 22 03:33:43 compute-0 sudo[140625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:43 compute-0 ceph-mon[75011]: pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:43 compute-0 python3.9[140627]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:43 compute-0 sudo[140625]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:44 compute-0 sudo[140777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifzfkdgpyexbveamycfeyyklnslkwswp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782423.6541128-34-81770029723903/AnsiballZ_stat.py'
Nov 22 03:33:44 compute-0 sudo[140777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:44 compute-0 python3.9[140779]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:44 compute-0 sudo[140777]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:44 compute-0 sudo[140900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqsrxgmfwloyfqzezlowfxjahcumcjpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782423.6541128-34-81770029723903/AnsiballZ_copy.py'
Nov 22 03:33:44 compute-0 sudo[140900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:44 compute-0 python3.9[140902]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782423.6541128-34-81770029723903/.source.conf _original_basename=ceph.conf follow=False checksum=5caf15561e7aec7a7f8078555e1423fe4b622cf2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:44 compute-0 sudo[140900]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:45 compute-0 ceph-mon[75011]: pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:45 compute-0 sudo[141052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztcnafzcqoxtquefcwpvnioqhrmbbfem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782425.3563879-34-218176914432092/AnsiballZ_stat.py'
Nov 22 03:33:45 compute-0 sudo[141052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:45 compute-0 python3.9[141054]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:33:45 compute-0 sudo[141052]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:33:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
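(The pg_autoscaler targets above are usage_fraction * bias * a cluster-wide PG budget. The numbers are consistent with a budget of 300 PGs, plausibly 3 OSDs at the default mon_target_pg_per_osd of 100, though the log does not state this; each tiny target is then quantized and, here, left at the pool's current pg_num, so no changes are proposed. A two-line check against the '.mgr' and 'cephfs.cephfs.meta' lines:

    for usage, bias in [(7.185749983720779e-06, 1.0), (5.087256625643029e-07, 4.0)]:
        print(usage * bias * 300)   # ~0.0021557..., ~0.00061047..., matching the logged pg targets
)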
Nov 22 03:33:46 compute-0 sudo[141175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbyfaiuafymhyhravoyzxoyymaexvjeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782425.3563879-34-218176914432092/AnsiballZ_copy.py'
Nov 22 03:33:46 compute-0 sudo[141175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:46 compute-0 python3.9[141177]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782425.3563879-34-218176914432092/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=f53db1b7562575e7551c1a2d8d7268945ce42dda backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:33:46 compute-0 sudo[141175]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:46 compute-0 sshd-session[140475]: Connection closed by 192.168.122.30 port 33876
Nov 22 03:33:46 compute-0 sshd-session[140472]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:33:46 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Nov 22 03:33:46 compute-0 systemd[1]: session-45.scope: Consumed 3.049s CPU time.
Nov 22 03:33:46 compute-0 systemd-logind[799]: Session 45 logged out. Waiting for processes to exit.
Nov 22 03:33:46 compute-0 systemd-logind[799]: Removed session 45.
Nov 22 03:33:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:47 compute-0 ceph-mon[75011]: pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:49 compute-0 ceph-mon[75011]: pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:33:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 1969 writes, 8889 keys, 1969 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 1969 writes, 1969 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1969 writes, 8889 keys, 1969 commit groups, 1.0 writes per commit group, ingest: 11.39 MB, 0.02 MB/s
                                           Interval WAL: 1969 writes, 1969 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     88.3      0.10              0.03         3    0.033       0      0       0.0       0.0
                                             L6      1/0    6.77 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    127.2    111.9      0.13              0.04         2    0.064    7297    738       0.0       0.0
                                            Sum      1/0    6.77 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     71.7    101.6      0.23              0.07         5    0.045    7297    738       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     72.4    102.4      0.22              0.07         4    0.056    7297    738       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    127.2    111.9      0.13              0.04         2    0.064    7297    738       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     90.0      0.10              0.03         2    0.048       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.009, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5574942991f0#2 capacity: 308.00 MB usage: 549.33 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(36,458.11 KB,0.145251%) FilterBlock(6,28.30 KB,0.00897197%) IndexBlock(6,62.92 KB,0.0199504%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
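Everything needed to judge the monitor's RocksDB load is already in the dump above; a small sketch that recomputes the headline rates from the printed cumulative figures. Numbers are copied verbatim from the dump; the write-amplification ratio follows the compaction-stats convention (total flush+compaction bytes over flushed bytes), so the two-decimal inputs land near, not exactly on, the 2.6 printed in the Sum row:

    # Recompute headline rates from the "DB Stats" dump above.
    uptime_s   = 600.0     # "Uptime(secs): 600.0 total"
    wal_writes = 1969      # "Cumulative WAL: 1969 writes, 1969 syncs"
    wal_gb     = 0.01      # "... written: 0.01 GB"
    flush_gb   = 0.009     # "Flush(GB): cumulative 0.009"
    compact_gb = 0.02      # "Cumulative compaction: 0.02 GB write" (includes flush)

    print(f"WAL syncs/s : {wal_writes / uptime_s:.2f}")     # ~3.28 fsyncs/s
    print(f"WAL MB/s    : {wal_gb * 1024 / uptime_s:.3f}")  # ~0.017 MB/s
    print(f"write amp   : {compact_gb / flush_gb:.1f}")     # ~2.2 (log prints 2.6
                                                            # from unrounded bytes)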
Nov 22 03:33:50 compute-0 ceph-mon[75011]: pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:52 compute-0 sshd-session[141202]: Accepted publickey for zuul from 192.168.122.30 port 43512 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:33:52 compute-0 systemd-logind[799]: New session 46 of user zuul.
Nov 22 03:33:52 compute-0 systemd[1]: Started Session 46 of User zuul.
Nov 22 03:33:52 compute-0 sshd-session[141202]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:33:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:52 compute-0 ceph-mon[75011]: pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:53 compute-0 python3.9[141355]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:33:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:54 compute-0 sudo[141509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzexqokcajmtocdayzdayyuxsfvmwpqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782434.0616136-34-7286074792635/AnsiballZ_file.py'
Nov 22 03:33:54 compute-0 sudo[141509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:54 compute-0 python3.9[141511]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:54 compute-0 sudo[141509]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:54 compute-0 ceph-mon[75011]: pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:55 compute-0 sudo[141661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csrfqabdsmksdzuwsjhcgyolfscwdbiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782435.0093255-34-225665386141169/AnsiballZ_file.py'
Nov 22 03:33:55 compute-0 sudo[141661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:55 compute-0 python3.9[141663]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:33:55 compute-0 sudo[141661]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:56 compute-0 python3.9[141813]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:33:56 compute-0 sudo[141963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umfhkkrhliqetyuevyyojihvzipcqfzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782436.7568445-57-229301558853522/AnsiballZ_seboolean.py'
Nov 22 03:33:56 compute-0 sudo[141963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:57 compute-0 ceph-mon[75011]: pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:33:57 compute-0 python3.9[141965]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
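For reference, the ansible.posix.seboolean call above is equivalent to a persistent setsebool; a one-line sketch, assuming setsebool from policycoreutils is on PATH:

    import subprocess
    # persistent=True in ansible.posix.seboolean maps to setsebool -P.
    subprocess.run(["setsebool", "-P", "virt_sandbox_use_netlink", "on"],
                   check=True)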
Nov 22 03:33:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:58 compute-0 sudo[141963]: pam_unix(sudo:session): session closed for user root
Nov 22 03:33:59 compute-0 ceph-mon[75011]: pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:59 compute-0 sudo[142119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laggwuhhdvxlbnnvznlrvtpbbirvrcza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782439.2911718-67-213222196094645/AnsiballZ_setup.py'
Nov 22 03:33:59 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Nov 22 03:33:59 compute-0 sudo[142119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:33:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:33:59 compute-0 python3.9[142121]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:33:59 compute-0 sudo[142119]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:00 compute-0 sudo[142203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grexqrxzboeltyuhtenvcyubpzqbdzsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782439.2911718-67-213222196094645/AnsiballZ_dnf.py'
Nov 22 03:34:00 compute-0 sudo[142203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:00 compute-0 python3.9[142205]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:34:01 compute-0 ceph-mon[75011]: pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:01 compute-0 sudo[142203]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:02 compute-0 sudo[142356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enxvmmubnqhiaiogvkhjawxaywiwjuwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782442.3448234-79-198425173820093/AnsiballZ_systemd.py'
Nov 22 03:34:02 compute-0 sudo[142356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:03 compute-0 python3.9[142358]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:34:03 compute-0 sudo[142356]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:03 compute-0 ceph-mon[75011]: pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:03 compute-0 sudo[142511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbgipenizdxhizfnbcphshnjvzcxihpr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763782443.4919775-87-131348715368774/AnsiballZ_edpm_nftables_snippet.py'
Nov 22 03:34:03 compute-0 sudo[142511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:03 compute-0 python3[142513]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
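Rendered as nft one-liners, the four snippet rules above look roughly like the following sketch. Only the protocol, ports, raw-table chains and NOTRACK jumps come from the logged YAML; the ip family and filter/INPUT defaults are illustrative assumptions, and the real osp.edpm role writes the YAML to /var/lib/edpm-config/firewall for a later templated nft apply step rather than emitting commands directly:

    # Sketch: render the edpm_nftables_snippet content above into nft(8)
    # one-liners. Defaults (ip family, filter/INPUT) are assumptions.
    rules = [
        {"name": "118 neutron vxlan networks", "proto": "udp", "dport": 4789},
        {"name": "119 neutron geneve networks", "proto": "udp", "dport": 6081},
        {"name": "120 neutron geneve networks no conntrack", "proto": "udp",
         "dport": 6081, "table": "raw", "chain": "OUTPUT", "jump": "NOTRACK"},
        {"name": "121 neutron geneve networks no conntrack", "proto": "udp",
         "dport": 6081, "table": "raw", "chain": "PREROUTING", "jump": "NOTRACK"},
    ]
    for r in rules:
        table = r.get("table", "filter")
        chain = r.get("chain", "INPUT")
        verdict = "notrack" if r.get("jump") == "NOTRACK" else "accept"
        print(f'nft add rule ip {table} {chain} {r["proto"]} dport {r["dport"]} '
              f'{verdict} comment "{r["name"]}"')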
Nov 22 03:34:03 compute-0 sudo[142511]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:04 compute-0 sudo[142663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtdbkqapayfjzgjvzsnqjllvynijgdmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782444.302717-96-261244898620811/AnsiballZ_file.py'
Nov 22 03:34:04 compute-0 sudo[142663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:04 compute-0 python3.9[142665]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:04 compute-0 sudo[142663]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:04 compute-0 sudo[142732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:34:04 compute-0 sudo[142732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:04 compute-0 sudo[142732]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:04 compute-0 sudo[142767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:34:04 compute-0 sudo[142767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:04 compute-0 sudo[142767]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:05 compute-0 sudo[142792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:34:05 compute-0 sudo[142792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:05 compute-0 sudo[142792]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:05 compute-0 sudo[142840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:34:05 compute-0 sudo[142840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:05 compute-0 sudo[142915]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwfjrhlzznchdilpsyewesxjfhlkjjom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782445.056555-104-251203266841084/AnsiballZ_stat.py'
Nov 22 03:34:05 compute-0 sudo[142915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:05 compute-0 ceph-mon[75011]: pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:05 compute-0 python3.9[142917]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:05 compute-0 sudo[142915]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:05 compute-0 sudo[142840]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:34:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:34:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:34:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:34:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:34:05 compute-0 sudo[143023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwyxkkajeptzpmrfcrreufkbxctaxmfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782445.056555-104-251203266841084/AnsiballZ_file.py'
Nov 22 03:34:05 compute-0 sudo[143023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:34:05 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 40b0138b-a85c-4a9b-adbd-234f6774c18c does not exist
Nov 22 03:34:05 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 7741e33e-bff3-4859-84ed-36a99936c4f0 does not exist
Nov 22 03:34:05 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ffa9c117-0202-45c8-9bb5-6107c9abb9ad does not exist
Nov 22 03:34:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:34:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:34:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:34:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:34:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:34:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
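The audit trail above shows the mgr driving plain mon commands; the same "config generate-minimal-conf" call can be issued from Python through librados, as in this sketch (the conffile path is an assumption):

    import json, rados

    # Issue the same mon command the audit lines above record.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "config generate-minimal-conf"}), b"")
    print(outbuf.decode())  # minimal ceph.conf with mon addresses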
Nov 22 03:34:05 compute-0 sudo[143026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:34:05 compute-0 sudo[143026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:05 compute-0 sudo[143026]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:05 compute-0 sudo[143051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:34:05 compute-0 sudo[143051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:05 compute-0 sudo[143051]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:05 compute-0 python3.9[143025]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:05 compute-0 sudo[143023]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:05 compute-0 sudo[143076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:34:05 compute-0 sudo[143076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:05 compute-0 sudo[143076]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:06 compute-0 sudo[143114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:34:06 compute-0 sudo[143114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
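The cephadm invocation above is worth unpacking: "--config-json -" feeds the conf/keyring payload on stdin, and everything after "--" is handed to ceph-volume inside the container. A sketch of the same call shape, trimmed to the essential flags, invoking a cephadm binary on PATH rather than the downloaded blob, with a placeholder payload:

    import json, subprocess

    fsid = "7adcc38b-6484-5de6-b879-33a0309153df"
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Placeholder payload, not from the log: cephadm expects the conf and
    # keyring contents in this JSON when "--config-json -" is used.
    payload = {"config": "<ceph.conf contents>", "keyring": "<bootstrap-osd key>"}

    subprocess.run(
        ["cephadm", "--image", image, "--timeout", "895",
         "ceph-volume", "--fsid", fsid, "--config-json", "-", "--",
         "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2", "--yes", "--no-systemd"],
        input=json.dumps(payload), text=True, check=True)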
Nov 22 03:34:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:34:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:34:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:34:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:34:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:34:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:34:06 compute-0 sudo[143330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icvzgmpnqfzgmmpramhfewdlpugbslhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782446.363434-116-24714045274589/AnsiballZ_stat.py'
Nov 22 03:34:06 compute-0 podman[143271]: 2025-11-22 03:34:06.386282298 +0000 UTC m=+0.023510442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:06 compute-0 sudo[143330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:06 compute-0 podman[143271]: 2025-11-22 03:34:06.517693026 +0000 UTC m=+0.154921200 container create d20c3a6d8d413f8e2240e0af0941fa5944f037dfe6c57e52428aab41021ae843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:34:06 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:34:06 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:34:06 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:34:06 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:34:06 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:34:06 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:34:06 compute-0 systemd[1]: Started libpod-conmon-d20c3a6d8d413f8e2240e0af0941fa5944f037dfe6c57e52428aab41021ae843.scope.
Nov 22 03:34:06 compute-0 python3.9[143332]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:34:06 compute-0 sudo[143330]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:06 compute-0 podman[143271]: 2025-11-22 03:34:06.884116705 +0000 UTC m=+0.521344889 container init d20c3a6d8d413f8e2240e0af0941fa5944f037dfe6c57e52428aab41021ae843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jones, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Nov 22 03:34:06 compute-0 podman[143271]: 2025-11-22 03:34:06.895883464 +0000 UTC m=+0.533111618 container start d20c3a6d8d413f8e2240e0af0941fa5944f037dfe6c57e52428aab41021ae843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 03:34:06 compute-0 loving_jones[143335]: 167 167
Nov 22 03:34:06 compute-0 systemd[1]: libpod-d20c3a6d8d413f8e2240e0af0941fa5944f037dfe6c57e52428aab41021ae843.scope: Deactivated successfully.
Nov 22 03:34:06 compute-0 podman[143271]: 2025-11-22 03:34:06.957278142 +0000 UTC m=+0.594506376 container attach d20c3a6d8d413f8e2240e0af0941fa5944f037dfe6c57e52428aab41021ae843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:34:06 compute-0 podman[143271]: 2025-11-22 03:34:06.957832753 +0000 UTC m=+0.595060927 container died d20c3a6d8d413f8e2240e0af0941fa5944f037dfe6c57e52428aab41021ae843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:34:07 compute-0 sudo[143430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnduiwwarxrcztmjhakbojltfvrecdrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782446.363434-116-24714045274589/AnsiballZ_file.py'
Nov 22 03:34:07 compute-0 sudo[143430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-75d43797ba070864b600f319477e03744aa1e6667ad53ba0d34c4c212cce85ba-merged.mount: Deactivated successfully.
Nov 22 03:34:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:07 compute-0 podman[143271]: 2025-11-22 03:34:07.190296718 +0000 UTC m=+0.827524872 container remove d20c3a6d8d413f8e2240e0af0941fa5944f037dfe6c57e52428aab41021ae843 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:34:07 compute-0 systemd[1]: libpod-conmon-d20c3a6d8d413f8e2240e0af0941fa5944f037dfe6c57e52428aab41021ae843.scope: Deactivated successfully.
Nov 22 03:34:07 compute-0 python3.9[143432]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zna28l1o recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:07 compute-0 sudo[143430]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:07 compute-0 podman[143440]: 2025-11-22 03:34:07.43163875 +0000 UTC m=+0.070069330 container create 9440beb029279fa9d2c5d157988e51b6c4727b7dc2d4f440fec320a7e5eb3858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:34:07 compute-0 systemd[1]: Started libpod-conmon-9440beb029279fa9d2c5d157988e51b6c4727b7dc2d4f440fec320a7e5eb3858.scope.
Nov 22 03:34:07 compute-0 podman[143440]: 2025-11-22 03:34:07.389822812 +0000 UTC m=+0.028253382 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/697fd299f88d6e8eaf77898eaf0177a1973d736343caa5b667615d354e493922/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/697fd299f88d6e8eaf77898eaf0177a1973d736343caa5b667615d354e493922/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/697fd299f88d6e8eaf77898eaf0177a1973d736343caa5b667615d354e493922/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/697fd299f88d6e8eaf77898eaf0177a1973d736343caa5b667615d354e493922/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/697fd299f88d6e8eaf77898eaf0177a1973d736343caa5b667615d354e493922/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:07 compute-0 podman[143440]: 2025-11-22 03:34:07.528485381 +0000 UTC m=+0.166915961 container init 9440beb029279fa9d2c5d157988e51b6c4727b7dc2d4f440fec320a7e5eb3858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lewin, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Nov 22 03:34:07 compute-0 podman[143440]: 2025-11-22 03:34:07.536576177 +0000 UTC m=+0.175006717 container start 9440beb029279fa9d2c5d157988e51b6c4727b7dc2d4f440fec320a7e5eb3858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lewin, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:34:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:07 compute-0 podman[143440]: 2025-11-22 03:34:07.552526128 +0000 UTC m=+0.190956708 container attach 9440beb029279fa9d2c5d157988e51b6c4727b7dc2d4f440fec320a7e5eb3858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:34:07 compute-0 ceph-mon[75011]: pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:07 compute-0 sudo[143610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcuapibcdcuryeitzwboagqrjqtgnurx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782447.7104018-128-274437436488799/AnsiballZ_stat.py'
Nov 22 03:34:07 compute-0 sudo[143610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:07 compute-0 python3.9[143612]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:08 compute-0 sudo[143610]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:08 compute-0 sudo[143688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtrdbadvwjpuxwrzffitiznxlwxagibv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782447.7104018-128-274437436488799/AnsiballZ_file.py'
Nov 22 03:34:08 compute-0 sudo[143688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:08 compute-0 python3.9[143690]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:08 compute-0 sudo[143688]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:08 compute-0 thirsty_lewin[143503]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:34:08 compute-0 thirsty_lewin[143503]: --> relative data size: 1.0
Nov 22 03:34:08 compute-0 thirsty_lewin[143503]: --> All data devices are unavailable
Nov 22 03:34:08 compute-0 systemd[1]: libpod-9440beb029279fa9d2c5d157988e51b6c4727b7dc2d4f440fec320a7e5eb3858.scope: Deactivated successfully.
Nov 22 03:34:08 compute-0 systemd[1]: libpod-9440beb029279fa9d2c5d157988e51b6c4727b7dc2d4f440fec320a7e5eb3858.scope: Consumed 1.018s CPU time.
Nov 22 03:34:08 compute-0 podman[143440]: 2025-11-22 03:34:08.620280431 +0000 UTC m=+1.258710981 container died 9440beb029279fa9d2c5d157988e51b6c4727b7dc2d4f440fec320a7e5eb3858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lewin, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-697fd299f88d6e8eaf77898eaf0177a1973d736343caa5b667615d354e493922-merged.mount: Deactivated successfully.
Nov 22 03:34:08 compute-0 podman[143440]: 2025-11-22 03:34:08.700587499 +0000 UTC m=+1.339018059 container remove 9440beb029279fa9d2c5d157988e51b6c4727b7dc2d4f440fec320a7e5eb3858 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 03:34:08 compute-0 systemd[1]: libpod-conmon-9440beb029279fa9d2c5d157988e51b6c4727b7dc2d4f440fec320a7e5eb3858.scope: Deactivated successfully.
Nov 22 03:34:08 compute-0 sudo[143114]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:08 compute-0 ceph-mon[75011]: pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:08 compute-0 sudo[143802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:34:08 compute-0 sudo[143802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:08 compute-0 sudo[143802]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:08 compute-0 sudo[143850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:34:08 compute-0 sudo[143850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:08 compute-0 sudo[143850]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:08 compute-0 sudo[143899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:34:08 compute-0 sudo[143899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:08 compute-0 sudo[143899]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:09 compute-0 sudo[143949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsrdloihwqtfhfvqjnxlcldefsdtleyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782448.811171-141-165503799258557/AnsiballZ_command.py'
Nov 22 03:34:09 compute-0 sudo[143949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:09 compute-0 sudo[143951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:34:09 compute-0 sudo[143951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
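The follow-up call above ends with "lvm list --format json", whose output maps OSD ids to their backing LVs; a sketch of consuming it, calling ceph-volume directly on the host rather than through the cephadm wrapper, with the key names below being assumptions about the JSON shape:

    import json, subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout

    # Assumed shape: {"<osd_id>": [{"type": "block", "lv_path": ...,
    # "tags": {...}}, ...]} -- verify against your ceph-volume version.
    for osd_id, devs in json.loads(out).items():
        for dev in devs:
            print(osd_id, dev.get("type"), dev.get("lv_path"))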
Nov 22 03:34:09 compute-0 python3.9[143957]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:09 compute-0 sudo[143949]: pam_unix(sudo:session): session closed for user root
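The "nft -j list ruleset" run above returns libnftables JSON: one top-level "nftables" array of single-key objects (metainfo, table, chain, rule, ...). A short sketch counting rules per chain from that output:

    import json, subprocess
    from collections import Counter

    doc = json.loads(subprocess.run(
        ["nft", "-j", "list", "ruleset"],
        check=True, capture_output=True, text=True).stdout)

    # Each array entry is a one-key object; only "rule" entries count here.
    per_chain = Counter(
        (item["rule"]["table"], item["rule"]["chain"])
        for item in doc["nftables"] if "rule" in item)

    for (table, chain), n in sorted(per_chain.items()):
        print(f"{table}/{chain}: {n} rules")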
Nov 22 03:34:09 compute-0 podman[144040]: 2025-11-22 03:34:09.318651197 +0000 UTC m=+0.020340986 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:09 compute-0 podman[144040]: 2025-11-22 03:34:09.520908439 +0000 UTC m=+0.222598208 container create 90e3bcda8ec838b33e0910b9ef03275f9a08371825d8baf467e3a9768503a587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:09 compute-0 systemd[1]: Started libpod-conmon-90e3bcda8ec838b33e0910b9ef03275f9a08371825d8baf467e3a9768503a587.scope.
Nov 22 03:34:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:34:09 compute-0 podman[144040]: 2025-11-22 03:34:09.727354639 +0000 UTC m=+0.429044498 container init 90e3bcda8ec838b33e0910b9ef03275f9a08371825d8baf467e3a9768503a587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lalande, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:09 compute-0 podman[144040]: 2025-11-22 03:34:09.739556068 +0000 UTC m=+0.441245867 container start 90e3bcda8ec838b33e0910b9ef03275f9a08371825d8baf467e3a9768503a587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lalande, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:09 compute-0 boring_lalande[144109]: 167 167
Nov 22 03:34:09 compute-0 systemd[1]: libpod-90e3bcda8ec838b33e0910b9ef03275f9a08371825d8baf467e3a9768503a587.scope: Deactivated successfully.
Nov 22 03:34:09 compute-0 sudo[144198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqnxcjvtzqanwqitxiubesaqqwclznrz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763782449.6047084-149-53062371719254/AnsiballZ_edpm_nftables_from_files.py'
Nov 22 03:34:09 compute-0 sudo[144198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:10 compute-0 podman[144040]: 2025-11-22 03:34:10.038134075 +0000 UTC m=+0.739823864 container attach 90e3bcda8ec838b33e0910b9ef03275f9a08371825d8baf467e3a9768503a587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:34:10 compute-0 podman[144040]: 2025-11-22 03:34:10.039440183 +0000 UTC m=+0.741129972 container died 90e3bcda8ec838b33e0910b9ef03275f9a08371825d8baf467e3a9768503a587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:34:10 compute-0 python3[144200]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 03:34:10 compute-0 sudo[144198]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-abb3b226e09bc68326d038539fd95d1c5f938b7cc3a74bdeca69bab75bf929a7-merged.mount: Deactivated successfully.
Nov 22 03:34:10 compute-0 podman[144040]: 2025-11-22 03:34:10.547651674 +0000 UTC m=+1.249341453 container remove 90e3bcda8ec838b33e0910b9ef03275f9a08371825d8baf467e3a9768503a587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_lalande, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:34:10 compute-0 systemd[1]: libpod-conmon-90e3bcda8ec838b33e0910b9ef03275f9a08371825d8baf467e3a9768503a587.scope: Deactivated successfully.
Nov 22 03:34:10 compute-0 podman[144326]: 2025-11-22 03:34:10.717624139 +0000 UTC m=+0.045914755 container create 1e35c02b1125e75504071bbdd2000d7e9b31ae08d020f9cd855d23515972543b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mclean, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:10 compute-0 systemd[1]: Started libpod-conmon-1e35c02b1125e75504071bbdd2000d7e9b31ae08d020f9cd855d23515972543b.scope.
Nov 22 03:34:10 compute-0 sudo[144375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erfjezpybkvnvjplpoxktlctctsuwjxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782450.5828614-157-250402885009882/AnsiballZ_stat.py'
Nov 22 03:34:10 compute-0 sudo[144375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:34:10 compute-0 podman[144326]: 2025-11-22 03:34:10.697066357 +0000 UTC m=+0.025357003 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95b999c0c82f7ef8bad79c732fb74fbe6875ab00f4df8a9dac9f46334f99994/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95b999c0c82f7ef8bad79c732fb74fbe6875ab00f4df8a9dac9f46334f99994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95b999c0c82f7ef8bad79c732fb74fbe6875ab00f4df8a9dac9f46334f99994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95b999c0c82f7ef8bad79c732fb74fbe6875ab00f4df8a9dac9f46334f99994/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:10 compute-0 podman[144326]: 2025-11-22 03:34:10.802119165 +0000 UTC m=+0.130409811 container init 1e35c02b1125e75504071bbdd2000d7e9b31ae08d020f9cd855d23515972543b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mclean, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:34:10 compute-0 podman[144326]: 2025-11-22 03:34:10.81200535 +0000 UTC m=+0.140295956 container start 1e35c02b1125e75504071bbdd2000d7e9b31ae08d020f9cd855d23515972543b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:34:10 compute-0 podman[144326]: 2025-11-22 03:34:10.816074759 +0000 UTC m=+0.144365385 container attach 1e35c02b1125e75504071bbdd2000d7e9b31ae08d020f9cd855d23515972543b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mclean, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:34:10 compute-0 python3.9[144379]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:10 compute-0 sudo[144375]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:11 compute-0 ceph-mon[75011]: pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]: {
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:     "0": [
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:         {
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "devices": [
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "/dev/loop3"
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             ],
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_name": "ceph_lv0",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_size": "21470642176",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "name": "ceph_lv0",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "tags": {
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.cluster_name": "ceph",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.crush_device_class": "",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.encrypted": "0",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.osd_id": "0",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.type": "block",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.vdo": "0"
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             },
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "type": "block",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "vg_name": "ceph_vg0"
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:         }
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:     ],
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:     "1": [
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:         {
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "devices": [
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "/dev/loop4"
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             ],
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_name": "ceph_lv1",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_size": "21470642176",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "name": "ceph_lv1",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "tags": {
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.cluster_name": "ceph",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.crush_device_class": "",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.encrypted": "0",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.osd_id": "1",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.type": "block",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.vdo": "0"
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             },
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "type": "block",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "vg_name": "ceph_vg1"
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:         }
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:     ],
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:     "2": [
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:         {
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "devices": [
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "/dev/loop5"
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             ],
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_name": "ceph_lv2",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_size": "21470642176",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "name": "ceph_lv2",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "tags": {
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.cluster_name": "ceph",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.crush_device_class": "",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.encrypted": "0",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.osd_id": "2",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.type": "block",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:                 "ceph.vdo": "0"
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             },
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "type": "block",
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:             "vg_name": "ceph_vg2"
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:         }
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]:     ]
Nov 22 03:34:11 compute-0 wizardly_mclean[144377]: }
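
The JSON emitted by the wizardly_mclean container above is the per-OSD report for the LVM-backed OSDs on this host: three block LVs (ceph_vg0/ceph_lv0 through ceph_vg2/ceph_lv2, 20 GiB each) carved from loop devices /dev/loop3-/dev/loop5, all tagged with cluster fsid 7adcc38b-6484-5de6-b879-33a0309153df. A minimal Python sketch, assuming that same JSON is fed on stdin (the summarize() helper is illustrative, not part of the deployment tooling), condenses the report into an osd_id -> device map:

    import json
    import sys

    def summarize(report: dict) -> dict:
        """Map each OSD id to its LV path, OSD fsid and backing devices."""
        out = {}
        for osd_id, entries in report.items():
            for entry in entries:
                if entry.get("type") == "block":
                    out[osd_id] = {
                        "lv_path": entry["lv_path"],
                        "osd_fsid": entry["tags"]["ceph.osd_fsid"],
                        "devices": entry["devices"],
                    }
        return out

    if __name__ == "__main__":
        # e.g. pipe the JSON block logged above into this script
        for osd_id, info in sorted(summarize(json.load(sys.stdin)).items()):
            print(osd_id, info["lv_path"], info["osd_fsid"], *info["devices"])
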
Nov 22 03:34:11 compute-0 systemd[1]: libpod-1e35c02b1125e75504071bbdd2000d7e9b31ae08d020f9cd855d23515972543b.scope: Deactivated successfully.
Nov 22 03:34:11 compute-0 podman[144326]: 2025-11-22 03:34:11.603890069 +0000 UTC m=+0.932180705 container died 1e35c02b1125e75504071bbdd2000d7e9b31ae08d020f9cd855d23515972543b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 03:34:11 compute-0 sudo[144520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlsmagapjnrjngmumclqzofsfzfbfoej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782450.5828614-157-250402885009882/AnsiballZ_copy.py'
Nov 22 03:34:11 compute-0 sudo[144520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:11 compute-0 python3.9[144522]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782450.5828614-157-250402885009882/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e95b999c0c82f7ef8bad79c732fb74fbe6875ab00f4df8a9dac9f46334f99994-merged.mount: Deactivated successfully.
Nov 22 03:34:11 compute-0 sudo[144520]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:12 compute-0 podman[144326]: 2025-11-22 03:34:12.35117093 +0000 UTC m=+1.679461536 container remove 1e35c02b1125e75504071bbdd2000d7e9b31ae08d020f9cd855d23515972543b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:34:12 compute-0 sudo[144673]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjngaodanxolaavdgcssaoucpjikjoza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782452.2816448-172-10073141081042/AnsiballZ_stat.py'
Nov 22 03:34:12 compute-0 sudo[144673]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:12 compute-0 systemd[1]: libpod-conmon-1e35c02b1125e75504071bbdd2000d7e9b31ae08d020f9cd855d23515972543b.scope: Deactivated successfully.
Nov 22 03:34:12 compute-0 sudo[143951]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:12 compute-0 sudo[144676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:34:12 compute-0 sudo[144676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:12 compute-0 sudo[144676]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:12 compute-0 sudo[144701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:34:12 compute-0 sudo[144701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:12 compute-0 sudo[144701]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:12 compute-0 sudo[144726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:34:12 compute-0 sudo[144726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:12 compute-0 sudo[144726]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:12 compute-0 python3.9[144675]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:12 compute-0 sudo[144751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:34:12 compute-0 sudo[144751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:12 compute-0 sudo[144673]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:13 compute-0 sudo[144954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjfgvzphqkcybzjnmltvjipnjepktxlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782452.2816448-172-10073141081042/AnsiballZ_copy.py'
Nov 22 03:34:13 compute-0 sudo[144954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:13 compute-0 podman[144910]: 2025-11-22 03:34:13.016124554 +0000 UTC m=+0.025618070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:13 compute-0 podman[144910]: 2025-11-22 03:34:13.195344168 +0000 UTC m=+0.204837714 container create 78b9575e1f1c5447224106e5e1ebb8909586f1c97c3a369e48423633aaedd7f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elgamal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:13 compute-0 python3.9[144956]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782452.2816448-172-10073141081042/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:13 compute-0 sudo[144954]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:13 compute-0 systemd[1]: Started libpod-conmon-78b9575e1f1c5447224106e5e1ebb8909586f1c97c3a369e48423633aaedd7f8.scope.
Nov 22 03:34:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:34:13 compute-0 ceph-mon[75011]: pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:13 compute-0 podman[144910]: 2025-11-22 03:34:13.615282664 +0000 UTC m=+0.624776180 container init 78b9575e1f1c5447224106e5e1ebb8909586f1c97c3a369e48423633aaedd7f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:34:13 compute-0 podman[144910]: 2025-11-22 03:34:13.622775578 +0000 UTC m=+0.632269084 container start 78b9575e1f1c5447224106e5e1ebb8909586f1c97c3a369e48423633aaedd7f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:34:13 compute-0 systemd[1]: libpod-78b9575e1f1c5447224106e5e1ebb8909586f1c97c3a369e48423633aaedd7f8.scope: Deactivated successfully.
Nov 22 03:34:13 compute-0 elated_elgamal[144972]: 167 167
Nov 22 03:34:13 compute-0 podman[144910]: 2025-11-22 03:34:13.82416326 +0000 UTC m=+0.833656816 container attach 78b9575e1f1c5447224106e5e1ebb8909586f1c97c3a369e48423633aaedd7f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elgamal, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:34:13 compute-0 podman[144910]: 2025-11-22 03:34:13.825619013 +0000 UTC m=+0.835112509 container died 78b9575e1f1c5447224106e5e1ebb8909586f1c97c3a369e48423633aaedd7f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elgamal, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:34:13 compute-0 sudo[145125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtcwvgzbjhsuhhmogsgktcdmntrbsbxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782453.8105252-187-131092968204458/AnsiballZ_stat.py'
Nov 22 03:34:13 compute-0 sudo[145125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:14 compute-0 python3.9[145127]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:14 compute-0 sudo[145125]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8bfba3350a45a5ca5a283108e745f2cc0611d2ecc7095b2902c82ee556fdfc6-merged.mount: Deactivated successfully.
Nov 22 03:34:14 compute-0 sudo[145251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bggohjuiyauujcawdggptadrntisaxik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782453.8105252-187-131092968204458/AnsiballZ_copy.py'
Nov 22 03:34:14 compute-0 sudo[145251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:14 compute-0 ceph-mon[75011]: pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:15 compute-0 python3.9[145253]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782453.8105252-187-131092968204458/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:15 compute-0 sudo[145251]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:15 compute-0 podman[144910]: 2025-11-22 03:34:15.057146648 +0000 UTC m=+2.066640144 container remove 78b9575e1f1c5447224106e5e1ebb8909586f1c97c3a369e48423633aaedd7f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:34:15 compute-0 systemd[1]: libpod-conmon-78b9575e1f1c5447224106e5e1ebb8909586f1c97c3a369e48423633aaedd7f8.scope: Deactivated successfully.
Nov 22 03:34:15 compute-0 podman[145285]: 2025-11-22 03:34:15.282320887 +0000 UTC m=+0.108670621 container create 4e62807d5aa1a1da15d4252ce343fa16348582c08c71f775dd2d4d822f086787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:34:15 compute-0 podman[145285]: 2025-11-22 03:34:15.199282884 +0000 UTC m=+0.025632708 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:34:15 compute-0 systemd[1]: Started libpod-conmon-4e62807d5aa1a1da15d4252ce343fa16348582c08c71f775dd2d4d822f086787.scope.
Nov 22 03:34:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:34:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec7a110cb572447983a060154e143a61fef331e4dd639725d16d43c05285f2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec7a110cb572447983a060154e143a61fef331e4dd639725d16d43c05285f2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec7a110cb572447983a060154e143a61fef331e4dd639725d16d43c05285f2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fec7a110cb572447983a060154e143a61fef331e4dd639725d16d43c05285f2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:34:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:15 compute-0 sudo[145429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkoafixanquvsihskiypyqtvyyxcdjrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782455.4490662-202-162243960255860/AnsiballZ_stat.py'
Nov 22 03:34:15 compute-0 sudo[145429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:15 compute-0 podman[145285]: 2025-11-22 03:34:15.64816933 +0000 UTC m=+0.474519084 container init 4e62807d5aa1a1da15d4252ce343fa16348582c08c71f775dd2d4d822f086787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:34:15 compute-0 podman[145285]: 2025-11-22 03:34:15.656204484 +0000 UTC m=+0.482554218 container start 4e62807d5aa1a1da15d4252ce343fa16348582c08c71f775dd2d4d822f086787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:34:15 compute-0 podman[145285]: 2025-11-22 03:34:15.773996374 +0000 UTC m=+0.600346138 container attach 4e62807d5aa1a1da15d4252ce343fa16348582c08c71f775dd2d4d822f086787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 03:34:15 compute-0 python3.9[145431]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:15 compute-0 sudo[145429]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:16 compute-0 sudo[145556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sidtikysllbgfkdeqwoozyfnqgnumwow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782455.4490662-202-162243960255860/AnsiballZ_copy.py'
Nov 22 03:34:16 compute-0 sudo[145556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:16 compute-0 python3.9[145558]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782455.4490662-202-162243960255860/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:16 compute-0 sudo[145556]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]: {
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "osd_id": 1,
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "type": "bluestore"
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:     },
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "osd_id": 0,
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "type": "bluestore"
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:     },
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "osd_id": 2,
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:         "type": "bluestore"
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]:     }
Nov 22 03:34:16 compute-0 intelligent_cohen[145378]: }
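
This second report matches the `ceph-volume ... raw list --format json` call logged at 03:34:12 under sudo[144751]; it is keyed by OSD uuid and names the device-mapper path (/dev/mapper/ceph_vgN-ceph_lvN) rather than the LV path. A small consistency sketch (hypothetical helpers; both reports parsed from JSON as above) shows the two listings describe the same three bluestore OSDs:

    def pairs_from_lvm(lvm: dict) -> set:
        """(osd_id, osd_fsid) pairs from `ceph-volume lvm list` output."""
        return {(e["tags"]["ceph.osd_id"], e["tags"]["ceph.osd_fsid"])
                for entries in lvm.values() for e in entries}

    def pairs_from_raw(raw: dict) -> set:
        """(osd_id, osd_uuid) pairs from `ceph-volume raw list` output."""
        return {(str(v["osd_id"]), v["osd_uuid"]) for v in raw.values()}

    # Both reports above should yield the same three (id, uuid) pairs for
    # OSDs 0, 1 and 2; a mismatch would point at a stale LVM tag.
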
Nov 22 03:34:16 compute-0 systemd[1]: libpod-4e62807d5aa1a1da15d4252ce343fa16348582c08c71f775dd2d4d822f086787.scope: Deactivated successfully.
Nov 22 03:34:16 compute-0 podman[145285]: 2025-11-22 03:34:16.711857015 +0000 UTC m=+1.538206749 container died 4e62807d5aa1a1da15d4252ce343fa16348582c08c71f775dd2d4d822f086787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:34:16 compute-0 systemd[1]: libpod-4e62807d5aa1a1da15d4252ce343fa16348582c08c71f775dd2d4d822f086787.scope: Consumed 1.052s CPU time.
Nov 22 03:34:17 compute-0 sudo[145750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuwxmazknsignvnbvnbvgejcwukrikws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782456.8479245-217-200785270754121/AnsiballZ_stat.py'
Nov 22 03:34:17 compute-0 sudo[145750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-fec7a110cb572447983a060154e143a61fef331e4dd639725d16d43c05285f2a-merged.mount: Deactivated successfully.
Nov 22 03:34:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:17 compute-0 ceph-mon[75011]: pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:17 compute-0 python3.9[145752]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:17 compute-0 sudo[145750]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:17 compute-0 podman[145285]: 2025-11-22 03:34:17.330668772 +0000 UTC m=+2.157018546 container remove 4e62807d5aa1a1da15d4252ce343fa16348582c08c71f775dd2d4d822f086787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 22 03:34:17 compute-0 systemd[1]: libpod-conmon-4e62807d5aa1a1da15d4252ce343fa16348582c08c71f775dd2d4d822f086787.scope: Deactivated successfully.
Nov 22 03:34:17 compute-0 sudo[144751]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:34:17 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:34:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:34:17 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:34:17 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev a64c21e7-f389-461b-9e0d-0b1e11aa3909 does not exist
Nov 22 03:34:17 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 00787d10-5ddf-4261-9d4b-593b18394daa does not exist
Nov 22 03:34:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:17 compute-0 sudo[145805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:34:17 compute-0 sudo[145805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:17 compute-0 sudo[145805]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:17 compute-0 sudo[145850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:34:17 compute-0 sudo[145850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:34:17 compute-0 sudo[145850]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:17 compute-0 sudo[145925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdcqsiehyiukbnoddnkvimkdsyhiopdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782456.8479245-217-200785270754121/AnsiballZ_copy.py'
Nov 22 03:34:17 compute-0 sudo[145925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:18 compute-0 python3.9[145927]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782456.8479245-217-200785270754121/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:18 compute-0 sudo[145925]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:18 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:34:18 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:34:18 compute-0 ceph-mon[75011]: pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:18 compute-0 sudo[146077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsmcsujjeczsxhigqwrumzfiyvqheohl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782458.4617221-232-98427218476999/AnsiballZ_file.py'
Nov 22 03:34:18 compute-0 sudo[146077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:18 compute-0 python3.9[146079]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:18 compute-0 sudo[146077]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:19 compute-0 sudo[146229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huzvsposgrhgotuwzxexjjtqfbikvheo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782459.216931-240-49780357854990/AnsiballZ_command.py'
Nov 22 03:34:19 compute-0 sudo[146229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:19 compute-0 python3.9[146231]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:19 compute-0 sudo[146229]: pam_unix(sudo:session): session closed for user root
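
The command above is the dry-run gate: the five EDPM nftables fragments are concatenated in a fixed order (chains, flushes, rules, update-jumps, jumps) and fed to `nft -c -f -`, which parses the combined ruleset without committing it, so a syntax error fails the play before the live ruleset is touched. A standalone sketch of the same check (paths exactly as logged; check_ruleset() is illustrative):

    import subprocess

    FILES = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    def check_ruleset(files=FILES) -> bool:
        blob = b"".join(open(f, "rb").read() for f in files)
        # -c: check only, do not commit; -f -: read the ruleset from stdin
        return subprocess.run(["nft", "-c", "-f", "-"], input=blob).returncode == 0
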
Nov 22 03:34:20 compute-0 sudo[146384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxpjhofqreassscpdocexcucmitsyuvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782460.071536-248-171304632372165/AnsiballZ_blockinfile.py'
Nov 22 03:34:20 compute-0 sudo[146384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:20 compute-0 ceph-mon[75011]: pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:20 compute-0 python3.9[146386]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:20 compute-0 sudo[146384]: pam_unix(sudo:session): session closed for user root
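
Given the blockinfile parameters above (marker "# {mark} ANSIBLE MANAGED BLOCK" with BEGIN/END, validated via `nft -c -f %s`), the managed block written into /etc/sysconfig/nftables.conf should render as follows (reconstructed from the logged arguments, not copied from the host):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

Note that only the chains, rules and jumps files are included at boot; the flush and update-jump fragments appear to be applied only transiently during reconfiguration, as the tasks that follow show.
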
Nov 22 03:34:21 compute-0 sudo[146536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciknuequnwpfipglmkhgbljqudxxqceo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782461.2404544-257-263085089617059/AnsiballZ_command.py'
Nov 22 03:34:21 compute-0 sudo[146536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:21 compute-0 python3.9[146538]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:21 compute-0 sudo[146536]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:22 compute-0 sudo[146689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vktzildmpcdljgtilromftmmgjzxenmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782462.1836834-265-1155978763617/AnsiballZ_stat.py'
Nov 22 03:34:22 compute-0 sudo[146689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:22 compute-0 python3.9[146691]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:34:22 compute-0 sudo[146689]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:22 compute-0 ceph-mon[75011]: pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:23 compute-0 sudo[146843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frpmnupivhayuvabygjlmjmqtecvafsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782463.067118-273-234563363451667/AnsiballZ_command.py'
Nov 22 03:34:23 compute-0 sudo[146843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:23 compute-0 python3.9[146845]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:23 compute-0 sudo[146843]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:24 compute-0 sudo[146998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxjiicjyewybhkwefpmckkxzsukvigte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782464.0544963-281-105985947202064/AnsiballZ_file.py'
Nov 22 03:34:24 compute-0 sudo[146998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:24 compute-0 python3.9[147000]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:24 compute-0 sudo[146998]: pam_unix(sudo:session): session closed for user root
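
The tasks at 03:34:18-03:34:24 form the role's change gate: touch /etc/nftables/edpm-rules.nft.changed after writing new rule files, stat it, apply flushes + rules + update-jumps through `nft -f -` while it exists, then delete it so an unchanged ruleset is not reloaded on the next run. A condensed sketch of that pattern (apply_if_changed() is illustrative, not the role's actual code):

    import os
    import subprocess

    MARKER = "/etc/nftables/edpm-rules.nft.changed"
    APPLY = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    def apply_if_changed() -> None:
        if not os.path.exists(MARKER):
            return  # nothing changed since the last apply
        blob = b"".join(open(f, "rb").read() for f in APPLY)
        subprocess.run(["nft", "-f", "-"], input=blob, check=True)
        os.remove(MARKER)
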
Nov 22 03:34:25 compute-0 ceph-mon[75011]: pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:26 compute-0 python3.9[147150]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:34:26 compute-0 ceph-mon[75011]: pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:26 compute-0 sudo[147301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-actlyfztdkssweyrpbgpypijvuqiqvhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782466.8497677-321-127642595552147/AnsiballZ_command.py'
Nov 22 03:34:26 compute-0 sudo[147301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:27 compute-0 python3.9[147303]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:93:45:69:49" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:27 compute-0 ovs-vsctl[147304]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:93:45:69:49 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Nov 22 03:34:27 compute-0 sudo[147301]: pam_unix(sudo:session): session closed for user root
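
The ovs-vsctl call above seeds the OVN chassis configuration as external_ids on the Open_vSwitch table: integration bridge br-int, bridge mapping datacentre:br-ex, Geneve encapsulation from 172.19.0.100, the southbound database at ssl:ovsdbserver-sb.openstack.svc:6642, plus probe and ofctrl-wait timers. One way to read those values back for verification, sketched around the standard `ovs-vsctl get` column:key syntax (get_external_id() is a hypothetical helper):

    import subprocess

    def get_external_id(key: str) -> str:
        # `ovs-vsctl get Open_vSwitch . external_ids:<key>` prints the value quoted
        out = subprocess.run(
            ["ovs-vsctl", "get", "Open_vSwitch", ".", f"external_ids:{key}"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return out.strip('"')

    for key in ("ovn-bridge", "ovn-bridge-mappings", "ovn-encap-ip", "ovn-remote"):
        print(key, "=", get_external_id(key))
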
Nov 22 03:34:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:27 compute-0 sudo[147454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qztpgouwzznacmfekwtmscuphcojbcer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782467.7771683-330-15786936428559/AnsiballZ_command.py'
Nov 22 03:34:27 compute-0 sudo[147454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:28 compute-0 python3.9[147456]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:28 compute-0 sudo[147454]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:28 compute-0 sudo[147609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgmzqhabkfylcyovixkjovbuxlhtrkzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782468.6937478-338-46159852872589/AnsiballZ_command.py'
Nov 22 03:34:28 compute-0 sudo[147609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:29 compute-0 python3.9[147611]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:34:29 compute-0 ovs-vsctl[147612]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
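Taken together, the last two tasks implement a "create the OVSDB Manager once" pattern: the grep guard exits 0 when a Manager row already exists, and (presumably via a conditional in the playbook) the create task only runs when it does not. As one shell sketch:

    set -o pipefail
    if ! ovs-vsctl show | grep -q "Manager"; then
        ovs-vsctl --timeout=5 --id=@manager \
            -- create Manager 'target="ptcp:6640:127.0.0.1"' \
            -- add Open_vSwitch . manager_options @manager
    fi

The ******** in the Ansible entry above appears to be its secret-masking heuristic tripping on the colon-separated target string; the ovs-vsctl audit line shows the actual value.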
Nov 22 03:34:29 compute-0 sudo[147609]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:29 compute-0 ceph-mon[75011]: pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:30 compute-0 python3.9[147762]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:34:30 compute-0 ceph-mon[75011]: pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:30 compute-0 sudo[147914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfkvaufsmiaccbmrsuweizxfojhubaby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782470.6288855-355-124812950487487/AnsiballZ_file.py'
Nov 22 03:34:30 compute-0 sudo[147914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:31 compute-0 python3.9[147916]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:34:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:31 compute-0 sudo[147914]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:32 compute-0 sudo[148066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejelnvkvlyqattogjylmjqryobzmigva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782472.3742118-363-78983441338456/AnsiballZ_stat.py'
Nov 22 03:34:32 compute-0 sudo[148066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:32 compute-0 python3.9[148068]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:32 compute-0 sudo[148066]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:33 compute-0 sudo[148144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyfchoaarzxcswdgfxkrmxecxbtbydgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782472.3742118-363-78983441338456/AnsiballZ_file.py'
Nov 22 03:34:33 compute-0 sudo[148144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:33 compute-0 ceph-mon[75011]: pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:33 compute-0 python3.9[148146]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:34:33 compute-0 sudo[148144]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:33 compute-0 sudo[148296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjbqzdnwgvgulbduiukxggzzwnxuzqjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782473.888787-363-177892693646350/AnsiballZ_stat.py'
Nov 22 03:34:33 compute-0 sudo[148296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:34 compute-0 python3.9[148298]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:34 compute-0 sudo[148296]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:34 compute-0 sudo[148374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vytrepjkxyjoozyfxqealfaaerzxdnnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782473.888787-363-177892693646350/AnsiballZ_file.py'
Nov 22 03:34:34 compute-0 sudo[148374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:34 compute-0 python3.9[148376]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:34:34 compute-0 sudo[148374]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:35 compute-0 sudo[148526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkmeijmzmmcjanyslmsaskvyxtwcltwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782475.151958-386-178016455241090/AnsiballZ_file.py'
Nov 22 03:34:35 compute-0 sudo[148526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:35 compute-0 ceph-mon[75011]: pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:35 compute-0 python3.9[148528]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
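mode=420 in the entry above is not an error: most likely an unquoted 0644 in the playbook, which YAML parses as the decimal integer 420 before Ansible logs it, and 420 decimal is exactly 0644 octal, so the directory still ends up with the intended permissions. Quick check:

    python3 -c 'print(oct(420))'    # 0o644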
Nov 22 03:34:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:35 compute-0 sudo[148526]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:36 compute-0 sudo[148678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzurxjraobyfqcxrpyybxhzspazsyxah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782476.0334673-394-231892837972409/AnsiballZ_stat.py'
Nov 22 03:34:36 compute-0 sudo[148678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:34:36
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'images', 'default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log']
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
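The upmap balancer evaluated all eleven pools and prepared 0 of a possible 10 changes, i.e. the PGs are already balanced. The same state can be read interactively with the standard ceph CLI:

    ceph balancer status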
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:34:36 compute-0 python3.9[148680]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:34:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:34:36 compute-0 sudo[148678]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:36 compute-0 sudo[148756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfcettrqeiqzaqjxvhjharwqvwvzedyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782476.0334673-394-231892837972409/AnsiballZ_file.py'
Nov 22 03:34:36 compute-0 sudo[148756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:36 compute-0 python3.9[148758]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:36 compute-0 sudo[148756]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:37 compute-0 sudo[148908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbajmlbknhshoyjzblbntylgayspsjmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782477.2383971-406-17410385896390/AnsiballZ_stat.py'
Nov 22 03:34:37 compute-0 sudo[148908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:37 compute-0 ceph-mon[75011]: pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:37 compute-0 python3.9[148910]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:37 compute-0 sudo[148908]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:37 compute-0 sudo[148986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueupcpypgggihszltjonkmgtttmmmkyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782477.2383971-406-17410385896390/AnsiballZ_file.py'
Nov 22 03:34:37 compute-0 sudo[148986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:38 compute-0 python3.9[148988]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:38 compute-0 sudo[148986]: pam_unix(sudo:session): session closed for user root
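systemd preset files like 91-edpm-container-shutdown.preset declare whether `systemctl preset` should enable or disable a unit; for this role the file is presumably a single `enable edpm-container-shutdown.service` line (its contents are not logged). Applying it by hand would be:

    systemctl preset edpm-container-shutdown.service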
Nov 22 03:34:38 compute-0 sudo[149138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsitskniifqnboerljqeemslmwubivpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782478.4670308-418-199697299650653/AnsiballZ_systemd.py'
Nov 22 03:34:38 compute-0 sudo[149138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:38 compute-0 python3.9[149140]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:34:38 compute-0 systemd[1]: Reloading.
Nov 22 03:34:39 compute-0 systemd-rc-local-generator[149164]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:34:39 compute-0 systemd-sysv-generator[149172]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:34:39 compute-0 sudo[149138]: pam_unix(sudo:session): session closed for user root
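The ansible.builtin.systemd task above (daemon_reload=True, enabled=True, state=started) is the module equivalent of:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service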
Nov 22 03:34:39 compute-0 ceph-mon[75011]: pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:39 compute-0 sudo[149327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctrapcrqqorhylqrdthkmpihwjuwsthi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782479.7711232-426-98680756008758/AnsiballZ_stat.py'
Nov 22 03:34:39 compute-0 sudo[149327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:40 compute-0 python3.9[149329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:40 compute-0 sudo[149327]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:40 compute-0 sudo[149405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taghzriigynqqhrarricnbvwdrrkmywu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782479.7711232-426-98680756008758/AnsiballZ_file.py'
Nov 22 03:34:40 compute-0 sudo[149405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:40 compute-0 ceph-mon[75011]: pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:40 compute-0 python3.9[149407]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:40 compute-0 sudo[149405]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:41 compute-0 sudo[149557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stxfpngtzhcopsomrbuwxqhdexajfoej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782481.1535006-438-100348475906311/AnsiballZ_stat.py'
Nov 22 03:34:41 compute-0 sudo[149557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:41 compute-0 python3.9[149559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:41 compute-0 sudo[149557]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:41 compute-0 sudo[149635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwtnbvaklhbdyjtlnumlsxgwapyqgokg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782481.1535006-438-100348475906311/AnsiballZ_file.py'
Nov 22 03:34:41 compute-0 sudo[149635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:42 compute-0 python3.9[149637]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:42 compute-0 sudo[149635]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:42 compute-0 ceph-mon[75011]: pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:42 compute-0 sudo[149787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgdfgagononhuimmiixgikyvaslvgmqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782482.6143312-450-12008898073423/AnsiballZ_systemd.py'
Nov 22 03:34:42 compute-0 sudo[149787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:43 compute-0 python3.9[149789]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:34:43 compute-0 systemd[1]: Reloading.
Nov 22 03:34:43 compute-0 systemd-sysv-generator[149821]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:34:43 compute-0 systemd-rc-local-generator[149817]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:34:43 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 03:34:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:43 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 03:34:43 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 03:34:43 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 03:34:43 compute-0 sudo[149787]: pam_unix(sudo:session): session closed for user root
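netns-placeholder is a oneshot: "Finished Create netns directory" followed immediately by "Deactivated successfully" is the expected lifecycle, and the transient run-netns-placeholder.mount suggests the unit pre-creates the /run/netns mount point for later namespaces. A quick verification sketch (the mount path is an assumption inferred from the unit name):

    systemctl status netns-placeholder.service --no-pager
    findmnt /run/netns    # assumed mount point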
Nov 22 03:34:44 compute-0 sudo[149981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kostyjajrdowdvvpnpemvdnbouxnxhdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782484.2133-460-26256631091403/AnsiballZ_file.py'
Nov 22 03:34:44 compute-0 sudo[149981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:44 compute-0 python3.9[149983]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:34:44 compute-0 sudo[149981]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:45 compute-0 ceph-mon[75011]: pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:45 compute-0 sudo[150133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhkgiwnvgjpektqbexbhahrkqzarwnie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782485.158141-468-38885479215744/AnsiballZ_stat.py'
Nov 22 03:34:45 compute-0 sudo[150133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:45 compute-0 python3.9[150135]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:45 compute-0 sudo[150133]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:34:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
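Each pg_autoscaler line computes its pg target as usage ratio x bias x 300, where 300 is plausibly mon_target_pg_per_osd (default 100) times three OSDs in this 60 GiB cluster; that factor reproduces the logged values exactly:

    echo '0.000007185749983720779 * 1.0 * 300' | bc -l    # .mgr               -> 0.0021557...
    echo '0.0000005087256625643029 * 4.0 * 300' | bc -l   # cephfs.cephfs.meta -> 0.00061047...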
Nov 22 03:34:45 compute-0 sudo[150256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qachmvktwsgbknprlofytllirqdnuipw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782485.158141-468-38885479215744/AnsiballZ_copy.py'
Nov 22 03:34:45 compute-0 sudo[150256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:46 compute-0 python3.9[150258]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763782485.158141-468-38885479215744/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:34:46 compute-0 sudo[150256]: pam_unix(sudo:session): session closed for user root
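The copy task logs the SHA-1 of the healthcheck script it deployed, so the result can be verified on disk, along with the container_file_t context requested by the directory task:

    sha1sum /var/lib/openstack/healthchecks/ovn_controller/healthcheck
    # expect 4098dd010265fabdf5c26b97d169fc4e575ff457
    ls -dZ /var/lib/openstack/healthchecks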
Nov 22 03:34:46 compute-0 ceph-mon[75011]: pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:47 compute-0 sudo[150408]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsmhivemcxudzexsafxudakjbuwuwgmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782486.8814952-485-163865041020029/AnsiballZ_file.py'
Nov 22 03:34:47 compute-0 sudo[150408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:47 compute-0 python3.9[150410]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:34:47 compute-0 sudo[150408]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:48 compute-0 sudo[150560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obfonkighydibhndozhyckealywwbijd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782487.7549682-493-274495463294168/AnsiballZ_stat.py'
Nov 22 03:34:48 compute-0 sudo[150560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:48 compute-0 python3.9[150562]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:34:49 compute-0 sudo[150560]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:49 compute-0 ceph-mon[75011]: pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:49 compute-0 sudo[150683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jakkwcmwrowrbkfvrzkhpqfuavagwgtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782487.7549682-493-274495463294168/AnsiballZ_copy.py'
Nov 22 03:34:49 compute-0 sudo[150683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:49 compute-0 python3.9[150685]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782487.7549682-493-274495463294168/.source.json _original_basename=.patdbik0 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:49 compute-0 sudo[150683]: pam_unix(sudo:session): session closed for user root
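ovn_controller.json is a kolla start-config consumed inside the container: note KOLLA_CONFIG_STRATEGY=COPY_ALWAYS in the podman create further down, which bind-mounts this file to /var/lib/kolla/config_files/config.json. Its contents are not logged; a typical shape, as an illustration only:

    cat /var/lib/kolla/config_files/ovn_controller.json
    # illustrative kolla config.json shape (not the actual file contents):
    # { "command": "/usr/bin/ovn-controller ...",
    #   "permissions": [ { "path": "/run/ovn", "owner": "root:root" } ] }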
Nov 22 03:34:50 compute-0 sudo[150835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dysiszhvptxokrlgkydyjacirqmmigdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782490.3004885-508-161131471266212/AnsiballZ_file.py'
Nov 22 03:34:50 compute-0 sudo[150835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:50 compute-0 python3.9[150837]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:34:50 compute-0 sudo[150835]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:51 compute-0 sudo[150987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnwkyqvilhmqwgrxedlhnmognqcckifd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782491.3391488-516-242220931500827/AnsiballZ_stat.py'
Nov 22 03:34:51 compute-0 sudo[150987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:51 compute-0 ceph-mon[75011]: pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:51 compute-0 sudo[150987]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:52 compute-0 sudo[151110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-breujodyzgppgksuqqeemczpbenwunav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782491.3391488-516-242220931500827/AnsiballZ_copy.py'
Nov 22 03:34:52 compute-0 sudo[151110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:52 compute-0 sudo[151110]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:52 compute-0 ceph-mon[75011]: pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:53 compute-0 sudo[151262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pumabijsdovikaeypmehzavivhjcmpdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782493.0289145-533-273735767034310/AnsiballZ_container_config_data.py'
Nov 22 03:34:53 compute-0 sudo[151262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:53 compute-0 python3.9[151264]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 22 03:34:53 compute-0 sudo[151262]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:54 compute-0 sudo[151414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkgsxclimvxolrjxttojrxhyhxmdbssp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782494.1049733-542-179550228344937/AnsiballZ_container_config_hash.py'
Nov 22 03:34:54 compute-0 sudo[151414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:54 compute-0 python3.9[151416]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 03:34:54 compute-0 sudo[151414]: pam_unix(sudo:session): session closed for user root
Nov 22 03:34:55 compute-0 ceph-mon[75011]: pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:55 compute-0 sudo[151566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exvftgvcvkzbqxbrojounxcohmlwxesp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782495.3416493-551-76568383544649/AnsiballZ_podman_container_info.py'
Nov 22 03:34:55 compute-0 sudo[151566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:55 compute-0 python3.9[151568]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 03:34:56 compute-0 sudo[151566]: pam_unix(sudo:session): session closed for user root
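podman_container_info with name=None gathers full inspect data for every container on the host; a rough CLI equivalent:

    podman ps -aq | xargs -r podman container inspect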
Nov 22 03:34:56 compute-0 ceph-mon[75011]: pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:34:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:57 compute-0 sudo[151744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnritsatjlqwuurgcdghtrgiznzyhqgp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763782497.1968532-564-195413077725304/AnsiballZ_edpm_container_manage.py'
Nov 22 03:34:57 compute-0 sudo[151744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:34:57 compute-0 python3[151746]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
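edpm_container_manage reads every *.json under config_dir, reconciles matching podman containers (here just ovn_controller), and captures their stdout under log_base_path. The inputs and outputs named in the invocation can be listed directly:

    ls /var/lib/edpm-config/container-startup-config/ovn_controller/*.json
    ls /var/log/containers/stdouts/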
Nov 22 03:34:58 compute-0 ceph-mon[75011]: pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:34:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:00 compute-0 ceph-mon[75011]: pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.773273) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782500773309, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 881, "num_deletes": 251, "total_data_size": 1231361, "memory_usage": 1250160, "flush_reason": "Manual Compaction"}
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782500784204, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1220211, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8944, "largest_seqno": 9824, "table_properties": {"data_size": 1215824, "index_size": 2039, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9159, "raw_average_key_size": 18, "raw_value_size": 1207092, "raw_average_value_size": 2458, "num_data_blocks": 95, "num_entries": 491, "num_filter_entries": 491, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763782418, "oldest_key_time": 1763782418, "file_creation_time": 1763782500, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 11257 microseconds, and 3770 cpu microseconds.
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.784526) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1220211 bytes OK
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.784649) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.787880) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.787897) EVENT_LOG_v1 {"time_micros": 1763782500787892, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.787912) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1227079, prev total WAL file size 1227079, number of live WAL files 2.
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.788917) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1191KB)], [23(6930KB)]
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782500788959, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8317372, "oldest_snapshot_seqno": -1}
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3361 keys, 6475199 bytes, temperature: kUnknown
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782500876465, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6475199, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6450063, "index_size": 15619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 81562, "raw_average_key_size": 24, "raw_value_size": 6386614, "raw_average_value_size": 1900, "num_data_blocks": 680, "num_entries": 3361, "num_filter_entries": 3361, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763782500, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.876697) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6475199 bytes
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.909300) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.0 rd, 74.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 6.8 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(12.1) write-amplify(5.3) OK, records in: 3875, records dropped: 514 output_compression: NoCompression
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.909341) EVENT_LOG_v1 {"time_micros": 1763782500909326, "job": 8, "event": "compaction_finished", "compaction_time_micros": 87558, "compaction_time_cpu_micros": 15031, "output_level": 6, "num_output_files": 1, "total_output_size": 6475199, "num_input_records": 3875, "num_output_records": 3361, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782500909733, "job": 8, "event": "table_file_deletion", "file_number": 25}
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782500911136, "job": 8, "event": "table_file_deletion", "file_number": 23}
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.788838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.911348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.911354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.911356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.911358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:35:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:35:00.911360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
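The amplification figures in the JOB 8 summary above follow from the byte counts already logged: write-amplify is compaction output over the freshly flushed L0 input, and read-write-amplify adds everything read and written over the same base:

    echo '6475199 / 1220211' | bc -l               # 5.306...  -> write-amplify(5.3)
    echo '(8317372 + 6475199) / 1220211' | bc -l   # 12.123... -> read-write-amplify(12.1)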
Nov 22 03:35:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:03 compute-0 ceph-mon[75011]: pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:03 compute-0 podman[151759]: 2025-11-22 03:35:03.296129428 +0000 UTC m=+5.367385555 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 22 03:35:03 compute-0 podman[151878]: 2025-11-22 03:35:03.425175638 +0000 UTC m=+0.042679884 container create 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
Nov 22 03:35:03 compute-0 podman[151878]: 2025-11-22 03:35:03.402878215 +0000 UTC m=+0.020382451 image pull 197857ba4b35dfe0da58eb2e9c37f91c8a1d2b66c0967b4c66656aa6329b870c quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 22 03:35:03 compute-0 python3[151746]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 22 03:35:03 compute-0 sudo[151744]: pam_unix(sudo:session): session closed for user root
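At this point the container exists but is only created, not running: the create command above passes no --restart flag, and starting appears to be delegated to systemd (the edpm_ovn_controller.service unit copied below, presumably invoking the edpm-start-podman-container helper installed earlier). A manual check:

    podman inspect ovn_controller --format '{{.State.Status}}'   # "created" until the unit starts it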
Nov 22 03:35:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:04 compute-0 sudo[152063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjkoyszntentenxelddhqthambiwntuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782503.9856858-572-174411115200530/AnsiballZ_stat.py'
Nov 22 03:35:04 compute-0 sudo[152063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:04 compute-0 python3.9[152065]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:35:04 compute-0 sudo[152063]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:04 compute-0 sudo[152217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdpzanthkhzqjcigkuamwovtrczejitv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782504.7741-581-144202290137718/AnsiballZ_file.py'
Nov 22 03:35:04 compute-0 sudo[152217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:05 compute-0 ceph-mon[75011]: pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:05 compute-0 python3.9[152219]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:35:05 compute-0 sudo[152217]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:05 compute-0 sudo[152293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyolqlkygfywagasnzkmtkwigccezmrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782504.7741-581-144202290137718/AnsiballZ_stat.py'
Nov 22 03:35:05 compute-0 sudo[152293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:05 compute-0 python3.9[152295]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:35:05 compute-0 sudo[152293]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:35:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:35:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:35:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:35:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:35:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:35:06 compute-0 sudo[152444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxiwnrqqozxztrjdzaqhvgygvgxoaieb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782505.918368-581-113558576215043/AnsiballZ_copy.py'
Nov 22 03:35:06 compute-0 sudo[152444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:06 compute-0 python3.9[152446]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763782505.918368-581-113558576215043/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:35:06 compute-0 sudo[152444]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:06 compute-0 sudo[152520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcxofyrhudqhpsgvzsyxfdferjtqqdzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782505.918368-581-113558576215043/AnsiballZ_systemd.py'
Nov 22 03:35:06 compute-0 sudo[152520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:07 compute-0 python3.9[152522]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:35:07 compute-0 systemd[1]: Reloading.
Nov 22 03:35:07 compute-0 ceph-mon[75011]: pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:07 compute-0 systemd-sysv-generator[152553]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:35:07 compute-0 systemd-rc-local-generator[152549]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:35:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:07 compute-0 sudo[152520]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:08 compute-0 sudo[152633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuxwqffiqgbsqrvukficsjvfreuvzoxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782505.918368-581-113558576215043/AnsiballZ_systemd.py'
Nov 22 03:35:08 compute-0 sudo[152633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:08 compute-0 python3.9[152635]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
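
The ansible-systemd invocation above (state=restarted, enabled=True) drives the Reloading/Starting sequence that follows. A sketch of the manual equivalent, assuming the unit file copied at 03:35:06 is in place; these commands are illustrative, not taken from the log:

    # Manual equivalent of ansible-systemd state=restarted enabled=True:
    systemctl daemon-reload                        # pick up the new unit file
    systemctl enable edpm_ovn_controller.service   # persist across reboots
    systemctl restart edpm_ovn_controller.service  # (re)start the container wrapper
    systemctl status edpm_ovn_controller.service --no-pager
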
Nov 22 03:35:08 compute-0 systemd[1]: Reloading.
Nov 22 03:35:08 compute-0 systemd-rc-local-generator[152665]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:35:08 compute-0 systemd-sysv-generator[152668]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:35:08 compute-0 systemd[1]: Starting ovn_controller container...
Nov 22 03:35:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:35:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ac5a5d8943208f49accfb785832ec28c310f6e77a62e64b55365181c1e4deae/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:08 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6.
Nov 22 03:35:08 compute-0 podman[152676]: 2025-11-22 03:35:08.919654565 +0000 UTC m=+0.193582234 container init 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=ovn_controller)
Nov 22 03:35:08 compute-0 ovn_controller[152691]: + sudo -E kolla_set_configs
Nov 22 03:35:08 compute-0 podman[152676]: 2025-11-22 03:35:08.96025211 +0000 UTC m=+0.234179799 container start 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:35:08 compute-0 edpm-start-podman-container[152676]: ovn_controller
Nov 22 03:35:08 compute-0 systemd[1]: Created slice User Slice of UID 0.
Nov 22 03:35:09 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 22 03:35:09 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 22 03:35:09 compute-0 systemd[1]: Starting User Manager for UID 0...
Nov 22 03:35:09 compute-0 edpm-start-podman-container[152675]: Creating additional drop-in dependency for "ovn_controller" (995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6)
Nov 22 03:35:09 compute-0 systemd[152732]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Nov 22 03:35:09 compute-0 podman[152698]: 2025-11-22 03:35:09.077249656 +0000 UTC m=+0.099615669 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 03:35:09 compute-0 systemd[1]: 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6-6320da968db5c43b.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 03:35:09 compute-0 systemd[1]: 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6-6320da968db5c43b.service: Failed with result 'exit-code'.
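
The failed one-shot unit above is the first health probe firing while the container is still reported as health_status=starting (health_failing_streak=1 in the podman event at 03:35:09); it is a transient of startup ordering, not a failure of ovn_controller itself. A sketch for checking the health state afterwards; the .State.Health field name is an assumption that holds for podman 4.x (older releases exposed .State.Healthcheck):

    # Re-run the probe once and show its exit code:
    podman healthcheck run ovn_controller; echo "exit=$?"
    # Read back the recorded health status:
    podman inspect ovn_controller --format '{{ .State.Health.Status }}'
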
Nov 22 03:35:09 compute-0 systemd[1]: Reloading.
Nov 22 03:35:09 compute-0 systemd-rc-local-generator[152783]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:35:09 compute-0 systemd-sysv-generator[152786]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:35:09 compute-0 systemd[152732]: Queued start job for default target Main User Target.
Nov 22 03:35:09 compute-0 systemd[152732]: Created slice User Application Slice.
Nov 22 03:35:09 compute-0 systemd[152732]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 22 03:35:09 compute-0 systemd[152732]: Started Daily Cleanup of User's Temporary Directories.
Nov 22 03:35:09 compute-0 systemd[152732]: Reached target Paths.
Nov 22 03:35:09 compute-0 systemd[152732]: Reached target Timers.
Nov 22 03:35:09 compute-0 systemd[152732]: Starting D-Bus User Message Bus Socket...
Nov 22 03:35:09 compute-0 systemd[152732]: Starting Create User's Volatile Files and Directories...
Nov 22 03:35:09 compute-0 systemd[152732]: Finished Create User's Volatile Files and Directories.
Nov 22 03:35:09 compute-0 systemd[152732]: Listening on D-Bus User Message Bus Socket.
Nov 22 03:35:09 compute-0 systemd[152732]: Reached target Sockets.
Nov 22 03:35:09 compute-0 systemd[152732]: Reached target Basic System.
Nov 22 03:35:09 compute-0 systemd[152732]: Reached target Main User Target.
Nov 22 03:35:09 compute-0 systemd[152732]: Startup finished in 205ms.
Nov 22 03:35:09 compute-0 systemd[1]: Started User Manager for UID 0.
Nov 22 03:35:09 compute-0 systemd[1]: Started ovn_controller container.
Nov 22 03:35:09 compute-0 ceph-mon[75011]: pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:09 compute-0 systemd[1]: Started Session c1 of User root.
Nov 22 03:35:09 compute-0 sudo[152633]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:09 compute-0 ovn_controller[152691]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 03:35:09 compute-0 ovn_controller[152691]: INFO:__main__:Validating config file
Nov 22 03:35:09 compute-0 ovn_controller[152691]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 03:35:09 compute-0 ovn_controller[152691]: INFO:__main__:Writing out command to execute
Nov 22 03:35:09 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 22 03:35:09 compute-0 ovn_controller[152691]: ++ cat /run_command
Nov 22 03:35:09 compute-0 ovn_controller[152691]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 22 03:35:09 compute-0 ovn_controller[152691]: + ARGS=
Nov 22 03:35:09 compute-0 ovn_controller[152691]: + sudo kolla_copy_cacerts
Nov 22 03:35:09 compute-0 systemd[1]: Started Session c2 of User root.
Nov 22 03:35:09 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 22 03:35:09 compute-0 ovn_controller[152691]: + [[ ! -n '' ]]
Nov 22 03:35:09 compute-0 ovn_controller[152691]: + . kolla_extend_start
Nov 22 03:35:09 compute-0 ovn_controller[152691]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 22 03:35:09 compute-0 ovn_controller[152691]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 22 03:35:09 compute-0 ovn_controller[152691]: + umask 0022
Nov 22 03:35:09 compute-0 ovn_controller[152691]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
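
The exec line above shows ovn-controller taking its TLS private key (-p), certificate (-c) and CA bundle (-C) from the mounted cert paths; the southbound endpoint it dials next (ssl:ovsdbserver-sb.openstack.svc:6642) is not on the command line but read from the local Open_vSwitch table. A sketch for inspecting that configuration via the standard external_ids keys (a diagnostic suggestion, not a logged command):

    # Where ovn-controller learns its SB endpoint and tunnel settings:
    ovs-vsctl get Open_vSwitch . external_ids:ovn-remote      # e.g. "ssl:ovsdbserver-sb.openstack.svc:6642"
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-type  # e.g. geneve (genev_sys_6081 below)
    ovs-vsctl get Open_vSwitch . external_ids:ovn-encap-ip
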
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 22 03:35:09 compute-0 NetworkManager[48916]: <info>  [1763782509.5604] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Nov 22 03:35:09 compute-0 NetworkManager[48916]: <info>  [1763782509.5615] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:35:09 compute-0 NetworkManager[48916]: <info>  [1763782509.5629] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 22 03:35:09 compute-0 NetworkManager[48916]: <info>  [1763782509.5636] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Nov 22 03:35:09 compute-0 NetworkManager[48916]: <info>  [1763782509.5641] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 22 03:35:09 compute-0 kernel: br-int: entered promiscuous mode
Nov 22 03:35:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00021|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 22 03:35:09 compute-0 NetworkManager[48916]: <info>  [1763782509.5874] manager: (ovn-9723d3-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 22 03:35:09 compute-0 systemd-udevd[152830]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:35:09 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Nov 22 03:35:09 compute-0 NetworkManager[48916]: <info>  [1763782509.6099] device (genev_sys_6081): carrier: link connected
Nov 22 03:35:09 compute-0 NetworkManager[48916]: <info>  [1763782509.6103] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Nov 22 03:35:09 compute-0 systemd-udevd[152834]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 03:35:09 compute-0 ovn_controller[152691]: 2025-11-22T03:35:09Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 22 03:35:09 compute-0 sudo[152959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqnspaqrxzmmsurvxnudrglvecqfcnkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782509.8796911-609-70097530045884/AnsiballZ_command.py'
Nov 22 03:35:09 compute-0 sudo[152959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:10 compute-0 python3.9[152961]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:35:10 compute-0 ovs-vsctl[152962]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 22 03:35:10 compute-0 sudo[152959]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:10 compute-0 sudo[153112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlbojthabdmguqzdmjztkdwonknlfehr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782510.652965-617-264567989984454/AnsiballZ_command.py'
Nov 22 03:35:10 compute-0 sudo[153112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:10 compute-0 python3.9[153114]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:35:10 compute-0 ovs-vsctl[153116]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 22 03:35:10 compute-0 sudo[153112]: pam_unix(sudo:session): session closed for user root
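
The ERR record above is expected when ovn-cms-options was never set: ovs-vsctl get exits non-zero on a missing key unless told otherwise, and the playbook's follow-up remove at 03:35:12 is harmless either way. A tolerant form using ovs-vsctl's --if-exists flag (a suggested hardening, not what the job ran):

    # Returns an empty string instead of erroring when the key is absent:
    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options | sed 's/"//g'
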
Nov 22 03:35:11 compute-0 ceph-mon[75011]: pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:11 compute-0 sudo[153267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clqooxyiuugknxzednmkcbpwmuutvgnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782511.7753353-631-224557658665825/AnsiballZ_command.py'
Nov 22 03:35:11 compute-0 sudo[153267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:12 compute-0 python3.9[153269]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:35:12 compute-0 ovs-vsctl[153270]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Nov 22 03:35:12 compute-0 sudo[153267]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:12 compute-0 sshd-session[141205]: Connection closed by 192.168.122.30 port 43512
Nov 22 03:35:12 compute-0 sshd-session[141202]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:35:12 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Nov 22 03:35:12 compute-0 systemd[1]: session-46.scope: Consumed 1min 5.244s CPU time.
Nov 22 03:35:12 compute-0 systemd-logind[799]: Session 46 logged out. Waiting for processes to exit.
Nov 22 03:35:12 compute-0 systemd-logind[799]: Removed session 46.
Nov 22 03:35:12 compute-0 ceph-mon[75011]: pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:14 compute-0 ceph-mon[75011]: pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:16 compute-0 ceph-mon[75011]: pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:17 compute-0 sudo[153295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:35:17 compute-0 sudo[153295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:17 compute-0 sudo[153295]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:17 compute-0 sudo[153320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:35:17 compute-0 sudo[153320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:17 compute-0 sudo[153320]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:17 compute-0 sudo[153347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:35:17 compute-0 sudo[153347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:17 compute-0 sudo[153347]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:17 compute-0 sshd-session[153343]: Accepted publickey for zuul from 192.168.122.30 port 38066 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:35:17 compute-0 systemd-logind[799]: New session 48 of user zuul.
Nov 22 03:35:17 compute-0 systemd[1]: Started Session 48 of User zuul.
Nov 22 03:35:17 compute-0 sshd-session[153343]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:35:18 compute-0 sudo[153373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 03:35:18 compute-0 sudo[153373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:18 compute-0 sudo[153373]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:35:18 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:35:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:35:18 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:35:18 compute-0 sudo[153471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:35:18 compute-0 sudo[153471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:18 compute-0 sudo[153471]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:18 compute-0 sudo[153496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:35:18 compute-0 sudo[153496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:18 compute-0 sudo[153496]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:18 compute-0 sudo[153521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:35:18 compute-0 sudo[153521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:18 compute-0 sudo[153521]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:18 compute-0 sudo[153570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:35:18 compute-0 sudo[153570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:19 compute-0 python3.9[153676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:35:19 compute-0 sudo[153570]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:35:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:35:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:35:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:35:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:35:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:35:19 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 2ae9fb35-f532-47a0-9e34-3a9fa17a46db does not exist
Nov 22 03:35:19 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev d7e06b9e-394c-404c-9266-23538f9bcff3 does not exist
Nov 22 03:35:19 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 8b4ae459-d9e7-4912-a435-907488afd4bf does not exist
Nov 22 03:35:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:35:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:35:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:35:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:35:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:35:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:35:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:19 compute-0 sudo[153728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:35:19 compute-0 sudo[153728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:19 compute-0 systemd[1]: Stopping User Manager for UID 0...
Nov 22 03:35:19 compute-0 systemd[152732]: Activating special unit Exit the Session...
Nov 22 03:35:19 compute-0 systemd[152732]: Stopped target Main User Target.
Nov 22 03:35:19 compute-0 systemd[152732]: Stopped target Basic System.
Nov 22 03:35:19 compute-0 systemd[152732]: Stopped target Paths.
Nov 22 03:35:19 compute-0 systemd[152732]: Stopped target Sockets.
Nov 22 03:35:19 compute-0 systemd[152732]: Stopped target Timers.
Nov 22 03:35:19 compute-0 systemd[152732]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 22 03:35:19 compute-0 sudo[153728]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:19 compute-0 systemd[152732]: Closed D-Bus User Message Bus Socket.
Nov 22 03:35:19 compute-0 systemd[152732]: Stopped Create User's Volatile Files and Directories.
Nov 22 03:35:19 compute-0 systemd[152732]: Removed slice User Application Slice.
Nov 22 03:35:19 compute-0 systemd[152732]: Reached target Shutdown.
Nov 22 03:35:19 compute-0 systemd[152732]: Finished Exit the Session.
Nov 22 03:35:19 compute-0 systemd[152732]: Reached target Exit the Session.
Nov 22 03:35:19 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Nov 22 03:35:19 compute-0 ceph-mon[75011]: pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:19 compute-0 systemd[1]: Stopped User Manager for UID 0.
Nov 22 03:35:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:35:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:35:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:35:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:35:19 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 22 03:35:19 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 22 03:35:19 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 22 03:35:19 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 22 03:35:19 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Nov 22 03:35:19 compute-0 sudo[153759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:35:19 compute-0 sudo[153759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:19 compute-0 sudo[153759]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:19 compute-0 sudo[153824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:35:19 compute-0 sudo[153824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:19 compute-0 sudo[153824]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:19 compute-0 sudo[153857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:35:19 compute-0 sudo[153857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
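
The cephadm call above runs ceph-volume lvm batch inside the ceph container against three pre-created logical volumes; --no-auto disables the automatic fast/slow device split and --no-systemd skips per-OSD unit creation, since cephadm manages the daemons itself. A dry-run sketch using ceph-volume's --report flag (illustrative; running it inside the same container image is an assumption):

    # Preview what batch would do without creating OSDs:
    ceph-volume lvm batch --no-auto \
        /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 \
        --report --format json
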
Nov 22 03:35:20 compute-0 sudo[153983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlqefclxfdfwkbkulnndrvfgdlbzpjzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782519.9223883-34-163676070140379/AnsiballZ_file.py'
Nov 22 03:35:20 compute-0 sudo[153983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:20 compute-0 podman[153995]: 2025-11-22 03:35:20.248284594 +0000 UTC m=+0.028232406 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:20 compute-0 python3.9[153993]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:20 compute-0 sudo[153983]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:20 compute-0 podman[153995]: 2025-11-22 03:35:20.425241506 +0000 UTC m=+0.205189298 container create dda3afc50fe0d5178b8f465ba5db2b069e2a35a715ef1935f6e1d650b0674c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:35:20 compute-0 systemd[1]: Started libpod-conmon-dda3afc50fe0d5178b8f465ba5db2b069e2a35a715ef1935f6e1d650b0674c55.scope.
Nov 22 03:35:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:35:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:35:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:35:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:35:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:35:20 compute-0 ceph-mon[75011]: pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:20 compute-0 podman[153995]: 2025-11-22 03:35:20.752190002 +0000 UTC m=+0.532137884 container init dda3afc50fe0d5178b8f465ba5db2b069e2a35a715ef1935f6e1d650b0674c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:20 compute-0 podman[153995]: 2025-11-22 03:35:20.767225184 +0000 UTC m=+0.547173006 container start dda3afc50fe0d5178b8f465ba5db2b069e2a35a715ef1935f6e1d650b0674c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_davinci, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:20 compute-0 sad_davinci[154058]: 167 167
Nov 22 03:35:20 compute-0 systemd[1]: libpod-dda3afc50fe0d5178b8f465ba5db2b069e2a35a715ef1935f6e1d650b0674c55.scope: Deactivated successfully.
Nov 22 03:35:20 compute-0 podman[153995]: 2025-11-22 03:35:20.922904471 +0000 UTC m=+0.702852303 container attach dda3afc50fe0d5178b8f465ba5db2b069e2a35a715ef1935f6e1d650b0674c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:35:20 compute-0 podman[153995]: 2025-11-22 03:35:20.923474763 +0000 UTC m=+0.703422665 container died dda3afc50fe0d5178b8f465ba5db2b069e2a35a715ef1935f6e1d650b0674c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_davinci, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:35:20 compute-0 sudo[154176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipsanpkyyfqvahmqjhojlnzuwebqjbct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782520.8272948-34-75992911022326/AnsiballZ_file.py'
Nov 22 03:35:20 compute-0 sudo[154176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f54f98bd7a20bbb0399f0a348865ebaf6d08bbb8b5f6ca1c2fdd114a6948b30-merged.mount: Deactivated successfully.
Nov 22 03:35:21 compute-0 python3.9[154178]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:21 compute-0 sudo[154176]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:21 compute-0 podman[153995]: 2025-11-22 03:35:21.440267476 +0000 UTC m=+1.220215268 container remove dda3afc50fe0d5178b8f465ba5db2b069e2a35a715ef1935f6e1d650b0674c55 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:35:21 compute-0 systemd[1]: libpod-conmon-dda3afc50fe0d5178b8f465ba5db2b069e2a35a715ef1935f6e1d650b0674c55.scope: Deactivated successfully.
Nov 22 03:35:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:21 compute-0 sudo[154350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idmvubgixaxktmlwhcptcqcoqigumupb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782521.5781322-34-63232912225246/AnsiballZ_file.py'
Nov 22 03:35:21 compute-0 sudo[154350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:21 compute-0 podman[154299]: 2025-11-22 03:35:21.591040675 +0000 UTC m=+0.030442644 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:21 compute-0 podman[154299]: 2025-11-22 03:35:21.722522041 +0000 UTC m=+0.161924030 container create b732d0a3656cc05030a9fe3c1664e9cda1c118f430ef9bd7d8bf1a28ff0b7ff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 03:35:21 compute-0 systemd[1]: Started libpod-conmon-b732d0a3656cc05030a9fe3c1664e9cda1c118f430ef9bd7d8bf1a28ff0b7ff8.scope.
Nov 22 03:35:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa7b37338ec78ad4df8d5d33c66a95c811f96e2a005a25d66c4813b3fc86868/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa7b37338ec78ad4df8d5d33c66a95c811f96e2a005a25d66c4813b3fc86868/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa7b37338ec78ad4df8d5d33c66a95c811f96e2a005a25d66c4813b3fc86868/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa7b37338ec78ad4df8d5d33c66a95c811f96e2a005a25d66c4813b3fc86868/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa7b37338ec78ad4df8d5d33c66a95c811f96e2a005a25d66c4813b3fc86868/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:21 compute-0 python3.9[154352]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:21 compute-0 sudo[154350]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:22 compute-0 podman[154299]: 2025-11-22 03:35:22.080641042 +0000 UTC m=+0.520043041 container init b732d0a3656cc05030a9fe3c1664e9cda1c118f430ef9bd7d8bf1a28ff0b7ff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:35:22 compute-0 podman[154299]: 2025-11-22 03:35:22.093096181 +0000 UTC m=+0.532498180 container start b732d0a3656cc05030a9fe3c1664e9cda1c118f430ef9bd7d8bf1a28ff0b7ff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:35:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:22 compute-0 podman[154299]: 2025-11-22 03:35:22.294984453 +0000 UTC m=+0.734386442 container attach b732d0a3656cc05030a9fe3c1664e9cda1c118f430ef9bd7d8bf1a28ff0b7ff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:35:22 compute-0 sudo[154510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijlkhhlilyqltrsasfbnzazbpafcrzgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782522.2884343-34-168994140515743/AnsiballZ_file.py'
Nov 22 03:35:22 compute-0 sudo[154510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:22 compute-0 python3.9[154512]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:22 compute-0 sudo[154510]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:23 compute-0 sleepy_goldstine[154356]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:35:23 compute-0 sleepy_goldstine[154356]: --> relative data size: 1.0
Nov 22 03:35:23 compute-0 sleepy_goldstine[154356]: --> All data devices are unavailable
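
"All data devices are unavailable" means ceph-volume rejected every candidate LV, typically because they already hold prepared OSDs or are otherwise in use, so the batch exits without creating anything; cephadm then falls back to the lvm list call visible at 03:35:24 below. A sketch for confirming why a device was rejected (diagnostic suggestion, not a logged command):

    # Non-empty output here means the LVs already carry OSD metadata:
    ceph-volume lvm list --format json
    # Device eligibility as ceph-volume evaluates it:
    ceph-volume inventory --format json
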
Nov 22 03:35:23 compute-0 systemd[1]: libpod-b732d0a3656cc05030a9fe3c1664e9cda1c118f430ef9bd7d8bf1a28ff0b7ff8.scope: Deactivated successfully.
Nov 22 03:35:23 compute-0 systemd[1]: libpod-b732d0a3656cc05030a9fe3c1664e9cda1c118f430ef9bd7d8bf1a28ff0b7ff8.scope: Consumed 1.014s CPU time.
Nov 22 03:35:23 compute-0 podman[154299]: 2025-11-22 03:35:23.186125276 +0000 UTC m=+1.625527245 container died b732d0a3656cc05030a9fe3c1664e9cda1c118f430ef9bd7d8bf1a28ff0b7ff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:23 compute-0 sudo[154697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgfqiwrptlxaobsmxicfpflomjwafqcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782523.1521182-34-67855618627186/AnsiballZ_file.py'
Nov 22 03:35:23 compute-0 sudo[154697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:23 compute-0 ceph-mon[75011]: pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfa7b37338ec78ad4df8d5d33c66a95c811f96e2a005a25d66c4813b3fc86868-merged.mount: Deactivated successfully.
Nov 22 03:35:23 compute-0 python3.9[154699]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:23 compute-0 sudo[154697]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:23 compute-0 podman[154299]: 2025-11-22 03:35:23.832526283 +0000 UTC m=+2.271928242 container remove b732d0a3656cc05030a9fe3c1664e9cda1c118f430ef9bd7d8bf1a28ff0b7ff8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:23 compute-0 systemd[1]: libpod-conmon-b732d0a3656cc05030a9fe3c1664e9cda1c118f430ef9bd7d8bf1a28ff0b7ff8.scope: Deactivated successfully.
Nov 22 03:35:23 compute-0 sudo[153857]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:23 compute-0 sudo[154725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:35:23 compute-0 sudo[154725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:23 compute-0 sudo[154725]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:24 compute-0 sudo[154750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:35:24 compute-0 sudo[154750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:24 compute-0 sudo[154750]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:24 compute-0 sudo[154801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:35:24 compute-0 sudo[154801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:24 compute-0 sudo[154801]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:24 compute-0 sudo[154855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:35:24 compute-0 sudo[154855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
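
The sudo line above shows cephadm's content-addressed wrapper script running `ceph-volume ... lvm list --format json` inside a ceph container. A sketch re-running the same wrapper from Python (must run as root); the paths and image digest are copied from the command line above, and it assumes the wrapper relays ceph-volume's JSON stdout unchanged, as the container output further down suggests:

    import json
    import subprocess

    FSID = "7adcc38b-6484-5de6-b879-33a0309153df"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    out = subprocess.run(
        ["python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    lvm_report = json.loads(out)
    print(sorted(lvm_report))   # OSD ids as strings, e.g. ['0', '1', '2']
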
Nov 22 03:35:24 compute-0 python3.9[154950]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:35:24 compute-0 podman[154994]: 2025-11-22 03:35:24.501139301 +0000 UTC m=+0.028429157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:24 compute-0 podman[154994]: 2025-11-22 03:35:24.618817249 +0000 UTC m=+0.146107005 container create 6ab1b4dec2621081544da2a55edddda0893266b542a50f91d086f268ca4c728e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:35:24 compute-0 ceph-mon[75011]: pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:24 compute-0 systemd[1]: Started libpod-conmon-6ab1b4dec2621081544da2a55edddda0893266b542a50f91d086f268ca4c728e.scope.
Nov 22 03:35:24 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:35:24 compute-0 podman[154994]: 2025-11-22 03:35:24.914744802 +0000 UTC m=+0.442034618 container init 6ab1b4dec2621081544da2a55edddda0893266b542a50f91d086f268ca4c728e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:35:24 compute-0 podman[154994]: 2025-11-22 03:35:24.924849637 +0000 UTC m=+0.452139433 container start 6ab1b4dec2621081544da2a55edddda0893266b542a50f91d086f268ca4c728e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_easley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:35:24 compute-0 frosty_easley[155055]: 167 167
Nov 22 03:35:24 compute-0 systemd[1]: libpod-6ab1b4dec2621081544da2a55edddda0893266b542a50f91d086f268ca4c728e.scope: Deactivated successfully.
Nov 22 03:35:25 compute-0 podman[154994]: 2025-11-22 03:35:25.040493635 +0000 UTC m=+0.567783431 container attach 6ab1b4dec2621081544da2a55edddda0893266b542a50f91d086f268ca4c728e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 03:35:25 compute-0 podman[154994]: 2025-11-22 03:35:25.041746388 +0000 UTC m=+0.569036174 container died 6ab1b4dec2621081544da2a55edddda0893266b542a50f91d086f268ca4c728e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:35:25 compute-0 sudo[155175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkcdyfnwhbkupfvtzzekcpcvzoyvxdbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782525.0096743-78-34053356584269/AnsiballZ_seboolean.py'
Nov 22 03:35:25 compute-0 sudo[155175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-03321e6056296d6f11dc43767ea10b194be6fe5d435bf37261ca7aab2aa7cc12-merged.mount: Deactivated successfully.
Nov 22 03:35:25 compute-0 python3.9[155178]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
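
The seboolean task above persistently enables virt_sandbox_use_netlink. The module uses the libselinux/semanage Python bindings; a hedged sketch of the equivalent effect via the setsebool CLI:

    import subprocess

    # persistent=True state=True maps to setsebool -P <boolean> on
    subprocess.run(["setsebool", "-P", "virt_sandbox_use_netlink", "on"], check=True)
    subprocess.run(["getsebool", "virt_sandbox_use_netlink"], check=True)  # verify
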
Nov 22 03:35:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:25 compute-0 podman[154994]: 2025-11-22 03:35:25.785147857 +0000 UTC m=+1.312437643 container remove 6ab1b4dec2621081544da2a55edddda0893266b542a50f91d086f268ca4c728e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 22 03:35:25 compute-0 systemd[1]: libpod-conmon-6ab1b4dec2621081544da2a55edddda0893266b542a50f91d086f268ca4c728e.scope: Deactivated successfully.
Nov 22 03:35:26 compute-0 podman[155186]: 2025-11-22 03:35:25.998336042 +0000 UTC m=+0.027956970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:26 compute-0 podman[155186]: 2025-11-22 03:35:26.158911882 +0000 UTC m=+0.188532800 container create 55977d443f324667e38cd1d98583b58ab66422ed02115b6145a62d94cc636382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:35:26 compute-0 sudo[155175]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:26 compute-0 systemd[1]: Started libpod-conmon-55977d443f324667e38cd1d98583b58ab66422ed02115b6145a62d94cc636382.scope.
Nov 22 03:35:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d343392f95e6db138e9077dd3f7c7c4fc4913c6cf78004edc2cc1ffc13b987f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d343392f95e6db138e9077dd3f7c7c4fc4913c6cf78004edc2cc1ffc13b987f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d343392f95e6db138e9077dd3f7c7c4fc4913c6cf78004edc2cc1ffc13b987f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d343392f95e6db138e9077dd3f7c7c4fc4913c6cf78004edc2cc1ffc13b987f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:26 compute-0 podman[155186]: 2025-11-22 03:35:26.37489795 +0000 UTC m=+0.404518888 container init 55977d443f324667e38cd1d98583b58ab66422ed02115b6145a62d94cc636382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:35:26 compute-0 podman[155186]: 2025-11-22 03:35:26.386366226 +0000 UTC m=+0.415987104 container start 55977d443f324667e38cd1d98583b58ab66422ed02115b6145a62d94cc636382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Nov 22 03:35:26 compute-0 podman[155186]: 2025-11-22 03:35:26.390062123 +0000 UTC m=+0.419683101 container attach 55977d443f324667e38cd1d98583b58ab66422ed02115b6145a62d94cc636382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:35:26 compute-0 ceph-mon[75011]: pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:27 compute-0 confident_sanderson[155207]: {
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:     "0": [
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:         {
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "devices": [
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "/dev/loop3"
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             ],
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_name": "ceph_lv0",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_size": "21470642176",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "name": "ceph_lv0",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "tags": {
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.cluster_name": "ceph",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.crush_device_class": "",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.encrypted": "0",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.osd_id": "0",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.type": "block",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.vdo": "0"
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             },
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "type": "block",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "vg_name": "ceph_vg0"
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:         }
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:     ],
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:     "1": [
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:         {
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "devices": [
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "/dev/loop4"
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             ],
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_name": "ceph_lv1",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_size": "21470642176",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "name": "ceph_lv1",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "tags": {
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.cluster_name": "ceph",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.crush_device_class": "",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.encrypted": "0",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.osd_id": "1",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.type": "block",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.vdo": "0"
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             },
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "type": "block",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "vg_name": "ceph_vg1"
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:         }
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:     ],
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:     "2": [
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:         {
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "devices": [
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "/dev/loop5"
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             ],
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_name": "ceph_lv2",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_size": "21470642176",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "name": "ceph_lv2",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "tags": {
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.cluster_name": "ceph",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.crush_device_class": "",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.encrypted": "0",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.osd_id": "2",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.type": "block",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:                 "ceph.vdo": "0"
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             },
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "type": "block",
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:             "vg_name": "ceph_vg2"
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:         }
Nov 22 03:35:27 compute-0 confident_sanderson[155207]:     ]
Nov 22 03:35:27 compute-0 confident_sanderson[155207]: }
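
The JSON block just printed by the confident_sanderson container is the `ceph-volume lvm list --format json` report: a dict keyed by OSD id, each value a list of LV entries. A small sketch summarizing it, assuming the output was captured to a hypothetical lvm_list.json; the keys (lv_path, devices, tags, type) are exactly those shown above:

    import json

    with open("lvm_list.json") as fh:   # hypothetical capture of the output above
        lvm_report = json.load(fh)

    for osd_id, entries in sorted(lvm_report.items(), key=lambda kv: int(kv[0])):
        for lv in entries:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} type={lv['type']}")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 osd_fsid=8bea6992-... type=block
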
Nov 22 03:35:27 compute-0 python3.9[155356]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:27 compute-0 systemd[1]: libpod-55977d443f324667e38cd1d98583b58ab66422ed02115b6145a62d94cc636382.scope: Deactivated successfully.
Nov 22 03:35:27 compute-0 podman[155186]: 2025-11-22 03:35:27.172162325 +0000 UTC m=+1.201783203 container died 55977d443f324667e38cd1d98583b58ab66422ed02115b6145a62d94cc636382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:35:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d343392f95e6db138e9077dd3f7c7c4fc4913c6cf78004edc2cc1ffc13b987f9-merged.mount: Deactivated successfully.
Nov 22 03:35:27 compute-0 podman[155186]: 2025-11-22 03:35:27.240005142 +0000 UTC m=+1.269626030 container remove 55977d443f324667e38cd1d98583b58ab66422ed02115b6145a62d94cc636382 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_sanderson, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:35:27 compute-0 systemd[1]: libpod-conmon-55977d443f324667e38cd1d98583b58ab66422ed02115b6145a62d94cc636382.scope: Deactivated successfully.
Nov 22 03:35:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:27 compute-0 sudo[154855]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:27 compute-0 sudo[155413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:35:27 compute-0 sudo[155413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:27 compute-0 sudo[155413]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:27 compute-0 sudo[155445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:35:27 compute-0 sudo[155445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:27 compute-0 sudo[155445]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:27 compute-0 sudo[155470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:35:27 compute-0 sudo[155470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:27 compute-0 sudo[155470]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:27 compute-0 sudo[155495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:35:27 compute-0 sudo[155495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:27 compute-0 python3.9[155594]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763782526.7236502-86-170343431854936/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
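
The copy task above records checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 for the rendered haproxy wrapper; Ansible's copy module compares SHA-1 digests to decide whether the destination needs rewriting. A sketch of that idempotence check, with the expected digest taken from the log line:

    import hashlib

    EXPECTED = "95c62e64c8f82dd9393a560d1b052dc98d38f810"  # from the log line above

    def sha1_of(path: str) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    changed = sha1_of("/var/lib/neutron/ovn_metadata_haproxy_wrapper") != EXPECTED
    print("would copy" if changed else "unchanged")
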
Nov 22 03:35:27 compute-0 podman[155636]: 2025-11-22 03:35:27.961800576 +0000 UTC m=+0.100090594 container create 106fa9be502b3a40004c8ba69fbd30d8488963483eabdeab580d5da340ff3fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_perlman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:35:27 compute-0 podman[155636]: 2025-11-22 03:35:27.880926121 +0000 UTC m=+0.019216159 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:27 compute-0 systemd[1]: Started libpod-conmon-106fa9be502b3a40004c8ba69fbd30d8488963483eabdeab580d5da340ff3fcb.scope.
Nov 22 03:35:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:35:28 compute-0 podman[155636]: 2025-11-22 03:35:28.043013487 +0000 UTC m=+0.181303545 container init 106fa9be502b3a40004c8ba69fbd30d8488963483eabdeab580d5da340ff3fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:35:28 compute-0 podman[155636]: 2025-11-22 03:35:28.051787963 +0000 UTC m=+0.190077981 container start 106fa9be502b3a40004c8ba69fbd30d8488963483eabdeab580d5da340ff3fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_perlman, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:35:28 compute-0 nervous_perlman[155676]: 167 167
Nov 22 03:35:28 compute-0 systemd[1]: libpod-106fa9be502b3a40004c8ba69fbd30d8488963483eabdeab580d5da340ff3fcb.scope: Deactivated successfully.
Nov 22 03:35:28 compute-0 podman[155636]: 2025-11-22 03:35:28.056663335 +0000 UTC m=+0.194953363 container attach 106fa9be502b3a40004c8ba69fbd30d8488963483eabdeab580d5da340ff3fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_perlman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 22 03:35:28 compute-0 podman[155636]: 2025-11-22 03:35:28.057507623 +0000 UTC m=+0.195797651 container died 106fa9be502b3a40004c8ba69fbd30d8488963483eabdeab580d5da340ff3fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:35:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-36ce5ef1d3c29914c9677f8992b9dc3bf75549b695588fc3ee565f311a436635-merged.mount: Deactivated successfully.
Nov 22 03:35:28 compute-0 podman[155636]: 2025-11-22 03:35:28.097750159 +0000 UTC m=+0.236040177 container remove 106fa9be502b3a40004c8ba69fbd30d8488963483eabdeab580d5da340ff3fcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_perlman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:28 compute-0 systemd[1]: libpod-conmon-106fa9be502b3a40004c8ba69fbd30d8488963483eabdeab580d5da340ff3fcb.scope: Deactivated successfully.
Nov 22 03:35:28 compute-0 podman[155786]: 2025-11-22 03:35:28.305526383 +0000 UTC m=+0.063144045 container create f19121cc3a1f405feb6c6edac95c82adbf2c7d913e22d043489e5f62b1d0e4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:35:28 compute-0 systemd[1]: Started libpod-conmon-f19121cc3a1f405feb6c6edac95c82adbf2c7d913e22d043489e5f62b1d0e4cb.scope.
Nov 22 03:35:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:35:28 compute-0 podman[155786]: 2025-11-22 03:35:28.279326869 +0000 UTC m=+0.036944591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df664d297b4f56504863a579422f8afedff4bae98da00619df7ca75098e0b83c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df664d297b4f56504863a579422f8afedff4bae98da00619df7ca75098e0b83c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df664d297b4f56504863a579422f8afedff4bae98da00619df7ca75098e0b83c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df664d297b4f56504863a579422f8afedff4bae98da00619df7ca75098e0b83c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:35:28 compute-0 podman[155786]: 2025-11-22 03:35:28.397877011 +0000 UTC m=+0.155494683 container init f19121cc3a1f405feb6c6edac95c82adbf2c7d913e22d043489e5f62b1d0e4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mendeleev, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:35:28 compute-0 podman[155786]: 2025-11-22 03:35:28.410026859 +0000 UTC m=+0.167644501 container start f19121cc3a1f405feb6c6edac95c82adbf2c7d913e22d043489e5f62b1d0e4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:35:28 compute-0 podman[155786]: 2025-11-22 03:35:28.413560368 +0000 UTC m=+0.171178020 container attach f19121cc3a1f405feb6c6edac95c82adbf2c7d913e22d043489e5f62b1d0e4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mendeleev, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 03:35:28 compute-0 python3.9[155838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:28 compute-0 ceph-mon[75011]: pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:29 compute-0 python3.9[155966]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763782528.2972107-101-113726095637518/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]: {
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "osd_id": 1,
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "type": "bluestore"
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:     },
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "osd_id": 0,
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "type": "bluestore"
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:     },
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "osd_id": 2,
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:         "type": "bluestore"
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]:     }
Nov 22 03:35:29 compute-0 zealous_mendeleev[155841]: }
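
This second JSON block is `ceph-volume ... raw list --format json`, keyed by osd_uuid rather than OSD id. A sketch cross-checking it against the earlier lvm list report (both from hypothetical capture files); per the output above, each raw entry's osd_uuid should match the LV's ceph.osd_fsid tag:

    import json

    with open("raw_list.json") as fh:   # hypothetical captures of the two reports
        raw_report = json.load(fh)
    with open("lvm_list.json") as fh:
        lvm_report = json.load(fh)

    by_osd_id = {str(v["osd_id"]): v for v in raw_report.values()}
    for osd_id, entries in sorted(lvm_report.items(), key=lambda kv: int(kv[0])):
        raw = by_osd_id[osd_id]
        lv = entries[0]
        assert raw["osd_uuid"] == lv["tags"]["ceph.osd_fsid"]
        print(f"osd.{osd_id}: {raw['device']} ({raw['type']}) "
              f"fsid={raw['ceph_fsid']}")
    # osd.0: /dev/mapper/ceph_vg0-ceph_lv0 (bluestore) fsid=7adcc38b-...
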
Nov 22 03:35:29 compute-0 systemd[1]: libpod-f19121cc3a1f405feb6c6edac95c82adbf2c7d913e22d043489e5f62b1d0e4cb.scope: Deactivated successfully.
Nov 22 03:35:29 compute-0 systemd[1]: libpod-f19121cc3a1f405feb6c6edac95c82adbf2c7d913e22d043489e5f62b1d0e4cb.scope: Consumed 1.092s CPU time.
Nov 22 03:35:29 compute-0 podman[155786]: 2025-11-22 03:35:29.496336006 +0000 UTC m=+1.253953688 container died f19121cc3a1f405feb6c6edac95c82adbf2c7d913e22d043489e5f62b1d0e4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:35:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-df664d297b4f56504863a579422f8afedff4bae98da00619df7ca75098e0b83c-merged.mount: Deactivated successfully.
Nov 22 03:35:29 compute-0 podman[155786]: 2025-11-22 03:35:29.566169051 +0000 UTC m=+1.323786703 container remove f19121cc3a1f405feb6c6edac95c82adbf2c7d913e22d043489e5f62b1d0e4cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_mendeleev, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:35:29 compute-0 systemd[1]: libpod-conmon-f19121cc3a1f405feb6c6edac95c82adbf2c7d913e22d043489e5f62b1d0e4cb.scope: Deactivated successfully.
Nov 22 03:35:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:29 compute-0 sudo[155495]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:35:29 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:35:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:35:29 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:35:29 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev eb2419ff-dfa5-4180-88b6-1c083e21604c does not exist
Nov 22 03:35:29 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev a6590eef-ba05-4401-9fbc-95b7cd66f830 does not exist
Nov 22 03:35:29 compute-0 sudo[156106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:35:29 compute-0 sudo[156106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:29 compute-0 sudo[156106]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:29 compute-0 sudo[156152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:35:29 compute-0 sudo[156152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:35:29 compute-0 sudo[156152]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:29 compute-0 sudo[156206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noisoienaugfqzgnliziitmswphlrelh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782529.714566-118-131151697988312/AnsiballZ_setup.py'
Nov 22 03:35:29 compute-0 sudo[156206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:30 compute-0 python3.9[156208]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:35:30 compute-0 sudo[156206]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:30 compute-0 ceph-mon[75011]: pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:35:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:35:30 compute-0 sudo[156290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzwteudkjlkjztrjvlcguvyqfrqvyjmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782529.714566-118-131151697988312/AnsiballZ_dnf.py'
Nov 22 03:35:30 compute-0 sudo[156290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:30 compute-0 python3.9[156292]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
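
The dnf task above ensures the openvswitch package is present. A hedged imperative sketch of what state=present converges to, using a cheap rpm presence check before installing:

    import subprocess

    # name=['openvswitch'] state=present
    if subprocess.run(["rpm", "-q", "openvswitch"],
                      stdout=subprocess.DEVNULL).returncode != 0:
        subprocess.run(["dnf", "install", "-y", "openvswitch"], check=True)
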
Nov 22 03:35:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:32 compute-0 sudo[156290]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:32 compute-0 ceph-mon[75011]: pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:33 compute-0 sudo[156443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swixttbormbfwdyajdgtbvbdhjcbkgqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782532.7119186-130-92374430638537/AnsiballZ_systemd.py'
Nov 22 03:35:33 compute-0 sudo[156443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:34 compute-0 ceph-mon[75011]: pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:35 compute-0 python3.9[156445]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:35:35 compute-0 sudo[156443]: pam_unix(sudo:session): session closed for user root
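
The systemd task just completed asks for enabled=True and state=started on openvswitch.service. A sketch of the equivalent systemctl calls; `enable --now` covers both properties in one invocation:

    import subprocess

    subprocess.run(["systemctl", "enable", "--now", "openvswitch.service"],
                   check=True)
    subprocess.run(["systemctl", "is-active", "openvswitch.service"],
                   check=True)   # non-zero exit if the unit is not running
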
Nov 22 03:35:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:35:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5546 writes, 24K keys, 5546 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5546 writes, 826 syncs, 6.71 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5546 writes, 24K keys, 5546 commit groups, 1.0 writes per commit group, ingest: 18.88 MB, 0.03 MB/s
                                           Interval WAL: 5546 writes, 826 syncs, 6.71 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 03:35:35 compute-0 python3.9[156601]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:35:36
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['images', '.rgw.root', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'backups']
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:35:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:35:36 compute-0 python3.9[156722]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763782535.6249866-138-38029410953745/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:36 compute-0 ceph-mon[75011]: pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:37 compute-0 python3.9[156872]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:37 compute-0 python3.9[156993]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763782536.9500237-138-150858797151527/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:38 compute-0 ceph-mon[75011]: pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:39 compute-0 python3.9[157143]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:39 compute-0 ovn_controller[152691]: 2025-11-22T03:35:39Z|00025|memory|INFO|16000 kB peak resident set size after 29.9 seconds
Nov 22 03:35:39 compute-0 ovn_controller[152691]: 2025-11-22T03:35:39Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 22 03:35:39 compute-0 podman[157214]: 2025-11-22 03:35:39.442882394 +0000 UTC m=+0.121823081 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:35:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:39 compute-0 python3.9[157288]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763782538.7966714-182-115848473841331/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:40 compute-0 python3.9[157439]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:40 compute-0 ceph-mon[75011]: pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:35:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6806 writes, 29K keys, 6806 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6806 writes, 1131 syncs, 6.02 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6806 writes, 29K keys, 6806 commit groups, 1.0 writes per commit group, ingest: 19.88 MB, 0.03 MB/s
                                           Interval WAL: 6806 writes, 1131 syncs, 6.02 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 03:35:40 compute-0 python3.9[157560]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763782540.0750291-182-264517212891131/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:41 compute-0 python3.9[157711]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:35:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:42 compute-0 sudo[157863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olvaklqpvthoownkysrqhaoyfkriqhhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782542.2867515-220-128416910284084/AnsiballZ_file.py'
Nov 22 03:35:42 compute-0 sudo[157863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:42 compute-0 python3.9[157865]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:42 compute-0 sudo[157863]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:42 compute-0 ceph-mon[75011]: pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:43 compute-0 sudo[158015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fviqasjfdnldqqoifqxjkiasvrimlggi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782543.1405187-228-7391680097198/AnsiballZ_stat.py'
Nov 22 03:35:43 compute-0 sudo[158015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:43 compute-0 python3.9[158017]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:43 compute-0 sudo[158015]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:43 compute-0 sudo[158093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whclsylpehatdsdpyorqpjsgmazbuzik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782543.1405187-228-7391680097198/AnsiballZ_file.py'
Nov 22 03:35:43 compute-0 sudo[158093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:43 compute-0 python3.9[158095]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:43 compute-0 sudo[158093]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:44 compute-0 sudo[158245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxiwtqwwuckvfhsvqqnjxkevwlcjphml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782544.2499936-228-276007790651666/AnsiballZ_stat.py'
Nov 22 03:35:44 compute-0 sudo[158245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:44 compute-0 python3.9[158247]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:44 compute-0 sudo[158245]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:44 compute-0 ceph-mon[75011]: pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:44 compute-0 sudo[158323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylzgosemrwriexrswtvuufdczikiurna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782544.2499936-228-276007790651666/AnsiballZ_file.py'
Nov 22 03:35:44 compute-0 sudo[158323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:45 compute-0 python3.9[158325]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:45 compute-0 sudo[158323]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:45 compute-0 sudo[158475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whztceiwbfjzwrobeskjsubzjhmrqyzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782545.5251198-251-77151526053812/AnsiballZ_file.py'
Nov 22 03:35:45 compute-0 sudo[158475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:45 compute-0 python3.9[158477]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:35:45 compute-0 sudo[158475]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:35:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:35:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:35:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 5456 writes, 23K keys, 5456 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5456 writes, 779 syncs, 7.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5456 writes, 23K keys, 5456 commit groups, 1.0 writes per commit group, ingest: 18.55 MB, 0.03 MB/s
                                           Interval WAL: 5456 writes, 779 syncs, 7.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 03:35:46 compute-0 sudo[158627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cglzxfkmihnjumojwwdajfssygpospsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782546.2577126-259-84897143352720/AnsiballZ_stat.py'
Nov 22 03:35:46 compute-0 sudo[158627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:46 compute-0 python3.9[158629]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:46 compute-0 sudo[158627]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:46 compute-0 sudo[158705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnpgwplujjsyyhzcmhpaqeahrapyuube ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782546.2577126-259-84897143352720/AnsiballZ_file.py'
Nov 22 03:35:46 compute-0 sudo[158705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:46 compute-0 ceph-mon[75011]: pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:47 compute-0 python3.9[158707]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:35:47 compute-0 sudo[158705]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:47 compute-0 sudo[158857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkryaxylvrcbjllzlbtdxctiqoqnesly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782547.5553856-271-118005978311529/AnsiballZ_stat.py'
Nov 22 03:35:47 compute-0 sudo[158857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:47 compute-0 python3.9[158859]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:47 compute-0 sudo[158857]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:48 compute-0 sudo[158935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otmgumwyjrykxdchcwkmuulthlvnedti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782547.5553856-271-118005978311529/AnsiballZ_file.py'
Nov 22 03:35:48 compute-0 sudo[158935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:48 compute-0 python3.9[158937]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:35:48 compute-0 sudo[158935]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:48 compute-0 sudo[159087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aexfuxyqogagflsferiypkvojlbfhgve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782548.740093-283-121912335312422/AnsiballZ_systemd.py'
Nov 22 03:35:48 compute-0 sudo[159087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:48 compute-0 ceph-mon[75011]: pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:49 compute-0 python3.9[159089]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:35:49 compute-0 systemd[1]: Reloading.
Nov 22 03:35:49 compute-0 systemd-rc-local-generator[159111]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:35:49 compute-0 systemd-sysv-generator[159117]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:35:49 compute-0 sudo[159087]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:50 compute-0 sudo[159276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyxoxcopdbriqnaduhxvjnpxivgbpyiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782550.0024092-291-41868081295927/AnsiballZ_stat.py'
Nov 22 03:35:50 compute-0 sudo[159276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:50 compute-0 python3.9[159278]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:50 compute-0 sudo[159276]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:50 compute-0 sudo[159354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdtzhtpisrqbtbvgzbkvcjzepedzfjah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782550.0024092-291-41868081295927/AnsiballZ_file.py'
Nov 22 03:35:50 compute-0 sudo[159354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:50 compute-0 python3.9[159356]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:35:50 compute-0 sudo[159354]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:51 compute-0 ceph-mon[75011]: pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:51 compute-0 sudo[159506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhrlaysiinutkyvvqcbzbdwobwjcwmgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782551.301717-303-29577193952148/AnsiballZ_stat.py'
Nov 22 03:35:51 compute-0 sudo[159506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:51 compute-0 python3.9[159508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:51 compute-0 sudo[159506]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:51 compute-0 ceph-mgr[75294]: [devicehealth INFO root] Check health
Nov 22 03:35:51 compute-0 sudo[159584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiuczjyfwvvgiqybmdmeixlrdxafkstz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782551.301717-303-29577193952148/AnsiballZ_file.py'
Nov 22 03:35:51 compute-0 sudo[159584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:52 compute-0 python3.9[159586]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:35:52 compute-0 sudo[159584]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:52 compute-0 sudo[159736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyibqdvlatvgwphmupzqvgrbeenvktzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782552.5687025-315-130677223627450/AnsiballZ_systemd.py'
Nov 22 03:35:52 compute-0 sudo[159736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:52 compute-0 python3.9[159738]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:35:52 compute-0 systemd[1]: Reloading.
Nov 22 03:35:53 compute-0 systemd-rc-local-generator[159767]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:35:53 compute-0 systemd-sysv-generator[159771]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:35:53 compute-0 ceph-mon[75011]: pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:53 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 03:35:53 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 03:35:53 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 03:35:53 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 03:35:53 compute-0 sudo[159736]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:53 compute-0 sudo[159929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybikimdzshqwbhmpmlkgjdkoyqcdiiur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782553.859666-325-1897682505059/AnsiballZ_file.py'
Nov 22 03:35:53 compute-0 sudo[159929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:54 compute-0 python3.9[159931]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:54 compute-0 sudo[159929]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:54 compute-0 sudo[160081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajkcfnexdrkramldznoxiedpmvmpkonz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782554.577165-333-190377453857176/AnsiballZ_stat.py'
Nov 22 03:35:54 compute-0 sudo[160081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:54 compute-0 python3.9[160083]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:54 compute-0 sudo[160081]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:55 compute-0 ceph-mon[75011]: pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:55 compute-0 sudo[160204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjeavtgwriapwgwhvthovrldhxhuirvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782554.577165-333-190377453857176/AnsiballZ_copy.py'
Nov 22 03:35:55 compute-0 sudo[160204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:55 compute-0 python3.9[160206]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763782554.577165-333-190377453857176/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:55 compute-0 sudo[160204]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:56 compute-0 sudo[160356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emsqvbdqafxcehvejevdhjmcdjdmnams ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782556.031254-350-274047048159325/AnsiballZ_file.py'
Nov 22 03:35:56 compute-0 sudo[160356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:56 compute-0 python3.9[160358]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:35:56 compute-0 sudo[160356]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:56 compute-0 sudo[160508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvrheautzrfdoycgnuubnhgncyevppge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782556.8070104-358-174036564256969/AnsiballZ_stat.py'
Nov 22 03:35:56 compute-0 sudo[160508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:57 compute-0 python3.9[160510]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:35:57 compute-0 sudo[160508]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:57 compute-0 ceph-mon[75011]: pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:35:57 compute-0 sudo[160631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arwjcvoogkfscaoqwryhymsznvuduoce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782556.8070104-358-174036564256969/AnsiballZ_copy.py'
Nov 22 03:35:57 compute-0 sudo[160631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:57 compute-0 python3.9[160633]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782556.8070104-358-174036564256969/.source.json _original_basename=.v6iykbjh follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:35:57 compute-0 sudo[160631]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:58 compute-0 sudo[160783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auwnfmrkgydydhcbgqkkiewzmnbvclkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782558.2077494-373-48680807525636/AnsiballZ_file.py'
Nov 22 03:35:58 compute-0 sudo[160783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:58 compute-0 python3.9[160785]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:35:58 compute-0 sudo[160783]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:58 compute-0 sudo[160935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwbpdkgveikougreralizxtluadcfzhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782558.891714-381-79776088595391/AnsiballZ_stat.py'
Nov 22 03:35:58 compute-0 sudo[160935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:59 compute-0 sudo[160935]: pam_unix(sudo:session): session closed for user root
Nov 22 03:35:59 compute-0 ceph-mon[75011]: pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:59 compute-0 sudo[161058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzyaeizoljspydfpqksgbygmhseinfzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782558.891714-381-79776088595391/AnsiballZ_copy.py'
Nov 22 03:35:59 compute-0 sudo[161058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:35:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:35:59 compute-0 sudo[161058]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:00 compute-0 sudo[161210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-indyibmkblciwzvkpzrkbschupgidkpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782560.381524-398-170920340072312/AnsiballZ_container_config_data.py'
Nov 22 03:36:00 compute-0 sudo[161210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:00 compute-0 python3.9[161212]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 22 03:36:00 compute-0 sudo[161210]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:01 compute-0 ceph-mon[75011]: pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:01 compute-0 sudo[161362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usuyvqoejeiyvqytniimfglkrlgumnvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782561.295421-407-211666012165119/AnsiballZ_container_config_hash.py'
Nov 22 03:36:01 compute-0 sudo[161362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:01 compute-0 python3.9[161364]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 03:36:01 compute-0 sudo[161362]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:02 compute-0 sudo[161514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdmkajdljstchmyrggpkydycnpnkabsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782562.3894906-416-238395093284570/AnsiballZ_podman_container_info.py'
Nov 22 03:36:02 compute-0 sudo[161514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:02 compute-0 python3.9[161516]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 03:36:03 compute-0 sudo[161514]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:03 compute-0 ceph-mon[75011]: pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:04 compute-0 sudo[161693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcjzyytywjdtezhxrjadcfotlixsjchq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763782564.3464139-429-23023549451283/AnsiballZ_edpm_container_manage.py'
Nov 22 03:36:04 compute-0 sudo[161693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:04 compute-0 ceph-mon[75011]: pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:04 compute-0 python3[161695]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 03:36:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:36:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:36:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:36:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:36:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:36:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:36:06 compute-0 ceph-mon[75011]: pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:10 compute-0 ceph-mon[75011]: pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:12 compute-0 ceph-mon[75011]: pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:12 compute-0 podman[161774]: 2025-11-22 03:36:12.20935271 +0000 UTC m=+1.882533893 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:36:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:13 compute-0 ceph-mon[75011]: pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:13 compute-0 podman[161708]: 2025-11-22 03:36:13.960402132 +0000 UTC m=+8.883904548 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:36:14 compute-0 podman[161868]: 2025-11-22 03:36:14.114170014 +0000 UTC m=+0.047121704 container create 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 03:36:14 compute-0 podman[161868]: 2025-11-22 03:36:14.088834373 +0000 UTC m=+0.021786073 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:36:14 compute-0 python3[161695]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:36:14 compute-0 sudo[161693]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:14 compute-0 sudo[162058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mefwqvtwzcmsbjuesuxbbwtalgsymeon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782574.6438804-437-118790668687420/AnsiballZ_stat.py'
Nov 22 03:36:14 compute-0 sudo[162058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:14 compute-0 python3.9[162060]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:36:14 compute-0 sudo[162058]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:15 compute-0 ceph-mon[75011]: pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:15 compute-0 sudo[162212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqhzmxbqhakwkzgmuvroezdabojnceld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782575.489401-446-1179561840961/AnsiballZ_file.py'
Nov 22 03:36:15 compute-0 sudo[162212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:15 compute-0 python3.9[162214]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:15 compute-0 sudo[162212]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:16 compute-0 sudo[162288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uauhcpyebqsgbwioiobvtqjzzjhxwbag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782575.489401-446-1179561840961/AnsiballZ_stat.py'
Nov 22 03:36:16 compute-0 sudo[162288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:16 compute-0 python3.9[162290]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:36:16 compute-0 sudo[162288]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:16 compute-0 sudo[162439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjzjqnfxgeoelbbonqgahyrtcxwuiimx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782576.6672008-446-207244874385669/AnsiballZ_copy.py'
Nov 22 03:36:16 compute-0 sudo[162439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:17 compute-0 python3.9[162441]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763782576.6672008-446-207244874385669/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:17 compute-0 sudo[162439]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:17 compute-0 sudo[162515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujxfvcwfmawnkwtzoictywkgcybittnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782576.6672008-446-207244874385669/AnsiballZ_systemd.py'
Nov 22 03:36:17 compute-0 sudo[162515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:17 compute-0 ceph-mon[75011]: pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:17 compute-0 python3.9[162517]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:36:17 compute-0 systemd[1]: Reloading.
Nov 22 03:36:17 compute-0 systemd-rc-local-generator[162546]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:36:17 compute-0 systemd-sysv-generator[162549]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:36:18 compute-0 sudo[162515]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:18 compute-0 sudo[162627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-metjxjmernrdbigcjtypqldiccjzbxwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782576.6672008-446-207244874385669/AnsiballZ_systemd.py'
Nov 22 03:36:18 compute-0 sudo[162627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:18 compute-0 python3.9[162629]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:36:18 compute-0 systemd[1]: Reloading.
Nov 22 03:36:18 compute-0 systemd-sysv-generator[162657]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:36:18 compute-0 systemd-rc-local-generator[162653]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:36:18 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Nov 22 03:36:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58286272244c96e04307f3520f9382d371f89336141a11cc31c52839f110080/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e58286272244c96e04307f3520f9382d371f89336141a11cc31c52839f110080/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:19 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4.
Nov 22 03:36:19 compute-0 podman[162669]: 2025-11-22 03:36:19.105964609 +0000 UTC m=+0.126313736 container init 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: + sudo -E kolla_set_configs
Nov 22 03:36:19 compute-0 podman[162669]: 2025-11-22 03:36:19.139408994 +0000 UTC m=+0.159758151 container start 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:36:19 compute-0 edpm-start-podman-container[162669]: ovn_metadata_agent
Nov 22 03:36:19 compute-0 edpm-start-podman-container[162668]: Creating additional drop-in dependency for "ovn_metadata_agent" (253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4)
Nov 22 03:36:19 compute-0 podman[162690]: 2025-11-22 03:36:19.221052931 +0000 UTC m=+0.064685492 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 03:36:19 compute-0 systemd[1]: Reloading.
Nov 22 03:36:19 compute-0 systemd-rc-local-generator[162758]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:36:19 compute-0 systemd-sysv-generator[162761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Validating config file
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Copying service configuration files
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Writing out command to execute
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: ++ cat /run_command
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: + CMD=neutron-ovn-metadata-agent
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: + ARGS=
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: + sudo kolla_copy_cacerts
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: + [[ ! -n '' ]]
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: + . kolla_extend_start
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: Running command: 'neutron-ovn-metadata-agent'
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: + umask 0022
Nov 22 03:36:19 compute-0 ovn_metadata_agent[162684]: + exec neutron-ovn-metadata-agent
Nov 22 03:36:19 compute-0 systemd[1]: Started ovn_metadata_agent container.
Nov 22 03:36:19 compute-0 sudo[162627]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:19 compute-0 ceph-mon[75011]: pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:19 compute-0 sshd-session[153394]: Connection closed by 192.168.122.30 port 38066
Nov 22 03:36:19 compute-0 sshd-session[153343]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:36:19 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Nov 22 03:36:19 compute-0 systemd[1]: session-48.scope: Consumed 58.533s CPU time.
Nov 22 03:36:19 compute-0 systemd-logind[799]: Session 48 logged out. Waiting for processes to exit.
Nov 22 03:36:19 compute-0 systemd-logind[799]: Removed session 48.
Nov 22 03:36:20 compute-0 ceph-mon[75011]: pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.950 162689 INFO neutron.common.config [-] Logging enabled!
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.950 162689 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.950 162689 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.951 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.951 162689 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.951 162689 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.951 162689 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.952 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.952 162689 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.952 162689 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.952 162689 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.952 162689 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.952 162689 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.952 162689 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.953 162689 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.953 162689 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.953 162689 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.953 162689 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.953 162689 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.953 162689 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.953 162689 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.953 162689 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.954 162689 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.954 162689 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.954 162689 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.954 162689 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.954 162689 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.954 162689 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.954 162689 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.955 162689 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.955 162689 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.955 162689 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.955 162689 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.955 162689 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.955 162689 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.955 162689 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.955 162689 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.955 162689 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.956 162689 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.956 162689 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.956 162689 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.956 162689 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.956 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.956 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.956 162689 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.956 162689 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.956 162689 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.956 162689 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.957 162689 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.957 162689 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.957 162689 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.957 162689 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.957 162689 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.957 162689 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.957 162689 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.957 162689 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.957 162689 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.957 162689 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.958 162689 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.958 162689 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.958 162689 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.958 162689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.958 162689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.958 162689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.958 162689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.958 162689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.958 162689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.959 162689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.959 162689 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.959 162689 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.959 162689 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.959 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.959 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.959 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.960 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.960 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.960 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.960 162689 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.960 162689 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.960 162689 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.960 162689 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.960 162689 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.961 162689 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.961 162689 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.961 162689 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.961 162689 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.961 162689 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.961 162689 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.961 162689 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.961 162689 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.961 162689 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.962 162689 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.962 162689 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.962 162689 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.962 162689 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.962 162689 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.962 162689 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.962 162689 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.962 162689 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.962 162689 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.962 162689 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.962 162689 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.963 162689 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.963 162689 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.963 162689 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.963 162689 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.963 162689 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.963 162689 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.963 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.963 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.963 162689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.964 162689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.964 162689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.964 162689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.964 162689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.964 162689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.964 162689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.964 162689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.964 162689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.964 162689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.965 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.965 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.965 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.965 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.965 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.965 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.965 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.965 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.966 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.966 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.966 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.966 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.966 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.966 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.966 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.966 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.966 162689 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.966 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.967 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.967 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.967 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.967 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.967 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.967 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.967 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.967 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.967 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.968 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.968 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.968 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.968 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.968 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.968 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.968 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.968 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.968 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.969 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.969 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.969 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.969 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.969 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.969 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.969 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.969 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.969 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.970 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.970 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.970 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.970 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.970 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.970 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.970 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.970 162689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.970 162689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.971 162689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.971 162689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.971 162689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.971 162689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.971 162689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.971 162689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.971 162689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.971 162689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.971 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.972 162689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.972 162689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.972 162689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.972 162689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.972 162689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.972 162689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.972 162689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.972 162689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.972 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.973 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.973 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.973 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.973 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.973 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.973 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.973 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.973 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.973 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.974 162689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.974 162689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.974 162689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.974 162689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.974 162689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.974 162689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.974 162689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.974 162689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.974 162689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.974 162689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.975 162689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.975 162689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.975 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.975 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.975 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.975 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.975 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.975 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.975 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.976 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.976 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.976 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.976 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.976 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.976 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.976 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.976 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.976 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.977 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.977 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.977 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.977 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.977 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.977 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.977 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.977 162689 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.977 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.978 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.978 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.978 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.978 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.978 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.978 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.978 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.978 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.978 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.979 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.979 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.979 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.979 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.979 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.979 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.979 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.979 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.980 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.980 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.980 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.980 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.980 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.980 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.980 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.980 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.980 162689 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.981 162689 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.981 162689 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.981 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.981 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.981 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.981 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.981 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.981 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.982 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.982 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.982 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.982 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.982 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.982 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.982 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.982 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.983 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.983 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.983 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.983 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.983 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.983 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.983 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.984 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.984 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.984 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.984 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.984 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.984 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.984 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.984 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.985 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.985 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.985 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.985 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.985 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.985 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.985 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.985 162689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.985 162689 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
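The row of asterisks closes oslo.config's standard startup dump: on launch the agent calls ConfigOpts.log_opt_values(), which walks every registered option (top-level options first, then each group) and emits one DEBUG line per option, masking anything registered with secret=True, which is why transport_url shows as ****. A minimal sketch of the same mechanism, using a couple of hypothetical options rather than neutron's real schema:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    # Hypothetical options for illustration; the agent registers hundreds.
    CONF.register_opts([
        cfg.PortOpt('nova_metadata_port', default=8775),
        cfg.StrOpt('transport_url', secret=True),  # secret=True => logged as ****
    ])

    CONF(args=[])
    # This call produces the "<option> = <value> log_opt_values ... cfg.py" lines above.
    CONF.log_opt_values(LOG, logging.DEBUG)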
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.995 162689 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.995 162689 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.995 162689 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.995 162689 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 22 03:36:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:22.996 162689 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
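The agent's first connection is to the local Open_vSwitch database on tcp:127.0.0.1:6640 (the ovs.ovsdb_connection option in the dump), via ovsdbapp, which also builds the Bridge.name/Port.name/Interface.name indices logged just before connecting. A minimal sketch of an equivalent client, assuming a local ovsdb-server on that address:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Fetch the Open_vSwitch schema from the local ovsdb-server and build an IDL.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)  # OVS.ovsdb_timeout above
    ovs = impl_idl.OvsdbIdl(conn)

    # The autocreated schema indices make name lookups like this cheap:
    print(ovs.br_exists('br-int').execute(check_error=True))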
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.008 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 7d76f7df-fc3b-449d-b505-65b8b0ef9c3a (UUID: 7d76f7df-fc3b-449d-b505-65b8b0ef9c3a) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.031 162689 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
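"with retry" means the first southbound-DB connection is wrapped in a retry loop so the agent survives the database being briefly unreachable at startup. A sketch of that pattern using tenacity, which OpenStack services commonly use for this; the exact helper inside neutron is an assumption here, and the failing stub is purely illustrative:

    import itertools

    import tenacity

    _attempt = itertools.count(1)

    @tenacity.retry(wait=tenacity.wait_exponential(max=180),  # cf. ovn.ovsdb_retry_max_interval
                    stop=tenacity.stop_after_attempt(5),
                    reraise=True)
    def get_sb_idl():
        # Stub standing in for building the southbound OVN IDL connection.
        if next(_attempt) < 3:
            raise ConnectionError('southbound DB not reachable yet')
        return 'OvsdbSbOvnIdl'

    print(get_sb_idl())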
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.032 162689 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.032 162689 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.032 162689 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.034 162689 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.040 162689 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
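The southbound connection uses the ovn.ovn_sb_* options from the dump. With python-ovs, the key, certificate, and CA bundle must be registered on the Stream class before any ssl: target is dialed; a minimal sketch reusing those paths:

    from ovs.stream import Stream

    # Register the TLS material globally before creating any ssl: connection;
    # paths are the ovn.ovn_sb_* options logged above.
    Stream.ssl_set_private_key_file('/etc/pki/tls/private/ovndb.key')
    Stream.ssl_set_certificate_file('/etc/pki/tls/certs/ovndb.crt')
    Stream.ssl_set_ca_cert_file('/etc/pki/tls/certs/ovndbca.crt')

    # An IDL pointed at 'ssl:ovsdbserver-sb.openstack.svc:6642'
    # (ovn.ovn_sb_connection) can now complete the handshake logged above.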
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.044 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '7d76f7df-fc3b-449d-b505-65b8b0ef9c3a'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], external_ids={}, name=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, nb_cfg_timestamp=1763782517589, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
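The "Matched CREATE" line is ovsdbapp's row-event machinery firing: the agent registers an event keyed on its own chassis name against the Chassis_Private table, and the IDL calls run() when a matching row appears. A minimal sketch of such an event class, with a hypothetical handler body:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class ChassisPrivateCreateEvent(row_event.RowEvent):
        """Fires once when our own Chassis_Private row appears."""

        def __init__(self, chassis_name):
            # The same constructor arguments the matcher echoes back above:
            # events=('create',), table='Chassis_Private',
            # conditions=(('name', '=', chassis_name),)
            super().__init__((self.ROW_CREATE,), 'Chassis_Private',
                             (('name', '=', chassis_name),))

        def run(self, event, row, old):
            # Hypothetical handler; the real agent kicks off its sync here.
            print('chassis registered:', row.name)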
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.045 162689 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f48cd3acb20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.046 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.046 162689 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.046 162689 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.046 162689 INFO oslo_service.service [-] Starting 1 workers
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.051 162689 DEBUG oslo_service.service [-] Started child 162801 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.054 162689 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpniyoh3mi/privsep.sock']
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.055 162801 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-364186'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.091 162801 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.092 162801 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.092 162801 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.097 162801 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.105 162801 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.112 162801 INFO eventlet.wsgi.server [-] (162801) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Nov 22 03:36:23 compute-0 ceph-mon[75011]: pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:23 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.741 162689 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.742 162689 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpniyoh3mi/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.607 162806 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.613 162806 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.615 162806 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.616 162806 INFO oslo.privsep.daemon [-] privsep daemon running as pid 162806
Nov 22 03:36:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:23.745 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[a9945a55-a2a8-4f41-b2ec-4f5dcb4b5a76]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.252 162806 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.252 162806 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.252 162806 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.794 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f13c81-58fd-4183-b4f5-d2bc89b011f3]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.797 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, column=external_ids, values=({'neutron:ovn-metadata-id': '4e897855-192a-52dc-8443-1ac506949448'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.818 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.835 162689 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.835 162689 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.836 162689 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.836 162689 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.836 162689 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.836 162689 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.836 162689 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.836 162689 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.837 162689 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.837 162689 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.837 162689 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.837 162689 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.837 162689 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.837 162689 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.838 162689 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.838 162689 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.838 162689 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.838 162689 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.838 162689 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.839 162689 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.839 162689 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.839 162689 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.839 162689 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.839 162689 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.840 162689 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.840 162689 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.840 162689 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.840 162689 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.841 162689 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.841 162689 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.841 162689 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.841 162689 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.841 162689 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.842 162689 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.842 162689 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.842 162689 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.842 162689 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.843 162689 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.843 162689 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.843 162689 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.843 162689 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.844 162689 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.844 162689 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.844 162689 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.844 162689 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.844 162689 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.845 162689 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.845 162689 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.845 162689 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.845 162689 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.845 162689 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.846 162689 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.846 162689 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.846 162689 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.846 162689 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.846 162689 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.847 162689 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.847 162689 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.847 162689 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.847 162689 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.847 162689 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.847 162689 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.848 162689 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.848 162689 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.848 162689 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.848 162689 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.849 162689 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.849 162689 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.849 162689 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.849 162689 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.849 162689 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.850 162689 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.850 162689 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.850 162689 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.850 162689 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.850 162689 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.851 162689 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.851 162689 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.851 162689 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.851 162689 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.851 162689 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.852 162689 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.852 162689 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.852 162689 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.852 162689 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.852 162689 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.852 162689 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.853 162689 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.853 162689 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.853 162689 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.853 162689 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.853 162689 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.853 162689 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.854 162689 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.854 162689 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.854 162689 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.854 162689 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.854 162689 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.854 162689 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.855 162689 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.855 162689 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.855 162689 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.855 162689 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.855 162689 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.856 162689 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.856 162689 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.856 162689 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.856 162689 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:36:24 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.857 162689 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.857 162689 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.857 162689 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.857 162689 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.858 162689 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.858 162689 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.858 162689 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.858 162689 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.858 162689 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.859 162689 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.859 162689 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.859 162689 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.859 162689 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.860 162689 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.860 162689 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.860 162689 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.860 162689 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.861 162689 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.861 162689 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.861 162689 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.861 162689 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.861 162689 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.862 162689 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.862 162689 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.862 162689 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.862 162689 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.862 162689 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.863 162689 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.863 162689 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.863 162689 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.863 162689 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.863 162689 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.864 162689 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.864 162689 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.864 162689 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.864 162689 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.864 162689 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.865 162689 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.865 162689 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.865 162689 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.865 162689 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.865 162689 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.866 162689 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.866 162689 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.866 162689 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.866 162689 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.866 162689 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.867 162689 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.867 162689 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.867 162689 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.867 162689 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.867 162689 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.868 162689 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.868 162689 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.868 162689 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.868 162689 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.868 162689 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.869 162689 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.869 162689 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.869 162689 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.869 162689 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.869 162689 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.870 162689 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.870 162689 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.870 162689 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.870 162689 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.870 162689 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.871 162689 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.871 162689 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.871 162689 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.871 162689 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.872 162689 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.872 162689 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.872 162689 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.872 162689 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.872 162689 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.873 162689 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.873 162689 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.873 162689 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.873 162689 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.873 162689 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.874 162689 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.874 162689 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.874 162689 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.874 162689 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.875 162689 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.875 162689 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.875 162689 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.875 162689 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.875 162689 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.876 162689 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.876 162689 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.876 162689 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.876 162689 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.876 162689 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.877 162689 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.877 162689 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.877 162689 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.877 162689 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.877 162689 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.878 162689 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.878 162689 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.878 162689 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.878 162689 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.878 162689 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.879 162689 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.879 162689 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.879 162689 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.879 162689 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.879 162689 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.880 162689 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.880 162689 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.880 162689 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.880 162689 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.880 162689 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.881 162689 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.881 162689 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.881 162689 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.881 162689 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.881 162689 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.882 162689 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.882 162689 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.882 162689 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.882 162689 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.882 162689 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.883 162689 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.883 162689 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.883 162689 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.883 162689 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.884 162689 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.884 162689 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.884 162689 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.884 162689 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.884 162689 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.885 162689 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.885 162689 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.885 162689 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.885 162689 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.885 162689 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.886 162689 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.886 162689 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.886 162689 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.886 162689 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.887 162689 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.887 162689 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.887 162689 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.887 162689 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.887 162689 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.888 162689 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.888 162689 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.888 162689 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.888 162689 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.888 162689 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.889 162689 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.889 162689 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.889 162689 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.889 162689 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.889 162689 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.890 162689 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.890 162689 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.890 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.890 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.890 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.891 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.891 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.891 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.891 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.891 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.892 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.892 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.892 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.892 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.893 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.893 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.893 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.893 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.893 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.894 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.894 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.894 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.894 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.894 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.895 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.895 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.895 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.895 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.895 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.896 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.896 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.896 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.896 162689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.897 162689 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.897 162689 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.897 162689 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.897 162689 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:36:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:36:24.898 162689 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 03:36:25 compute-0 ceph-mon[75011]: pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:25 compute-0 sshd-session[162812]: Accepted publickey for zuul from 192.168.122.30 port 38122 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:36:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:25 compute-0 systemd-logind[799]: New session 49 of user zuul.
Nov 22 03:36:25 compute-0 systemd[1]: Started Session 49 of User zuul.
Nov 22 03:36:25 compute-0 sshd-session[162812]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:36:26 compute-0 python3.9[162965]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:36:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:27 compute-0 ceph-mon[75011]: pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:27 compute-0 sudo[163119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lklzqcyttnbbxlpzunnlqyexpjxadzar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782587.442763-34-268968061475841/AnsiballZ_command.py'
Nov 22 03:36:27 compute-0 sudo[163119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:27 compute-0 python3.9[163121]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:36:27 compute-0 sudo[163119]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:28 compute-0 ceph-mon[75011]: pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:28 compute-0 sudo[163284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvfecadhdhalzfsstkutipymfajbxtsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782588.5253878-45-277570848802354/AnsiballZ_systemd_service.py'
Nov 22 03:36:28 compute-0 sudo[163284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:29 compute-0 python3.9[163286]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:36:29 compute-0 systemd[1]: Reloading.
Nov 22 03:36:29 compute-0 systemd-rc-local-generator[163307]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:36:29 compute-0 systemd-sysv-generator[163312]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:36:29 compute-0 sudo[163284]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:29 compute-0 sudo[163398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:36:29 compute-0 sudo[163398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:29 compute-0 sudo[163398]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:29 compute-0 sudo[163423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:36:29 compute-0 sudo[163423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:29 compute-0 sudo[163423]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:29 compute-0 sudo[163471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:36:29 compute-0 sudo[163471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:29 compute-0 sudo[163471]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:30 compute-0 sudo[163520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:36:30 compute-0 sudo[163520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:30 compute-0 python3.9[163571]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:36:30 compute-0 network[163602]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:36:30 compute-0 network[163603]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:36:30 compute-0 network[163604]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:36:30 compute-0 sudo[163520]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 03:36:30 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:36:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:36:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:36:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:36:30 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:36:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:36:30 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:36:30 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 232f6c42-7530-4709-828a-2a20765ba3a7 does not exist
Nov 22 03:36:30 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 6b25b0ac-816f-4099-9231-9aa03cdbc5cb does not exist
Nov 22 03:36:30 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev e457fa71-597e-4a98-8a51-aa8c92c97807 does not exist
Nov 22 03:36:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:36:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:36:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:36:30 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:36:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:36:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:36:30 compute-0 ceph-mon[75011]: pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:36:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:36:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:36:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:36:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:36:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:36:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:36:31 compute-0 sudo[163627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:36:31 compute-0 sudo[163627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:31 compute-0 sudo[163627]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:31 compute-0 sudo[163653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:36:31 compute-0 sudo[163653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:31 compute-0 sudo[163653]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:31 compute-0 sudo[163682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:36:31 compute-0 sudo[163682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:31 compute-0 sudo[163682]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:31 compute-0 sudo[163710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:36:31 compute-0 sudo[163710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:31 compute-0 podman[163793]: 2025-11-22 03:36:31.628160408 +0000 UTC m=+0.069051670 container create b4aa0d6d2bba2038c0203ba2f1b80b1f9635c454a2426897dace75a526ff6b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wozniak, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:36:31 compute-0 systemd[1]: Started libpod-conmon-b4aa0d6d2bba2038c0203ba2f1b80b1f9635c454a2426897dace75a526ff6b61.scope.
Nov 22 03:36:31 compute-0 podman[163793]: 2025-11-22 03:36:31.587007678 +0000 UTC m=+0.027898950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:36:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:36:31 compute-0 podman[163793]: 2025-11-22 03:36:31.715882842 +0000 UTC m=+0.156774114 container init b4aa0d6d2bba2038c0203ba2f1b80b1f9635c454a2426897dace75a526ff6b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wozniak, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:36:31 compute-0 podman[163793]: 2025-11-22 03:36:31.722877431 +0000 UTC m=+0.163768693 container start b4aa0d6d2bba2038c0203ba2f1b80b1f9635c454a2426897dace75a526ff6b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wozniak, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:36:31 compute-0 podman[163793]: 2025-11-22 03:36:31.725816582 +0000 UTC m=+0.166707874 container attach b4aa0d6d2bba2038c0203ba2f1b80b1f9635c454a2426897dace75a526ff6b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:36:31 compute-0 happy_wozniak[163817]: 167 167
Nov 22 03:36:31 compute-0 systemd[1]: libpod-b4aa0d6d2bba2038c0203ba2f1b80b1f9635c454a2426897dace75a526ff6b61.scope: Deactivated successfully.
Nov 22 03:36:31 compute-0 conmon[163817]: conmon b4aa0d6d2bba2038c020 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4aa0d6d2bba2038c0203ba2f1b80b1f9635c454a2426897dace75a526ff6b61.scope/container/memory.events
Nov 22 03:36:31 compute-0 podman[163793]: 2025-11-22 03:36:31.731109587 +0000 UTC m=+0.172000879 container died b4aa0d6d2bba2038c0203ba2f1b80b1f9635c454a2426897dace75a526ff6b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:36:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4af64ad6cbe83b023170ee7435c811a26cb1bde9f85391211dfe1f53590df50-merged.mount: Deactivated successfully.
Nov 22 03:36:31 compute-0 podman[163793]: 2025-11-22 03:36:31.774944311 +0000 UTC m=+0.215835583 container remove b4aa0d6d2bba2038c0203ba2f1b80b1f9635c454a2426897dace75a526ff6b61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:36:31 compute-0 systemd[1]: libpod-conmon-b4aa0d6d2bba2038c0203ba2f1b80b1f9635c454a2426897dace75a526ff6b61.scope: Deactivated successfully.
Nov 22 03:36:31 compute-0 podman[163853]: 2025-11-22 03:36:31.946559599 +0000 UTC m=+0.048401351 container create a973ddf8faa430290a2f376424112eb10a46122d17beb6a4751e7b6dbf50f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_edison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:36:31 compute-0 systemd[1]: Started libpod-conmon-a973ddf8faa430290a2f376424112eb10a46122d17beb6a4751e7b6dbf50f3e5.scope.
Nov 22 03:36:32 compute-0 podman[163853]: 2025-11-22 03:36:31.92411492 +0000 UTC m=+0.025956702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:36:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b87b39e2661fd4ee28e31feb2b59218fae5a7688a353aa7f2c9e1d75f3e34ba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b87b39e2661fd4ee28e31feb2b59218fae5a7688a353aa7f2c9e1d75f3e34ba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b87b39e2661fd4ee28e31feb2b59218fae5a7688a353aa7f2c9e1d75f3e34ba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b87b39e2661fd4ee28e31feb2b59218fae5a7688a353aa7f2c9e1d75f3e34ba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b87b39e2661fd4ee28e31feb2b59218fae5a7688a353aa7f2c9e1d75f3e34ba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:32 compute-0 podman[163853]: 2025-11-22 03:36:32.123160984 +0000 UTC m=+0.225002816 container init a973ddf8faa430290a2f376424112eb10a46122d17beb6a4751e7b6dbf50f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:36:32 compute-0 podman[163853]: 2025-11-22 03:36:32.129929868 +0000 UTC m=+0.231771630 container start a973ddf8faa430290a2f376424112eb10a46122d17beb6a4751e7b6dbf50f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 03:36:32 compute-0 podman[163853]: 2025-11-22 03:36:32.221269974 +0000 UTC m=+0.323111776 container attach a973ddf8faa430290a2f376424112eb10a46122d17beb6a4751e7b6dbf50f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:36:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:32 compute-0 ceph-mon[75011]: pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:33 compute-0 happy_edison[163873]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:36:33 compute-0 happy_edison[163873]: --> relative data size: 1.0
Nov 22 03:36:33 compute-0 happy_edison[163873]: --> All data devices are unavailable
Nov 22 03:36:33 compute-0 systemd[1]: libpod-a973ddf8faa430290a2f376424112eb10a46122d17beb6a4751e7b6dbf50f3e5.scope: Deactivated successfully.
Nov 22 03:36:33 compute-0 podman[163853]: 2025-11-22 03:36:33.264558002 +0000 UTC m=+1.366399764 container died a973ddf8faa430290a2f376424112eb10a46122d17beb6a4751e7b6dbf50f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_edison, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 03:36:33 compute-0 systemd[1]: libpod-a973ddf8faa430290a2f376424112eb10a46122d17beb6a4751e7b6dbf50f3e5.scope: Consumed 1.074s CPU time.
Nov 22 03:36:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b87b39e2661fd4ee28e31feb2b59218fae5a7688a353aa7f2c9e1d75f3e34ba-merged.mount: Deactivated successfully.
Nov 22 03:36:33 compute-0 podman[163853]: 2025-11-22 03:36:33.340498341 +0000 UTC m=+1.442340123 container remove a973ddf8faa430290a2f376424112eb10a46122d17beb6a4751e7b6dbf50f3e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:36:33 compute-0 systemd[1]: libpod-conmon-a973ddf8faa430290a2f376424112eb10a46122d17beb6a4751e7b6dbf50f3e5.scope: Deactivated successfully.
Nov 22 03:36:33 compute-0 sudo[163710]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:33 compute-0 sudo[163968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:36:33 compute-0 sudo[163968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:33 compute-0 sudo[163968]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:33 compute-0 sudo[163993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:36:33 compute-0 sudo[163993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:33 compute-0 sudo[163993]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:33 compute-0 sudo[164018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:36:33 compute-0 sudo[164018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:33 compute-0 sudo[164018]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:33 compute-0 sudo[164043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:36:33 compute-0 sudo[164043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:34 compute-0 podman[164108]: 2025-11-22 03:36:34.086670946 +0000 UTC m=+0.046403204 container create 02af5417aa0de436ff321e2f8dbaa8c8328505ba9101ff398ff48cd1d1399a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_aryabhata, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:36:34 compute-0 systemd[1]: Started libpod-conmon-02af5417aa0de436ff321e2f8dbaa8c8328505ba9101ff398ff48cd1d1399a81.scope.
Nov 22 03:36:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:36:34 compute-0 podman[164108]: 2025-11-22 03:36:34.065269665 +0000 UTC m=+0.025001923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:36:34 compute-0 podman[164108]: 2025-11-22 03:36:34.172380904 +0000 UTC m=+0.132113142 container init 02af5417aa0de436ff321e2f8dbaa8c8328505ba9101ff398ff48cd1d1399a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:36:34 compute-0 podman[164108]: 2025-11-22 03:36:34.182235728 +0000 UTC m=+0.141967946 container start 02af5417aa0de436ff321e2f8dbaa8c8328505ba9101ff398ff48cd1d1399a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_aryabhata, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 03:36:34 compute-0 kind_aryabhata[164125]: 167 167
Nov 22 03:36:34 compute-0 podman[164108]: 2025-11-22 03:36:34.186590854 +0000 UTC m=+0.146323222 container attach 02af5417aa0de436ff321e2f8dbaa8c8328505ba9101ff398ff48cd1d1399a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:36:34 compute-0 systemd[1]: libpod-02af5417aa0de436ff321e2f8dbaa8c8328505ba9101ff398ff48cd1d1399a81.scope: Deactivated successfully.
Nov 22 03:36:34 compute-0 podman[164108]: 2025-11-22 03:36:34.188871197 +0000 UTC m=+0.148603415 container died 02af5417aa0de436ff321e2f8dbaa8c8328505ba9101ff398ff48cd1d1399a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:36:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-278b42836660ce708c779415b91c5daa6a34251bd2cc1db6bcb65f12403f292d-merged.mount: Deactivated successfully.
Nov 22 03:36:34 compute-0 podman[164108]: 2025-11-22 03:36:34.309816255 +0000 UTC m=+0.269548503 container remove 02af5417aa0de436ff321e2f8dbaa8c8328505ba9101ff398ff48cd1d1399a81 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_aryabhata, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:36:34 compute-0 systemd[1]: libpod-conmon-02af5417aa0de436ff321e2f8dbaa8c8328505ba9101ff398ff48cd1d1399a81.scope: Deactivated successfully.
Nov 22 03:36:34 compute-0 podman[164173]: 2025-11-22 03:36:34.565060711 +0000 UTC m=+0.094154726 container create 6807b8e1a2d0735a2240e7dde508f9d6a5c44ff7f7fb59d2f55d280a8491317c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:36:34 compute-0 podman[164173]: 2025-11-22 03:36:34.509732995 +0000 UTC m=+0.038827010 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:36:34 compute-0 systemd[1]: Started libpod-conmon-6807b8e1a2d0735a2240e7dde508f9d6a5c44ff7f7fb59d2f55d280a8491317c.scope.
Nov 22 03:36:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f216075be99f3ceb662e0924bc0c56cee315f78ff256c1fb3cf2f2351ad7cc19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f216075be99f3ceb662e0924bc0c56cee315f78ff256c1fb3cf2f2351ad7cc19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f216075be99f3ceb662e0924bc0c56cee315f78ff256c1fb3cf2f2351ad7cc19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f216075be99f3ceb662e0924bc0c56cee315f78ff256c1fb3cf2f2351ad7cc19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:34 compute-0 ceph-mon[75011]: pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:34 compute-0 podman[164173]: 2025-11-22 03:36:34.699767259 +0000 UTC m=+0.228861234 container init 6807b8e1a2d0735a2240e7dde508f9d6a5c44ff7f7fb59d2f55d280a8491317c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:36:34 compute-0 podman[164173]: 2025-11-22 03:36:34.707463117 +0000 UTC m=+0.236557122 container start 6807b8e1a2d0735a2240e7dde508f9d6a5c44ff7f7fb59d2f55d280a8491317c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_swanson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:36:34 compute-0 podman[164173]: 2025-11-22 03:36:34.767988151 +0000 UTC m=+0.297082156 container attach 6807b8e1a2d0735a2240e7dde508f9d6a5c44ff7f7fb59d2f55d280a8491317c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_swanson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 03:36:34 compute-0 sudo[164319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jycntjzqankgcillzdlmlvusyokdupag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782594.7548146-64-23644523561226/AnsiballZ_systemd_service.py'
Nov 22 03:36:34 compute-0 sudo[164319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:35 compute-0 python3.9[164321]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:36:35 compute-0 sudo[164319]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]: {
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:     "0": [
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:         {
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "devices": [
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "/dev/loop3"
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             ],
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_name": "ceph_lv0",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_size": "21470642176",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "name": "ceph_lv0",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "tags": {
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.cluster_name": "ceph",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.crush_device_class": "",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.encrypted": "0",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.osd_id": "0",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.type": "block",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.vdo": "0"
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             },
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "type": "block",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "vg_name": "ceph_vg0"
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:         }
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:     ],
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:     "1": [
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:         {
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "devices": [
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "/dev/loop4"
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             ],
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_name": "ceph_lv1",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_size": "21470642176",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "name": "ceph_lv1",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "tags": {
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.cluster_name": "ceph",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.crush_device_class": "",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.encrypted": "0",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.osd_id": "1",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.type": "block",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.vdo": "0"
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             },
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "type": "block",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "vg_name": "ceph_vg1"
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:         }
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:     ],
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:     "2": [
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:         {
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "devices": [
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "/dev/loop5"
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             ],
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_name": "ceph_lv2",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_size": "21470642176",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "name": "ceph_lv2",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "tags": {
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.cluster_name": "ceph",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.crush_device_class": "",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.encrypted": "0",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.osd_id": "2",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.type": "block",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:                 "ceph.vdo": "0"
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             },
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "type": "block",
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:             "vg_name": "ceph_vg2"
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:         }
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]:     ]
Nov 22 03:36:35 compute-0 mystifying_swanson[164244]: }
Nov 22 03:36:35 compute-0 systemd[1]: libpod-6807b8e1a2d0735a2240e7dde508f9d6a5c44ff7f7fb59d2f55d280a8491317c.scope: Deactivated successfully.
Nov 22 03:36:35 compute-0 podman[164173]: 2025-11-22 03:36:35.510855124 +0000 UTC m=+1.039949129 container died 6807b8e1a2d0735a2240e7dde508f9d6a5c44ff7f7fb59d2f55d280a8491317c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_swanson, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:36:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f216075be99f3ceb662e0924bc0c56cee315f78ff256c1fb3cf2f2351ad7cc19-merged.mount: Deactivated successfully.
Nov 22 03:36:35 compute-0 podman[164173]: 2025-11-22 03:36:35.674547233 +0000 UTC m=+1.203641208 container remove 6807b8e1a2d0735a2240e7dde508f9d6a5c44ff7f7fb59d2f55d280a8491317c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 03:36:35 compute-0 systemd[1]: libpod-conmon-6807b8e1a2d0735a2240e7dde508f9d6a5c44ff7f7fb59d2f55d280a8491317c.scope: Deactivated successfully.
Nov 22 03:36:35 compute-0 sudo[164043]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:35 compute-0 sudo[164442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:36:35 compute-0 sudo[164442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:35 compute-0 sudo[164442]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:35 compute-0 sudo[164493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:36:35 compute-0 sudo[164539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndnldhahxywokrwnwxjegaacdjtjbggp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782595.6218665-64-120505447226960/AnsiballZ_systemd_service.py'
Nov 22 03:36:35 compute-0 sudo[164493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:35 compute-0 sudo[164539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:35 compute-0 sudo[164493]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:35 compute-0 sudo[164544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:36:35 compute-0 sudo[164544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:35 compute-0 sudo[164544]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:35 compute-0 sudo[164569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:36:35 compute-0 sudo[164569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:36 compute-0 python3.9[164543]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:36:36
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.meta', '.mgr', 'backups']
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:36:36 compute-0 sudo[164539]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:36:36 compute-0 podman[164642]: 2025-11-22 03:36:36.264668268 +0000 UTC m=+0.065625443 container create 0f9b6e0ab2412e7aacaab9b07a8822e8f28953d3b1ef07ea49151143be7fd760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mestorf, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:36:36 compute-0 systemd[1]: Started libpod-conmon-0f9b6e0ab2412e7aacaab9b07a8822e8f28953d3b1ef07ea49151143be7fd760.scope.
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:36:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:36:36 compute-0 podman[164642]: 2025-11-22 03:36:36.219473838 +0000 UTC m=+0.020431033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:36:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:36:36 compute-0 podman[164642]: 2025-11-22 03:36:36.379662545 +0000 UTC m=+0.180619740 container init 0f9b6e0ab2412e7aacaab9b07a8822e8f28953d3b1ef07ea49151143be7fd760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mestorf, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:36:36 compute-0 podman[164642]: 2025-11-22 03:36:36.386736602 +0000 UTC m=+0.187693777 container start 0f9b6e0ab2412e7aacaab9b07a8822e8f28953d3b1ef07ea49151143be7fd760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:36:36 compute-0 sad_mestorf[164707]: 167 167
Nov 22 03:36:36 compute-0 systemd[1]: libpod-0f9b6e0ab2412e7aacaab9b07a8822e8f28953d3b1ef07ea49151143be7fd760.scope: Deactivated successfully.
Nov 22 03:36:36 compute-0 podman[164642]: 2025-11-22 03:36:36.424913786 +0000 UTC m=+0.225870971 container attach 0f9b6e0ab2412e7aacaab9b07a8822e8f28953d3b1ef07ea49151143be7fd760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mestorf, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:36:36 compute-0 podman[164642]: 2025-11-22 03:36:36.425319142 +0000 UTC m=+0.226276317 container died 0f9b6e0ab2412e7aacaab9b07a8822e8f28953d3b1ef07ea49151143be7fd760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mestorf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 03:36:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-71282f99708d2e165e76f40e44aeadbb17aef9c9059c2ddfdaa8b5a3890b48bf-merged.mount: Deactivated successfully.
Nov 22 03:36:36 compute-0 sudo[164818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jncsbxslegpbygmcmjknywiyubibzwqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782596.5072734-64-262338220132289/AnsiballZ_systemd_service.py'
Nov 22 03:36:36 compute-0 sudo[164818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:36 compute-0 podman[164642]: 2025-11-22 03:36:36.711484561 +0000 UTC m=+0.512441776 container remove 0f9b6e0ab2412e7aacaab9b07a8822e8f28953d3b1ef07ea49151143be7fd760 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_mestorf, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:36:36 compute-0 systemd[1]: libpod-conmon-0f9b6e0ab2412e7aacaab9b07a8822e8f28953d3b1ef07ea49151143be7fd760.scope: Deactivated successfully.
Nov 22 03:36:36 compute-0 ceph-mon[75011]: pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:36 compute-0 python3.9[164820]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:36:36 compute-0 podman[164828]: 2025-11-22 03:36:36.960015312 +0000 UTC m=+0.079766949 container create 19d8b1d7c39c406445a42bce4c899c907fb62b5b1f5c3fa0bd7c07f3ec9d3d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:36:36 compute-0 sudo[164818]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:37 compute-0 podman[164828]: 2025-11-22 03:36:36.909231434 +0000 UTC m=+0.028983151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:36:37 compute-0 systemd[1]: Started libpod-conmon-19d8b1d7c39c406445a42bce4c899c907fb62b5b1f5c3fa0bd7c07f3ec9d3d53.scope.
Nov 22 03:36:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d473e571cf280a261cb8eb84b57e9619a717e4cfd55dc22b21434b23bf5310/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d473e571cf280a261cb8eb84b57e9619a717e4cfd55dc22b21434b23bf5310/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d473e571cf280a261cb8eb84b57e9619a717e4cfd55dc22b21434b23bf5310/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1d473e571cf280a261cb8eb84b57e9619a717e4cfd55dc22b21434b23bf5310/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:36:37 compute-0 podman[164828]: 2025-11-22 03:36:37.111460227 +0000 UTC m=+0.231211884 container init 19d8b1d7c39c406445a42bce4c899c907fb62b5b1f5c3fa0bd7c07f3ec9d3d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:36:37 compute-0 podman[164828]: 2025-11-22 03:36:37.119491146 +0000 UTC m=+0.239242773 container start 19d8b1d7c39c406445a42bce4c899c907fb62b5b1f5c3fa0bd7c07f3ec9d3d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:36:37 compute-0 podman[164828]: 2025-11-22 03:36:37.135253317 +0000 UTC m=+0.255004944 container attach 19d8b1d7c39c406445a42bce4c899c907fb62b5b1f5c3fa0bd7c07f3ec9d3d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 03:36:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:37 compute-0 sudo[165000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crhktxlpxeosgvcfkejczuspsasdgjfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782597.3228722-64-218714249233467/AnsiballZ_systemd_service.py'
Nov 22 03:36:37 compute-0 sudo[165000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:37 compute-0 python3.9[165002]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:36:37 compute-0 sudo[165000]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:38 compute-0 gallant_bartik[164870]: {
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "osd_id": 1,
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "type": "bluestore"
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:     },
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "osd_id": 0,
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "type": "bluestore"
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:     },
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "osd_id": 2,
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:         "type": "bluestore"
Nov 22 03:36:38 compute-0 gallant_bartik[164870]:     }
Nov 22 03:36:38 compute-0 gallant_bartik[164870]: }
Nov 22 03:36:38 compute-0 systemd[1]: libpod-19d8b1d7c39c406445a42bce4c899c907fb62b5b1f5c3fa0bd7c07f3ec9d3d53.scope: Deactivated successfully.
Nov 22 03:36:38 compute-0 podman[164828]: 2025-11-22 03:36:38.084038304 +0000 UTC m=+1.203789951 container died 19d8b1d7c39c406445a42bce4c899c907fb62b5b1f5c3fa0bd7c07f3ec9d3d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:36:38 compute-0 sudo[165192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tihvhtxomljpvdcjechgriteybfcunxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782598.1236327-64-82245589947032/AnsiballZ_systemd_service.py'
Nov 22 03:36:38 compute-0 sudo[165192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1d473e571cf280a261cb8eb84b57e9619a717e4cfd55dc22b21434b23bf5310-merged.mount: Deactivated successfully.
Nov 22 03:36:38 compute-0 podman[164828]: 2025-11-22 03:36:38.420662494 +0000 UTC m=+1.540414121 container remove 19d8b1d7c39c406445a42bce4c899c907fb62b5b1f5c3fa0bd7c07f3ec9d3d53 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:36:38 compute-0 systemd[1]: libpod-conmon-19d8b1d7c39c406445a42bce4c899c907fb62b5b1f5c3fa0bd7c07f3ec9d3d53.scope: Deactivated successfully.
Nov 22 03:36:38 compute-0 sudo[164569]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:36:38 compute-0 python3.9[165195]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:36:38 compute-0 sudo[165192]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:38 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:36:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:36:38 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:36:38 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 5aecb27c-e807-4c43-b1b7-b202408cf697 does not exist
Nov 22 03:36:38 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 3403a613-14d6-468d-83e1-a1369e5ebc6c does not exist
Nov 22 03:36:38 compute-0 sudo[165222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:36:38 compute-0 sudo[165222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:38 compute-0 sudo[165222]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:38 compute-0 sudo[165278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:36:38 compute-0 sudo[165278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:36:38 compute-0 sudo[165278]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:38 compute-0 ceph-mon[75011]: pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:36:38 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:36:39 compute-0 sudo[165396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqkjrmwfyvkyhibbjqbqnzagbeeppuby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782598.9181755-64-195780809028798/AnsiballZ_systemd_service.py'
Nov 22 03:36:39 compute-0 sudo[165396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:39 compute-0 python3.9[165398]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:36:39 compute-0 sudo[165396]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:39 compute-0 sudo[165549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gclfocixamcnbyluturjysprjgtuwuqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782599.6881485-64-252124151750095/AnsiballZ_systemd_service.py'
Nov 22 03:36:39 compute-0 sudo[165549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:40 compute-0 python3.9[165551]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:36:40 compute-0 sudo[165549]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:40 compute-0 sudo[165702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfubbbuscsahywsszugsiwfqygaarnwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782600.6428792-116-181840714830659/AnsiballZ_file.py'
Nov 22 03:36:40 compute-0 sudo[165702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:41 compute-0 ceph-mon[75011]: pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:41 compute-0 python3.9[165704]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:41 compute-0 sudo[165702]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:41 compute-0 sudo[165854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajcotgxvdfmoematyxzudzzjmmthfbst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782601.4853523-116-52382967157775/AnsiballZ_file.py'
Nov 22 03:36:41 compute-0 sudo[165854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:41 compute-0 python3.9[165856]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:41 compute-0 sudo[165854]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:42 compute-0 sudo[166006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhrfyveeadjmurgnohdawjudzokbtqxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782602.1472745-116-183437395658279/AnsiballZ_file.py'
Nov 22 03:36:42 compute-0 sudo[166006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:42 compute-0 python3.9[166008]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:42 compute-0 sudo[166006]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:43 compute-0 sudo[166158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssjiosygzdychtkyjpeoatgzwgodrwaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782602.9809232-116-52253196601815/AnsiballZ_file.py'
Nov 22 03:36:43 compute-0 sudo[166158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:44 compute-0 python3.9[166160]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:44 compute-0 sudo[166158]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:44 compute-0 sudo[166328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckmssvnoffxlmntdompyiojabcdbfeiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782604.5281148-116-162617539402162/AnsiballZ_file.py'
Nov 22 03:36:44 compute-0 sudo[166328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:44 compute-0 podman[166204]: 2025-11-22 03:36:44.74388735 +0000 UTC m=+0.415952141 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:36:44 compute-0 python3.9[166330]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:45 compute-0 sudo[166328]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:45 compute-0 sudo[166488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-posvjvvcuhcamurpupznffeodbrsgkcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782605.388789-116-211227758629221/AnsiballZ_file.py'
Nov 22 03:36:45 compute-0 sudo[166488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:36:45 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:36:46 compute-0 python3.9[166490]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:46 compute-0 sudo[166488]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:46 compute-0 sudo[166641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxpiaxmkmqsruzmcohxwigahczkddpcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782606.453035-116-7905910158434/AnsiballZ_file.py'
Nov 22 03:36:46 compute-0 sudo[166641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:47 compute-0 python3.9[166643]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:47 compute-0 sudo[166641]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 22 03:36:47 compute-0 sudo[166793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfqxoxspckkagppaozjzmmpnxbrffenx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782607.784531-166-263588704092380/AnsiballZ_file.py'
Nov 22 03:36:47 compute-0 sudo[166793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:48 compute-0 python3.9[166795]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:48 compute-0 sudo[166793]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:48 compute-0 ceph-mon[75011]: pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:48 compute-0 sudo[166946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxeznihjvsmzlumowuvigliooocyyfwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782608.6100686-166-56132916695859/AnsiballZ_file.py'
Nov 22 03:36:48 compute-0 sudo[166946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:49 compute-0 python3.9[166948]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:49 compute-0 sudo[166946]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:49 compute-0 podman[167026]: 2025-11-22 03:36:49.414791111 +0000 UTC m=+0.088531887 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 03:36:49 compute-0 sudo[167118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulcbcrlicsfxdomgueqctreujvhmcpwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782609.3946927-166-94705064948886/AnsiballZ_file.py'
Nov 22 03:36:49 compute-0 sudo[167118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 22 03:36:49 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.730096340s, txc = 0x55936c5eac00
Nov 22 03:36:50 compute-0 python3.9[167120]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:50 compute-0 sudo[167118]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:50 compute-0 sudo[167270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrgevhusqtumdfdgkvrpurmwvlwctiwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782610.846706-166-121687505795383/AnsiballZ_file.py'
Nov 22 03:36:50 compute-0 sudo[167270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:51 compute-0 python3.9[167272]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:51 compute-0 sudo[167270]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 22 03:36:51 compute-0 sudo[167422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttoygobgtidnefeiagqoincnmamtanlm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782611.6958733-166-161326659531479/AnsiballZ_file.py'
Nov 22 03:36:51 compute-0 sudo[167422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:52 compute-0 python3.9[167424]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:52 compute-0 sudo[167422]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:52 compute-0 sudo[167575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syeiwdcpgtzhflkwxyynoamuiwxoxqhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782612.547003-166-250672243109747/AnsiballZ_file.py'
Nov 22 03:36:52 compute-0 sudo[167575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:52 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 6.799674988s, txc = 0x55936e43e300
Nov 22 03:36:52 compute-0 ceph-mon[75011]: pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:52 compute-0 ceph-mon[75011]: pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:36:52 compute-0 ceph-mon[75011]: pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 22 03:36:52 compute-0 ceph-mon[75011]: pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 22 03:36:52 compute-0 ceph-mon[75011]: pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 22 03:36:53 compute-0 python3.9[167577]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:53 compute-0 sudo[167575]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:53 compute-0 sudo[167727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdotprzztwyqbvrkcckxtpzjxwcobzjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782613.422105-166-47536100644348/AnsiballZ_file.py'
Nov 22 03:36:53 compute-0 sudo[167727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 22 03:36:53 compute-0 python3.9[167729]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:36:53 compute-0 sudo[167727]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:54 compute-0 sudo[167879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oaxovkqzxgzukovrrurmjuyvvzlpknoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782614.3386717-217-127786234023586/AnsiballZ_command.py'
Nov 22 03:36:54 compute-0 sudo[167879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:54 compute-0 python3.9[167881]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:36:54 compute-0 sudo[167879]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:55 compute-0 ceph-mon[75011]: pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 22 03:36:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 22 03:36:55 compute-0 python3.9[168033]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 03:36:56 compute-0 sudo[168183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lphddwkdmwfjjvgdayggvyctmiuprsil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782616.0919976-235-228955832994229/AnsiballZ_systemd_service.py'
Nov 22 03:36:56 compute-0 sudo[168183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:56 compute-0 python3.9[168185]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:36:56 compute-0 systemd[1]: Reloading.
Nov 22 03:36:56 compute-0 systemd-rc-local-generator[168212]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:36:56 compute-0 systemd-sysv-generator[168215]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:36:56 compute-0 sudo[168183]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:57 compute-0 sudo[168370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqsyjewfrsmexrnzgxyscptzlhahmwec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782617.3009388-243-217942756076877/AnsiballZ_command.py'
Nov 22 03:36:57 compute-0 sudo[168370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:57 compute-0 ceph-mon[75011]: pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Nov 22 03:36:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 22 03:36:57 compute-0 python3.9[168372]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:36:57 compute-0 sudo[168370]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:36:58 compute-0 sudo[168523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfktmxncdzobzfvlxincmthpzahsxamh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782618.061922-243-51509464431445/AnsiballZ_command.py'
Nov 22 03:36:58 compute-0 sudo[168523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:58 compute-0 python3.9[168525]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:36:58 compute-0 sudo[168523]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:58 compute-0 ceph-mon[75011]: pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Nov 22 03:36:59 compute-0 sudo[168676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cydvqfkmeydtcuxltcvodklgwwwcucnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782619.0479617-243-95541169000066/AnsiballZ_command.py'
Nov 22 03:36:59 compute-0 sudo[168676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:36:59 compute-0 python3.9[168678]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:36:59 compute-0 sudo[168676]: pam_unix(sudo:session): session closed for user root
Nov 22 03:36:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Nov 22 03:37:00 compute-0 sudo[168829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbwmymcaprimwlxanjatkjvxplmnneyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782619.9151442-243-93603708317618/AnsiballZ_command.py'
Nov 22 03:37:00 compute-0 sudo[168829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:37:00 compute-0 python3.9[168831]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:37:00 compute-0 sudo[168829]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:01 compute-0 sudo[168982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhvrpkffqfugdiulprviceqmfpwhdghy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782620.8552465-243-27930420015114/AnsiballZ_command.py'
Nov 22 03:37:01 compute-0 sudo[168982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:37:01 compute-0 ceph-mon[75011]: pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 0 B/s wr, 14 op/s
Nov 22 03:37:01 compute-0 python3.9[168984]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:37:01 compute-0 sudo[168982]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Nov 22 03:37:01 compute-0 anacron[29935]: Job `cron.weekly' started
Nov 22 03:37:01 compute-0 anacron[29935]: Job `cron.weekly' terminated
Nov 22 03:37:01 compute-0 sudo[169137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnydwgozpbrlhpcomagzxmlokrwzsnfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782621.7340734-243-272250105249980/AnsiballZ_command.py'
Nov 22 03:37:01 compute-0 sudo[169137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:37:02 compute-0 python3.9[169139]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:37:02 compute-0 sudo[169137]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:02 compute-0 sudo[169290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrpvanpaelmrgvexoluwctgsnvwcadvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782622.5930955-243-88788497985170/AnsiballZ_command.py'
Nov 22 03:37:02 compute-0 sudo[169290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:37:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:03 compute-0 ceph-mon[75011]: pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 17 op/s
Nov 22 03:37:03 compute-0 python3.9[169292]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:37:03 compute-0 sudo[169290]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Nov 22 03:37:04 compute-0 sudo[169443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giwtgylozuvqhwcgyvcjpslpucwqpwky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782623.7597027-297-14532831826909/AnsiballZ_getent.py'
Nov 22 03:37:04 compute-0 sudo[169443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:37:04 compute-0 python3.9[169445]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 22 03:37:04 compute-0 sudo[169443]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:05 compute-0 sudo[169596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjsiermylsodcfhxzmirkgvzyukcomwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782624.7970767-305-77655513206922/AnsiballZ_group.py'
Nov 22 03:37:05 compute-0 sudo[169596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:37:05 compute-0 python3.9[169598]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 03:37:05 compute-0 ceph-mon[75011]: pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Nov 22 03:37:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Nov 22 03:37:05 compute-0 groupadd[169599]: group added to /etc/group: name=libvirt, GID=42473
Nov 22 03:37:05 compute-0 groupadd[169599]: group added to /etc/gshadow: name=libvirt
Nov 22 03:37:05 compute-0 groupadd[169599]: new group: name=libvirt, GID=42473
Nov 22 03:37:05 compute-0 sudo[169596]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:37:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:37:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:37:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:37:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:37:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:37:06 compute-0 sudo[169754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shkcokebsbfmpdzbpzcdltlverneywuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782626.2470412-313-19138312734529/AnsiballZ_user.py'
Nov 22 03:37:06 compute-0 sudo[169754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:37:06 compute-0 python3.9[169756]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 03:37:06 compute-0 useradd[169758]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Nov 22 03:37:06 compute-0 sudo[169754]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:07 compute-0 ceph-mon[75011]: pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 21 op/s
Nov 22 03:37:07 compute-0 sudo[169914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqgkaxvpbakkwbnfvirqvmpixbslfqvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782627.4035394-324-219082702343344/AnsiballZ_setup.py'
Nov 22 03:37:07 compute-0 sudo[169914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:37:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Nov 22 03:37:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:07 compute-0 python3.9[169916]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:37:08 compute-0 sudo[169914]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:08 compute-0 sudo[169998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zedxeykpurefuygqxtsixhonrcudgkgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782627.4035394-324-219082702343344/AnsiballZ_dnf.py'
Nov 22 03:37:08 compute-0 sudo[169998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:37:08 compute-0 python3.9[170000]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:37:09 compute-0 ceph-mon[75011]: pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 34 op/s
Nov 22 03:37:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 22 03:37:11 compute-0 ceph-mon[75011]: pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 22 03:37:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 22 03:37:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:13 compute-0 ceph-mon[75011]: pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 0 B/s wr, 45 op/s
Nov 22 03:37:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Nov 22 03:37:15 compute-0 podman[170011]: 2025-11-22 03:37:15.474720528 +0000 UTC m=+0.124337957 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 03:37:15 compute-0 ceph-mon[75011]: pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 41 op/s
Nov 22 03:37:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Nov 22 03:37:17 compute-0 ceph-mon[75011]: pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Nov 22 03:37:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Nov 22 03:37:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:19 compute-0 ceph-mon[75011]: pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Nov 22 03:37:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Nov 22 03:37:20 compute-0 podman[170143]: 2025-11-22 03:37:20.419664585 +0000 UTC m=+0.099358695 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 22 03:37:21 compute-0 ceph-mon[75011]: pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 24 op/s
Nov 22 03:37:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Nov 22 03:37:22 compute-0 ceph-mon[75011]: pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 0 B/s wr, 10 op/s
Nov 22 03:37:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:37:22.988 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:37:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:37:22.989 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:37:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:37:22.989 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:37:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:24 compute-0 ceph-mon[75011]: pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:26 compute-0 ceph-mon[75011]: pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:28 compute-0 ceph-mon[75011]: pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:30 compute-0 ceph-mon[75011]: pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:32 compute-0 ceph-mon[75011]: pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:34 compute-0 ceph-mon[75011]: pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:37:36
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'images', 'default.rgw.control', 'vms', 'backups', '.mgr']
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:37:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:37:36 compute-0 ceph-mon[75011]: pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:37 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 22 03:37:37 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:37:37 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:37:37 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:37:37 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:37:37 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:37:37 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:37:37 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:37:38 compute-0 ceph-mon[75011]: pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:38 compute-0 sudo[170242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:37:38 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Nov 22 03:37:38 compute-0 sudo[170242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:38 compute-0 sudo[170242]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:38 compute-0 sudo[170267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:37:38 compute-0 sudo[170267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:38 compute-0 sudo[170267]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:39 compute-0 sudo[170292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:37:39 compute-0 sudo[170292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:39 compute-0 sudo[170292]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:39 compute-0 sudo[170317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 03:37:39 compute-0 sudo[170317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:39 compute-0 podman[170416]: 2025-11-22 03:37:39.634742509 +0000 UTC m=+0.081248615 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:37:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:39 compute-0 podman[170416]: 2025-11-22 03:37:39.757856357 +0000 UTC m=+0.204362443 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:37:40 compute-0 sudo[170317]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:37:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:37:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:37:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:37:40 compute-0 sudo[170571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:37:40 compute-0 sudo[170571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:40 compute-0 sudo[170571]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:40 compute-0 sudo[170596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:37:40 compute-0 sudo[170596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:40 compute-0 sudo[170596]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:40 compute-0 sudo[170621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:37:40 compute-0 sudo[170621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:40 compute-0 sudo[170621]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:40 compute-0 sudo[170646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:37:40 compute-0 sudo[170646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:41 compute-0 sudo[170646]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:37:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:37:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:37:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:37:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:37:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:37:41 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 6ca38e4f-c70f-4137-b8d0-e8386ed4f50f does not exist
Nov 22 03:37:41 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 1ee7a073-19c3-4c73-b241-61c2b57f46d3 does not exist
Nov 22 03:37:41 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 6042e15b-a6b1-4a79-8338-4714d5f717f1 does not exist
Nov 22 03:37:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:37:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:37:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:37:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:37:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:37:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:37:41 compute-0 sudo[170702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:37:41 compute-0 sudo[170702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:41 compute-0 sudo[170702]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:41 compute-0 sudo[170727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:37:41 compute-0 sudo[170727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:41 compute-0 sudo[170727]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:41 compute-0 ceph-mon[75011]: pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:37:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:37:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:37:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:37:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:37:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:37:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:37:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:37:41 compute-0 sudo[170752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:37:41 compute-0 sudo[170752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:41 compute-0 sudo[170752]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:41 compute-0 sudo[170777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:37:41 compute-0 sudo[170777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:41 compute-0 podman[170844]: 2025-11-22 03:37:41.900672421 +0000 UTC m=+0.057127957 container create 0d909501669abbcc692edb2acc9a038d3e76492cbb722bb31e805800a8456b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:37:41 compute-0 systemd[1]: Started libpod-conmon-0d909501669abbcc692edb2acc9a038d3e76492cbb722bb31e805800a8456b37.scope.
Nov 22 03:37:41 compute-0 podman[170844]: 2025-11-22 03:37:41.870500173 +0000 UTC m=+0.026955759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:37:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:37:42 compute-0 podman[170844]: 2025-11-22 03:37:42.00984269 +0000 UTC m=+0.166298257 container init 0d909501669abbcc692edb2acc9a038d3e76492cbb722bb31e805800a8456b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:37:42 compute-0 podman[170844]: 2025-11-22 03:37:42.020233517 +0000 UTC m=+0.176689063 container start 0d909501669abbcc692edb2acc9a038d3e76492cbb722bb31e805800a8456b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:37:42 compute-0 podman[170844]: 2025-11-22 03:37:42.025194832 +0000 UTC m=+0.181650379 container attach 0d909501669abbcc692edb2acc9a038d3e76492cbb722bb31e805800a8456b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ride, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:37:42 compute-0 relaxed_ride[170860]: 167 167
Nov 22 03:37:42 compute-0 systemd[1]: libpod-0d909501669abbcc692edb2acc9a038d3e76492cbb722bb31e805800a8456b37.scope: Deactivated successfully.
Nov 22 03:37:42 compute-0 podman[170844]: 2025-11-22 03:37:42.031374248 +0000 UTC m=+0.187829764 container died 0d909501669abbcc692edb2acc9a038d3e76492cbb722bb31e805800a8456b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 03:37:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-44a44bc985a108a244662d998cd6f8fa693ae6bc2ac49c8a29432e482745b6eb-merged.mount: Deactivated successfully.
Nov 22 03:37:42 compute-0 podman[170844]: 2025-11-22 03:37:42.077917838 +0000 UTC m=+0.234373344 container remove 0d909501669abbcc692edb2acc9a038d3e76492cbb722bb31e805800a8456b37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_ride, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:37:42 compute-0 systemd[1]: libpod-conmon-0d909501669abbcc692edb2acc9a038d3e76492cbb722bb31e805800a8456b37.scope: Deactivated successfully.
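[annotation] The bare "167 167" printed by relaxed_ride (and repeated below by busy_lamarr and busy_bell) is cephadm probing the uid/gid of the ceph user inside the image before it mounts host directories; 167:167 is the ceph user and group in upstream Ceph containers. A minimal sketch of an equivalent probe, assuming the standard podman CLI; the exact invocation cephadm uses may differ:

```python
# Sketch of a uid/gid probe like the one behind the "167 167" line: run the
# image with `stat` as entrypoint and read the owner of /var/lib/ceph.
# (cephadm does something equivalent; treat this invocation as illustrative.)
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

out = subprocess.check_output(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    text=True)
uid, gid = (int(x) for x in out.split())
print(uid, gid)   # -> 167 167 for upstream Ceph images
```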
Nov 22 03:37:42 compute-0 podman[170884]: 2025-11-22 03:37:42.260507648 +0000 UTC m=+0.042958188 container create 678799abd90e0a4856103f815abda2d10e2eb00c5895eacc75f47cc826a6936e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:37:42 compute-0 systemd[1]: Started libpod-conmon-678799abd90e0a4856103f815abda2d10e2eb00c5895eacc75f47cc826a6936e.scope.
Nov 22 03:37:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:37:42 compute-0 podman[170884]: 2025-11-22 03:37:42.242628329 +0000 UTC m=+0.025078879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e920eb65ce82e1e286aea3d4db0ee25e8962abf1f629da42715ed5688e0a57af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e920eb65ce82e1e286aea3d4db0ee25e8962abf1f629da42715ed5688e0a57af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e920eb65ce82e1e286aea3d4db0ee25e8962abf1f629da42715ed5688e0a57af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e920eb65ce82e1e286aea3d4db0ee25e8962abf1f629da42715ed5688e0a57af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e920eb65ce82e1e286aea3d4db0ee25e8962abf1f629da42715ed5688e0a57af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:42 compute-0 podman[170884]: 2025-11-22 03:37:42.348456014 +0000 UTC m=+0.130906574 container init 678799abd90e0a4856103f815abda2d10e2eb00c5895eacc75f47cc826a6936e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hellman, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:37:42 compute-0 podman[170884]: 2025-11-22 03:37:42.358479663 +0000 UTC m=+0.140930194 container start 678799abd90e0a4856103f815abda2d10e2eb00c5895eacc75f47cc826a6936e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 03:37:42 compute-0 podman[170884]: 2025-11-22 03:37:42.36198121 +0000 UTC m=+0.144431750 container attach 678799abd90e0a4856103f815abda2d10e2eb00c5895eacc75f47cc826a6936e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:37:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:43 compute-0 brave_hellman[170900]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:37:43 compute-0 brave_hellman[170900]: --> relative data size: 1.0
Nov 22 03:37:43 compute-0 brave_hellman[170900]: --> All data devices are unavailable
Nov 22 03:37:43 compute-0 systemd[1]: libpod-678799abd90e0a4856103f815abda2d10e2eb00c5895eacc75f47cc826a6936e.scope: Deactivated successfully.
Nov 22 03:37:43 compute-0 systemd[1]: libpod-678799abd90e0a4856103f815abda2d10e2eb00c5895eacc75f47cc826a6936e.scope: Consumed 1.017s CPU time.
Nov 22 03:37:43 compute-0 podman[170884]: 2025-11-22 03:37:43.417864273 +0000 UTC m=+1.200314813 container died 678799abd90e0a4856103f815abda2d10e2eb00c5895eacc75f47cc826a6936e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:37:43 compute-0 ceph-mon[75011]: pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e920eb65ce82e1e286aea3d4db0ee25e8962abf1f629da42715ed5688e0a57af-merged.mount: Deactivated successfully.
Nov 22 03:37:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:43 compute-0 podman[170884]: 2025-11-22 03:37:43.716402748 +0000 UTC m=+1.498853298 container remove 678799abd90e0a4856103f815abda2d10e2eb00c5895eacc75f47cc826a6936e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:37:43 compute-0 systemd[1]: libpod-conmon-678799abd90e0a4856103f815abda2d10e2eb00c5895eacc75f47cc826a6936e.scope: Deactivated successfully.
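[annotation] The three brave_hellman lines above are ceph-volume's batch-style report for the drive group: 0 physical disks and 3 LVM devices were passed in, and all three were rejected as unavailable because each LV already backs an OSD (confirmed by the lvm list output further down), so no new OSDs are created. A rough sketch of that availability filter, assuming rejection is driven by existing ceph.* lv_tags (the real ceph-volume logic checks more conditions):

```python
# Rough sketch (not ceph-volume's actual code) of the filter behind
# "All data devices are unavailable": an LVM data device is rejected
# when its LV already carries ceph.* tags, i.e. it already backs an OSD.
def available_data_devices(lvs: list[dict]) -> list[str]:
    usable = []
    for lv in lvs:
        if lv.get("lv_tags"):        # e.g. "ceph.osd_id=0,...,ceph.type=block"
            continue                 # already an OSD block device -> unavailable
        usable.append(lv["lv_path"])
    return usable

candidates = [{"lv_path": f"/dev/ceph_vg{i}/ceph_lv{i}",
               "lv_tags": "ceph.type=block"} for i in range(3)]
print(available_data_devices(candidates))   # [] -> nothing new to deploy
```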
Nov 22 03:37:43 compute-0 sudo[170777]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:43 compute-0 sudo[170943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:37:43 compute-0 sudo[170943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:43 compute-0 sudo[170943]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:43 compute-0 sudo[170968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:37:43 compute-0 sudo[170968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:43 compute-0 sudo[170968]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:43 compute-0 sudo[170993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:37:43 compute-0 sudo[170993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:43 compute-0 sudo[170993]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:44 compute-0 sudo[171018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:37:44 compute-0 sudo[171018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:44 compute-0 podman[171084]: 2025-11-22 03:37:44.413903827 +0000 UTC m=+0.049628147 container create 0835e9344c731c3be79963751a6035ffc83a0935a687a06185767993b816e166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamarr, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:37:44 compute-0 systemd[1]: Started libpod-conmon-0835e9344c731c3be79963751a6035ffc83a0935a687a06185767993b816e166.scope.
Nov 22 03:37:44 compute-0 podman[171084]: 2025-11-22 03:37:44.392954768 +0000 UTC m=+0.028679139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:37:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:37:44 compute-0 podman[171084]: 2025-11-22 03:37:44.516968853 +0000 UTC m=+0.152693213 container init 0835e9344c731c3be79963751a6035ffc83a0935a687a06185767993b816e166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 03:37:44 compute-0 podman[171084]: 2025-11-22 03:37:44.528521251 +0000 UTC m=+0.164245541 container start 0835e9344c731c3be79963751a6035ffc83a0935a687a06185767993b816e166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamarr, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:37:44 compute-0 podman[171084]: 2025-11-22 03:37:44.532931755 +0000 UTC m=+0.168656065 container attach 0835e9344c731c3be79963751a6035ffc83a0935a687a06185767993b816e166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:37:44 compute-0 busy_lamarr[171100]: 167 167
Nov 22 03:37:44 compute-0 systemd[1]: libpod-0835e9344c731c3be79963751a6035ffc83a0935a687a06185767993b816e166.scope: Deactivated successfully.
Nov 22 03:37:44 compute-0 podman[171084]: 2025-11-22 03:37:44.535974356 +0000 UTC m=+0.171698667 container died 0835e9344c731c3be79963751a6035ffc83a0935a687a06185767993b816e166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:37:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9b966672306c70c1ce75691189e816599a2135ef443c38922d8b971bee36b8f-merged.mount: Deactivated successfully.
Nov 22 03:37:44 compute-0 podman[171084]: 2025-11-22 03:37:44.603692752 +0000 UTC m=+0.239417073 container remove 0835e9344c731c3be79963751a6035ffc83a0935a687a06185767993b816e166 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:37:44 compute-0 systemd[1]: libpod-conmon-0835e9344c731c3be79963751a6035ffc83a0935a687a06185767993b816e166.scope: Deactivated successfully.
Nov 22 03:37:44 compute-0 podman[171124]: 2025-11-22 03:37:44.853741827 +0000 UTC m=+0.062491916 container create 65e57530411159089e1bac108a1e81d5d31ca7d899aa68dee25d71d3fa04929c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:37:44 compute-0 systemd[1]: Started libpod-conmon-65e57530411159089e1bac108a1e81d5d31ca7d899aa68dee25d71d3fa04929c.scope.
Nov 22 03:37:44 compute-0 podman[171124]: 2025-11-22 03:37:44.833093041 +0000 UTC m=+0.041843170 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:37:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9461c45dceb861cfe9359525eee3d667677e46ccbf36dc5fe672c816b687c6f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9461c45dceb861cfe9359525eee3d667677e46ccbf36dc5fe672c816b687c6f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9461c45dceb861cfe9359525eee3d667677e46ccbf36dc5fe672c816b687c6f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9461c45dceb861cfe9359525eee3d667677e46ccbf36dc5fe672c816b687c6f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:44 compute-0 podman[171124]: 2025-11-22 03:37:44.952582561 +0000 UTC m=+0.161332731 container init 65e57530411159089e1bac108a1e81d5d31ca7d899aa68dee25d71d3fa04929c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:37:44 compute-0 podman[171124]: 2025-11-22 03:37:44.964088966 +0000 UTC m=+0.172839095 container start 65e57530411159089e1bac108a1e81d5d31ca7d899aa68dee25d71d3fa04929c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:37:44 compute-0 podman[171124]: 2025-11-22 03:37:44.973925373 +0000 UTC m=+0.182675542 container attach 65e57530411159089e1bac108a1e81d5d31ca7d899aa68dee25d71d3fa04929c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:37:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]: {
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:     "0": [
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:         {
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "devices": [
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "/dev/loop3"
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             ],
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_name": "ceph_lv0",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_size": "21470642176",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "name": "ceph_lv0",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "tags": {
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.cluster_name": "ceph",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.crush_device_class": "",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.encrypted": "0",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.osd_id": "0",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.type": "block",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.vdo": "0"
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             },
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "type": "block",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "vg_name": "ceph_vg0"
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:         }
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:     ],
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:     "1": [
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:         {
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "devices": [
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "/dev/loop4"
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             ],
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_name": "ceph_lv1",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_size": "21470642176",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "name": "ceph_lv1",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "tags": {
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.cluster_name": "ceph",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.crush_device_class": "",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.encrypted": "0",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.osd_id": "1",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.type": "block",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.vdo": "0"
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             },
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "type": "block",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "vg_name": "ceph_vg1"
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:         }
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:     ],
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:     "2": [
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:         {
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "devices": [
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "/dev/loop5"
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             ],
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_name": "ceph_lv2",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_size": "21470642176",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "name": "ceph_lv2",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "tags": {
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.cluster_name": "ceph",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.crush_device_class": "",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.encrypted": "0",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.osd_id": "2",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.type": "block",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:                 "ceph.vdo": "0"
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             },
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "type": "block",
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:             "vg_name": "ceph_vg2"
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:         }
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]:     ]
Nov 22 03:37:45 compute-0 dazzling_beaver[171141]: }
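[annotation] The JSON block printed by dazzling_beaver is the payload of the cephadm-wrapped `ceph-volume lvm list --format json` call above: one top-level key per OSD id, each entry naming the backing LV, its physical device (loop devices here), and the ceph.* LV tags. A minimal sketch that condenses it into one line per OSD; the helper and the lvm_list.json filename are illustrative, not part of cephadm:

```python
# Hypothetical helper (not part of cephadm): condense the lvm-list JSON above
# into one line per OSD, assuming it was saved to lvm_list.json.
import json

with open("lvm_list.json") as f:
    lvm = json.load(f)               # top-level keys are OSD ids: "0", "1", "2"

for osd_id, entries in sorted(lvm.items(), key=lambda kv: int(kv[0])):
    for lv in entries:
        tags = lv["tags"]
        print(f"osd.{osd_id}: lv={lv['lv_path']} "
              f"pv={','.join(lv['devices'])} "
              f"osd_fsid={tags['ceph.osd_fsid']}")
```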
Nov 22 03:37:45 compute-0 systemd[1]: libpod-65e57530411159089e1bac108a1e81d5d31ca7d899aa68dee25d71d3fa04929c.scope: Deactivated successfully.
Nov 22 03:37:45 compute-0 podman[171124]: 2025-11-22 03:37:45.740653486 +0000 UTC m=+0.949403576 container died 65e57530411159089e1bac108a1e81d5d31ca7d899aa68dee25d71d3fa04929c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:37:45 compute-0 ceph-mon[75011]: pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:37:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
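[annotation] The pg_autoscaler pass above is straightforward arithmetic: 64411926528 bytes is the raw capacity (3 × 21470642176-byte LVs, the 60 GiB shown in the pgmap lines), each pool's raw target is its capacity ratio × bias × (100 PGs per OSD × 3 OSDs), and the result is rounded to a power of two but never allowed below the pool's pg_num_min. A simplified sketch that reproduces the logged values; the constants (mon_target_pg_per_osd=100, module-wide minimum 32, per-pool overrides of 1 for '.mgr' and 16 for CephFS metadata) are assumptions consistent with the defaults, and the real implementation in src/pybind/mgr/pg_autoscaler/module.py handles more cases:

```python
# Simplified sketch of the autoscaler math logged above.
def nearest_power_of_two(x: float) -> int:
    n = max(int(x), 1)
    hi = 1 << (n - 1).bit_length()   # next power of two >= n
    lo = max(hi >> 1, 1)             # previous power of two
    return lo if n - lo < hi - n else hi

def pg_target(capacity_ratio: float, bias: float, pg_num_min: int = 32,
              osds: int = 3, target_pg_per_osd: int = 100) -> int:
    raw = capacity_ratio * bias * target_pg_per_osd * osds
    return max(pg_num_min, nearest_power_of_two(raw))

print(pg_target(7.185749983720779e-06, 1.0, pg_num_min=1))   # '.mgr' -> 1
print(pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))  # cephfs meta -> 16
print(pg_target(0.0, 1.0))                                   # 'vms' etc. -> 32
```

Since every quantized value equals the current pg_num, the autoscaler proposes no resizes on this pass.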
Nov 22 03:37:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-9461c45dceb861cfe9359525eee3d667677e46ccbf36dc5fe672c816b687c6f7-merged.mount: Deactivated successfully.
Nov 22 03:37:46 compute-0 podman[171124]: 2025-11-22 03:37:46.21872253 +0000 UTC m=+1.427472629 container remove 65e57530411159089e1bac108a1e81d5d31ca7d899aa68dee25d71d3fa04929c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:37:46 compute-0 systemd[1]: libpod-conmon-65e57530411159089e1bac108a1e81d5d31ca7d899aa68dee25d71d3fa04929c.scope: Deactivated successfully.
Nov 22 03:37:46 compute-0 sudo[171018]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:46 compute-0 podman[171151]: 2025-11-22 03:37:46.343259315 +0000 UTC m=+0.568211423 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
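[annotation] This health_status line is podman's healthcheck timer running the test configured for ovn_controller ('/openstack/healthcheck', mounted read-only from /var/lib/openstack/healthchecks/ovn_controller) and reporting healthy with a failing streak of 0. The same check can be driven by hand; container name and test path below are exactly as in the log line:

```python
# Two manual equivalents of the timer-driven check.
import subprocess

# Ask podman to run the container's configured healthcheck:
subprocess.run(["podman", "healthcheck", "run", "ovn_controller"], check=True)

# Or execute the configured test command directly inside the container:
subprocess.run(["podman", "exec", "ovn_controller", "/openstack/healthcheck"],
               check=True)
```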
Nov 22 03:37:46 compute-0 sudo[171188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:37:46 compute-0 sudo[171188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:46 compute-0 sudo[171188]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:46 compute-0 sudo[171214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:37:46 compute-0 sudo[171214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:46 compute-0 sudo[171214]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:46 compute-0 sudo[171239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:37:46 compute-0 sudo[171239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:46 compute-0 sudo[171239]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:46 compute-0 sudo[171264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:37:46 compute-0 sudo[171264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:47 compute-0 podman[171329]: 2025-11-22 03:37:47.016693008 +0000 UTC m=+0.104004059 container create cade12e5fdaf23d98a932f4a03ededa85a6d13f066cd3dee8832678bcaef39ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:37:47 compute-0 podman[171329]: 2025-11-22 03:37:46.941936458 +0000 UTC m=+0.029247559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:37:47 compute-0 systemd[1]: Started libpod-conmon-cade12e5fdaf23d98a932f4a03ededa85a6d13f066cd3dee8832678bcaef39ab.scope.
Nov 22 03:37:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:37:47 compute-0 ceph-mon[75011]: pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:47 compute-0 podman[171329]: 2025-11-22 03:37:47.313202699 +0000 UTC m=+0.400513809 container init cade12e5fdaf23d98a932f4a03ededa85a6d13f066cd3dee8832678bcaef39ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:37:47 compute-0 podman[171329]: 2025-11-22 03:37:47.325848344 +0000 UTC m=+0.413159395 container start cade12e5fdaf23d98a932f4a03ededa85a6d13f066cd3dee8832678bcaef39ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:37:47 compute-0 busy_bell[171349]: 167 167
Nov 22 03:37:47 compute-0 systemd[1]: libpod-cade12e5fdaf23d98a932f4a03ededa85a6d13f066cd3dee8832678bcaef39ab.scope: Deactivated successfully.
Nov 22 03:37:47 compute-0 podman[171329]: 2025-11-22 03:37:47.529450821 +0000 UTC m=+0.616761891 container attach cade12e5fdaf23d98a932f4a03ededa85a6d13f066cd3dee8832678bcaef39ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:37:47 compute-0 podman[171329]: 2025-11-22 03:37:47.530091881 +0000 UTC m=+0.617402912 container died cade12e5fdaf23d98a932f4a03ededa85a6d13f066cd3dee8832678bcaef39ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:37:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f26b02a0acc571ab420c51e7c11cfbcb01edd677a3af14b86dbfb8114363fb80-merged.mount: Deactivated successfully.
Nov 22 03:37:48 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Nov 22 03:37:48 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:37:48 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:37:48 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:37:48 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:37:48 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:37:48 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:37:48 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:37:48 compute-0 podman[171329]: 2025-11-22 03:37:48.338545301 +0000 UTC m=+1.425856342 container remove cade12e5fdaf23d98a932f4a03ededa85a6d13f066cd3dee8832678bcaef39ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_bell, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:37:48 compute-0 systemd[1]: libpod-conmon-cade12e5fdaf23d98a932f4a03ededa85a6d13f066cd3dee8832678bcaef39ab.scope: Deactivated successfully.
Nov 22 03:37:48 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 22 03:37:48 compute-0 podman[171377]: 2025-11-22 03:37:48.523404444 +0000 UTC m=+0.038007435 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:37:48 compute-0 podman[171377]: 2025-11-22 03:37:48.669382989 +0000 UTC m=+0.183985960 container create f66b367a9b67fb612c124edf40e2708cca183368a83e7dd8aa095b9f8ed7f0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_rhodes, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:37:48 compute-0 systemd[1]: Started libpod-conmon-f66b367a9b67fb612c124edf40e2708cca183368a83e7dd8aa095b9f8ed7f0f7.scope.
Nov 22 03:37:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc0efeddef9ce3ff62b951d97c94485922d6ed5dfd6cea22c31e5a3b14755ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc0efeddef9ce3ff62b951d97c94485922d6ed5dfd6cea22c31e5a3b14755ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc0efeddef9ce3ff62b951d97c94485922d6ed5dfd6cea22c31e5a3b14755ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cc0efeddef9ce3ff62b951d97c94485922d6ed5dfd6cea22c31e5a3b14755ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:37:49 compute-0 podman[171377]: 2025-11-22 03:37:49.225928875 +0000 UTC m=+0.740531956 container init f66b367a9b67fb612c124edf40e2708cca183368a83e7dd8aa095b9f8ed7f0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_rhodes, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:37:49 compute-0 podman[171377]: 2025-11-22 03:37:49.237237788 +0000 UTC m=+0.751840769 container start f66b367a9b67fb612c124edf40e2708cca183368a83e7dd8aa095b9f8ed7f0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_rhodes, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:37:49 compute-0 podman[171377]: 2025-11-22 03:37:49.375058671 +0000 UTC m=+0.889661691 container attach f66b367a9b67fb612c124edf40e2708cca183368a83e7dd8aa095b9f8ed7f0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_rhodes, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:37:49 compute-0 ceph-mon[75011]: pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]: {
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "osd_id": 1,
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "type": "bluestore"
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:     },
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "osd_id": 0,
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "type": "bluestore"
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:     },
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "osd_id": 2,
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:         "type": "bluestore"
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]:     }
Nov 22 03:37:50 compute-0 suspicious_rhodes[171393]: }
Nov 22 03:37:50 compute-0 systemd[1]: libpod-f66b367a9b67fb612c124edf40e2708cca183368a83e7dd8aa095b9f8ed7f0f7.scope: Deactivated successfully.
Nov 22 03:37:50 compute-0 systemd[1]: libpod-f66b367a9b67fb612c124edf40e2708cca183368a83e7dd8aa095b9f8ed7f0f7.scope: Consumed 1.159s CPU time.
Nov 22 03:37:50 compute-0 podman[171426]: 2025-11-22 03:37:50.431800807 +0000 UTC m=+0.027708753 container died f66b367a9b67fb612c124edf40e2708cca183368a83e7dd8aa095b9f8ed7f0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 03:37:50 compute-0 ceph-mon[75011]: pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cc0efeddef9ce3ff62b951d97c94485922d6ed5dfd6cea22c31e5a3b14755ec-merged.mount: Deactivated successfully.
Nov 22 03:37:50 compute-0 podman[171426]: 2025-11-22 03:37:50.860743451 +0000 UTC m=+0.456651427 container remove f66b367a9b67fb612c124edf40e2708cca183368a83e7dd8aa095b9f8ed7f0f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_rhodes, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:37:50 compute-0 systemd[1]: libpod-conmon-f66b367a9b67fb612c124edf40e2708cca183368a83e7dd8aa095b9f8ed7f0f7.scope: Deactivated successfully.
Nov 22 03:37:50 compute-0 sudo[171264]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:50 compute-0 podman[171441]: 2025-11-22 03:37:50.920779423 +0000 UTC m=+0.220422828 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 03:37:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:37:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:37:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:37:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:37:50 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 9d540c82-a46a-4630-abd5-acb54ad9f6cf does not exist
Nov 22 03:37:50 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev d03d04ce-abfb-4e27-867f-60350fefef27 does not exist
Nov 22 03:37:51 compute-0 sudo[171460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:37:51 compute-0 sudo[171460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:51 compute-0 sudo[171460]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:51 compute-0 sudo[171485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:37:51 compute-0 sudo[171485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:37:51 compute-0 sudo[171485]: pam_unix(sudo:session): session closed for user root
Nov 22 03:37:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:37:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:37:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:53 compute-0 ceph-mon[75011]: pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:55 compute-0 ceph-mon[75011]: pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:56 compute-0 ceph-mon[75011]: pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:37:58 compute-0 ceph-mon[75011]: pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:37:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:01 compute-0 ceph-mon[75011]: pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:38:03 compute-0 ceph-mon[75011]: pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:05 compute-0 ceph-mon[75011]: pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:38:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:38:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:38:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:38:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:38:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:38:07 compute-0 ceph-mon[75011]: pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:38:09 compute-0 ceph-mon[75011]: pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:11 compute-0 ceph-mon[75011]: pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:38:13 compute-0 ceph-mon[75011]: pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:15 compute-0 ceph-mon[75011]: pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:17 compute-0 ceph-mon[75011]: pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:17 compute-0 podman[179812]: 2025-11-22 03:38:17.44617237 +0000 UTC m=+0.123033808 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 03:38:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:38:19 compute-0 ceph-mon[75011]: pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:21 compute-0 podman[182009]: 2025-11-22 03:38:21.395928053 +0000 UTC m=+0.073839280 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 22 03:38:21 compute-0 ceph-mon[75011]: pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:22 compute-0 ceph-mon[75011]: pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:38:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:38:22.989 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:38:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:38:22.990 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:38:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:38:22.990 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:38:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:24 compute-0 ceph-mon[75011]: pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:26 compute-0 ceph-mon[75011]: pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:38:29 compute-0 ceph-mon[75011]: pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:31 compute-0 ceph-mon[75011]: pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:38:33 compute-0 ceph-mon[75011]: pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:35 compute-0 ceph-mon[75011]: pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:38:36
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'vms', '.rgw.root', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta']
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:38:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:38:37 compute-0 ceph-mon[75011]: pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:38:39 compute-0 ceph-mon[75011]: pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:41 compute-0 ceph-mon[75011]: pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:38:43 compute-0 ceph-mon[75011]: pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:38:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:38:47 compute-0 ceph-mon[75011]: pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:38:47 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Nov 22 03:38:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Nov 22 03:38:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Nov 22 03:38:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Nov 22 03:38:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Nov 22 03:38:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 22 03:38:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 22 03:38:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 22 03:38:48 compute-0 ceph-mon[75011]: pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:48 compute-0 podman[188355]: 2025-11-22 03:38:48.653610427 +0000 UTC m=+0.091421596 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:38:48 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 22 03:38:49 compute-0 ceph-mon[75011]: pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:49 compute-0 groupadd[188387]: group added to /etc/group: name=dnsmasq, GID=991
Nov 22 03:38:49 compute-0 groupadd[188387]: group added to /etc/gshadow: name=dnsmasq
Nov 22 03:38:49 compute-0 groupadd[188387]: new group: name=dnsmasq, GID=991
Nov 22 03:38:49 compute-0 useradd[188394]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Nov 22 03:38:49 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Nov 22 03:38:49 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Nov 22 03:38:50 compute-0 groupadd[188407]: group added to /etc/group: name=clevis, GID=990
Nov 22 03:38:50 compute-0 groupadd[188407]: group added to /etc/gshadow: name=clevis
Nov 22 03:38:50 compute-0 groupadd[188407]: new group: name=clevis, GID=990
Nov 22 03:38:50 compute-0 useradd[188414]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Nov 22 03:38:50 compute-0 usermod[188424]: add 'clevis' to group 'tss'
Nov 22 03:38:50 compute-0 usermod[188424]: add 'clevis' to shadow group 'tss'
Nov 22 03:38:51 compute-0 sudo[188431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:38:51 compute-0 sudo[188431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:51 compute-0 sudo[188431]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:51 compute-0 sudo[188456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:38:51 compute-0 sudo[188456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:51 compute-0 sudo[188456]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:51 compute-0 sudo[188482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:38:51 compute-0 sudo[188482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:51 compute-0 sudo[188482]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:51 compute-0 sudo[188509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:38:51 compute-0 sudo[188509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:51 compute-0 podman[188533]: 2025-11-22 03:38:51.523314778 +0000 UTC m=+0.076109274 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, managed_by=edpm_ansible)
Nov 22 03:38:51 compute-0 ceph-mon[75011]: pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:51 compute-0 sudo[188509]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:38:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:38:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:38:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:38:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:38:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:38:51 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 13e82ca5-6077-48d5-b0c5-c9bb1bb27688 does not exist
Nov 22 03:38:51 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 74f3e341-8e87-45cb-99b6-30067684cf2a does not exist
Nov 22 03:38:51 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 831ecfb9-f5bf-488a-8412-5b380de58900 does not exist
Nov 22 03:38:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:38:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:38:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:38:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:38:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:38:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:38:52 compute-0 sudo[188598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:38:52 compute-0 sudo[188598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:52 compute-0 sudo[188598]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:52 compute-0 sudo[188623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:38:52 compute-0 sudo[188623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:52 compute-0 sudo[188623]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:52 compute-0 sudo[188648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:38:52 compute-0 sudo[188648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:52 compute-0 sudo[188648]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:52 compute-0 sudo[188673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:38:52 compute-0 sudo[188673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:38:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:38:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:38:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:38:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:38:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:38:52 compute-0 podman[188739]: 2025-11-22 03:38:52.638575319 +0000 UTC m=+0.101641832 container create 6472c640814c0370c7ce589763ad1cfcac9474d6706a6d951b1f678c78f9f3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:38:52 compute-0 podman[188739]: 2025-11-22 03:38:52.565635603 +0000 UTC m=+0.028702176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:38:52 compute-0 systemd[1]: Started libpod-conmon-6472c640814c0370c7ce589763ad1cfcac9474d6706a6d951b1f678c78f9f3a5.scope.
Nov 22 03:38:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:38:52 compute-0 podman[188739]: 2025-11-22 03:38:52.731403674 +0000 UTC m=+0.194470217 container init 6472c640814c0370c7ce589763ad1cfcac9474d6706a6d951b1f678c78f9f3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:38:52 compute-0 podman[188739]: 2025-11-22 03:38:52.741595405 +0000 UTC m=+0.204661918 container start 6472c640814c0370c7ce589763ad1cfcac9474d6706a6d951b1f678c78f9f3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:38:52 compute-0 infallible_bardeen[188755]: 167 167
Nov 22 03:38:52 compute-0 podman[188739]: 2025-11-22 03:38:52.747067763 +0000 UTC m=+0.210134306 container attach 6472c640814c0370c7ce589763ad1cfcac9474d6706a6d951b1f678c78f9f3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:38:52 compute-0 systemd[1]: libpod-6472c640814c0370c7ce589763ad1cfcac9474d6706a6d951b1f678c78f9f3a5.scope: Deactivated successfully.
Nov 22 03:38:52 compute-0 podman[188739]: 2025-11-22 03:38:52.747864265 +0000 UTC m=+0.210930778 container died 6472c640814c0370c7ce589763ad1cfcac9474d6706a6d951b1f678c78f9f3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:38:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-392245ab619fb2c4bf5019a6305fd62f245d658a1d4f0ec110bc874c7d65a7bd-merged.mount: Deactivated successfully.
Nov 22 03:38:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:38:52 compute-0 podman[188739]: 2025-11-22 03:38:52.810828671 +0000 UTC m=+0.273895184 container remove 6472c640814c0370c7ce589763ad1cfcac9474d6706a6d951b1f678c78f9f3a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Nov 22 03:38:52 compute-0 systemd[1]: libpod-conmon-6472c640814c0370c7ce589763ad1cfcac9474d6706a6d951b1f678c78f9f3a5.scope: Deactivated successfully.
Nov 22 03:38:53 compute-0 podman[188779]: 2025-11-22 03:38:53.038470429 +0000 UTC m=+0.072862562 container create 5b7b1dc1448c8d5cf4c7922d0a89f5d340cf580ad98cbdf9f3636083dac6ad27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_pike, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:38:53 compute-0 systemd[1]: Started libpod-conmon-5b7b1dc1448c8d5cf4c7922d0a89f5d340cf580ad98cbdf9f3636083dac6ad27.scope.
Nov 22 03:38:53 compute-0 podman[188779]: 2025-11-22 03:38:53.008350821 +0000 UTC m=+0.042742974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:38:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829e9092e924c17a810322b9778082b49d809db05cbbbf12fab0cd37812c8119/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829e9092e924c17a810322b9778082b49d809db05cbbbf12fab0cd37812c8119/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829e9092e924c17a810322b9778082b49d809db05cbbbf12fab0cd37812c8119/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829e9092e924c17a810322b9778082b49d809db05cbbbf12fab0cd37812c8119/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/829e9092e924c17a810322b9778082b49d809db05cbbbf12fab0cd37812c8119/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:53 compute-0 podman[188779]: 2025-11-22 03:38:53.150487387 +0000 UTC m=+0.184879530 container init 5b7b1dc1448c8d5cf4c7922d0a89f5d340cf580ad98cbdf9f3636083dac6ad27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:38:53 compute-0 podman[188779]: 2025-11-22 03:38:53.159035844 +0000 UTC m=+0.193427967 container start 5b7b1dc1448c8d5cf4c7922d0a89f5d340cf580ad98cbdf9f3636083dac6ad27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_pike, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:38:53 compute-0 podman[188779]: 2025-11-22 03:38:53.1646554 +0000 UTC m=+0.199047533 container attach 5b7b1dc1448c8d5cf4c7922d0a89f5d340cf580ad98cbdf9f3636083dac6ad27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:38:53 compute-0 polkitd[43398]: Reloading rules
Nov 22 03:38:53 compute-0 polkitd[43398]: Collecting garbage unconditionally...
Nov 22 03:38:53 compute-0 polkitd[43398]: Loading rules from directory /etc/polkit-1/rules.d
Nov 22 03:38:53 compute-0 polkitd[43398]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 22 03:38:53 compute-0 polkitd[43398]: Finished loading, compiling and executing 3 rules
Nov 22 03:38:53 compute-0 polkitd[43398]: Reloading rules
Nov 22 03:38:53 compute-0 polkitd[43398]: Collecting garbage unconditionally...
Nov 22 03:38:53 compute-0 polkitd[43398]: Loading rules from directory /etc/polkit-1/rules.d
Nov 22 03:38:53 compute-0 polkitd[43398]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 22 03:38:53 compute-0 polkitd[43398]: Finished loading, compiling and executing 3 rules
Nov 22 03:38:53 compute-0 ceph-mon[75011]: pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:54 compute-0 objective_pike[188797]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:38:54 compute-0 objective_pike[188797]: --> relative data size: 1.0
Nov 22 03:38:54 compute-0 objective_pike[188797]: --> All data devices are unavailable
Nov 22 03:38:54 compute-0 systemd[1]: libpod-5b7b1dc1448c8d5cf4c7922d0a89f5d340cf580ad98cbdf9f3636083dac6ad27.scope: Deactivated successfully.
Nov 22 03:38:54 compute-0 podman[188779]: 2025-11-22 03:38:54.213607925 +0000 UTC m=+1.248000088 container died 5b7b1dc1448c8d5cf4c7922d0a89f5d340cf580ad98cbdf9f3636083dac6ad27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_pike, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:38:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-829e9092e924c17a810322b9778082b49d809db05cbbbf12fab0cd37812c8119-merged.mount: Deactivated successfully.
Nov 22 03:38:54 compute-0 podman[188779]: 2025-11-22 03:38:54.279785866 +0000 UTC m=+1.314177989 container remove 5b7b1dc1448c8d5cf4c7922d0a89f5d340cf580ad98cbdf9f3636083dac6ad27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:38:54 compute-0 systemd[1]: libpod-conmon-5b7b1dc1448c8d5cf4c7922d0a89f5d340cf580ad98cbdf9f3636083dac6ad27.scope: Deactivated successfully.
Nov 22 03:38:54 compute-0 sudo[188673]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:54 compute-0 sudo[188966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:38:54 compute-0 sudo[188966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:54 compute-0 sudo[188966]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:54 compute-0 sudo[188999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:38:54 compute-0 sudo[188999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:54 compute-0 sudo[188999]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:54 compute-0 sudo[189032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:38:54 compute-0 sudo[189032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:54 compute-0 sudo[189032]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:54 compute-0 sudo[189067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:38:54 compute-0 sudo[189067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:54 compute-0 ceph-mon[75011]: pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:54 compute-0 groupadd[189113]: group added to /etc/group: name=ceph, GID=167
Nov 22 03:38:54 compute-0 groupadd[189113]: group added to /etc/gshadow: name=ceph
Nov 22 03:38:54 compute-0 groupadd[189113]: new group: name=ceph, GID=167
Nov 22 03:38:54 compute-0 useradd[189133]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Nov 22 03:38:54 compute-0 podman[189153]: 2025-11-22 03:38:54.942804527 +0000 UTC m=+0.062486546 container create 6736e449cbc0f173001c9dbfabd100b2b2f92d8d9c488d3c3d013c8c9e944637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:38:54 compute-0 systemd[1]: Started libpod-conmon-6736e449cbc0f173001c9dbfabd100b2b2f92d8d9c488d3c3d013c8c9e944637.scope.
Nov 22 03:38:55 compute-0 podman[189153]: 2025-11-22 03:38:54.921514531 +0000 UTC m=+0.041196540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:38:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:38:55 compute-0 podman[189153]: 2025-11-22 03:38:55.034778177 +0000 UTC m=+0.154460186 container init 6736e449cbc0f173001c9dbfabd100b2b2f92d8d9c488d3c3d013c8c9e944637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:38:55 compute-0 podman[189153]: 2025-11-22 03:38:55.04393219 +0000 UTC m=+0.163614199 container start 6736e449cbc0f173001c9dbfabd100b2b2f92d8d9c488d3c3d013c8c9e944637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:38:55 compute-0 podman[189153]: 2025-11-22 03:38:55.047470877 +0000 UTC m=+0.167152886 container attach 6736e449cbc0f173001c9dbfabd100b2b2f92d8d9c488d3c3d013c8c9e944637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:38:55 compute-0 dazzling_fermi[189169]: 167 167
Nov 22 03:38:55 compute-0 systemd[1]: libpod-6736e449cbc0f173001c9dbfabd100b2b2f92d8d9c488d3c3d013c8c9e944637.scope: Deactivated successfully.
Nov 22 03:38:55 compute-0 podman[189153]: 2025-11-22 03:38:55.050486455 +0000 UTC m=+0.170168514 container died 6736e449cbc0f173001c9dbfabd100b2b2f92d8d9c488d3c3d013c8c9e944637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermi, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:38:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e307f4f0ec44c0ac3909f46d8ebaa06d36f6e9282e4241036f5d47c6e923340-merged.mount: Deactivated successfully.
Nov 22 03:38:55 compute-0 podman[189153]: 2025-11-22 03:38:55.093487208 +0000 UTC m=+0.213169197 container remove 6736e449cbc0f173001c9dbfabd100b2b2f92d8d9c488d3c3d013c8c9e944637 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_fermi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 03:38:55 compute-0 systemd[1]: libpod-conmon-6736e449cbc0f173001c9dbfabd100b2b2f92d8d9c488d3c3d013c8c9e944637.scope: Deactivated successfully.
Nov 22 03:38:55 compute-0 podman[189194]: 2025-11-22 03:38:55.298410046 +0000 UTC m=+0.049274717 container create beed0b3036598aaa9d61ab822ffd25ec70f6359dab0d872d8809ac6677d89ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lovelace, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:38:55 compute-0 systemd[1]: Started libpod-conmon-beed0b3036598aaa9d61ab822ffd25ec70f6359dab0d872d8809ac6677d89ab6.scope.
Nov 22 03:38:55 compute-0 podman[189194]: 2025-11-22 03:38:55.276452001 +0000 UTC m=+0.027316682 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:38:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:38:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd4d8e7eb7f1e425da83c838d0bc308461d6d3915cd0f96553f65889d4d9ae0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd4d8e7eb7f1e425da83c838d0bc308461d6d3915cd0f96553f65889d4d9ae0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd4d8e7eb7f1e425da83c838d0bc308461d6d3915cd0f96553f65889d4d9ae0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd4d8e7eb7f1e425da83c838d0bc308461d6d3915cd0f96553f65889d4d9ae0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:55 compute-0 podman[189194]: 2025-11-22 03:38:55.398155634 +0000 UTC m=+0.149020385 container init beed0b3036598aaa9d61ab822ffd25ec70f6359dab0d872d8809ac6677d89ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lovelace, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:38:55 compute-0 podman[189194]: 2025-11-22 03:38:55.407259517 +0000 UTC m=+0.158124218 container start beed0b3036598aaa9d61ab822ffd25ec70f6359dab0d872d8809ac6677d89ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:38:55 compute-0 podman[189194]: 2025-11-22 03:38:55.411572123 +0000 UTC m=+0.162436894 container attach beed0b3036598aaa9d61ab822ffd25ec70f6359dab0d872d8809ac6677d89ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lovelace, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:38:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]: {
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:     "0": [
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:         {
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "devices": [
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "/dev/loop3"
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             ],
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_name": "ceph_lv0",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_size": "21470642176",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "name": "ceph_lv0",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "tags": {
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.cluster_name": "ceph",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.crush_device_class": "",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.encrypted": "0",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.osd_id": "0",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.type": "block",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.vdo": "0"
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             },
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "type": "block",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "vg_name": "ceph_vg0"
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:         }
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:     ],
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:     "1": [
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:         {
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "devices": [
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "/dev/loop4"
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             ],
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_name": "ceph_lv1",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_size": "21470642176",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "name": "ceph_lv1",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "tags": {
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.cluster_name": "ceph",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.crush_device_class": "",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.encrypted": "0",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.osd_id": "1",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.type": "block",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.vdo": "0"
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             },
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "type": "block",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "vg_name": "ceph_vg1"
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:         }
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:     ],
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:     "2": [
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:         {
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "devices": [
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "/dev/loop5"
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             ],
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_name": "ceph_lv2",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_size": "21470642176",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "name": "ceph_lv2",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "tags": {
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.cluster_name": "ceph",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.crush_device_class": "",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.encrypted": "0",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.osd_id": "2",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.type": "block",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:                 "ceph.vdo": "0"
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             },
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "type": "block",
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:             "vg_name": "ceph_vg2"
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:         }
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]:     ]
Nov 22 03:38:56 compute-0 elegant_lovelace[189211]: }
Nov 22 03:38:56 compute-0 systemd[1]: libpod-beed0b3036598aaa9d61ab822ffd25ec70f6359dab0d872d8809ac6677d89ab6.scope: Deactivated successfully.
Nov 22 03:38:56 compute-0 podman[189194]: 2025-11-22 03:38:56.238768828 +0000 UTC m=+0.989633549 container died beed0b3036598aaa9d61ab822ffd25ec70f6359dab0d872d8809ac6677d89ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:38:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bd4d8e7eb7f1e425da83c838d0bc308461d6d3915cd0f96553f65889d4d9ae0-merged.mount: Deactivated successfully.
Nov 22 03:38:56 compute-0 podman[189194]: 2025-11-22 03:38:56.31763408 +0000 UTC m=+1.068498741 container remove beed0b3036598aaa9d61ab822ffd25ec70f6359dab0d872d8809ac6677d89ab6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 03:38:56 compute-0 systemd[1]: libpod-conmon-beed0b3036598aaa9d61ab822ffd25ec70f6359dab0d872d8809ac6677d89ab6.scope: Deactivated successfully.
Nov 22 03:38:56 compute-0 sudo[189067]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:56 compute-0 sudo[189234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:38:56 compute-0 sudo[189234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:56 compute-0 sudo[189234]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:56 compute-0 sudo[189259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:38:56 compute-0 sudo[189259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:56 compute-0 sudo[189259]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:56 compute-0 sudo[189284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:38:56 compute-0 sudo[189284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:56 compute-0 sudo[189284]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:56 compute-0 sudo[189309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:38:56 compute-0 sudo[189309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:56 compute-0 ceph-mon[75011]: pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:56 compute-0 podman[189513]: 2025-11-22 03:38:56.976336063 +0000 UTC m=+0.037202805 container create 77478284fadbc22e3fbe2b18bcaa9b9bdefc8f5815cd4c18acfcd93cf167fd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:38:57 compute-0 systemd[1]: Started libpod-conmon-77478284fadbc22e3fbe2b18bcaa9b9bdefc8f5815cd4c18acfcd93cf167fd06.scope.
Nov 22 03:38:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:38:57 compute-0 podman[189513]: 2025-11-22 03:38:57.047237829 +0000 UTC m=+0.108104561 container init 77478284fadbc22e3fbe2b18bcaa9b9bdefc8f5815cd4c18acfcd93cf167fd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:38:57 compute-0 podman[189513]: 2025-11-22 03:38:56.957814507 +0000 UTC m=+0.018681249 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:38:57 compute-0 podman[189513]: 2025-11-22 03:38:57.053684133 +0000 UTC m=+0.114550855 container start 77478284fadbc22e3fbe2b18bcaa9b9bdefc8f5815cd4c18acfcd93cf167fd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 03:38:57 compute-0 podman[189513]: 2025-11-22 03:38:57.058107189 +0000 UTC m=+0.118973911 container attach 77478284fadbc22e3fbe2b18bcaa9b9bdefc8f5815cd4c18acfcd93cf167fd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:38:57 compute-0 affectionate_bhaskara[189579]: 167 167
Nov 22 03:38:57 compute-0 systemd[1]: libpod-77478284fadbc22e3fbe2b18bcaa9b9bdefc8f5815cd4c18acfcd93cf167fd06.scope: Deactivated successfully.
Nov 22 03:38:57 compute-0 podman[189513]: 2025-11-22 03:38:57.063159267 +0000 UTC m=+0.124025989 container died 77478284fadbc22e3fbe2b18bcaa9b9bdefc8f5815cd4c18acfcd93cf167fd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-153ea9b2b025694fa6ae62af7c243d79dee46a2b56f586cf5568fc8fe0d01b38-merged.mount: Deactivated successfully.
Nov 22 03:38:57 compute-0 podman[189513]: 2025-11-22 03:38:57.110229319 +0000 UTC m=+0.171096041 container remove 77478284fadbc22e3fbe2b18bcaa9b9bdefc8f5815cd4c18acfcd93cf167fd06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:38:57 compute-0 systemd[1]: libpod-conmon-77478284fadbc22e3fbe2b18bcaa9b9bdefc8f5815cd4c18acfcd93cf167fd06.scope: Deactivated successfully.
Nov 22 03:38:57 compute-0 podman[189752]: 2025-11-22 03:38:57.267109638 +0000 UTC m=+0.045629938 container create f684081dbe881f2b4408e459edcdccc31b60af07de53539a638b3d963d9d868e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:38:57 compute-0 systemd[1]: Started libpod-conmon-f684081dbe881f2b4408e459edcdccc31b60af07de53539a638b3d963d9d868e.scope.
Nov 22 03:38:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5ef22b1d2d8ec42600c828214ae61bf342e1006524364647f2c773e5b9b63b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5ef22b1d2d8ec42600c828214ae61bf342e1006524364647f2c773e5b9b63b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5ef22b1d2d8ec42600c828214ae61bf342e1006524364647f2c773e5b9b63b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f5ef22b1d2d8ec42600c828214ae61bf342e1006524364647f2c773e5b9b63b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:38:57 compute-0 podman[189752]: 2025-11-22 03:38:57.341936385 +0000 UTC m=+0.120456715 container init f684081dbe881f2b4408e459edcdccc31b60af07de53539a638b3d963d9d868e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:38:57 compute-0 podman[189752]: 2025-11-22 03:38:57.247964382 +0000 UTC m=+0.026484722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:38:57 compute-0 podman[189752]: 2025-11-22 03:38:57.351541454 +0000 UTC m=+0.130061754 container start f684081dbe881f2b4408e459edcdccc31b60af07de53539a638b3d963d9d868e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:38:57 compute-0 podman[189752]: 2025-11-22 03:38:57.356453334 +0000 UTC m=+0.134973634 container attach f684081dbe881f2b4408e459edcdccc31b60af07de53539a638b3d963d9d868e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:38:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:38:57 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Nov 22 03:38:57 compute-0 sshd[1008]: Received signal 15; terminating.
Nov 22 03:38:57 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Nov 22 03:38:57 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Nov 22 03:38:57 compute-0 systemd[1]: sshd.service: Consumed 3.113s CPU time, read 32.0K from disk, written 4.0K to disk.
Nov 22 03:38:57 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Nov 22 03:38:57 compute-0 systemd[1]: Stopping sshd-keygen.target...
Nov 22 03:38:57 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 03:38:57 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 03:38:57 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 22 03:38:57 compute-0 systemd[1]: Reached target sshd-keygen.target.
Nov 22 03:38:57 compute-0 systemd[1]: Starting OpenSSH server daemon...
Nov 22 03:38:57 compute-0 sshd[190048]: Server listening on 0.0.0.0 port 22.
Nov 22 03:38:57 compute-0 sshd[190048]: Server listening on :: port 22.
Nov 22 03:38:57 compute-0 systemd[1]: Started OpenSSH server daemon.
Nov 22 03:38:58 compute-0 trusting_saha[189823]: {
Nov 22 03:38:58 compute-0 trusting_saha[189823]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "osd_id": 1,
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "type": "bluestore"
Nov 22 03:38:58 compute-0 trusting_saha[189823]:     },
Nov 22 03:38:58 compute-0 trusting_saha[189823]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "osd_id": 0,
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "type": "bluestore"
Nov 22 03:38:58 compute-0 trusting_saha[189823]:     },
Nov 22 03:38:58 compute-0 trusting_saha[189823]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "osd_id": 2,
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:38:58 compute-0 trusting_saha[189823]:         "type": "bluestore"
Nov 22 03:38:58 compute-0 trusting_saha[189823]:     }
Nov 22 03:38:58 compute-0 trusting_saha[189823]: }
Nov 22 03:38:58 compute-0 systemd[1]: libpod-f684081dbe881f2b4408e459edcdccc31b60af07de53539a638b3d963d9d868e.scope: Deactivated successfully.
Nov 22 03:38:58 compute-0 systemd[1]: libpod-f684081dbe881f2b4408e459edcdccc31b60af07de53539a638b3d963d9d868e.scope: Consumed 1.050s CPU time.
Nov 22 03:38:58 compute-0 podman[189752]: 2025-11-22 03:38:58.418172755 +0000 UTC m=+1.196693065 container died f684081dbe881f2b4408e459edcdccc31b60af07de53539a638b3d963d9d868e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f5ef22b1d2d8ec42600c828214ae61bf342e1006524364647f2c773e5b9b63b-merged.mount: Deactivated successfully.
Nov 22 03:38:58 compute-0 podman[189752]: 2025-11-22 03:38:58.48504815 +0000 UTC m=+1.263568450 container remove f684081dbe881f2b4408e459edcdccc31b60af07de53539a638b3d963d9d868e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_saha, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:38:58 compute-0 systemd[1]: libpod-conmon-f684081dbe881f2b4408e459edcdccc31b60af07de53539a638b3d963d9d868e.scope: Deactivated successfully.
Nov 22 03:38:58 compute-0 sudo[189309]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:38:58 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:38:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:38:58 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:38:58 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 0eaaebb9-e320-4971-bea4-418882ad4e62 does not exist
Nov 22 03:38:58 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 929066a6-7c27-4c32-9c58-96a22ddadf62 does not exist
Nov 22 03:38:58 compute-0 sudo[190172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:38:58 compute-0 sudo[190172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:58 compute-0 sudo[190172]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:58 compute-0 sudo[190207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:38:58 compute-0 sudo[190207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:38:58 compute-0 sudo[190207]: pam_unix(sudo:session): session closed for user root
Nov 22 03:38:59 compute-0 ceph-mon[75011]: pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:38:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:38:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:38:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:00 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:39:00 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:39:00 compute-0 systemd[1]: Reloading.
Nov 22 03:39:00 compute-0 systemd-sysv-generator[190398]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:39:00 compute-0 systemd-rc-local-generator[190393]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:39:00 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:39:01 compute-0 ceph-mon[75011]: pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.154778) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782744154803, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2049, "num_deletes": 251, "total_data_size": 3539247, "memory_usage": 3596208, "flush_reason": "Manual Compaction"}
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Nov 22 03:39:04 compute-0 ceph-mon[75011]: pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782744184544, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3453406, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9825, "largest_seqno": 11873, "table_properties": {"data_size": 3444048, "index_size": 5916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18013, "raw_average_key_size": 19, "raw_value_size": 3425552, "raw_average_value_size": 3711, "num_data_blocks": 268, "num_entries": 923, "num_filter_entries": 923, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763782501, "oldest_key_time": 1763782501, "file_creation_time": 1763782744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 29875 microseconds, and 8822 cpu microseconds.
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.184646) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3453406 bytes OK
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.184671) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.186645) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.186671) EVENT_LOG_v1 {"time_micros": 1763782744186662, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.186694) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3530677, prev total WAL file size 3531832, number of live WAL files 2.
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.188542) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3372KB)], [26(6323KB)]
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782744188742, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9928605, "oldest_snapshot_seqno": -1}
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3770 keys, 8221468 bytes, temperature: kUnknown
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782744254389, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8221468, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8192088, "index_size": 18838, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9477, "raw_key_size": 90650, "raw_average_key_size": 24, "raw_value_size": 8119932, "raw_average_value_size": 2153, "num_data_blocks": 815, "num_entries": 3770, "num_filter_entries": 3770, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763782744, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.254718) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8221468 bytes
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.256308) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.9 rd, 125.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.2 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(5.3) write-amplify(2.4) OK, records in: 4284, records dropped: 514 output_compression: NoCompression
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.256332) EVENT_LOG_v1 {"time_micros": 1763782744256320, "job": 10, "event": "compaction_finished", "compaction_time_micros": 65786, "compaction_time_cpu_micros": 27768, "output_level": 6, "num_output_files": 1, "total_output_size": 8221468, "num_input_records": 4284, "num_output_records": 3770, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782744257291, "job": 10, "event": "table_file_deletion", "file_number": 28}
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782744258613, "job": 10, "event": "table_file_deletion", "file_number": 26}
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.188376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.258681) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.258689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.258693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.258696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:39:04 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:39:04.258699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:39:05 compute-0 sudo[169998]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:05 compute-0 ceph-mon[75011]: pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:05 compute-0 sudo[195125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vczkgdbmdgxnoekhteuczaaieaxayblr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782745.238147-336-199820813691405/AnsiballZ_systemd.py'
Nov 22 03:39:05 compute-0 sudo[195125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:39:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:39:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:39:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:39:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:39:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:39:06 compute-0 python3.9[195152]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:39:06 compute-0 systemd[1]: Reloading.
Nov 22 03:39:06 compute-0 systemd-rc-local-generator[195602]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:39:06 compute-0 systemd-sysv-generator[195605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:39:06 compute-0 sudo[195125]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:07 compute-0 sudo[196399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkfbgmoypbutwbgrzqvhjduenlduioyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782746.8183675-336-215479938451288/AnsiballZ_systemd.py'
Nov 22 03:39:07 compute-0 sudo[196399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:07 compute-0 ceph-mon[75011]: pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:07 compute-0 python3.9[196422]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:39:07 compute-0 systemd[1]: Reloading.
Nov 22 03:39:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:07 compute-0 systemd-sysv-generator[196912]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:39:07 compute-0 systemd-rc-local-generator[196907]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:39:07 compute-0 sudo[196399]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:08 compute-0 sudo[197792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duidvqfbmrszucbsfhbqljkcnoixucmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782748.0820198-336-114635426875659/AnsiballZ_systemd.py'
Nov 22 03:39:08 compute-0 sudo[197792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:08 compute-0 python3.9[197811]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:39:08 compute-0 systemd[1]: Reloading.
Nov 22 03:39:08 compute-0 systemd-sysv-generator[198288]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:39:08 compute-0 systemd-rc-local-generator[198283]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:39:09 compute-0 sudo[197792]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:39:09 compute-0 ceph-mon[75011]: pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:09 compute-0 sudo[199004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpnwmirunwqfwcuomgrxqrmnjlqzuthm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782749.1689947-336-224976854813974/AnsiballZ_systemd.py'
Nov 22 03:39:09 compute-0 sudo[199004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:09 compute-0 python3.9[199021]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:39:09 compute-0 systemd[1]: Reloading.
Nov 22 03:39:09 compute-0 systemd-rc-local-generator[199460]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:39:09 compute-0 systemd-sysv-generator[199467]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:39:10 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:39:10 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:39:10 compute-0 systemd[1]: man-db-cache-update.service: Consumed 12.267s CPU time.
Nov 22 03:39:10 compute-0 systemd[1]: run-r2648cc0e725e4bd7a98c36e3ac89cbbf.service: Deactivated successfully.
Nov 22 03:39:10 compute-0 sudo[199004]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:10 compute-0 sudo[199682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvbjxyracsxaludfbayigxiltaegjkiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782750.3774-365-211827291553215/AnsiballZ_systemd.py'
Nov 22 03:39:10 compute-0 sudo[199682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:11 compute-0 python3.9[199684]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:11 compute-0 ceph-mon[75011]: pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:12 compute-0 systemd[1]: Reloading.
Nov 22 03:39:12 compute-0 systemd-sysv-generator[199715]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:39:12 compute-0 systemd-rc-local-generator[199710]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:39:12 compute-0 sudo[199682]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:13 compute-0 sudo[199872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esvdbdvpoxjpasnwxpqrhkfswbgxkuja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782752.739934-365-235084132454567/AnsiballZ_systemd.py'
Nov 22 03:39:13 compute-0 sudo[199872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:13 compute-0 ceph-mon[75011]: pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:13 compute-0 python3.9[199874]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:13 compute-0 systemd[1]: Reloading.
Nov 22 03:39:13 compute-0 systemd-sysv-generator[199905]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:39:13 compute-0 systemd-rc-local-generator[199901]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:39:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:13 compute-0 sudo[199872]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:39:14 compute-0 sudo[200062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bewfssubugklanokdywhxwzcmmvzbkzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782753.981257-365-262733257382140/AnsiballZ_systemd.py'
Nov 22 03:39:14 compute-0 sudo[200062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:14 compute-0 python3.9[200064]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:14 compute-0 systemd[1]: Reloading.
Nov 22 03:39:14 compute-0 systemd-rc-local-generator[200090]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:39:14 compute-0 systemd-sysv-generator[200097]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:39:15 compute-0 sudo[200062]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:15 compute-0 ceph-mon[75011]: pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:15 compute-0 sudo[200251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njmfxfogdrwizjfpltwjqqhfmahroray ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782755.2483635-365-162168663580068/AnsiballZ_systemd.py'
Nov 22 03:39:15 compute-0 sudo[200251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:15 compute-0 python3.9[200253]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:16 compute-0 sudo[200251]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:16 compute-0 sudo[200406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyyqhkzpgawnilyhpskmjbdmcwofhhic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782756.2302668-365-165304645849189/AnsiballZ_systemd.py'
Nov 22 03:39:16 compute-0 sudo[200406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:16 compute-0 python3.9[200408]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:16 compute-0 systemd[1]: Reloading.
Nov 22 03:39:17 compute-0 systemd-rc-local-generator[200439]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:39:17 compute-0 systemd-sysv-generator[200444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:39:17 compute-0 ceph-mon[75011]: pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:17 compute-0 sudo[200406]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:17 compute-0 sudo[200597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayicdajhxllwrpbdaxzzzatnrxfqzumi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782757.511173-401-128712154422319/AnsiballZ_systemd.py'
Nov 22 03:39:17 compute-0 sudo[200597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:18 compute-0 python3.9[200599]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 22 03:39:18 compute-0 systemd[1]: Reloading.
Nov 22 03:39:18 compute-0 systemd-rc-local-generator[200630]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:39:18 compute-0 systemd-sysv-generator[200635]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:39:18 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 22 03:39:18 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 22 03:39:18 compute-0 sudo[200597]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:39:19 compute-0 ceph-mon[75011]: pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:19 compute-0 sudo[200799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erdikccfjweyngziqtxqchyupqksdpmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782758.860055-409-168560388521233/AnsiballZ_systemd.py'
Nov 22 03:39:19 compute-0 sudo[200799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:19 compute-0 podman[200764]: 2025-11-22 03:39:19.315018911 +0000 UTC m=+0.133689700 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:39:19 compute-0 python3.9[200809]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:19 compute-0 sudo[200799]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:20 compute-0 sudo[200971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvxyraunciwbbjcwsfcawkywvdsicyuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782759.828146-409-142112273933053/AnsiballZ_systemd.py'
Nov 22 03:39:20 compute-0 sudo[200971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:20 compute-0 python3.9[200973]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:20 compute-0 sudo[200971]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:20 compute-0 auditd[704]: Audit daemon rotating log files
Nov 22 03:39:20 compute-0 sudo[201126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nalqffluohaquxbiugipizpnbusyyugy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782760.605937-409-19452116057843/AnsiballZ_systemd.py'
Nov 22 03:39:20 compute-0 sudo[201126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:21 compute-0 ceph-mon[75011]: pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:21 compute-0 python3.9[201128]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:21 compute-0 sudo[201126]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:21 compute-0 sudo[201294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjcqpxjlualntueehjlplxfpquihviir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782761.5144455-409-143238713233716/AnsiballZ_systemd.py'
Nov 22 03:39:21 compute-0 sudo[201294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:21 compute-0 podman[201255]: 2025-11-22 03:39:21.894494702 +0000 UTC m=+0.070470372 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 03:39:22 compute-0 python3.9[201302]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:22 compute-0 sudo[201294]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:22 compute-0 sudo[201455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgwjhwkokcmrczqilalfedyshsdasmgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782762.5023487-409-117312063102965/AnsiballZ_systemd.py'
Nov 22 03:39:22 compute-0 sudo[201455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:39:22.991 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:39:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:39:22.991 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:39:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:39:22.992 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:39:23 compute-0 python3.9[201457]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:23 compute-0 sudo[201455]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:23 compute-0 ceph-mon[75011]: pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:23 compute-0 sudo[201610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-covmklkdnjmodyezhudtueyrimevdsnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782763.3749778-409-237116346027083/AnsiballZ_systemd.py'
Nov 22 03:39:23 compute-0 sudo[201610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:23 compute-0 python3.9[201612]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:39:25 compute-0 sudo[201610]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:25 compute-0 ceph-mon[75011]: pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:25 compute-0 sudo[201765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzbpxzojogvkscfttpejfvrrcxlvwbdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782765.229229-409-202643113932045/AnsiballZ_systemd.py'
Nov 22 03:39:25 compute-0 sudo[201765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:25 compute-0 python3.9[201767]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:25 compute-0 sudo[201765]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:26 compute-0 sudo[201920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rojmbnwhubwtctcqpnpkaqahbjbymugq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782766.1363087-409-248732089764416/AnsiballZ_systemd.py'
Nov 22 03:39:26 compute-0 sudo[201920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:26 compute-0 python3.9[201922]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:26 compute-0 sudo[201920]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:27 compute-0 ceph-mon[75011]: pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:27 compute-0 sudo[202075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuoeornpuuxfrroyhrjuqufzbbcffxhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782767.0598452-409-208823873047621/AnsiballZ_systemd.py'
Nov 22 03:39:27 compute-0 sudo[202075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:27 compute-0 python3.9[202077]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:27 compute-0 sudo[202075]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:28 compute-0 sudo[202230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsgpwfmirurtnhgymngisrvrfvhapsrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782768.0556946-409-115075762933897/AnsiballZ_systemd.py'
Nov 22 03:39:28 compute-0 sudo[202230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:28 compute-0 python3.9[202232]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:28 compute-0 sudo[202230]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:39:29 compute-0 ceph-mon[75011]: pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:29 compute-0 sudo[202385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stmaiqqogslqhftenotwpciwmzubpnir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782768.9832995-409-155372098623331/AnsiballZ_systemd.py'
Nov 22 03:39:29 compute-0 sudo[202385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:29 compute-0 python3.9[202387]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:29 compute-0 sudo[202385]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:30 compute-0 sudo[202540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdilpwmkevdxlpqbvmupihsjhucitdyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782769.877735-409-179986025179618/AnsiballZ_systemd.py'
Nov 22 03:39:30 compute-0 sudo[202540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:30 compute-0 python3.9[202542]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:30 compute-0 sudo[202540]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:31 compute-0 sudo[202695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoxxzrhxokyuawpjzvdltvfgwayxigwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782770.778202-409-61151346625731/AnsiballZ_systemd.py'
Nov 22 03:39:31 compute-0 sudo[202695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:31 compute-0 ceph-mon[75011]: pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:31 compute-0 python3.9[202697]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:32 compute-0 sudo[202695]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:33 compute-0 sudo[202850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uumorkcvggqukxiwvvpykmcxmfvkeqes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782772.7903655-409-49728302729213/AnsiballZ_systemd.py'
Nov 22 03:39:33 compute-0 sudo[202850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:33 compute-0 ceph-mon[75011]: pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:33 compute-0 python3.9[202852]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 22 03:39:33 compute-0 sudo[202850]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:39:34 compute-0 sudo[203005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moosalhutfanmdotcjykdcimenjvodmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782774.0081584-511-77990630618816/AnsiballZ_file.py'
Nov 22 03:39:34 compute-0 sudo[203005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:34 compute-0 python3.9[203007]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:34 compute-0 sudo[203005]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:35 compute-0 sudo[203157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnfcxjjglrohwzkfsgyqxjgkroqpqltr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782774.7454798-511-76299830417142/AnsiballZ_file.py'
Nov 22 03:39:35 compute-0 sudo[203157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:35 compute-0 python3.9[203159]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:35 compute-0 ceph-mon[75011]: pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:35 compute-0 sudo[203157]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:35 compute-0 sudo[203309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsviwvjwmuafxhijycmsztolvbjvknqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782775.5006158-511-79719437167042/AnsiballZ_file.py'
Nov 22 03:39:35 compute-0 sudo[203309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:36 compute-0 python3.9[203311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:39:36
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'vms', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'default.rgw.meta']
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:39:36 compute-0 sudo[203309]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:39:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:39:36 compute-0 sudo[203461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouopvjumoifeulzajctjuounfookfweg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782776.2796512-511-274183374225913/AnsiballZ_file.py'
Nov 22 03:39:36 compute-0 sudo[203461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:36 compute-0 python3.9[203463]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:36 compute-0 sudo[203461]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:37 compute-0 sudo[203613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqeogcwnnryfejcvedinaatgmctxlvna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782776.9434483-511-268821426666387/AnsiballZ_file.py'
Nov 22 03:39:37 compute-0 sudo[203613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:37 compute-0 ceph-mon[75011]: pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:37 compute-0 python3.9[203615]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:37 compute-0 sudo[203613]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:37 compute-0 sudo[203765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afsnmpvqstdbltajrpfhdvcnkplkxymg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782777.6578174-511-219850445785968/AnsiballZ_file.py'
Nov 22 03:39:37 compute-0 sudo[203765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:38 compute-0 python3.9[203767]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:39:38 compute-0 sudo[203765]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:38 compute-0 sudo[203917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcehwtsglrkudqfwcpzqcziacixhtnmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782778.285677-554-9136813592374/AnsiballZ_stat.py'
Nov 22 03:39:38 compute-0 sudo[203917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:38 compute-0 python3.9[203919]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:39 compute-0 sudo[203917]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:39:39 compute-0 ceph-mon[75011]: pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:39 compute-0 sudo[204042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoixrawwrciztqlovwumaytpyseiaxdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782778.285677-554-9136813592374/AnsiballZ_copy.py'
Nov 22 03:39:39 compute-0 sudo[204042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:39 compute-0 python3.9[204044]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763782778.285677-554-9136813592374/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:39 compute-0 sudo[204042]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:40 compute-0 sudo[204194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekijijevzufqdekgnlcltfdimhrjfgom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782780.0563807-554-83435271132588/AnsiballZ_stat.py'
Nov 22 03:39:40 compute-0 sudo[204194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:40 compute-0 python3.9[204196]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:40 compute-0 sudo[204194]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:41 compute-0 sudo[204319]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwolxfnupseiujzhoizqqcuaubeutpcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782780.0563807-554-83435271132588/AnsiballZ_copy.py'
Nov 22 03:39:41 compute-0 sudo[204319]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:41 compute-0 python3.9[204321]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763782780.0563807-554-83435271132588/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:41 compute-0 sudo[204319]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:41 compute-0 ceph-mon[75011]: pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:41 compute-0 sudo[204471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqxtlyklvvzzuhcpfxigqecjcdcelfjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782781.5143375-554-161032617249263/AnsiballZ_stat.py'
Nov 22 03:39:41 compute-0 sudo[204471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:42 compute-0 python3.9[204473]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:42 compute-0 sudo[204471]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:42 compute-0 sudo[204596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uigsmzstdzbfmofzoirmizltuxpphgjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782781.5143375-554-161032617249263/AnsiballZ_copy.py'
Nov 22 03:39:42 compute-0 sudo[204596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:42 compute-0 python3.9[204598]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763782781.5143375-554-161032617249263/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:42 compute-0 sudo[204596]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:43 compute-0 sudo[204748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgesxmlryzydqxiotsopxjrtpqdxxpkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782782.9333906-554-24449488286732/AnsiballZ_stat.py'
Nov 22 03:39:43 compute-0 sudo[204748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:43 compute-0 ceph-mon[75011]: pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:43 compute-0 python3.9[204750]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:43 compute-0 sudo[204748]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:43 compute-0 sudo[204873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnwbrhztzqotqfegvdpzfdlmjrsjtddo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782782.9333906-554-24449488286732/AnsiballZ_copy.py'
Nov 22 03:39:43 compute-0 sudo[204873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:39:44 compute-0 python3.9[204875]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763782782.9333906-554-24449488286732/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:44 compute-0 sudo[204873]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:44 compute-0 sudo[205025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wikbqwmuhoyglkiocquhmfpaozeduvsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782784.3652923-554-245303758916094/AnsiballZ_stat.py'
Nov 22 03:39:44 compute-0 sudo[205025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:44 compute-0 python3.9[205027]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:45 compute-0 sudo[205025]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:45 compute-0 sudo[205150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hblyzybibseucovkwhtsuduxpefkljni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782784.3652923-554-245303758916094/AnsiballZ_copy.py'
Nov 22 03:39:45 compute-0 sudo[205150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:45 compute-0 ceph-mon[75011]: pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:45 compute-0 python3.9[205152]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763782784.3652923-554-245303758916094/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:45 compute-0 sudo[205150]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:39:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:39:46 compute-0 sudo[205302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-garfsrqzguubrevahxmvxeymplolkezw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782785.831671-554-37243467064379/AnsiballZ_stat.py'
Nov 22 03:39:46 compute-0 sudo[205302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:46 compute-0 python3.9[205304]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:46 compute-0 sudo[205302]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:46 compute-0 sudo[205427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfkdnpdlqlwrbxusmpyshfccecswfuln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782785.831671-554-37243467064379/AnsiballZ_copy.py'
Nov 22 03:39:46 compute-0 sudo[205427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:46 compute-0 python3.9[205429]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763782785.831671-554-37243467064379/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:46 compute-0 sudo[205427]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:47 compute-0 sudo[205579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyibheissdrzmybntlkuiwvaumgvbejq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782787.0650249-554-253132217342553/AnsiballZ_stat.py'
Nov 22 03:39:47 compute-0 sudo[205579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:47 compute-0 ceph-mon[75011]: pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:47 compute-0 python3.9[205581]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:47 compute-0 sudo[205579]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:47 compute-0 sudo[205702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byobencuuqxrjrukosxsvxhbvxgwbkus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782787.0650249-554-253132217342553/AnsiballZ_copy.py'
Nov 22 03:39:47 compute-0 sudo[205702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:48 compute-0 python3.9[205704]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763782787.0650249-554-253132217342553/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:48 compute-0 sudo[205702]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:48 compute-0 sudo[205854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrpqgrqfayhonqgefjsaiqapitrohzde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782788.3065815-554-83938673564481/AnsiballZ_stat.py'
Nov 22 03:39:48 compute-0 sudo[205854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:48 compute-0 python3.9[205856]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:39:48 compute-0 sudo[205854]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:39:49 compute-0 sudo[205979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trwjferdmdfkqbjumdodriqijhhddbuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782788.3065815-554-83938673564481/AnsiballZ_copy.py'
Nov 22 03:39:49 compute-0 sudo[205979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:49 compute-0 ceph-mon[75011]: pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:49 compute-0 podman[205981]: 2025-11-22 03:39:49.540682838 +0000 UTC m=+0.115913412 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 22 03:39:49 compute-0 python3.9[205982]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1763782788.3065815-554-83938673564481/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:49 compute-0 sudo[205979]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:50 compute-0 sudo[206157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbdobxuswwdjjzrgdvngjjhqdicwdjho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782789.8280365-667-88736571305724/AnsiballZ_command.py'
Nov 22 03:39:50 compute-0 sudo[206157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:50 compute-0 python3.9[206159]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Nov 22 03:39:50 compute-0 sudo[206157]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:50 compute-0 sudo[206310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyrlkpijonjxnjahpkbtoxasacqzlydu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782790.5717447-676-63555557673707/AnsiballZ_file.py'
Nov 22 03:39:50 compute-0 sudo[206310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:51 compute-0 python3.9[206312]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:51 compute-0 sudo[206310]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:51 compute-0 ceph-mon[75011]: pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:51 compute-0 sudo[206462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyerjuazurlpaswrojnmxndhxodmmnkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782791.266731-676-120427387774256/AnsiballZ_file.py'
Nov 22 03:39:51 compute-0 sudo[206462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:51 compute-0 python3.9[206464]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:51 compute-0 sudo[206462]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:52 compute-0 podman[206588]: 2025-11-22 03:39:52.235848038 +0000 UTC m=+0.059037556 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 22 03:39:52 compute-0 sudo[206630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmxkjctsfnxrzjxjvjbdfpfuhiuroiwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782791.910881-676-104945504454687/AnsiballZ_file.py'
Nov 22 03:39:52 compute-0 sudo[206630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:52 compute-0 python3.9[206634]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:52 compute-0 sudo[206630]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:52 compute-0 sudo[206784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvwjaxohxorzobdcpscthmeholcfbhkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782792.5824695-676-7690274866565/AnsiballZ_file.py'
Nov 22 03:39:52 compute-0 sudo[206784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:53 compute-0 python3.9[206786]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:53 compute-0 sudo[206784]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:53 compute-0 ceph-mon[75011]: pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:53 compute-0 sudo[206936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpgcmtkwebtgxjgdkqenuasoltgfvomq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782793.354521-676-188798920577302/AnsiballZ_file.py'
Nov 22 03:39:53 compute-0 sudo[206936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:53 compute-0 python3.9[206938]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:53 compute-0 sudo[206936]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:39:54 compute-0 sudo[207088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrdqsrytnyxfgfznbixmxwihkmravont ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782794.1389008-676-231537008084680/AnsiballZ_file.py'
Nov 22 03:39:54 compute-0 sudo[207088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:54 compute-0 python3.9[207090]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:54 compute-0 sudo[207088]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:55 compute-0 sudo[207240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmpizhuuxgngutvdwutastynetdwvtyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782794.8095155-676-200418197553097/AnsiballZ_file.py'
Nov 22 03:39:55 compute-0 sudo[207240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:55 compute-0 python3.9[207242]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:55 compute-0 sudo[207240]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:55 compute-0 ceph-mon[75011]: pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:55 compute-0 sudo[207392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtrrmewhutrxfcmmvxotxzqznrttijvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782795.5123985-676-250091135333452/AnsiballZ_file.py'
Nov 22 03:39:55 compute-0 sudo[207392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:56 compute-0 python3.9[207394]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:56 compute-0 sudo[207392]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:56 compute-0 sudo[207544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qavxpgffqbyzyhyvwxrixgcxgrzeyrbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782796.200738-676-111336631239208/AnsiballZ_file.py'
Nov 22 03:39:56 compute-0 sudo[207544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:56 compute-0 python3.9[207546]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:56 compute-0 sudo[207544]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:57 compute-0 sudo[207696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuussliuyjxfkjwvkumqevdtvrzfueqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782796.915926-676-168755752616552/AnsiballZ_file.py'
Nov 22 03:39:57 compute-0 sudo[207696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:57 compute-0 python3.9[207698]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:57 compute-0 sudo[207696]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:57 compute-0 ceph-mon[75011]: pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:57 compute-0 sudo[207848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlolkcwropztcetarrtkmjhbsuohcypa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782797.5759022-676-95326527557079/AnsiballZ_file.py'
Nov 22 03:39:57 compute-0 sudo[207848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:58 compute-0 python3.9[207850]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:58 compute-0 sudo[207848]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:58 compute-0 sudo[208000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evxshvymegskylzmvufioqcfdpuvlfzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782798.2218628-676-96865051235852/AnsiballZ_file.py'
Nov 22 03:39:58 compute-0 sudo[208000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:58 compute-0 sudo[208001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:39:58 compute-0 sudo[208001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:39:58 compute-0 sudo[208001]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:58 compute-0 sudo[208028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:39:58 compute-0 sudo[208028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:39:58 compute-0 sudo[208028]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:58 compute-0 sudo[208053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:39:58 compute-0 sudo[208053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:39:58 compute-0 sudo[208053]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:58 compute-0 python3.9[208008]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:58 compute-0 sudo[208000]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:58 compute-0 sudo[208078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:39:58 compute-0 sudo[208078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:39:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:39:59 compute-0 sudo[208270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkaplvsfnswzbjiugakogcqkrxtfnrrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782799.0374866-676-237666120430477/AnsiballZ_file.py'
Nov 22 03:39:59 compute-0 sudo[208270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:39:59 compute-0 sudo[208078]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:39:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:39:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:39:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:39:59 compute-0 ceph-mon[75011]: pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:39:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:39:59 compute-0 python3.9[208273]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:39:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:39:59 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 949b7429-d37f-4372-9e2e-fd7989fd5510 does not exist
Nov 22 03:39:59 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev eba16ddc-a0ff-44d3-8cac-d3a62a9fdad6 does not exist
Nov 22 03:39:59 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev eacbcd31-eb18-47de-9b19-6f2ec793ecba does not exist
Nov 22 03:39:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:39:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:39:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:39:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:39:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:39:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:39:59 compute-0 sudo[208270]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:59 compute-0 sudo[208286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:39:59 compute-0 sudo[208286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:39:59 compute-0 sudo[208286]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:39:59 compute-0 sudo[208332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:39:59 compute-0 sudo[208332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:39:59 compute-0 sudo[208332]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:59 compute-0 sudo[208383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:39:59 compute-0 sudo[208383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:39:59 compute-0 sudo[208383]: pam_unix(sudo:session): session closed for user root
Nov 22 03:39:59 compute-0 sudo[208437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:39:59 compute-0 sudo[208437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:40:00 compute-0 sudo[208546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynpajwoqpuygzlcdoomswtugkxwxgjwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782799.737708-676-72429238675983/AnsiballZ_file.py'
Nov 22 03:40:00 compute-0 sudo[208546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:00 compute-0 python3.9[208555]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:00 compute-0 sudo[208546]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:00 compute-0 podman[208578]: 2025-11-22 03:40:00.289736541 +0000 UTC m=+0.074420190 container create d7ec5064e05a7459726349bc9ed8f081271bec1d6c473669036ae95bfdf5fab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:40:00 compute-0 systemd[1]: Started libpod-conmon-d7ec5064e05a7459726349bc9ed8f081271bec1d6c473669036ae95bfdf5fab0.scope.
Nov 22 03:40:00 compute-0 podman[208578]: 2025-11-22 03:40:00.255998579 +0000 UTC m=+0.040682328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:40:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:40:00 compute-0 podman[208578]: 2025-11-22 03:40:00.393649998 +0000 UTC m=+0.178333737 container init d7ec5064e05a7459726349bc9ed8f081271bec1d6c473669036ae95bfdf5fab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ardinghelli, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:40:00 compute-0 podman[208578]: 2025-11-22 03:40:00.404039116 +0000 UTC m=+0.188722805 container start d7ec5064e05a7459726349bc9ed8f081271bec1d6c473669036ae95bfdf5fab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:40:00 compute-0 podman[208578]: 2025-11-22 03:40:00.408163147 +0000 UTC m=+0.192846916 container attach d7ec5064e05a7459726349bc9ed8f081271bec1d6c473669036ae95bfdf5fab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:40:00 compute-0 optimistic_ardinghelli[208617]: 167 167
Nov 22 03:40:00 compute-0 systemd[1]: libpod-d7ec5064e05a7459726349bc9ed8f081271bec1d6c473669036ae95bfdf5fab0.scope: Deactivated successfully.
Nov 22 03:40:00 compute-0 podman[208578]: 2025-11-22 03:40:00.412622812 +0000 UTC m=+0.197306491 container died d7ec5064e05a7459726349bc9ed8f081271bec1d6c473669036ae95bfdf5fab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ardinghelli, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-50c0cd90f6274bcb847338b6e47ad906dbc3073931570da2451c592d641e4565-merged.mount: Deactivated successfully.
Nov 22 03:40:00 compute-0 podman[208578]: 2025-11-22 03:40:00.466598809 +0000 UTC m=+0.251282468 container remove d7ec5064e05a7459726349bc9ed8f081271bec1d6c473669036ae95bfdf5fab0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:40:00 compute-0 systemd[1]: libpod-conmon-d7ec5064e05a7459726349bc9ed8f081271bec1d6c473669036ae95bfdf5fab0.scope: Deactivated successfully.
Nov 22 03:40:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:40:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:40:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:40:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:40:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:40:00 compute-0 podman[208696]: 2025-11-22 03:40:00.668330428 +0000 UTC m=+0.061562394 container create da30452bc510924052b89632af5e5630e4da78428790478bd763d4de8a64e74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_booth, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:40:00 compute-0 systemd[1]: Started libpod-conmon-da30452bc510924052b89632af5e5630e4da78428790478bd763d4de8a64e74b.scope.
Nov 22 03:40:00 compute-0 podman[208696]: 2025-11-22 03:40:00.641591992 +0000 UTC m=+0.034824008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:40:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aacd2c5c241efc6f94908dded12231de5b526addf2900d7a98f486b897e3a08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aacd2c5c241efc6f94908dded12231de5b526addf2900d7a98f486b897e3a08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aacd2c5c241efc6f94908dded12231de5b526addf2900d7a98f486b897e3a08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aacd2c5c241efc6f94908dded12231de5b526addf2900d7a98f486b897e3a08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9aacd2c5c241efc6f94908dded12231de5b526addf2900d7a98f486b897e3a08/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:00 compute-0 podman[208696]: 2025-11-22 03:40:00.774860051 +0000 UTC m=+0.168092047 container init da30452bc510924052b89632af5e5630e4da78428790478bd763d4de8a64e74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_booth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:40:00 compute-0 podman[208696]: 2025-11-22 03:40:00.791152838 +0000 UTC m=+0.184384804 container start da30452bc510924052b89632af5e5630e4da78428790478bd763d4de8a64e74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_booth, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:40:00 compute-0 podman[208696]: 2025-11-22 03:40:00.794742808 +0000 UTC m=+0.187974814 container attach da30452bc510924052b89632af5e5630e4da78428790478bd763d4de8a64e74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_booth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:40:00 compute-0 sudo[208788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoikovoaajothbwfgjuzxsxbsqqhrmcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782800.4761176-775-57786232543337/AnsiballZ_stat.py'
Nov 22 03:40:00 compute-0 sudo[208788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:01 compute-0 python3.9[208790]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:01 compute-0 sudo[208788]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:01 compute-0 sudo[208912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfdufqdbpmgdtmrfipuzkadrtuzquxjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782800.4761176-775-57786232543337/AnsiballZ_copy.py'
Nov 22 03:40:01 compute-0 sudo[208912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:01 compute-0 ceph-mon[75011]: pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:01 compute-0 python3.9[208916]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782800.4761176-775-57786232543337/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:01 compute-0 sudo[208912]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:01 compute-0 cool_booth[208757]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:40:01 compute-0 cool_booth[208757]: --> relative data size: 1.0
Nov 22 03:40:01 compute-0 cool_booth[208757]: --> All data devices are unavailable
Nov 22 03:40:01 compute-0 systemd[1]: libpod-da30452bc510924052b89632af5e5630e4da78428790478bd763d4de8a64e74b.scope: Deactivated successfully.
Nov 22 03:40:01 compute-0 podman[208696]: 2025-11-22 03:40:01.973110219 +0000 UTC m=+1.366342225 container died da30452bc510924052b89632af5e5630e4da78428790478bd763d4de8a64e74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:40:01 compute-0 systemd[1]: libpod-da30452bc510924052b89632af5e5630e4da78428790478bd763d4de8a64e74b.scope: Consumed 1.113s CPU time.
Nov 22 03:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9aacd2c5c241efc6f94908dded12231de5b526addf2900d7a98f486b897e3a08-merged.mount: Deactivated successfully.
Nov 22 03:40:02 compute-0 podman[208696]: 2025-11-22 03:40:02.048854169 +0000 UTC m=+1.442086115 container remove da30452bc510924052b89632af5e5630e4da78428790478bd763d4de8a64e74b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_booth, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:40:02 compute-0 systemd[1]: libpod-conmon-da30452bc510924052b89632af5e5630e4da78428790478bd763d4de8a64e74b.scope: Deactivated successfully.
Nov 22 03:40:02 compute-0 sudo[208437]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:02 compute-0 sudo[209065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:40:02 compute-0 sudo[209065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:40:02 compute-0 sudo[209065]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:02 compute-0 sudo[209123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgazztkamixdrormelpbzzxbifvkegyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782801.8506777-775-105639433443460/AnsiballZ_stat.py'
Nov 22 03:40:02 compute-0 sudo[209123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:02 compute-0 sudo[209124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:40:02 compute-0 sudo[209124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:40:02 compute-0 sudo[209124]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:02 compute-0 sudo[209151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:40:02 compute-0 sudo[209151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:40:02 compute-0 sudo[209151]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:02 compute-0 sudo[209176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:40:02 compute-0 sudo[209176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:40:02 compute-0 python3.9[209133]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:02 compute-0 sudo[209123]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:02 compute-0 podman[209319]: 2025-11-22 03:40:02.751204875 +0000 UTC m=+0.072541634 container create 5e6a7322dded26ce146095f183fe98a15de0108464a7d810e7f194eaaf085b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feistel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:40:02 compute-0 sudo[209373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhbvuquqmzooztjucpotdwsmxzlaleep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782801.8506777-775-105639433443460/AnsiballZ_copy.py'
Nov 22 03:40:02 compute-0 sudo[209373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:02 compute-0 podman[209319]: 2025-11-22 03:40:02.707509666 +0000 UTC m=+0.028846445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:40:02 compute-0 systemd[1]: Started libpod-conmon-5e6a7322dded26ce146095f183fe98a15de0108464a7d810e7f194eaaf085b83.scope.
Nov 22 03:40:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:40:02 compute-0 podman[209319]: 2025-11-22 03:40:02.881105056 +0000 UTC m=+0.202441905 container init 5e6a7322dded26ce146095f183fe98a15de0108464a7d810e7f194eaaf085b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:40:02 compute-0 podman[209319]: 2025-11-22 03:40:02.892356017 +0000 UTC m=+0.213692786 container start 5e6a7322dded26ce146095f183fe98a15de0108464a7d810e7f194eaaf085b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:40:02 compute-0 wizardly_feistel[209379]: 167 167
Nov 22 03:40:02 compute-0 systemd[1]: libpod-5e6a7322dded26ce146095f183fe98a15de0108464a7d810e7f194eaaf085b83.scope: Deactivated successfully.
Nov 22 03:40:02 compute-0 podman[209319]: 2025-11-22 03:40:02.913819724 +0000 UTC m=+0.235156533 container attach 5e6a7322dded26ce146095f183fe98a15de0108464a7d810e7f194eaaf085b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feistel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:40:02 compute-0 podman[209319]: 2025-11-22 03:40:02.914233302 +0000 UTC m=+0.235570071 container died 5e6a7322dded26ce146095f183fe98a15de0108464a7d810e7f194eaaf085b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:40:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-c839674e6a962e77431428cc222b7c617451c42ec8d565828d506d788c076bfd-merged.mount: Deactivated successfully.
Nov 22 03:40:02 compute-0 podman[209319]: 2025-11-22 03:40:02.957676073 +0000 UTC m=+0.279012852 container remove 5e6a7322dded26ce146095f183fe98a15de0108464a7d810e7f194eaaf085b83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:40:02 compute-0 systemd[1]: libpod-conmon-5e6a7322dded26ce146095f183fe98a15de0108464a7d810e7f194eaaf085b83.scope: Deactivated successfully.
Nov 22 03:40:03 compute-0 python3.9[209378]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782801.8506777-775-105639433443460/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:03 compute-0 sudo[209373]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:03 compute-0 podman[209414]: 2025-11-22 03:40:03.147005653 +0000 UTC m=+0.056379983 container create 4f89c34276add173390beba5f7a8e1f5dbbcfc9b3ab081738e50debbd52efc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chatterjee, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:40:03 compute-0 systemd[1]: Started libpod-conmon-4f89c34276add173390beba5f7a8e1f5dbbcfc9b3ab081738e50debbd52efc33.scope.
Nov 22 03:40:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d532c3cf0c74ca9c3013e5ef9f5ed48a3445d721ce38b9feb8598c4da3c0557/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:03 compute-0 podman[209414]: 2025-11-22 03:40:03.12735777 +0000 UTC m=+0.036732120 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d532c3cf0c74ca9c3013e5ef9f5ed48a3445d721ce38b9feb8598c4da3c0557/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d532c3cf0c74ca9c3013e5ef9f5ed48a3445d721ce38b9feb8598c4da3c0557/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d532c3cf0c74ca9c3013e5ef9f5ed48a3445d721ce38b9feb8598c4da3c0557/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:03 compute-0 podman[209414]: 2025-11-22 03:40:03.256719288 +0000 UTC m=+0.166093618 container init 4f89c34276add173390beba5f7a8e1f5dbbcfc9b3ab081738e50debbd52efc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chatterjee, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:40:03 compute-0 podman[209414]: 2025-11-22 03:40:03.297324661 +0000 UTC m=+0.206698941 container start 4f89c34276add173390beba5f7a8e1f5dbbcfc9b3ab081738e50debbd52efc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:40:03 compute-0 podman[209414]: 2025-11-22 03:40:03.303682026 +0000 UTC m=+0.213056556 container attach 4f89c34276add173390beba5f7a8e1f5dbbcfc9b3ab081738e50debbd52efc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chatterjee, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:40:03 compute-0 sudo[209573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pikiftjrgipwaglcdqcmfirobvjdliqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782803.1633272-775-179027692743429/AnsiballZ_stat.py'
Nov 22 03:40:03 compute-0 sudo[209573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:03 compute-0 ceph-mon[75011]: pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:03 compute-0 python3.9[209575]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:03 compute-0 sudo[209573]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]: {
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:     "0": [
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:         {
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "devices": [
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "/dev/loop3"
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             ],
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_name": "ceph_lv0",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_size": "21470642176",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "name": "ceph_lv0",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "tags": {
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.cluster_name": "ceph",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.crush_device_class": "",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.encrypted": "0",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.osd_id": "0",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.type": "block",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.vdo": "0"
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             },
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "type": "block",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "vg_name": "ceph_vg0"
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:         }
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:     ],
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:     "1": [
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:         {
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "devices": [
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "/dev/loop4"
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             ],
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_name": "ceph_lv1",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_size": "21470642176",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "name": "ceph_lv1",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "tags": {
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.cluster_name": "ceph",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.crush_device_class": "",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.encrypted": "0",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.osd_id": "1",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.type": "block",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.vdo": "0"
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             },
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "type": "block",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "vg_name": "ceph_vg1"
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:         }
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:     ],
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:     "2": [
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:         {
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "devices": [
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "/dev/loop5"
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             ],
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_name": "ceph_lv2",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_size": "21470642176",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "name": "ceph_lv2",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "tags": {
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.cluster_name": "ceph",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.crush_device_class": "",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.encrypted": "0",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.osd_id": "2",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.type": "block",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:                 "ceph.vdo": "0"
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             },
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "type": "block",
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:             "vg_name": "ceph_vg2"
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:         }
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]:     ]
Nov 22 03:40:04 compute-0 pensive_chatterjee[209467]: }
Nov 22 03:40:04 compute-0 sudo[209700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnfbggjbxyktezbkwuohwrajwogdasyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782803.1633272-775-179027692743429/AnsiballZ_copy.py'
Nov 22 03:40:04 compute-0 sudo[209700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:40:04 compute-0 systemd[1]: libpod-4f89c34276add173390beba5f7a8e1f5dbbcfc9b3ab081738e50debbd52efc33.scope: Deactivated successfully.
Nov 22 03:40:04 compute-0 podman[209414]: 2025-11-22 03:40:04.127571919 +0000 UTC m=+1.036946239 container died 4f89c34276add173390beba5f7a8e1f5dbbcfc9b3ab081738e50debbd52efc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:40:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d532c3cf0c74ca9c3013e5ef9f5ed48a3445d721ce38b9feb8598c4da3c0557-merged.mount: Deactivated successfully.
Nov 22 03:40:04 compute-0 podman[209414]: 2025-11-22 03:40:04.223865052 +0000 UTC m=+1.133239342 container remove 4f89c34276add173390beba5f7a8e1f5dbbcfc9b3ab081738e50debbd52efc33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_chatterjee, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:40:04 compute-0 systemd[1]: libpod-conmon-4f89c34276add173390beba5f7a8e1f5dbbcfc9b3ab081738e50debbd52efc33.scope: Deactivated successfully.
Nov 22 03:40:04 compute-0 sudo[209176]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:04 compute-0 sudo[209714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:40:04 compute-0 sudo[209714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:40:04 compute-0 sudo[209714]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:04 compute-0 python3.9[209702]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782803.1633272-775-179027692743429/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:04 compute-0 sudo[209700]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:04 compute-0 sudo[209739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:40:04 compute-0 sudo[209739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:40:04 compute-0 sudo[209739]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:04 compute-0 sudo[209765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:40:04 compute-0 sudo[209765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:40:04 compute-0 sudo[209765]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:04 compute-0 sudo[209813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:40:04 compute-0 sudo[209813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:40:04 compute-0 sudo[210003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjwkujsbnfdkgerjsfmsanlnwxtguhmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782804.5053754-775-43930505113602/AnsiballZ_stat.py'
Nov 22 03:40:04 compute-0 sudo[210003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:04 compute-0 podman[209999]: 2025-11-22 03:40:04.841159444 +0000 UTC m=+0.053456739 container create 5b0b337d98e038db6a4a69c0b3c92ef3ac3ee73a2b66466f9845ac91d88e1728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:40:04 compute-0 systemd[1]: Started libpod-conmon-5b0b337d98e038db6a4a69c0b3c92ef3ac3ee73a2b66466f9845ac91d88e1728.scope.
Nov 22 03:40:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:40:04 compute-0 podman[209999]: 2025-11-22 03:40:04.815544291 +0000 UTC m=+0.027841606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:40:04 compute-0 podman[209999]: 2025-11-22 03:40:04.919867365 +0000 UTC m=+0.132164610 container init 5b0b337d98e038db6a4a69c0b3c92ef3ac3ee73a2b66466f9845ac91d88e1728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:40:04 compute-0 podman[209999]: 2025-11-22 03:40:04.926027482 +0000 UTC m=+0.138324707 container start 5b0b337d98e038db6a4a69c0b3c92ef3ac3ee73a2b66466f9845ac91d88e1728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:40:04 compute-0 modest_gagarin[210020]: 167 167
Nov 22 03:40:04 compute-0 podman[209999]: 2025-11-22 03:40:04.929746827 +0000 UTC m=+0.142044072 container attach 5b0b337d98e038db6a4a69c0b3c92ef3ac3ee73a2b66466f9845ac91d88e1728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 03:40:04 compute-0 systemd[1]: libpod-5b0b337d98e038db6a4a69c0b3c92ef3ac3ee73a2b66466f9845ac91d88e1728.scope: Deactivated successfully.
Nov 22 03:40:04 compute-0 podman[209999]: 2025-11-22 03:40:04.931009196 +0000 UTC m=+0.143306411 container died 5b0b337d98e038db6a4a69c0b3c92ef3ac3ee73a2b66466f9845ac91d88e1728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:40:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-081bd787c375fba841820d1d0bd01098cd41a856de4a48d62ddfc8eb54ed0b04-merged.mount: Deactivated successfully.
Nov 22 03:40:04 compute-0 podman[209999]: 2025-11-22 03:40:04.96892904 +0000 UTC m=+0.181226265 container remove 5b0b337d98e038db6a4a69c0b3c92ef3ac3ee73a2b66466f9845ac91d88e1728 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_gagarin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:40:04 compute-0 systemd[1]: libpod-conmon-5b0b337d98e038db6a4a69c0b3c92ef3ac3ee73a2b66466f9845ac91d88e1728.scope: Deactivated successfully.
Nov 22 03:40:04 compute-0 python3.9[210014]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:05 compute-0 sudo[210003]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:05 compute-0 podman[210073]: 2025-11-22 03:40:05.140070794 +0000 UTC m=+0.044425907 container create ed5089567e461df3bfd2ab25351755b6c7f908d169ac1249816ba8bf8e1a247b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:40:05 compute-0 systemd[1]: Started libpod-conmon-ed5089567e461df3bfd2ab25351755b6c7f908d169ac1249816ba8bf8e1a247b.scope.
Nov 22 03:40:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2ef9b7a6b88a57fd9a011656c2fbe436ba1a9de2a4c610fffc4a5f4b3d8d7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2ef9b7a6b88a57fd9a011656c2fbe436ba1a9de2a4c610fffc4a5f4b3d8d7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2ef9b7a6b88a57fd9a011656c2fbe436ba1a9de2a4c610fffc4a5f4b3d8d7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f2ef9b7a6b88a57fd9a011656c2fbe436ba1a9de2a4c610fffc4a5f4b3d8d7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:40:05 compute-0 podman[210073]: 2025-11-22 03:40:05.119056213 +0000 UTC m=+0.023411326 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:40:05 compute-0 podman[210073]: 2025-11-22 03:40:05.221306765 +0000 UTC m=+0.125661878 container init ed5089567e461df3bfd2ab25351755b6c7f908d169ac1249816ba8bf8e1a247b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:40:05 compute-0 podman[210073]: 2025-11-22 03:40:05.229904024 +0000 UTC m=+0.134259097 container start ed5089567e461df3bfd2ab25351755b6c7f908d169ac1249816ba8bf8e1a247b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:40:05 compute-0 podman[210073]: 2025-11-22 03:40:05.233129396 +0000 UTC m=+0.137484479 container attach ed5089567e461df3bfd2ab25351755b6c7f908d169ac1249816ba8bf8e1a247b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:40:05 compute-0 sudo[210185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iossvnimiiylzrpwoigjmuclfbnkcrhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782804.5053754-775-43930505113602/AnsiballZ_copy.py'
Nov 22 03:40:05 compute-0 sudo[210185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:05 compute-0 python3.9[210187]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782804.5053754-775-43930505113602/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:05 compute-0 sudo[210185]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:05 compute-0 ceph-mon[75011]: pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:05 compute-0 sudo[210348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzubsaakxudgwnjbzpdxjuzephjysecy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782805.6604884-775-201660240700662/AnsiballZ_stat.py'
Nov 22 03:40:05 compute-0 sudo[210348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:06 compute-0 python3.9[210350]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:06 compute-0 sudo[210348]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:06 compute-0 practical_bartik[210130]: {
Nov 22 03:40:06 compute-0 practical_bartik[210130]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "osd_id": 1,
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "type": "bluestore"
Nov 22 03:40:06 compute-0 practical_bartik[210130]:     },
Nov 22 03:40:06 compute-0 practical_bartik[210130]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "osd_id": 0,
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "type": "bluestore"
Nov 22 03:40:06 compute-0 practical_bartik[210130]:     },
Nov 22 03:40:06 compute-0 practical_bartik[210130]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "osd_id": 2,
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:40:06 compute-0 practical_bartik[210130]:         "type": "bluestore"
Nov 22 03:40:06 compute-0 practical_bartik[210130]:     }
Nov 22 03:40:06 compute-0 practical_bartik[210130]: }
Nov 22 03:40:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:40:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:40:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:40:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:40:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:40:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:40:06 compute-0 systemd[1]: libpod-ed5089567e461df3bfd2ab25351755b6c7f908d169ac1249816ba8bf8e1a247b.scope: Deactivated successfully.
Nov 22 03:40:06 compute-0 podman[210073]: 2025-11-22 03:40:06.199312397 +0000 UTC m=+1.103667480 container died ed5089567e461df3bfd2ab25351755b6c7f908d169ac1249816ba8bf8e1a247b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:40:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f2ef9b7a6b88a57fd9a011656c2fbe436ba1a9de2a4c610fffc4a5f4b3d8d7a-merged.mount: Deactivated successfully.
Nov 22 03:40:06 compute-0 podman[210073]: 2025-11-22 03:40:06.268653313 +0000 UTC m=+1.173008386 container remove ed5089567e461df3bfd2ab25351755b6c7f908d169ac1249816ba8bf8e1a247b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:40:06 compute-0 systemd[1]: libpod-conmon-ed5089567e461df3bfd2ab25351755b6c7f908d169ac1249816ba8bf8e1a247b.scope: Deactivated successfully.
Nov 22 03:40:06 compute-0 sudo[209813]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:40:06 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:40:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:40:06 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:40:06 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev b62c96f8-39f1-4fbe-9473-397064554b91 does not exist
Nov 22 03:40:06 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 440e4df3-6ca6-47cc-9b38-79dc90013d1f does not exist
Nov 22 03:40:06 compute-0 sudo[210433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:40:06 compute-0 sudo[210433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:40:06 compute-0 sudo[210433]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:06 compute-0 sudo[210481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:40:06 compute-0 sudo[210481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:40:06 compute-0 sudo[210481]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:06 compute-0 sudo[210552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgdzdaalkusgfadxpoarduenyilvqqux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782805.6604884-775-201660240700662/AnsiballZ_copy.py'
Nov 22 03:40:06 compute-0 sudo[210552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:06 compute-0 python3.9[210554]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782805.6604884-775-201660240700662/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:06 compute-0 sudo[210552]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:07 compute-0 sudo[210704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrbmaprrgrdmtkzfgxoajdjgskwfxiko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782806.835275-775-165989685559041/AnsiballZ_stat.py'
Nov 22 03:40:07 compute-0 sudo[210704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:07 compute-0 ceph-mon[75011]: pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:07 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:40:07 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:40:07 compute-0 python3.9[210706]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:07 compute-0 sudo[210704]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:07 compute-0 sudo[210827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnspfmgsbsctribltthzqxjezhufuhvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782806.835275-775-165989685559041/AnsiballZ_copy.py'
Nov 22 03:40:07 compute-0 sudo[210827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:08 compute-0 python3.9[210829]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782806.835275-775-165989685559041/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:08 compute-0 sudo[210827]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:08 compute-0 sudo[210979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evvgwcancrkvxbaucqjvppeptbxkxnnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782808.190915-775-170424439913089/AnsiballZ_stat.py'
Nov 22 03:40:08 compute-0 sudo[210979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:08 compute-0 python3.9[210981]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:08 compute-0 sudo[210979]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:09 compute-0 sudo[211102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgvdvlwirmvdjhxmlrrctzgxlmvihyrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782808.190915-775-170424439913089/AnsiballZ_copy.py'
Nov 22 03:40:09 compute-0 sudo[211102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:40:09 compute-0 python3.9[211104]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782808.190915-775-170424439913089/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:09 compute-0 sudo[211102]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:09 compute-0 ceph-mon[75011]: pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:09 compute-0 sudo[211254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkxcdjynlrmspwbrusjyqxybwwnmdcxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782809.436878-775-145494561996882/AnsiballZ_stat.py'
Nov 22 03:40:09 compute-0 sudo[211254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:09 compute-0 python3.9[211256]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:09 compute-0 sudo[211254]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:10 compute-0 sudo[211377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onfmhyuoqsupmurngkfkprnujeoiwcpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782809.436878-775-145494561996882/AnsiballZ_copy.py'
Nov 22 03:40:10 compute-0 sudo[211377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:10 compute-0 python3.9[211379]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782809.436878-775-145494561996882/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:10 compute-0 sudo[211377]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:11 compute-0 sudo[211529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiwcxasrfgxmnufcjjvmdnuopbtwnumf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782810.726971-775-173075063444300/AnsiballZ_stat.py'
Nov 22 03:40:11 compute-0 sudo[211529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:11 compute-0 ceph-mon[75011]: pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:11 compute-0 python3.9[211531]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:11 compute-0 sudo[211529]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:11 compute-0 sudo[211652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzytrswrwtvanifbamdlzhmpykomauqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782810.726971-775-173075063444300/AnsiballZ_copy.py'
Nov 22 03:40:11 compute-0 sudo[211652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:12 compute-0 python3.9[211654]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782810.726971-775-173075063444300/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:12 compute-0 sudo[211652]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:12 compute-0 sudo[211804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmzvrrndltzqqlargyrzseaihkitadgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782812.2272952-775-244995525095390/AnsiballZ_stat.py'
Nov 22 03:40:12 compute-0 sudo[211804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:12 compute-0 python3.9[211806]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:12 compute-0 sudo[211804]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:13 compute-0 sudo[211927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdipnygvvvhdodavsoauwpwqrzgjamjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782812.2272952-775-244995525095390/AnsiballZ_copy.py'
Nov 22 03:40:13 compute-0 sudo[211927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:13 compute-0 ceph-mon[75011]: pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:13 compute-0 python3.9[211929]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782812.2272952-775-244995525095390/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:13 compute-0 sudo[211927]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:13 compute-0 sudo[212079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsjjpnsealhdfkqdtgeztdgzitppmjzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782813.56547-775-63083760149892/AnsiballZ_stat.py'
Nov 22 03:40:13 compute-0 sudo[212079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:14 compute-0 python3.9[212081]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:40:14 compute-0 sudo[212079]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:14 compute-0 sudo[212202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwgjckndwybselgfawqutdttgdcgpfpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782813.56547-775-63083760149892/AnsiballZ_copy.py'
Nov 22 03:40:14 compute-0 sudo[212202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:14 compute-0 python3.9[212204]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782813.56547-775-63083760149892/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:14 compute-0 sudo[212202]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:15 compute-0 ceph-mon[75011]: pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:15 compute-0 sudo[212354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdssdoffezyflqmxrecporsktycqsskv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782815.0135653-775-41778592100333/AnsiballZ_stat.py'
Nov 22 03:40:15 compute-0 sudo[212354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:15 compute-0 python3.9[212356]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:15 compute-0 sudo[212354]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:16 compute-0 sudo[212477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oukzuixnteqhdmtaizkxxqpbejvcynds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782815.0135653-775-41778592100333/AnsiballZ_copy.py'
Nov 22 03:40:16 compute-0 sudo[212477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:16 compute-0 python3.9[212479]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782815.0135653-775-41778592100333/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:16 compute-0 sudo[212477]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:16 compute-0 sudo[212629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iskrhyvtivslloprgkfxbtsfdtjwbvyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782816.6293807-775-212106132559458/AnsiballZ_stat.py'
Nov 22 03:40:16 compute-0 sudo[212629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:17 compute-0 python3.9[212631]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:17 compute-0 sudo[212629]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:17 compute-0 ceph-mon[75011]: pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:17 compute-0 sudo[212752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sumfoohgekfxnntfdecvumltnvbefbnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782816.6293807-775-212106132559458/AnsiballZ_copy.py'
Nov 22 03:40:17 compute-0 sudo[212752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:17 compute-0 python3.9[212754]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782816.6293807-775-212106132559458/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:17 compute-0 sudo[212752]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:18 compute-0 sudo[212904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrdegihrpspqhdbjvwvehavhbesbjpnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782817.8934674-775-146023824789978/AnsiballZ_stat.py'
Nov 22 03:40:18 compute-0 sudo[212904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:18 compute-0 python3.9[212906]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:18 compute-0 sudo[212904]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:18 compute-0 sudo[213027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcvcnirobwbeesondtocqwxfhsdlnyhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782817.8934674-775-146023824789978/AnsiballZ_copy.py'
Nov 22 03:40:18 compute-0 sudo[213027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:18 compute-0 python3.9[213029]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782817.8934674-775-146023824789978/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:18 compute-0 sudo[213027]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:40:19 compute-0 ceph-mon[75011]: pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:19 compute-0 python3.9[213179]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
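[Annotation] The ad-hoc command above greps the SELinux labels under /run/libvirt for any container_*_t type; with set -o pipefail the grep exit status becomes the task result (0 only if such a label is found). The same check, run by hand:

    set -o pipefail
    ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
    echo "rc=$?"   # rc=1 means nothing under /run/libvirt carries a container_*_t label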
Nov 22 03:40:20 compute-0 podman[213282]: 2025-11-22 03:40:20.431942557 +0000 UTC m=+0.111968000 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:40:20 compute-0 sudo[213359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkwfexzygzpjihyhqytliedrhskkrqmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782819.9138265-981-60059456051696/AnsiballZ_seboolean.py'
Nov 22 03:40:20 compute-0 sudo[213359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:20 compute-0 python3.9[213362]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Nov 22 03:40:20 compute-0 ceph-mon[75011]: pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:22 compute-0 sudo[213359]: pam_unix(sudo:session): session closed for user root
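[Annotation] The seboolean task above is the module form of setsebool; making the boolean persistent forces an SELinux policy reload, which is what the dbus-broker avc: op=load_policy line just below records. Equivalent CLI:

    setsebool -P os_enable_vtpm on   # -P persists the boolean across reboots
    getsebool os_enable_vtpm         # expect: os_enable_vtpm --> on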
Nov 22 03:40:22 compute-0 dbus-broker-launch[777]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 22 03:40:22 compute-0 podman[213404]: 2025-11-22 03:40:22.428366621 +0000 UTC m=+0.079542496 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 03:40:22 compute-0 sudo[213537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbluegpjbowjpxowffvkmfxnstbbeioa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782822.3296523-989-2896769030200/AnsiballZ_copy.py'
Nov 22 03:40:22 compute-0 sudo[213537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:22 compute-0 python3.9[213539]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:22 compute-0 sudo[213537]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:22 compute-0 ceph-mon[75011]: pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:40:22.992 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:40:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:40:22.993 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:40:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:40:22.993 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:40:23 compute-0 sudo[213689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlxgihikmiuoyhpawmbqulosasjbyiek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782823.097356-989-197278400759079/AnsiballZ_copy.py'
Nov 22 03:40:23 compute-0 sudo[213689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:23 compute-0 python3.9[213691]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:23 compute-0 sudo[213689]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:40:24 compute-0 sudo[213841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpozenugcwlzlmgamancotcxpwhqrgpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782823.804942-989-175544050666352/AnsiballZ_copy.py'
Nov 22 03:40:24 compute-0 sudo[213841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:24 compute-0 python3.9[213843]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:24 compute-0 sudo[213841]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:24 compute-0 sudo[213993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhycppaougfrkpppwqvvfikaeqgztutb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782824.505623-989-81039285764645/AnsiballZ_copy.py'
Nov 22 03:40:24 compute-0 sudo[213993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:24 compute-0 ceph-mon[75011]: pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:25 compute-0 python3.9[213995]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:25 compute-0 sudo[213993]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:25 compute-0 sudo[214145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkynwwhpyvwpxigyrszqyoquujfhmjea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782825.2192073-989-159051852452851/AnsiballZ_copy.py'
Nov 22 03:40:25 compute-0 sudo[214145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:25 compute-0 python3.9[214147]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:25 compute-0 sudo[214145]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:26 compute-0 sudo[214297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yazwqstfjoolzyqapecdomgvmmpclnly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782825.9355092-1025-85093314540141/AnsiballZ_copy.py'
Nov 22 03:40:26 compute-0 sudo[214297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:26 compute-0 python3.9[214299]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:26 compute-0 sudo[214297]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:26 compute-0 sudo[214449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brkdqclhnpcfxhafrgfiagbdwjhpasqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782826.5923264-1025-240237755844806/AnsiballZ_copy.py'
Nov 22 03:40:26 compute-0 sudo[214449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:26 compute-0 ceph-mon[75011]: pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:27 compute-0 python3.9[214451]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:27 compute-0 sudo[214449]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:27 compute-0 sudo[214601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtqnzathafqmdohbesvroybifrvmzgsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782827.2472162-1025-112473608463846/AnsiballZ_copy.py'
Nov 22 03:40:27 compute-0 sudo[214601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:27 compute-0 python3.9[214603]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:27 compute-0 sudo[214601]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:28 compute-0 sudo[214753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hriohdzsbmhurqeudhhblxdndeahcywy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782827.9178739-1025-233550817184327/AnsiballZ_copy.py'
Nov 22 03:40:28 compute-0 sudo[214753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:28 compute-0 python3.9[214755]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:28 compute-0 sudo[214753]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:28 compute-0 sudo[214905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tywwmlqgrbfleapfuooxcjhlbdytfenr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782828.5890064-1025-248100642811438/AnsiballZ_copy.py'
Nov 22 03:40:28 compute-0 sudo[214905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:28 compute-0 ceph-mon[75011]: pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:29 compute-0 python3.9[214907]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:29 compute-0 sudo[214905]: pam_unix(sudo:session): session closed for user root
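[Annotation] The ten copy tasks above fan a single operator-issued keypair (/var/lib/openstack/certs/libvirt/default/{tls.crt,tls.key,ca.crt}) out to the default libvirt and QEMU TLS paths. Recapped as shell with the same modes and ownership the tasks set (note clientkey.pem lands at 0644, looser than the 0600 serverkey.pem):

    cd /var/lib/openstack/certs/libvirt/default
    install -o root -g root -m 0644 tls.crt /etc/pki/libvirt/servercert.pem
    install -o root -g root -m 0600 tls.key /etc/pki/libvirt/private/serverkey.pem
    install -o root -g root -m 0644 tls.crt /etc/pki/libvirt/clientcert.pem
    install -o root -g root -m 0644 tls.key /etc/pki/libvirt/private/clientkey.pem
    install -o root -g root -m 0644 ca.crt  /etc/pki/CA/cacert.pem
    install -o root -g qemu -m 0640 tls.crt /etc/pki/qemu/server-cert.pem
    install -o root -g qemu -m 0640 tls.key /etc/pki/qemu/server-key.pem
    install -o root -g qemu -m 0640 tls.crt /etc/pki/qemu/client-cert.pem
    install -o root -g qemu -m 0640 tls.key /etc/pki/qemu/client-key.pem
    install -o root -g qemu -m 0640 ca.crt  /etc/pki/qemu/ca-cert.pem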
Nov 22 03:40:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:40:29 compute-0 sudo[215057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyuccsedmfkghxbugbxtwfqxuiyqdcdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782829.302635-1061-262819609546404/AnsiballZ_systemd.py'
Nov 22 03:40:29 compute-0 sudo[215057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:29 compute-0 python3.9[215059]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:40:29 compute-0 systemd[1]: Reloading.
Nov 22 03:40:29 compute-0 systemd-rc-local-generator[215081]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:40:29 compute-0 systemd-sysv-generator[215084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:40:30 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Nov 22 03:40:30 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Nov 22 03:40:30 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 22 03:40:30 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 22 03:40:30 compute-0 systemd[1]: Starting libvirt logging daemon...
Nov 22 03:40:30 compute-0 systemd[1]: Started libvirt logging daemon.
Nov 22 03:40:30 compute-0 sudo[215057]: pam_unix(sudo:session): session closed for user root
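[Annotation] The systemd task pattern above (daemon_reload=True, state=restarted) repeats below for virtnodedevd, virtproxyd, virtqemud and virtsecretd; each reload re-runs the unit generators, hence the recurring rc.local and SysV 'network' warnings. Equivalent CLI for the virtlogd case:

    systemctl daemon-reload              # pick up the new socket drop-ins
    systemctl restart virtlogd.service   # its .socket units start first, as logged above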
Nov 22 03:40:30 compute-0 sudo[215249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yurnywvaiymoapfqdstilpmdcsxxrobh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782830.5980334-1061-93308234426653/AnsiballZ_systemd.py'
Nov 22 03:40:30 compute-0 ceph-mon[75011]: pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:30 compute-0 sudo[215249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:31 compute-0 python3.9[215251]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:40:31 compute-0 systemd[1]: Reloading.
Nov 22 03:40:31 compute-0 systemd-rc-local-generator[215281]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:40:31 compute-0 systemd-sysv-generator[215285]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:40:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:31 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 22 03:40:31 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 22 03:40:31 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 22 03:40:31 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 22 03:40:31 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 22 03:40:31 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 22 03:40:31 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 22 03:40:31 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 22 03:40:31 compute-0 sudo[215249]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:32 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 22 03:40:32 compute-0 sudo[215466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrsfwfpjlbllfjhrrshydfwcvsdwwttm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782832.0891073-1061-179635644749759/AnsiballZ_systemd.py'
Nov 22 03:40:32 compute-0 sudo[215466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:32 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 22 03:40:32 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 22 03:40:32 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 22 03:40:32 compute-0 python3.9[215468]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:40:32 compute-0 systemd[1]: Reloading.
Nov 22 03:40:32 compute-0 systemd-rc-local-generator[215502]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:40:32 compute-0 systemd-sysv-generator[215505]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:40:32 compute-0 ceph-mon[75011]: pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:33 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 22 03:40:33 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 22 03:40:33 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 22 03:40:33 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 22 03:40:33 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 22 03:40:33 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 22 03:40:33 compute-0 sudo[215466]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:33 compute-0 setroubleshoot[215363]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l ebc5519d-5e09-492c-8145-103ad8743012
Nov 22 03:40:33 compute-0 setroubleshoot[215363]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Nov 22 03:40:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:33 compute-0 sudo[215686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ospukwuniwnvivydvhxqyjfvrhwuwvni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782833.4203756-1061-259081275143822/AnsiballZ_systemd.py'
Nov 22 03:40:33 compute-0 sudo[215686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:34 compute-0 python3.9[215688]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:40:34 compute-0 systemd[1]: Reloading.
Nov 22 03:40:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:40:34 compute-0 systemd-rc-local-generator[215712]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:40:34 compute-0 systemd-sysv-generator[215716]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:40:34 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Nov 22 03:40:34 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 22 03:40:34 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 22 03:40:34 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 22 03:40:34 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 22 03:40:34 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 22 03:40:34 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 22 03:40:34 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 22 03:40:34 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 22 03:40:34 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 22 03:40:34 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 22 03:40:34 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 22 03:40:34 compute-0 sudo[215686]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:34 compute-0 ceph-mon[75011]: pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:35 compute-0 sudo[215901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asbsbbqngtposoryfiupydvuwrubtfvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782834.8792179-1061-193193513238590/AnsiballZ_systemd.py'
Nov 22 03:40:35 compute-0 sudo[215901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:35 compute-0 python3.9[215903]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:40:35 compute-0 systemd[1]: Reloading.
Nov 22 03:40:35 compute-0 systemd-sysv-generator[215937]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:40:35 compute-0 systemd-rc-local-generator[215933]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:40:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:35 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Nov 22 03:40:35 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Nov 22 03:40:35 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 22 03:40:35 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 22 03:40:35 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 22 03:40:35 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 22 03:40:35 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 22 03:40:35 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 22 03:40:35 compute-0 sudo[215901]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:40:36
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.mgr', '.rgw.root', 'images', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'volumes', 'default.rgw.meta']
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:40:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
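[Annotation] Reading the balancer lines above: in upmap mode the mgr evaluated the listed pools against a misplaced-object ceiling of 0.05 and prepared 0 of a possible 10 upmap changes, i.e. the 305 active+clean PGs are already balanced. To inspect the same state by hand (the config option name below is assumed to be the default mgr knob behind the logged ceiling):

    ceph balancer status                             # mode, plans, active flag
    ceph config get mgr target_max_misplaced_ratio   # the 0.050000 ceiling above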
Nov 22 03:40:36 compute-0 sudo[216115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjznvvfmefdxyniibbmclodmtdmhbwmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782836.2923083-1098-53039485065831/AnsiballZ_file.py'
Nov 22 03:40:36 compute-0 sudo[216115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:36 compute-0 python3.9[216117]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:36 compute-0 sudo[216115]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:36 compute-0 ceph-mon[75011]: pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:37 compute-0 sudo[216267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znvehiguextanpjibpkxlycnbdaeondw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782837.0853767-1106-87862452431910/AnsiballZ_find.py'
Nov 22 03:40:37 compute-0 sudo[216267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:37 compute-0 python3.9[216269]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 03:40:37 compute-0 sudo[216267]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:38 compute-0 sudo[216419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbgieizehvzdlicgnkfchjqaqdiahkla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782837.8206139-1114-77540882825667/AnsiballZ_command.py'
Nov 22 03:40:38 compute-0 sudo[216419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:38 compute-0 python3.9[216421]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:40:38 compute-0 sudo[216419]: pam_unix(sudo:session): session closed for user root
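[Annotation] The pipeline above emits the cluster name ('ceph') and then extracts the fsid from the deployed ceph.conf: awk -F '=' splits on '=' and prints the value side of the fsid line, and xargs trims the surrounding whitespace. Presumably it yields the same fsid that reappears below as the libvirt secret UUID:

    # ceph.conf is expected to contain a line of the form:  fsid = <uuid>
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs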
Nov 22 03:40:38 compute-0 ceph-mon[75011]: pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:39 compute-0 python3.9[216575]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 03:40:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:40:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:39 compute-0 python3.9[216725]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:40 compute-0 python3.9[216846]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782839.3849714-1133-124666339392632/.source.xml follow=False _original_basename=secret.xml.j2 checksum=e0a3dd0670aaaa4fde5851a7aef7e53085cb5956 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:40 compute-0 sudo[216996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzztkkcqyrnvaeursvzlwixmluqnmejk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782840.6743906-1148-65929409209246/AnsiballZ_command.py'
Nov 22 03:40:40 compute-0 sudo[216996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:41 compute-0 ceph-mon[75011]: pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:41 compute-0 python3.9[216998]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 7adcc38b-6484-5de6-b879-33a0309153df
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:40:41 compute-0 polkitd[43398]: Registered Authentication Agent for unix-process:217000:309379 (system bus name :1.2842 [pkttyagent --process 217000 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 22 03:40:41 compute-0 polkitd[43398]: Unregistered Authentication Agent for unix-process:217000:309379 (system bus name :1.2842, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 22 03:40:41 compute-0 polkitd[43398]: Registered Authentication Agent for unix-process:216999:309378 (system bus name :1.2843 [pkttyagent --process 216999 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 22 03:40:41 compute-0 polkitd[43398]: Unregistered Authentication Agent for unix-process:216999:309378 (system bus name :1.2843, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 22 03:40:41 compute-0 sudo[216996]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:42 compute-0 python3.9[217160]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:42 compute-0 sudo[217310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwrnhqbnygyspsxnnlhvxelreingobfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782842.2581103-1164-6047568489831/AnsiballZ_command.py'
Nov 22 03:40:42 compute-0 sudo[217310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:42 compute-0 sudo[217310]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:43 compute-0 ceph-mon[75011]: pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:43 compute-0 sudo[217463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjpsztooctqbytapulbzmhrssfmfsnrq ; FSID=7adcc38b-6484-5de6-b879-33a0309153df KEY=AQClLCFpAAAAABAAr5ufdHAA+2+5xQHYTrvFiA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782843.0694344-1172-46265248893116/AnsiballZ_command.py'
Nov 22 03:40:43 compute-0 sudo[217463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:43 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 22 03:40:43 compute-0 polkitd[43398]: Registered Authentication Agent for unix-process:217466:309621 (system bus name :1.2846 [pkttyagent --process 217466 --notify-fd 5 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Nov 22 03:40:43 compute-0 polkitd[43398]: Unregistered Authentication Agent for unix-process:217466:309621 (system bus name :1.2846, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Nov 22 03:40:43 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 22 03:40:43 compute-0 sudo[217463]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:40:44 compute-0 sudo[217621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqyjklmrljqdaedodqnxqrjimbtrfmew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782843.9160075-1180-102421053592846/AnsiballZ_copy.py'
Nov 22 03:40:44 compute-0 sudo[217621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:44 compute-0 python3.9[217623]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:44 compute-0 sudo[217621]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:45 compute-0 sudo[217773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbszvrmgyguxiqmvauyhogjpnadxqicw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782844.7018209-1188-139399760172136/AnsiballZ_stat.py'
Nov 22 03:40:45 compute-0 sudo[217773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:45 compute-0 ceph-mon[75011]: pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:45 compute-0 python3.9[217775]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:45 compute-0 sudo[217773]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:45 compute-0 sudo[217896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsrsgtkawqlhyqkrdnakesjotdcjbfun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782844.7018209-1188-139399760172136/AnsiballZ_copy.py'
Nov 22 03:40:45 compute-0 sudo[217896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:45 compute-0 python3.9[217898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782844.7018209-1188-139399760172136/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:45 compute-0 sudo[217896]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:40:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:40:46 compute-0 sudo[218048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtuirizmgmtwcklzbevutuilrqmbshqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782846.1671596-1204-174152998657561/AnsiballZ_file.py'
Nov 22 03:40:46 compute-0 sudo[218048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:46 compute-0 python3.9[218050]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:46 compute-0 sudo[218048]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:47 compute-0 ceph-mon[75011]: pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:47 compute-0 sudo[218200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiekujvpotyvkmczvuiyphprkpworqvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782846.9482722-1212-186467317668718/AnsiballZ_stat.py'
Nov 22 03:40:47 compute-0 sudo[218200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:47 compute-0 python3.9[218202]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:47 compute-0 sudo[218200]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:47 compute-0 sudo[218278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqhpkrxtnmpofnhygvjawxrkbkttaore ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782846.9482722-1212-186467317668718/AnsiballZ_file.py'
Nov 22 03:40:47 compute-0 sudo[218278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:47 compute-0 python3.9[218280]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:47 compute-0 sudo[218278]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:48 compute-0 sudo[218430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chywekbkcdoqdklccetvxipzhtwshlck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782848.1788385-1224-518116545145/AnsiballZ_stat.py'
Nov 22 03:40:48 compute-0 sudo[218430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:48 compute-0 python3.9[218432]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:48 compute-0 sudo[218430]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:48 compute-0 sudo[218508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzjrxrasssrlwgsuwjxtxwwfmhbpmywu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782848.1788385-1224-518116545145/AnsiballZ_file.py'
Nov 22 03:40:48 compute-0 sudo[218508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:49 compute-0 python3.9[218510]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.r7vgm56h recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:49 compute-0 sudo[218508]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:40:49 compute-0 ceph-mon[75011]: pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:49 compute-0 sudo[218660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akgnashdnwlzpezhiybdrzysvkcwslmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782849.3300116-1236-214336653461636/AnsiballZ_stat.py'
Nov 22 03:40:49 compute-0 sudo[218660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:49 compute-0 python3.9[218662]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:49 compute-0 sudo[218660]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:50 compute-0 sudo[218738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dksveukbrazhoiimksixhtgcglmplvmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782849.3300116-1236-214336653461636/AnsiballZ_file.py'
Nov 22 03:40:50 compute-0 sudo[218738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:50 compute-0 python3.9[218740]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:50 compute-0 sudo[218738]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:51 compute-0 sudo[218900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygmjcurukifgjupabgmcdkwkttchswis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782850.7139168-1249-62805986278399/AnsiballZ_command.py'
Nov 22 03:40:51 compute-0 sudo[218900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:51 compute-0 ceph-mon[75011]: pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:51 compute-0 podman[218864]: 2025-11-22 03:40:51.160714476 +0000 UTC m=+0.122568911 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 03:40:51 compute-0 python3.9[218910]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:40:51 compute-0 sudo[218900]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:52 compute-0 sudo[219068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szzbzwvmrsioydtqfyvsqcfqmnmarrxw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763782851.581467-1257-22141187660396/AnsiballZ_edpm_nftables_from_files.py'
Nov 22 03:40:52 compute-0 sudo[219068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:52 compute-0 python3[219070]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 22 03:40:52 compute-0 sudo[219068]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:53 compute-0 sudo[219230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajaweqqohrowoywixjkuyhvgxjqeebcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782852.7111228-1265-142119171292270/AnsiballZ_stat.py'
Nov 22 03:40:53 compute-0 sudo[219230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:53 compute-0 podman[219194]: 2025-11-22 03:40:53.119975813 +0000 UTC m=+0.087525630 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:40:53 compute-0 python3.9[219235]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:53 compute-0 sudo[219230]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:53 compute-0 ceph-mon[75011]: pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:53 compute-0 sudo[219317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwhkgucldcnhlgdclmcxhhxudufhdpai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782852.7111228-1265-142119171292270/AnsiballZ_file.py'
Nov 22 03:40:53 compute-0 sudo[219317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:53 compute-0 python3.9[219319]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:53 compute-0 sudo[219317]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:40:54 compute-0 sudo[219469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbmawypnbzfmnyumfijzvktfxjgeoefl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782854.1237185-1277-66033477542261/AnsiballZ_stat.py'
Nov 22 03:40:54 compute-0 sudo[219469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:54 compute-0 python3.9[219471]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:54 compute-0 sudo[219469]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:54 compute-0 sudo[219547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snaecgrsscstabmpprwwfljzbnponrui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782854.1237185-1277-66033477542261/AnsiballZ_file.py'
Nov 22 03:40:54 compute-0 sudo[219547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:55 compute-0 python3.9[219549]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:55 compute-0 sudo[219547]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:55 compute-0 ceph-mon[75011]: pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:55 compute-0 sudo[219699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmijejkdsgfwtzeeywrkempcyfqcvzsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782855.4154103-1289-267416809476687/AnsiballZ_stat.py'
Nov 22 03:40:55 compute-0 sudo[219699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:56 compute-0 python3.9[219701]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:56 compute-0 sudo[219699]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:56 compute-0 sudo[219777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dokyrbenhkgknjbavckznarlcypiqtbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782855.4154103-1289-267416809476687/AnsiballZ_file.py'
Nov 22 03:40:56 compute-0 sudo[219777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:56 compute-0 python3.9[219779]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:56 compute-0 sudo[219777]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:57 compute-0 sudo[219929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsupgnbjuoergrwkinyxjixmvriwhybe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782856.8128166-1301-183177088376461/AnsiballZ_stat.py'
Nov 22 03:40:57 compute-0 sudo[219929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:57 compute-0 python3.9[219931]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:57 compute-0 sudo[219929]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:57 compute-0 ceph-mon[75011]: pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:57 compute-0 sudo[220007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjxcxhauqnvdyvqtewezequhqjjvotoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782856.8128166-1301-183177088376461/AnsiballZ_file.py'
Nov 22 03:40:57 compute-0 sudo[220007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:57 compute-0 python3.9[220009]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:57 compute-0 sudo[220007]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:58 compute-0 sudo[220159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpemjsrlbfmnwwydzelnqdlauqajzbol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782858.1853771-1313-82228938492279/AnsiballZ_stat.py'
Nov 22 03:40:58 compute-0 sudo[220159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:58 compute-0 python3.9[220161]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:40:58 compute-0 sudo[220159]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:40:59 compute-0 sudo[220284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzqdmlzxgsumcqvhowvsiaavgjgiuuut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782858.1853771-1313-82228938492279/AnsiballZ_copy.py'
Nov 22 03:40:59 compute-0 sudo[220284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:40:59 compute-0 python3.9[220286]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1763782858.1853771-1313-82228938492279/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:40:59 compute-0 sudo[220284]: pam_unix(sudo:session): session closed for user root
Nov 22 03:40:59 compute-0 ceph-mon[75011]: pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:40:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:00 compute-0 sudo[220436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtimwalpzygkwtvktqytbxywgfrjksqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782859.7094285-1328-32630651577602/AnsiballZ_file.py'
Nov 22 03:41:00 compute-0 sudo[220436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:00 compute-0 python3.9[220438]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:00 compute-0 sudo[220436]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:00 compute-0 sudo[220588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfbdykouqintgrqryyryrpwxsdzrsqeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782860.5294285-1336-7362342287288/AnsiballZ_command.py'
Nov 22 03:41:00 compute-0 sudo[220588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:01 compute-0 python3.9[220590]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:41:01 compute-0 sudo[220588]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:01 compute-0 ceph-mon[75011]: pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:01 compute-0 sudo[220743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wajnxwxxdtziarkgsowbhpusjxavofac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782861.310253-1344-256705951090038/AnsiballZ_blockinfile.py'
Nov 22 03:41:01 compute-0 sudo[220743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:01 compute-0 python3.9[220745]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:02 compute-0 sudo[220743]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:02 compute-0 sudo[220895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibytroitxsqnoassdyobfpdkghvuoeny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782862.3295236-1353-160024178153621/AnsiballZ_command.py'
Nov 22 03:41:02 compute-0 sudo[220895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:02 compute-0 python3.9[220897]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:41:02 compute-0 sudo[220895]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:03 compute-0 sudo[221048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwjqnlcgququbxsoawfzlaoduqpajvbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782863.18153-1361-181311936468568/AnsiballZ_stat.py'
Nov 22 03:41:03 compute-0 sudo[221048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:03 compute-0 ceph-mon[75011]: pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:03 compute-0 python3.9[221050]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:41:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:03 compute-0 sudo[221048]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:41:04 compute-0 sudo[221202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlqqgiczcopgcmympojklknyciwrzrlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782863.994156-1369-8869388675613/AnsiballZ_command.py'
Nov 22 03:41:04 compute-0 sudo[221202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:04 compute-0 python3.9[221204]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:41:04 compute-0 sudo[221202]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:05 compute-0 sudo[221357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfdwmlqcsxliipqfxaevxhjlsartlymr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782864.8247843-1377-199452797861495/AnsiballZ_file.py'
Nov 22 03:41:05 compute-0 sudo[221357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:05 compute-0 python3.9[221359]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:05 compute-0 sudo[221357]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:05 compute-0 ceph-mon[75011]: pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:05 compute-0 sudo[221509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwnnppzifkdlovpteivclspdjrvyxdim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782865.651253-1385-142233326087077/AnsiballZ_stat.py'
Nov 22 03:41:06 compute-0 sudo[221509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:41:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:41:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:41:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:41:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:41:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:41:06 compute-0 python3.9[221511]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:06 compute-0 sudo[221509]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:06 compute-0 sudo[221564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:41:06 compute-0 sudo[221564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:06 compute-0 sudo[221564]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:06 compute-0 sudo[221607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:41:06 compute-0 sudo[221607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:06 compute-0 sudo[221607]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:06 compute-0 ceph-mon[75011]: pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:06 compute-0 sudo[221656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:41:06 compute-0 sudo[221656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:06 compute-0 sudo[221656]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:06 compute-0 sudo[221705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqwgnyaoeeytnomylyqnmgpfjenvewyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782865.651253-1385-142233326087077/AnsiballZ_copy.py'
Nov 22 03:41:06 compute-0 sudo[221705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:06 compute-0 sudo[221709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:41:06 compute-0 sudo[221709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:06 compute-0 python3.9[221713]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782865.651253-1385-142233326087077/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:06 compute-0 sudo[221705]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:07 compute-0 sudo[221709]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:41:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:41:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:41:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:41:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:41:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:41:07 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 0d78351d-44b0-4bd4-be1a-f495594d4294 does not exist
Nov 22 03:41:07 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 995b073d-7f9c-4286-92f5-bf3503f8c9c8 does not exist
Nov 22 03:41:07 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 1d109fa6-2654-4dbc-b540-ea85d56fd8cb does not exist
Nov 22 03:41:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:41:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:41:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:41:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:41:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:41:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:41:07 compute-0 sudo[221865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:41:07 compute-0 sudo[221865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:07 compute-0 sudo[221865]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:07 compute-0 sudo[221908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:41:07 compute-0 sudo[221908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:07 compute-0 sudo[221908]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:07 compute-0 sudo[221982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-texayjdmtycuexyuaeiigajyrclvydok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782867.1221585-1400-191379921605191/AnsiballZ_stat.py'
Nov 22 03:41:07 compute-0 sudo[221982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:07 compute-0 sudo[221950]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:41:07 compute-0 sudo[221950]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:07 compute-0 sudo[221950]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:07 compute-0 sudo[221993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:41:07 compute-0 sudo[221993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:07 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:41:07 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:41:07 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:41:07 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:41:07 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:41:07 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:41:07 compute-0 python3.9[221990]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:07 compute-0 sudo[221982]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:07 compute-0 podman[222105]: 2025-11-22 03:41:07.91337533 +0000 UTC m=+0.045745507 container create aaeb71aae473927a91609adf9500c39da790b9eae559af447b8342b925780655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:41:07 compute-0 systemd[1]: Started libpod-conmon-aaeb71aae473927a91609adf9500c39da790b9eae559af447b8342b925780655.scope.
Nov 22 03:41:07 compute-0 podman[222105]: 2025-11-22 03:41:07.895415833 +0000 UTC m=+0.027786060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:41:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:41:08 compute-0 podman[222105]: 2025-11-22 03:41:08.02561276 +0000 UTC m=+0.157982987 container init aaeb71aae473927a91609adf9500c39da790b9eae559af447b8342b925780655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:41:08 compute-0 podman[222105]: 2025-11-22 03:41:08.035730329 +0000 UTC m=+0.168100516 container start aaeb71aae473927a91609adf9500c39da790b9eae559af447b8342b925780655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:41:08 compute-0 vibrant_chatelet[222156]: 167 167
Nov 22 03:41:08 compute-0 systemd[1]: libpod-aaeb71aae473927a91609adf9500c39da790b9eae559af447b8342b925780655.scope: Deactivated successfully.
Nov 22 03:41:08 compute-0 podman[222105]: 2025-11-22 03:41:08.044502331 +0000 UTC m=+0.176872618 container attach aaeb71aae473927a91609adf9500c39da790b9eae559af447b8342b925780655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:41:08 compute-0 podman[222105]: 2025-11-22 03:41:08.045666461 +0000 UTC m=+0.178036678 container died aaeb71aae473927a91609adf9500c39da790b9eae559af447b8342b925780655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:41:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c58dbf963ec74f44383f1e294807e44b66b8235b6a77f2b9a31f59c7ccac6257-merged.mount: Deactivated successfully.
Nov 22 03:41:08 compute-0 sudo[222206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asdoodwpebjshxzvzovxgqerrhqkxyuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782867.1221585-1400-191379921605191/AnsiballZ_copy.py'
Nov 22 03:41:08 compute-0 sudo[222206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:08 compute-0 podman[222105]: 2025-11-22 03:41:08.091410096 +0000 UTC m=+0.223780293 container remove aaeb71aae473927a91609adf9500c39da790b9eae559af447b8342b925780655 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_chatelet, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:41:08 compute-0 systemd[1]: libpod-conmon-aaeb71aae473927a91609adf9500c39da790b9eae559af447b8342b925780655.scope: Deactivated successfully.
Nov 22 03:41:08 compute-0 podman[222221]: 2025-11-22 03:41:08.253293063 +0000 UTC m=+0.048357723 container create 97fe1e3f81f2295f3fe4e6791089b53c454e2f9d9d1483d18da62f74ed0831c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 03:41:08 compute-0 python3.9[222213]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782867.1221585-1400-191379921605191/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:08 compute-0 systemd[1]: Started libpod-conmon-97fe1e3f81f2295f3fe4e6791089b53c454e2f9d9d1483d18da62f74ed0831c2.scope.
Nov 22 03:41:08 compute-0 podman[222221]: 2025-11-22 03:41:08.231671172 +0000 UTC m=+0.026735852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:41:08 compute-0 sudo[222206]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422a9aa3dcbc70b4d7c3886d63a1992fa75d906ea921ec56e5cee97e13f73d34/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422a9aa3dcbc70b4d7c3886d63a1992fa75d906ea921ec56e5cee97e13f73d34/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422a9aa3dcbc70b4d7c3886d63a1992fa75d906ea921ec56e5cee97e13f73d34/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422a9aa3dcbc70b4d7c3886d63a1992fa75d906ea921ec56e5cee97e13f73d34/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422a9aa3dcbc70b4d7c3886d63a1992fa75d906ea921ec56e5cee97e13f73d34/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
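The "(0x7fffffff)" in the kernel messages above is the 32-bit signed time_t ceiling: an XFS filesystem created without the bigtime feature can only store inode timestamps up to that epoch second. A quick check of what that limit means in calendar terms, using only the Python standard library:

    # 0x7fffffff is the largest 32-bit signed epoch second; XFS without
    # the "bigtime" feature cannot represent inode timestamps beyond it.
    from datetime import datetime, timezone

    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00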
Nov 22 03:41:08 compute-0 podman[222221]: 2025-11-22 03:41:08.366895102 +0000 UTC m=+0.161959772 container init 97fe1e3f81f2295f3fe4e6791089b53c454e2f9d9d1483d18da62f74ed0831c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_nobel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:41:08 compute-0 podman[222221]: 2025-11-22 03:41:08.381057379 +0000 UTC m=+0.176122029 container start 97fe1e3f81f2295f3fe4e6791089b53c454e2f9d9d1483d18da62f74ed0831c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 03:41:08 compute-0 podman[222221]: 2025-11-22 03:41:08.385037706 +0000 UTC m=+0.180102356 container attach 97fe1e3f81f2295f3fe4e6791089b53c454e2f9d9d1483d18da62f74ed0831c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_nobel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:41:08 compute-0 ceph-mon[75011]: pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:08 compute-0 sudo[222392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oemglpmhqgrmosmvklwtivabmdpdseuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782868.5463414-1415-97426250687450/AnsiballZ_stat.py'
Nov 22 03:41:08 compute-0 sudo[222392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:09 compute-0 python3.9[222394]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:09 compute-0 sudo[222392]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:41:09 compute-0 strange_nobel[222238]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:41:09 compute-0 strange_nobel[222238]: --> relative data size: 1.0
Nov 22 03:41:09 compute-0 strange_nobel[222238]: --> All data devices are unavailable
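The strange_nobel container above is ceph-volume evaluating the drive group: "All data devices are unavailable" here typically means the three LVM devices already carry OSD data (which the lvm list output further down confirms), so no new OSDs are created. A minimal sketch of how one might confirm that from the host; the `inventory` subcommand, its JSON field names, and the direct invocation are assumptions for illustration, not taken from this log:

    import json
    import subprocess

    # Hypothetical availability check mirroring the ceph-volume calls in
    # this log; "inventory" and its output fields are assumed, not shown above.
    out = subprocess.run(
        ["ceph-volume", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))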
Nov 22 03:41:09 compute-0 systemd[1]: libpod-97fe1e3f81f2295f3fe4e6791089b53c454e2f9d9d1483d18da62f74ed0831c2.scope: Deactivated successfully.
Nov 22 03:41:09 compute-0 podman[222221]: 2025-11-22 03:41:09.421819798 +0000 UTC m=+1.216884518 container died 97fe1e3f81f2295f3fe4e6791089b53c454e2f9d9d1483d18da62f74ed0831c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_nobel, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:41:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-422a9aa3dcbc70b4d7c3886d63a1992fa75d906ea921ec56e5cee97e13f73d34-merged.mount: Deactivated successfully.
Nov 22 03:41:09 compute-0 podman[222221]: 2025-11-22 03:41:09.489639434 +0000 UTC m=+1.284704084 container remove 97fe1e3f81f2295f3fe4e6791089b53c454e2f9d9d1483d18da62f74ed0831c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_nobel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:41:09 compute-0 systemd[1]: libpod-conmon-97fe1e3f81f2295f3fe4e6791089b53c454e2f9d9d1483d18da62f74ed0831c2.scope: Deactivated successfully.
Nov 22 03:41:09 compute-0 sudo[222551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmtlabwwcscruckgixoeuzjkmijahhfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782868.5463414-1415-97426250687450/AnsiballZ_copy.py'
Nov 22 03:41:09 compute-0 sudo[222551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:09 compute-0 sudo[221993]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:09 compute-0 sudo[222554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:41:09 compute-0 sudo[222554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:09 compute-0 sudo[222554]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:09 compute-0 sudo[222579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:41:09 compute-0 sudo[222579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:09 compute-0 sudo[222579]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:09 compute-0 python3.9[222553]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782868.5463414-1415-97426250687450/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
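The two ansible.legacy.copy tasks above deploy unit files and log a SHA-1 checksum because the module compares digests before writing, which is what makes repeated runs idempotent. A rough stand-alone equivalent; the function name and commented-out paths are illustrative only:

    import hashlib
    import shutil
    from pathlib import Path

    def copy_if_changed(src: str, dest: str, mode: int = 0o644) -> bool:
        """Copy src over dest only when their SHA-1 digests differ."""
        digest = lambda p: hashlib.sha1(Path(p).read_bytes()).hexdigest()
        if Path(dest).exists() and digest(src) == digest(dest):
            return False  # already up to date, nothing written
        shutil.copyfile(src, dest)
        Path(dest).chmod(mode)
        return True

    # Illustrative call matching the task above:
    # copy_if_changed(".source.target",
    #                 "/etc/systemd/system/virt-guest-shutdown.target")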
Nov 22 03:41:09 compute-0 sudo[222604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:41:09 compute-0 sudo[222604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:09 compute-0 sudo[222604]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:09 compute-0 sudo[222551]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:09 compute-0 sudo[222629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:41:09 compute-0 sudo[222629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:10 compute-0 podman[222798]: 2025-11-22 03:41:10.265805687 +0000 UTC m=+0.061930394 container create 25befe1d81006ef6bd0072fb37af275e79adf300252e8f5e84c332aa5f80c5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 03:41:10 compute-0 systemd[1]: Started libpod-conmon-25befe1d81006ef6bd0072fb37af275e79adf300252e8f5e84c332aa5f80c5ec.scope.
Nov 22 03:41:10 compute-0 podman[222798]: 2025-11-22 03:41:10.241173322 +0000 UTC m=+0.037298049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:41:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:41:10 compute-0 sudo[222859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlrrjdfpwaljwhbjziwaesohbsbgcofi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782869.971914-1430-62852339858718/AnsiballZ_systemd.py'
Nov 22 03:41:10 compute-0 sudo[222859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:10 compute-0 podman[222798]: 2025-11-22 03:41:10.384648422 +0000 UTC m=+0.180773199 container init 25befe1d81006ef6bd0072fb37af275e79adf300252e8f5e84c332aa5f80c5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:41:10 compute-0 podman[222798]: 2025-11-22 03:41:10.3997262 +0000 UTC m=+0.195850917 container start 25befe1d81006ef6bd0072fb37af275e79adf300252e8f5e84c332aa5f80c5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:41:10 compute-0 podman[222798]: 2025-11-22 03:41:10.404202154 +0000 UTC m=+0.200326871 container attach 25befe1d81006ef6bd0072fb37af275e79adf300252e8f5e84c332aa5f80c5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:41:10 compute-0 dazzling_burnell[222858]: 167 167
Nov 22 03:41:10 compute-0 systemd[1]: libpod-25befe1d81006ef6bd0072fb37af275e79adf300252e8f5e84c332aa5f80c5ec.scope: Deactivated successfully.
Nov 22 03:41:10 compute-0 podman[222798]: 2025-11-22 03:41:10.411191405 +0000 UTC m=+0.207316142 container died 25befe1d81006ef6bd0072fb37af275e79adf300252e8f5e84c332aa5f80c5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:41:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-79f484790b8823baf891ebfb2e287d0bcfc8a51c25401bceca54f7e71674eb89-merged.mount: Deactivated successfully.
Nov 22 03:41:10 compute-0 podman[222798]: 2025-11-22 03:41:10.460170389 +0000 UTC m=+0.256295066 container remove 25befe1d81006ef6bd0072fb37af275e79adf300252e8f5e84c332aa5f80c5ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_burnell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:41:10 compute-0 systemd[1]: libpod-conmon-25befe1d81006ef6bd0072fb37af275e79adf300252e8f5e84c332aa5f80c5ec.scope: Deactivated successfully.
Nov 22 03:41:10 compute-0 podman[222884]: 2025-11-22 03:41:10.677825492 +0000 UTC m=+0.074474122 container create b6533922534e117493031ed1aa6dad4d76d2081cd8310888ab87773735b236aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lovelace, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:41:10 compute-0 python3.9[222863]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
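The ansible.builtin.systemd task above (daemon_reload=True, enabled=True, state=restarted) corresponds, roughly, to three systemctl calls, which is why a "Reloading." line follows it. A minimal sketch of that equivalence, with the unit name taken from the log and error handling omitted:

    import subprocess

    def restart_enabled(unit: str) -> None:
        # Approximates ansible.builtin.systemd with daemon_reload=True,
        # enabled=True, state=restarted.
        for args in (["daemon-reload"], ["enable", unit], ["restart", unit]):
            subprocess.run(["systemctl", "--system", *args], check=True)

    restart_enabled("edpm_libvirt.target")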
Nov 22 03:41:10 compute-0 systemd[1]: Reloading.
Nov 22 03:41:10 compute-0 podman[222884]: 2025-11-22 03:41:10.64860382 +0000 UTC m=+0.045252540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:41:10 compute-0 ceph-mon[75011]: pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:10 compute-0 systemd-sysv-generator[222925]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:41:10 compute-0 systemd-rc-local-generator[222922]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:41:11 compute-0 systemd[1]: Started libpod-conmon-b6533922534e117493031ed1aa6dad4d76d2081cd8310888ab87773735b236aa.scope.
Nov 22 03:41:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c1ffb96d45dd2b1126d06c7cf0c530fdb2a632daf47d6cc69aef54d541e0fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:11 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Nov 22 03:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c1ffb96d45dd2b1126d06c7cf0c530fdb2a632daf47d6cc69aef54d541e0fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c1ffb96d45dd2b1126d06c7cf0c530fdb2a632daf47d6cc69aef54d541e0fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2c1ffb96d45dd2b1126d06c7cf0c530fdb2a632daf47d6cc69aef54d541e0fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:11 compute-0 podman[222884]: 2025-11-22 03:41:11.113783896 +0000 UTC m=+0.510432576 container init b6533922534e117493031ed1aa6dad4d76d2081cd8310888ab87773735b236aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:41:11 compute-0 sudo[222859]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:11 compute-0 podman[222884]: 2025-11-22 03:41:11.131905165 +0000 UTC m=+0.528553815 container start b6533922534e117493031ed1aa6dad4d76d2081cd8310888ab87773735b236aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:41:11 compute-0 podman[222884]: 2025-11-22 03:41:11.136340121 +0000 UTC m=+0.532988841 container attach b6533922534e117493031ed1aa6dad4d76d2081cd8310888ab87773735b236aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lovelace, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 03:41:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:11 compute-0 sudo[223096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auhuaitqggrzbinblbfzocyjdecjjmog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782871.39766-1438-165153538197514/AnsiballZ_systemd.py'
Nov 22 03:41:11 compute-0 sudo[223096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]: {
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:     "0": [
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:         {
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "devices": [
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "/dev/loop3"
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             ],
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_name": "ceph_lv0",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_size": "21470642176",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "name": "ceph_lv0",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "tags": {
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.cluster_name": "ceph",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.crush_device_class": "",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.encrypted": "0",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.osd_id": "0",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.type": "block",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.vdo": "0"
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             },
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "type": "block",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "vg_name": "ceph_vg0"
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:         }
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:     ],
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:     "1": [
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:         {
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "devices": [
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "/dev/loop4"
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             ],
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_name": "ceph_lv1",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_size": "21470642176",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "name": "ceph_lv1",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "tags": {
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.cluster_name": "ceph",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.crush_device_class": "",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.encrypted": "0",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.osd_id": "1",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.type": "block",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.vdo": "0"
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             },
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "type": "block",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "vg_name": "ceph_vg1"
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:         }
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:     ],
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:     "2": [
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:         {
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "devices": [
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "/dev/loop5"
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             ],
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_name": "ceph_lv2",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_size": "21470642176",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "name": "ceph_lv2",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "tags": {
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.cluster_name": "ceph",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.crush_device_class": "",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.encrypted": "0",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.osd_id": "2",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.type": "block",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:                 "ceph.vdo": "0"
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             },
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "type": "block",
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:             "vg_name": "ceph_vg2"
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:         }
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]:     ]
Nov 22 03:41:11 compute-0 xenodochial_lovelace[222937]: }
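The JSON above is the output of the `ceph-volume lvm list --format json` call issued at 03:41:09: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags parsed into a dict. A small sketch of extracting the OSD-to-device mapping, assuming the JSON has been captured to a file (the filename is illustrative; the log shows it only on the container's stdout):

    import json

    with open("lvm_list.json") as f:  # illustrative capture of the output above
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"(fsid={tags['ceph.osd_fsid']}, "
                  f"devices={','.join(lv['devices'])})")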
Nov 22 03:41:11 compute-0 systemd[1]: libpod-b6533922534e117493031ed1aa6dad4d76d2081cd8310888ab87773735b236aa.scope: Deactivated successfully.
Nov 22 03:41:11 compute-0 podman[222884]: 2025-11-22 03:41:11.876845871 +0000 UTC m=+1.273494501 container died b6533922534e117493031ed1aa6dad4d76d2081cd8310888ab87773735b236aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:41:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2c1ffb96d45dd2b1126d06c7cf0c530fdb2a632daf47d6cc69aef54d541e0fd-merged.mount: Deactivated successfully.
Nov 22 03:41:11 compute-0 podman[222884]: 2025-11-22 03:41:11.957407583 +0000 UTC m=+1.354056223 container remove b6533922534e117493031ed1aa6dad4d76d2081cd8310888ab87773735b236aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lovelace, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:41:11 compute-0 systemd[1]: libpod-conmon-b6533922534e117493031ed1aa6dad4d76d2081cd8310888ab87773735b236aa.scope: Deactivated successfully.
Nov 22 03:41:11 compute-0 sudo[222629]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:12 compute-0 sudo[223115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:41:12 compute-0 sudo[223115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:12 compute-0 sudo[223115]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:12 compute-0 python3.9[223100]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 22 03:41:12 compute-0 sudo[223140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:41:12 compute-0 sudo[223140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:12 compute-0 sudo[223140]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:12 compute-0 systemd[1]: Reloading.
Nov 22 03:41:12 compute-0 systemd-rc-local-generator[223218]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:41:12 compute-0 systemd-sysv-generator[223222]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:41:12 compute-0 sudo[223166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:41:12 compute-0 sudo[223166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:12 compute-0 sudo[223166]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:12 compute-0 systemd[1]: Reloading.
Nov 22 03:41:12 compute-0 systemd-sysv-generator[223276]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:41:12 compute-0 systemd-rc-local-generator[223273]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:41:12 compute-0 ceph-mon[75011]: pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:12 compute-0 sudo[223227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:41:12 compute-0 sudo[223227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:12 compute-0 sudo[223096]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:13 compute-0 podman[223351]: 2025-11-22 03:41:13.284336377 +0000 UTC m=+0.068925747 container create 8cbb241f26d4df4d81a1d22ffec87259003e153426ec167787256ed392d60b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:41:13 compute-0 sshd-session[162815]: Connection closed by 192.168.122.30 port 38122
Nov 22 03:41:13 compute-0 sshd-session[162812]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:41:13 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Nov 22 03:41:13 compute-0 systemd[1]: session-49.scope: Consumed 3min 46.555s CPU time.
Nov 22 03:41:13 compute-0 systemd-logind[799]: Session 49 logged out. Waiting for processes to exit.
Nov 22 03:41:13 compute-0 systemd-logind[799]: Removed session 49.
Nov 22 03:41:13 compute-0 systemd[1]: Started libpod-conmon-8cbb241f26d4df4d81a1d22ffec87259003e153426ec167787256ed392d60b50.scope.
Nov 22 03:41:13 compute-0 podman[223351]: 2025-11-22 03:41:13.257598526 +0000 UTC m=+0.042187976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:41:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:41:13 compute-0 podman[223351]: 2025-11-22 03:41:13.390528923 +0000 UTC m=+0.175118303 container init 8cbb241f26d4df4d81a1d22ffec87259003e153426ec167787256ed392d60b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:41:13 compute-0 podman[223351]: 2025-11-22 03:41:13.398371231 +0000 UTC m=+0.182960591 container start 8cbb241f26d4df4d81a1d22ffec87259003e153426ec167787256ed392d60b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:41:13 compute-0 podman[223351]: 2025-11-22 03:41:13.401271504 +0000 UTC m=+0.185860854 container attach 8cbb241f26d4df4d81a1d22ffec87259003e153426ec167787256ed392d60b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:41:13 compute-0 hungry_margulis[223367]: 167 167
Nov 22 03:41:13 compute-0 systemd[1]: libpod-8cbb241f26d4df4d81a1d22ffec87259003e153426ec167787256ed392d60b50.scope: Deactivated successfully.
Nov 22 03:41:13 compute-0 conmon[223367]: conmon 8cbb241f26d4df4d81a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8cbb241f26d4df4d81a1d22ffec87259003e153426ec167787256ed392d60b50.scope/container/memory.events
Nov 22 03:41:13 compute-0 podman[223351]: 2025-11-22 03:41:13.403838261 +0000 UTC m=+0.188427631 container died 8cbb241f26d4df4d81a1d22ffec87259003e153426ec167787256ed392d60b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:41:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dce108e90b26a62f4b032b2fb6798f9a8996a7f994a50f7021c627fdee3cdee-merged.mount: Deactivated successfully.
Nov 22 03:41:13 compute-0 podman[223351]: 2025-11-22 03:41:13.435575741 +0000 UTC m=+0.220165091 container remove 8cbb241f26d4df4d81a1d22ffec87259003e153426ec167787256ed392d60b50 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_margulis, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:41:13 compute-0 systemd[1]: libpod-conmon-8cbb241f26d4df4d81a1d22ffec87259003e153426ec167787256ed392d60b50.scope: Deactivated successfully.
Nov 22 03:41:13 compute-0 podman[223390]: 2025-11-22 03:41:13.641012799 +0000 UTC m=+0.061284905 container create 429a376242164193b7b9c4615fe53aeb2e8f803e88317e5ae321b90f5c5c4e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:41:13 compute-0 systemd[1]: Started libpod-conmon-429a376242164193b7b9c4615fe53aeb2e8f803e88317e5ae321b90f5c5c4e02.scope.
Nov 22 03:41:13 compute-0 podman[223390]: 2025-11-22 03:41:13.618777528 +0000 UTC m=+0.039049634 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:41:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0203dfabf0d3fbe56d95ff665a87e7eee1e48405ff24156a5066c11d0f4fcfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0203dfabf0d3fbe56d95ff665a87e7eee1e48405ff24156a5066c11d0f4fcfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0203dfabf0d3fbe56d95ff665a87e7eee1e48405ff24156a5066c11d0f4fcfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0203dfabf0d3fbe56d95ff665a87e7eee1e48405ff24156a5066c11d0f4fcfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:41:13 compute-0 podman[223390]: 2025-11-22 03:41:13.748015446 +0000 UTC m=+0.168287582 container init 429a376242164193b7b9c4615fe53aeb2e8f803e88317e5ae321b90f5c5c4e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Nov 22 03:41:13 compute-0 podman[223390]: 2025-11-22 03:41:13.763611636 +0000 UTC m=+0.183883742 container start 429a376242164193b7b9c4615fe53aeb2e8f803e88317e5ae321b90f5c5c4e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:41:13 compute-0 podman[223390]: 2025-11-22 03:41:13.768718815 +0000 UTC m=+0.188990961 container attach 429a376242164193b7b9c4615fe53aeb2e8f803e88317e5ae321b90f5c5c4e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:41:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]: {
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "osd_id": 1,
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "type": "bluestore"
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:     },
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "osd_id": 0,
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "type": "bluestore"
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:     },
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "osd_id": 2,
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:         "type": "bluestore"
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]:     }
Nov 22 03:41:14 compute-0 pedantic_wilbur[223407]: }
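This second report is the `ceph-volume raw list --format json` call from 03:41:12, keyed by OSD uuid rather than OSD id. Cross-checking it against the lvm listing above is straightforward; a sketch under the same file-capture assumption as before:

    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)
    with open("raw_list.json") as f:
        raw = json.load(f)

    # Every LVM-backed OSD should appear in the raw listing under its osd_fsid.
    for osd_id, lvs in lvm.items():
        fsid = lvs[0]["tags"]["ceph.osd_fsid"]
        entry = raw.get(fsid)
        assert entry is not None and entry["osd_id"] == int(osd_id), osd_id
        print(f"osd.{osd_id} -> {entry['device']} ({entry['type']})")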
Nov 22 03:41:14 compute-0 systemd[1]: libpod-429a376242164193b7b9c4615fe53aeb2e8f803e88317e5ae321b90f5c5c4e02.scope: Deactivated successfully.
Nov 22 03:41:14 compute-0 systemd[1]: libpod-429a376242164193b7b9c4615fe53aeb2e8f803e88317e5ae321b90f5c5c4e02.scope: Consumed 1.056s CPU time.
Nov 22 03:41:14 compute-0 podman[223390]: 2025-11-22 03:41:14.811065046 +0000 UTC m=+1.231337182 container died 429a376242164193b7b9c4615fe53aeb2e8f803e88317e5ae321b90f5c5c4e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:41:14 compute-0 ceph-mon[75011]: pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0203dfabf0d3fbe56d95ff665a87e7eee1e48405ff24156a5066c11d0f4fcfd-merged.mount: Deactivated successfully.
Nov 22 03:41:14 compute-0 podman[223390]: 2025-11-22 03:41:14.902937754 +0000 UTC m=+1.323209830 container remove 429a376242164193b7b9c4615fe53aeb2e8f803e88317e5ae321b90f5c5c4e02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wilbur, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:41:14 compute-0 systemd[1]: libpod-conmon-429a376242164193b7b9c4615fe53aeb2e8f803e88317e5ae321b90f5c5c4e02.scope: Deactivated successfully.
Nov 22 03:41:14 compute-0 sudo[223227]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:41:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:41:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:41:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:41:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev a4f43b9c-a555-40d0-89f5-f3641e35ece9 does not exist
Nov 22 03:41:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 049d2162-4443-432c-aa5d-c1a08d097364 does not exist
Nov 22 03:41:15 compute-0 sudo[223454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:41:15 compute-0 sudo[223454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:15 compute-0 sudo[223454]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:15 compute-0 sudo[223479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:41:15 compute-0 sudo[223479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:41:15 compute-0 sudo[223479]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:15 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:41:15 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:41:16 compute-0 ceph-mon[75011]: pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:18 compute-0 ceph-mon[75011]: pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:19 compute-0 sshd-session[223504]: Accepted publickey for zuul from 192.168.122.30 port 33180 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:41:19 compute-0 systemd-logind[799]: New session 50 of user zuul.
Nov 22 03:41:19 compute-0 systemd[1]: Started Session 50 of User zuul.
Nov 22 03:41:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:41:19 compute-0 sshd-session[223504]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:41:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:20 compute-0 python3.9[223657]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:41:20 compute-0 ceph-mon[75011]: pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:21 compute-0 podman[223761]: 2025-11-22 03:41:21.467888436 +0000 UTC m=+0.139652896 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
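The health_status=healthy events here and below are podman's periodic healthchecks for the EDPM containers; the probe is the 'test': '/openstack/healthcheck' script named in config_data. The same probe can be run on demand (container name as in the log; exit status 0 means healthy):

$ sudo podman healthcheck run ovn_controller && echo healthy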
Nov 22 03:41:21 compute-0 python3.9[223833]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:41:21 compute-0 network[223854]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Nov 22 03:41:21 compute-0 network[223855]: 'network-scripts' will be removed from the distribution in the near future.
Nov 22 03:41:21 compute-0 network[223856]: It is advised to switch to 'NetworkManager' for network management instead.
Nov 22 03:41:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:41:22.993 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:41:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:41:22.994 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:41:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:41:22.995 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:41:23 compute-0 ceph-mon[75011]: pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:23 compute-0 podman[223886]: 2025-11-22 03:41:23.247172001 +0000 UTC m=+0.070648126 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent)
Nov 22 03:41:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:41:25 compute-0 ceph-mon[75011]: pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:25 compute-0 sudo[224144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcxluhfvofxpmsanthjyhuirbpmubsuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782885.306981-47-83885286276164/AnsiballZ_setup.py'
Nov 22 03:41:25 compute-0 sudo[224144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:25 compute-0 python3.9[224146]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 22 03:41:26 compute-0 sudo[224144]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:26 compute-0 sudo[224228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fznoommfbrumftwxohmevfqwrsjqqrrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782885.306981-47-83885286276164/AnsiballZ_dnf.py'
Nov 22 03:41:26 compute-0 sudo[224228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:26 compute-0 python3.9[224230]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
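This is the zuul-driven Ansible run installing the iSCSI initiator tooling; the journal records the full dnf module argument dump. The equivalent manual step on the host would simply be:

$ sudo dnf install -y iscsi-initiator-utils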
Nov 22 03:41:27 compute-0 ceph-mon[75011]: pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:29 compute-0 ceph-mon[75011]: pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:41:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:31 compute-0 ceph-mon[75011]: pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:32 compute-0 sudo[224228]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:32 compute-0 sudo[224381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kidaxuxetvdqtjztxzerxepqqbftbuxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782892.55434-59-165564948623848/AnsiballZ_stat.py'
Nov 22 03:41:32 compute-0 sudo[224381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:33 compute-0 ceph-mon[75011]: pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:33 compute-0 python3.9[224383]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:41:33 compute-0 sudo[224381]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:33 compute-0 sudo[224533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nysluobrfpzxtwhxsnosboqbqwrxxjlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782893.4455137-69-212410460209583/AnsiballZ_command.py'
Nov 22 03:41:33 compute-0 sudo[224533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:34 compute-0 python3.9[224535]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
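Note the -n flag: this restorecon pass only reports SELinux labels that would change under /etc/iscsi and /var/lib/iscsi, without modifying anything (-v verbose, -r recursive). An actual relabel would be the same command without -n:

$ sudo restorecon -vr /etc/iscsi /var/lib/iscsi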
Nov 22 03:41:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:41:34 compute-0 sudo[224533]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:34 compute-0 sudo[224686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdbrruyoenqqupxfntnrdirdveodjwsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782894.5285156-79-76875478192588/AnsiballZ_stat.py'
Nov 22 03:41:34 compute-0 sudo[224686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:34 compute-0 python3.9[224688]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:41:34 compute-0 sudo[224686]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:35 compute-0 ceph-mon[75011]: pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:35 compute-0 sudo[224838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szcamkhgtopqauudnqpttbneryzbllux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782895.2134142-87-5601906612460/AnsiballZ_command.py'
Nov 22 03:41:35 compute-0 sudo[224838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:35 compute-0 python3.9[224840]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:41:35 compute-0 sudo[224838]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:36 compute-0 sudo[224991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usnpbxsobcymglqlpktbimkdugifanzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782895.867318-95-57097614767612/AnsiballZ_stat.py'
Nov 22 03:41:36 compute-0 sudo[224991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:41:36
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'images', '.mgr', 'backups', 'volumes']
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
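"prepared 0/10 changes" means the upmap balancer evaluated all eleven pools listed above and found nothing worth moving, which is expected with all 305 PGs active+clean on a three-OSD cluster. The same state can be checked interactively:

$ ceph balancer status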
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:41:36 compute-0 python3.9[224993]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:41:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:41:36 compute-0 sudo[224991]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:36 compute-0 sudo[225114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqchidfymmcbsxilrzkwazyomeqhfxvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782895.867318-95-57097614767612/AnsiballZ_copy.py'
Nov 22 03:41:36 compute-0 sudo[225114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:37 compute-0 ceph-mon[75011]: pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:37 compute-0 python3.9[225116]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782895.867318-95-57097614767612/.source.iscsi _original_basename=.dob9qtk0 follow=False checksum=4634988cf7f71c127766f076097fe7b840013348 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
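This copy installs the initiator name generated by the /usr/sbin/iscsi-iname run logged just above. The target file holds a single InitiatorName= line; a representative sketch (the IQN value itself is not in the log, so the suffix below is a placeholder):

# /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:<generated-suffix>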
Nov 22 03:41:37 compute-0 sudo[225114]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:37 compute-0 sudo[225266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oumfdmhnuesgvkxguapvyjjbuljqatzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782897.3523388-110-171999937837205/AnsiballZ_file.py'
Nov 22 03:41:37 compute-0 sudo[225266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:38 compute-0 python3.9[225268]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:38 compute-0 sudo[225266]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:38 compute-0 sudo[225418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxxbpusnbgwlsvfmflbypftvgzqgupms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782898.2274954-118-216797774461465/AnsiballZ_lineinfile.py'
Nov 22 03:41:38 compute-0 sudo[225418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:38 compute-0 python3.9[225420]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:38 compute-0 sudo[225418]: pam_unix(sudo:session): session closed for user root
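The lineinfile task above pins the CHAP digest preference in /etc/iscsi/iscsid.conf; after it runs the file contains exactly the line passed in the module's line argument:

node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5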
Nov 22 03:41:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:41:39 compute-0 ceph-mon[75011]: pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:39 compute-0 sudo[225570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfvyljdbrlkvxcpfmnmxbgelqhbbbpfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782899.1951187-127-248841534312178/AnsiballZ_systemd_service.py'
Nov 22 03:41:39 compute-0 sudo[225570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:40 compute-0 python3.9[225572]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:41:40 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 22 03:41:40 compute-0 sudo[225570]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:40 compute-0 sudo[225726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptmezpgyuwyzmjeboppwtgxhbdemxsaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782900.4613566-135-186117303591519/AnsiballZ_systemd_service.py'
Nov 22 03:41:40 compute-0 sudo[225726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:41 compute-0 ceph-mon[75011]: pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:41 compute-0 python3.9[225728]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:41:41 compute-0 systemd[1]: Reloading.
Nov 22 03:41:41 compute-0 systemd-rc-local-generator[225755]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:41:41 compute-0 systemd-sysv-generator[225761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 22 03:41:41 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 22 03:41:41 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 22 03:41:41 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Nov 22 03:41:41 compute-0 systemd[1]: Started Open-iSCSI.
Nov 22 03:41:41 compute-0 systemd[1]: Starting Log out of all iSCSI sessions on shutdown...
Nov 22 03:41:41 compute-0 systemd[1]: Finished Log out of all iSCSI sessions on shutdown.
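The "One time configuration for iscsi.service was skipped" message is expected rather than an error: the unit's ConditionPathExists=!/etc/iscsi/initiatorname.iscsi only passes when no initiator name exists yet, and Ansible wrote that file at 03:41:37. Which condition gates a unit can be checked with:

$ systemctl cat iscsi.service | grep -i condition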
Nov 22 03:41:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:41 compute-0 sudo[225726]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:42 compute-0 sudo[225926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akjqoclkidunhuqmqbewkqsgnelbvgjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782902.2126567-146-6388577403914/AnsiballZ_service_facts.py'
Nov 22 03:41:42 compute-0 sudo[225926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:42 compute-0 python3.9[225928]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:41:42 compute-0 network[225945]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Nov 22 03:41:42 compute-0 network[225946]: 'network-scripts' will be removed from the distribution in the near future.
Nov 22 03:41:42 compute-0 network[225947]: It is advised to switch to 'NetworkManager' for network management instead.
Nov 22 03:41:43 compute-0 ceph-mon[75011]: pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:41:45 compute-0 ceph-mon[75011]: pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:41:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
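These pg_autoscaler targets are reproducible from the logged numbers: pg_target = usage_ratio * bias * (mon_target_pg_per_osd * num_osds). With the default mon_target_pg_per_osd of 100 and the three OSDs inventoried earlier (an assumption, but consistent with every line above), the multiplier is 300. Checking two pools against the log:

.mgr:               7.185749983720779e-06 * 1.0 * 300 = 0.0021557249951162337 -> quantized to 1
cephfs.cephfs.meta: 5.087256625643029e-07 * 4.0 * 300 = 0.0006104707950771635 -> quantized to 16

The 64411926528 in each effective_target_ratio line is the raw capacity in bytes, i.e. the same 60 GiB the pgmap lines report.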
Nov 22 03:41:46 compute-0 sudo[225926]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:47 compute-0 sudo[226217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixslivhlcywwrldmswlosswaqkkbqjww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782906.9071329-156-191508357857836/AnsiballZ_file.py'
Nov 22 03:41:47 compute-0 sudo[226217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:47 compute-0 ceph-mon[75011]: pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:47 compute-0 python3.9[226219]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 03:41:47 compute-0 sudo[226217]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:48 compute-0 sudo[226369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skoffdmawrjpssqnmggvwloiagtnpvrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782907.6691701-164-65133529372/AnsiballZ_modprobe.py'
Nov 22 03:41:48 compute-0 sudo[226369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:48 compute-0 python3.9[226371]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 22 03:41:48 compute-0 sudo[226369]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:48 compute-0 sudo[226525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnyucwphartaixsqdlppseovwewttusp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782908.6458366-172-272952082127351/AnsiballZ_stat.py'
Nov 22 03:41:48 compute-0 sudo[226525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:41:49 compute-0 python3.9[226527]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:49 compute-0 sudo[226525]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:49 compute-0 ceph-mon[75011]: pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:49 compute-0 sudo[226648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkwzqwurnnmceosahnsidgwucuzrwggv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782908.6458366-172-272952082127351/AnsiballZ_copy.py'
Nov 22 03:41:49 compute-0 sudo[226648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:49 compute-0 python3.9[226650]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782908.6458366-172-272952082127351/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
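The copy drops a one-line modules-load fragment so dm-multipath is loaded at every boot; given the template name (module-load.conf.j2) and the module parameter, the rendered file is presumably just:

# /etc/modules-load.d/dm-multipath.conf
dm-multipath

systemd-modules-load.service is restarted a few seconds later (03:41:51) to apply it immediately.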
Nov 22 03:41:49 compute-0 sudo[226648]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:50 compute-0 sudo[226800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbqhpnodtqmlvftzfzrruyagdgiknvie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782910.0041292-188-163939689241027/AnsiballZ_lineinfile.py'
Nov 22 03:41:50 compute-0 sudo[226800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:50 compute-0 python3.9[226802]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:50 compute-0 sudo[226800]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:51 compute-0 ceph-mon[75011]: pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:51 compute-0 sudo[226952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slpkfmkaoyzekrqizlendptuebioifyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782910.732425-196-175372237484113/AnsiballZ_systemd.py'
Nov 22 03:41:51 compute-0 sudo[226952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:51 compute-0 python3.9[226954]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:41:51 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 22 03:41:51 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 22 03:41:51 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 22 03:41:51 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 22 03:41:51 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 22 03:41:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:51 compute-0 sudo[226952]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:51 compute-0 podman[226956]: 2025-11-22 03:41:51.784500026 +0000 UTC m=+0.106882274 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, managed_by=edpm_ansible)
Nov 22 03:41:52 compute-0 sudo[227134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahpxzjyfigqcavvhcqtwfhfkhcdcssdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782912.008909-204-247587084141308/AnsiballZ_file.py'
Nov 22 03:41:52 compute-0 sudo[227134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:52 compute-0 python3.9[227136]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:41:52 compute-0 sudo[227134]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:53 compute-0 sudo[227286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyjlrdmtglvhnqteybbucnxnpilfizol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782912.8131344-213-125062345444018/AnsiballZ_stat.py'
Nov 22 03:41:53 compute-0 sudo[227286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:53 compute-0 ceph-mon[75011]: pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:53 compute-0 podman[227289]: 2025-11-22 03:41:53.397006002 +0000 UTC m=+0.069202744 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:41:53 compute-0 python3.9[227288]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:41:53 compute-0 sudo[227286]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:53 compute-0 sudo[227457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hepvfvcbterqlyatnvcicvygipojzdwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782913.6366127-222-162644862555302/AnsiballZ_stat.py'
Nov 22 03:41:53 compute-0 sudo[227457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:54 compute-0 python3.9[227459]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:41:54 compute-0 sudo[227457]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:41:54 compute-0 sudo[227609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgxbbwojirsioqtzrkvhotapaccvedcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782914.2924495-230-126624362003001/AnsiballZ_stat.py'
Nov 22 03:41:54 compute-0 sudo[227609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:54 compute-0 python3.9[227611]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:41:54 compute-0 sudo[227609]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:55 compute-0 ceph-mon[75011]: pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:55 compute-0 sudo[227732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmaomocvabegpnuolukorjhajtlwysyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782914.2924495-230-126624362003001/AnsiballZ_copy.py'
Nov 22 03:41:55 compute-0 sudo[227732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:55 compute-0 python3.9[227734]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782914.2924495-230-126624362003001/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:55 compute-0 sudo[227732]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:56 compute-0 sudo[227884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pitknaxvtipsthwutylbanphtawpihdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782915.7476315-245-191383448934097/AnsiballZ_command.py'
Nov 22 03:41:56 compute-0 sudo[227884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:56 compute-0 python3.9[227886]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:41:56 compute-0 sudo[227884]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:56 compute-0 sudo[228037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgkllxntpcqggfbairyuikollopfaqlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782916.5879357-253-140982422024759/AnsiballZ_lineinfile.py'
Nov 22 03:41:56 compute-0 sudo[228037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:57 compute-0 python3.9[228039]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:57 compute-0 sudo[228037]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:57 compute-0 ceph-mon[75011]: pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:57 compute-0 sudo[228189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqjysfoztoxmgvxulfezpvynheslngqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782917.33235-261-206226912591952/AnsiballZ_replace.py'
Nov 22 03:41:57 compute-0 sudo[228189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:58 compute-0 python3.9[228191]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:58 compute-0 sudo[228189]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:58 compute-0 sudo[228341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmhazurelwpiqcruiwdeoidwsfvhsyqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782918.2091138-269-235675121599195/AnsiballZ_replace.py'
Nov 22 03:41:58 compute-0 sudo[228341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:58 compute-0 python3.9[228343]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:58 compute-0 sudo[228341]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:41:59 compute-0 sudo[228493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qutalchifquadpxhojbeclkudfeorfyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782918.8918226-278-217371506135759/AnsiballZ_lineinfile.py'
Nov 22 03:41:59 compute-0 sudo[228493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:41:59 compute-0 ceph-mon[75011]: pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:59 compute-0 python3.9[228495]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:41:59 compute-0 sudo[228493]: pam_unix(sudo:session): session closed for user root
Nov 22 03:41:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:41:59 compute-0 sudo[228645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofuyknnfhhasncvprytxtofwwlzgbpes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782919.603352-278-78578816898855/AnsiballZ_lineinfile.py'
Nov 22 03:41:59 compute-0 sudo[228645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:00 compute-0 python3.9[228647]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:00 compute-0 sudo[228645]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:00 compute-0 sudo[228797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njbsksqthfnfvomwmcvqqrgdwitzteto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782920.1899028-278-18028869526113/AnsiballZ_lineinfile.py'
Nov 22 03:42:00 compute-0 sudo[228797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:00 compute-0 python3.9[228799]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:00 compute-0 sudo[228797]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:01 compute-0 sudo[228949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcbsclzilhhkejyztlottyjirrvcoyff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782920.8670201-278-225282745926120/AnsiballZ_lineinfile.py'
Nov 22 03:42:01 compute-0 sudo[228949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:01 compute-0 python3.9[228951]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:01 compute-0 ceph-mon[75011]: pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:01 compute-0 sudo[228949]: pam_unix(sudo:session): session closed for user root
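[annotation] The four lineinfile tasks above each insert one setting directly after the line matching ^defaults in /etc/multipath.conf (firstmatch=True), or replace an existing line in place if their regexp already matches one. Assuming a file where none of the regexps matched, the resulting stanza would look roughly like this, with the options in reverse task order because each new line lands immediately after the defaults line:

    defaults {
            user_friendly_names no
            skip_kpartx yes
            recheck_wwid yes
            find_multipaths yes
    }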
Nov 22 03:42:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:01 compute-0 sudo[229101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erxxylnkxzrpizjcxnyvxorbmiscbcoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782921.5088468-307-19352035935964/AnsiballZ_stat.py'
Nov 22 03:42:01 compute-0 sudo[229101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:01 compute-0 python3.9[229103]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:42:02 compute-0 sudo[229101]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:02 compute-0 sudo[229255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnuzvhcemtxcoguoebnnurgmslorahnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782922.2626595-315-278881877021223/AnsiballZ_file.py'
Nov 22 03:42:02 compute-0 sudo[229255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:02 compute-0 python3.9[229257]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:02 compute-0 sudo[229255]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:03 compute-0 ceph-mon[75011]: pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:03 compute-0 sudo[229407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeotcxicgjdvpizcwgysfmsdyosvtffo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782923.087111-324-139162935137346/AnsiballZ_file.py'
Nov 22 03:42:03 compute-0 sudo[229407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:03 compute-0 python3.9[229409]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:03 compute-0 sudo[229407]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:42:04 compute-0 sudo[229559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfqhnxuszntuxtqmjwqnxmhmghgzceji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782923.899901-332-27788156915613/AnsiballZ_stat.py'
Nov 22 03:42:04 compute-0 sudo[229559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:04 compute-0 python3.9[229561]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:04 compute-0 sudo[229559]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:04 compute-0 sudo[229637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bspjezzyhbvoiobdmlhxqoprqjlesvmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782923.899901-332-27788156915613/AnsiballZ_file.py'
Nov 22 03:42:04 compute-0 sudo[229637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:04 compute-0 python3.9[229639]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:05 compute-0 sudo[229637]: pam_unix(sudo:session): session closed for user root
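[annotation] Every file deployment in this log shows up as a stat/copy pair of sudo sessions: ansible's copy action plugin first runs AnsiballZ_stat.py to checksum the destination, uploads only on mismatch, then runs AnsiballZ_file.py (ansible-ansible.legacy.file) to enforce owner, mode, and SELinux type. A simplified shell sketch of that logic (upload_file is a hypothetical stand-in for the plugin's transfer step):

    # sketch of the copy action plugin's effective behavior, not the actual implementation
    remote=$(sha1sum /var/local/libexec/edpm-container-shutdown | cut -d' ' -f1)
    [ "$remote" != "$local_checksum" ] && upload_file            # hypothetical helper
    chown root:root /var/local/libexec/edpm-container-shutdown
    chmod 0700 /var/local/libexec/edpm-container-shutdown
    chcon -t container_file_t /var/local/libexec/edpm-container-shutdown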
Nov 22 03:42:05 compute-0 ceph-mon[75011]: pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:05 compute-0 sudo[229789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phwfaucswwsnufqkymxihutlhoqvpsdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782925.2071874-332-136487439778900/AnsiballZ_stat.py'
Nov 22 03:42:05 compute-0 sudo[229789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:05 compute-0 python3.9[229791]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:05 compute-0 sudo[229789]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:06 compute-0 sudo[229867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iplojzqhydumqxvajyrdqkhlzbtpvmuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782925.2071874-332-136487439778900/AnsiballZ_file.py'
Nov 22 03:42:06 compute-0 sudo[229867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:42:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:42:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:42:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:42:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:42:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:42:06 compute-0 python3.9[229869]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:06 compute-0 sudo[229867]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:06 compute-0 sudo[230019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mokmmoowgvitkimwhbfyrlktbsikxeau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782926.4827216-355-133685909171723/AnsiballZ_file.py'
Nov 22 03:42:06 compute-0 sudo[230019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:06 compute-0 python3.9[230021]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:07 compute-0 sudo[230019]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:07 compute-0 ceph-mon[75011]: pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:07 compute-0 sudo[230171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akcyiykmsexrvfytgycpybgjblqneoww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782927.241797-363-117939129166931/AnsiballZ_stat.py'
Nov 22 03:42:07 compute-0 sudo[230171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:07 compute-0 python3.9[230174]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:07 compute-0 sudo[230171]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:08 compute-0 sudo[230251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsvzcvbqiuwtkhpxxpucptyteskhqdci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782927.241797-363-117939129166931/AnsiballZ_file.py'
Nov 22 03:42:08 compute-0 sudo[230251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:08 compute-0 sshd-session[230172]: Received disconnect from 80.94.93.119 port 30440:11:  [preauth]
Nov 22 03:42:08 compute-0 sshd-session[230172]: Disconnected from authenticating user root 80.94.93.119 port 30440 [preauth]
Nov 22 03:42:08 compute-0 python3.9[230253]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:08 compute-0 sudo[230251]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:08 compute-0 sudo[230403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paxeiceociesgchwugkimztgqhxsjiqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782928.6080651-375-104738777777530/AnsiballZ_stat.py'
Nov 22 03:42:08 compute-0 sudo[230403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:09 compute-0 python3.9[230405]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.141639) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782929141727, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1638, "num_deletes": 251, "total_data_size": 2704256, "memory_usage": 2736424, "flush_reason": "Manual Compaction"}
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782929159636, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1524789, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11874, "largest_seqno": 13511, "table_properties": {"data_size": 1519381, "index_size": 2612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13595, "raw_average_key_size": 20, "raw_value_size": 1507467, "raw_average_value_size": 2223, "num_data_blocks": 121, "num_entries": 678, "num_filter_entries": 678, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763782744, "oldest_key_time": 1763782744, "file_creation_time": 1763782929, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 18060 microseconds, and 7703 cpu microseconds.
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:42:09 compute-0 sudo[230403]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.159709) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1524789 bytes OK
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.159733) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.163685) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.163717) EVENT_LOG_v1 {"time_micros": 1763782929163709, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.163741) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2697243, prev total WAL file size 2697243, number of live WAL files 2.
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.165051) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1489KB)], [29(8028KB)]
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782929165126, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9746257, "oldest_snapshot_seqno": -1}
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4021 keys, 7576825 bytes, temperature: kUnknown
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782929226454, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7576825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7547927, "index_size": 17716, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 95967, "raw_average_key_size": 23, "raw_value_size": 7473430, "raw_average_value_size": 1858, "num_data_blocks": 770, "num_entries": 4021, "num_filter_entries": 4021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763782929, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.226756) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7576825 bytes
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.229282) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.6 rd, 123.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.8 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(11.4) write-amplify(5.0) OK, records in: 4448, records dropped: 427 output_compression: NoCompression
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.229334) EVENT_LOG_v1 {"time_micros": 1763782929229315, "job": 12, "event": "compaction_finished", "compaction_time_micros": 61433, "compaction_time_cpu_micros": 18253, "output_level": 6, "num_output_files": 1, "total_output_size": 7576825, "num_input_records": 4448, "num_output_records": 4021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782929230229, "job": 12, "event": "table_file_deletion", "file_number": 31}
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763782929233477, "job": 12, "event": "table_file_deletion", "file_number": 29}
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.164950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.233531) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.233536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.233537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.233539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:42:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:42:09.233540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
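[annotation] The amplification figures in JOB 12's compaction summary can be reproduced from the byte counts in the event log (L0 input file #31 = 1524789 bytes, total input_data_size = 9746257, output table #32 = 7576825 bytes):

    write-amplify      = output / L0-input           = 7576825 / 1524789             ≈ 5.0
    read-write-amplify = (input + output) / L0-input = (9746257 + 7576825) / 1524789 ≈ 11.4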
Nov 22 03:42:09 compute-0 ceph-mon[75011]: pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:09 compute-0 sudo[230481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvsljhvwxawcgvkivovibcohrakhvymy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782928.6080651-375-104738777777530/AnsiballZ_file.py'
Nov 22 03:42:09 compute-0 sudo[230481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:09 compute-0 python3.9[230483]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:09 compute-0 sudo[230481]: pam_unix(sudo:session): session closed for user root
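[annotation] The file installed at /etc/systemd/system-preset/91-edpm-container-shutdown.preset is what makes the unit enabled by default under systemd's preset policy. Its body is not captured in this log; for this purpose it would plausibly be the one-line preset (assumed, not read from disk):

    enable edpm-container-shutdown.service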
Nov 22 03:42:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:10 compute-0 sudo[230633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aarezcsspwaadunpssjjloukgibtonyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782929.8293378-387-74159209425419/AnsiballZ_systemd.py'
Nov 22 03:42:10 compute-0 sudo[230633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:10 compute-0 python3.9[230635]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:42:10 compute-0 systemd[1]: Reloading.
Nov 22 03:42:10 compute-0 systemd-rc-local-generator[230661]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:42:10 compute-0 systemd-sysv-generator[230664]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:42:11 compute-0 sudo[230633]: pam_unix(sudo:session): session closed for user root
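[annotation] The ansible.builtin.systemd invocation above (daemon_reload=True, enabled=True, state=started) is equivalent to running, as root, roughly:

    systemctl daemon-reload
    systemctl enable edpm-container-shutdown.service
    systemctl start edpm-container-shutdown.service

The systemd "Reloading." line and the two generator messages above are the daemon-reload half of that sequence.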
Nov 22 03:42:11 compute-0 ceph-mon[75011]: pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:11 compute-0 sudo[230821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhgtutnlaxfsdzocsipfoqbdjotipblb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782931.2927818-395-139852163670616/AnsiballZ_stat.py'
Nov 22 03:42:11 compute-0 sudo[230821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:11 compute-0 python3.9[230823]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:11 compute-0 sudo[230821]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:12 compute-0 sudo[230899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzcucvtxtecnnvdpqxtgzsoilrsppqfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782931.2927818-395-139852163670616/AnsiballZ_file.py'
Nov 22 03:42:12 compute-0 sudo[230899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:12 compute-0 python3.9[230901]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:12 compute-0 sudo[230899]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:12 compute-0 sudo[231051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucuppqrxxbxumjqyskmpqfkttaexsydm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782932.5889254-407-188959169936775/AnsiballZ_stat.py'
Nov 22 03:42:12 compute-0 sudo[231051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:13 compute-0 python3.9[231053]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:13 compute-0 sudo[231051]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:13 compute-0 ceph-mon[75011]: pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:13 compute-0 sudo[231129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfjtxkxnbdwzpnaagzcmnhukjcrildtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782932.5889254-407-188959169936775/AnsiballZ_file.py'
Nov 22 03:42:13 compute-0 sudo[231129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:13 compute-0 python3.9[231131]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:13 compute-0 sudo[231129]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:42:14 compute-0 sudo[231281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dogmpjanltaxaepwdcaaypsgojqpvbdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782933.9167585-419-48806717316567/AnsiballZ_systemd.py'
Nov 22 03:42:14 compute-0 sudo[231281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:14 compute-0 python3.9[231283]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:42:14 compute-0 systemd[1]: Reloading.
Nov 22 03:42:14 compute-0 systemd-sysv-generator[231308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:42:14 compute-0 systemd-rc-local-generator[231304]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:42:14 compute-0 systemd[1]: Starting Create netns directory...
Nov 22 03:42:14 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 22 03:42:14 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 22 03:42:14 compute-0 systemd[1]: Finished Create netns directory.
Nov 22 03:42:14 compute-0 sudo[231281]: pam_unix(sudo:session): session closed for user root
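[annotation] netns-placeholder starts, deactivates, and finishes within the same second, consistent with a oneshot service that creates a placeholder network namespace so /run/netns exists as a shared mount for later container use; the run-netns-placeholder.mount deactivation decodes to the /run/netns/placeholder bind mount being cleaned up. A unit of that shape (a sketch under that assumption, not the logged file) would look like:

    [Unit]
    Description=Create netns directory
    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/ip netns add placeholder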
Nov 22 03:42:15 compute-0 sudo[231348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:42:15 compute-0 sudo[231348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:15 compute-0 sudo[231348]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:15 compute-0 sudo[231373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:42:15 compute-0 sudo[231373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:15 compute-0 sudo[231373]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:15 compute-0 sudo[231411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:42:15 compute-0 sudo[231411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:15 compute-0 sudo[231411]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:15 compute-0 ceph-mon[75011]: pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:15 compute-0 sudo[231467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:42:15 compute-0 sudo[231467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:15 compute-0 sudo[231586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouwgiltopbtsarryxchhqskeogvqbvof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782935.3449454-429-146420444244647/AnsiballZ_file.py'
Nov 22 03:42:15 compute-0 sudo[231586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:15 compute-0 python3.9[231588]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:15 compute-0 sudo[231586]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:15 compute-0 sudo[231467]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:42:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:42:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:42:15 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:42:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:42:15 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:42:15 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ff3a62e7-99cc-4a20-802f-7395c7cf1b20 does not exist
Nov 22 03:42:15 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 14170103-8706-4331-a24d-fb49f9ed6a7d does not exist
Nov 22 03:42:15 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 4064cda5-73b8-4e22-a4b7-040018a8a9d0 does not exist
Nov 22 03:42:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:42:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:42:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:42:15 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:42:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:42:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:42:16 compute-0 sudo[231630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:42:16 compute-0 sudo[231630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:16 compute-0 sudo[231630]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:16 compute-0 sudo[231673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:42:16 compute-0 sudo[231673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:16 compute-0 sudo[231673]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:16 compute-0 sudo[231726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:42:16 compute-0 sudo[231726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:16 compute-0 sudo[231726]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:16 compute-0 sudo[231774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:42:16 compute-0 sudo[231774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:16 compute-0 sudo[231855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inecgvnvbianjctmhubfutgsagvrqnog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782936.0839813-437-85784299260917/AnsiballZ_stat.py'
Nov 22 03:42:16 compute-0 sudo[231855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:16 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:42:16 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:42:16 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:42:16 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:42:16 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:42:16 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:42:16 compute-0 python3.9[231859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:16 compute-0 podman[231898]: 2025-11-22 03:42:16.562061509 +0000 UTC m=+0.054466036 container create a3c9ceebb000b03c742d33010bf9f030f4e60a6856553fd33a891d7ca661c12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:42:16 compute-0 sudo[231855]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:16 compute-0 systemd[1]: Started libpod-conmon-a3c9ceebb000b03c742d33010bf9f030f4e60a6856553fd33a891d7ca661c12b.scope.
Nov 22 03:42:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:42:16 compute-0 podman[231898]: 2025-11-22 03:42:16.540299015 +0000 UTC m=+0.032703552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:42:16 compute-0 podman[231898]: 2025-11-22 03:42:16.63594843 +0000 UTC m=+0.128352977 container init a3c9ceebb000b03c742d33010bf9f030f4e60a6856553fd33a891d7ca661c12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wiles, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef)
Nov 22 03:42:16 compute-0 podman[231898]: 2025-11-22 03:42:16.642197408 +0000 UTC m=+0.134601935 container start a3c9ceebb000b03c742d33010bf9f030f4e60a6856553fd33a891d7ca661c12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:42:16 compute-0 podman[231898]: 2025-11-22 03:42:16.646182836 +0000 UTC m=+0.138587393 container attach a3c9ceebb000b03c742d33010bf9f030f4e60a6856553fd33a891d7ca661c12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:42:16 compute-0 thirsty_wiles[231914]: 167 167
Nov 22 03:42:16 compute-0 systemd[1]: libpod-a3c9ceebb000b03c742d33010bf9f030f4e60a6856553fd33a891d7ca661c12b.scope: Deactivated successfully.
Nov 22 03:42:16 compute-0 podman[231898]: 2025-11-22 03:42:16.648925512 +0000 UTC m=+0.141330039 container died a3c9ceebb000b03c742d33010bf9f030f4e60a6856553fd33a891d7ca661c12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 03:42:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecdda98c4397b11fa8c3f6be1400ba66535958e0c9b19eba9999594c79d962d0-merged.mount: Deactivated successfully.
Nov 22 03:42:16 compute-0 podman[231898]: 2025-11-22 03:42:16.683263175 +0000 UTC m=+0.175667712 container remove a3c9ceebb000b03c742d33010bf9f030f4e60a6856553fd33a891d7ca661c12b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:42:16 compute-0 systemd[1]: libpod-conmon-a3c9ceebb000b03c742d33010bf9f030f4e60a6856553fd33a891d7ca661c12b.scope: Deactivated successfully.
Nov 22 03:42:16 compute-0 podman[232007]: 2025-11-22 03:42:16.838059524 +0000 UTC m=+0.040100223 container create 8a3ce9b0ea52d23147d8d9febbc514ffff19d62d48e2f4149a54e3535f155b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mayer, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:42:16 compute-0 systemd[1]: Started libpod-conmon-8a3ce9b0ea52d23147d8d9febbc514ffff19d62d48e2f4149a54e3535f155b08.scope.
Nov 22 03:42:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:42:16 compute-0 sudo[232077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrcrlalaayicmurukqokjplbjnszoxoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782936.0839813-437-85784299260917/AnsiballZ_copy.py'
Nov 22 03:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a0338fdc2b7b82acc6b95731cc1028074e113929384efdbef60250b407a6db9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a0338fdc2b7b82acc6b95731cc1028074e113929384efdbef60250b407a6db9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:16 compute-0 sudo[232077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a0338fdc2b7b82acc6b95731cc1028074e113929384efdbef60250b407a6db9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a0338fdc2b7b82acc6b95731cc1028074e113929384efdbef60250b407a6db9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a0338fdc2b7b82acc6b95731cc1028074e113929384efdbef60250b407a6db9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:16 compute-0 podman[232007]: 2025-11-22 03:42:16.822731746 +0000 UTC m=+0.024772465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:42:16 compute-0 podman[232007]: 2025-11-22 03:42:16.935124446 +0000 UTC m=+0.137165145 container init 8a3ce9b0ea52d23147d8d9febbc514ffff19d62d48e2f4149a54e3535f155b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mayer, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:42:16 compute-0 podman[232007]: 2025-11-22 03:42:16.941629465 +0000 UTC m=+0.143670164 container start 8a3ce9b0ea52d23147d8d9febbc514ffff19d62d48e2f4149a54e3535f155b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mayer, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:42:16 compute-0 podman[232007]: 2025-11-22 03:42:16.944211408 +0000 UTC m=+0.146252107 container attach 8a3ce9b0ea52d23147d8d9febbc514ffff19d62d48e2f4149a54e3535f155b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:42:17 compute-0 python3.9[232079]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763782936.0839813-437-85784299260917/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:17 compute-0 sudo[232077]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:17 compute-0 ceph-mon[75011]: pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:17 compute-0 sudo[232247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijgbzjoszgycmnlunujcfbxkrznduksy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782937.6490145-454-34875205923107/AnsiballZ_file.py'
Nov 22 03:42:17 compute-0 sudo[232247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:17 compute-0 eager_mayer[232064]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:42:17 compute-0 eager_mayer[232064]: --> relative data size: 1.0
Nov 22 03:42:17 compute-0 eager_mayer[232064]: --> All data devices are unavailable
Nov 22 03:42:18 compute-0 systemd[1]: libpod-8a3ce9b0ea52d23147d8d9febbc514ffff19d62d48e2f4149a54e3535f155b08.scope: Deactivated successfully.
Nov 22 03:42:18 compute-0 podman[232258]: 2025-11-22 03:42:18.069377413 +0000 UTC m=+0.036364745 container died 8a3ce9b0ea52d23147d8d9febbc514ffff19d62d48e2f4149a54e3535f155b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mayer, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Nov 22 03:42:18 compute-0 python3.9[232251]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:42:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a0338fdc2b7b82acc6b95731cc1028074e113929384efdbef60250b407a6db9-merged.mount: Deactivated successfully.
Nov 22 03:42:18 compute-0 sudo[232247]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:18 compute-0 podman[232258]: 2025-11-22 03:42:18.128312784 +0000 UTC m=+0.095300066 container remove 8a3ce9b0ea52d23147d8d9febbc514ffff19d62d48e2f4149a54e3535f155b08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_mayer, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:42:18 compute-0 systemd[1]: libpod-conmon-8a3ce9b0ea52d23147d8d9febbc514ffff19d62d48e2f4149a54e3535f155b08.scope: Deactivated successfully.
Nov 22 03:42:18 compute-0 sudo[231774]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:18 compute-0 sudo[232293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:42:18 compute-0 sudo[232293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:18 compute-0 sudo[232293]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:18 compute-0 sudo[232322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:42:18 compute-0 sudo[232322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:18 compute-0 sudo[232322]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:18 compute-0 sudo[232359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:42:18 compute-0 sudo[232359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:18 compute-0 sudo[232359]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:18 compute-0 sudo[232414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:42:18 compute-0 sudo[232414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:18 compute-0 sudo[232538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lloyhtbqijmdzjyhkbrxbsnirfygfqvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782938.352721-462-212054928493396/AnsiballZ_stat.py'
Nov 22 03:42:18 compute-0 sudo[232538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:18 compute-0 podman[232564]: 2025-11-22 03:42:18.785279509 +0000 UTC m=+0.104671806 container create b63d014255b314583c792e467eb88682a3061afd050f833a240bf52ed7c6b126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jang, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:42:18 compute-0 podman[232564]: 2025-11-22 03:42:18.702618028 +0000 UTC m=+0.022010304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:42:18 compute-0 python3.9[232548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:18 compute-0 sudo[232538]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:18 compute-0 systemd[1]: Started libpod-conmon-b63d014255b314583c792e467eb88682a3061afd050f833a240bf52ed7c6b126.scope.
Nov 22 03:42:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:42:18 compute-0 podman[232564]: 2025-11-22 03:42:18.926814719 +0000 UTC m=+0.246207065 container init b63d014255b314583c792e467eb88682a3061afd050f833a240bf52ed7c6b126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jang, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:42:18 compute-0 podman[232564]: 2025-11-22 03:42:18.934501878 +0000 UTC m=+0.253894144 container start b63d014255b314583c792e467eb88682a3061afd050f833a240bf52ed7c6b126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jang, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:42:18 compute-0 serene_jang[232580]: 167 167
Nov 22 03:42:18 compute-0 systemd[1]: libpod-b63d014255b314583c792e467eb88682a3061afd050f833a240bf52ed7c6b126.scope: Deactivated successfully.
Nov 22 03:42:19 compute-0 podman[232564]: 2025-11-22 03:42:19.01741053 +0000 UTC m=+0.336802876 container attach b63d014255b314583c792e467eb88682a3061afd050f833a240bf52ed7c6b126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jang, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:42:19 compute-0 podman[232564]: 2025-11-22 03:42:19.018827377 +0000 UTC m=+0.338219683 container died b63d014255b314583c792e467eb88682a3061afd050f833a240bf52ed7c6b126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jang, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:42:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:42:19 compute-0 sudo[232715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtkskmrqztlcrkqwbrvoyujgtgbtglmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782938.352721-462-212054928493396/AnsiballZ_copy.py'
Nov 22 03:42:19 compute-0 sudo[232715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:19 compute-0 python3.9[232717]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782938.352721-462-212054928493396/.source.json _original_basename=.hdw0i1ws follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:19 compute-0 sudo[232715]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-cde2bb9d77cb6e6a046803b8594bb84b927a8ab0c96a73fbb02b2a6a6a3f4c50-merged.mount: Deactivated successfully.
Nov 22 03:42:19 compute-0 ceph-mon[75011]: pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:19 compute-0 sudo[232868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgdizanxxhhwijrxlzfuvosyyzzyvkvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782939.6114793-477-201214180931819/AnsiballZ_file.py'
Nov 22 03:42:19 compute-0 sudo[232868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:19 compute-0 podman[232564]: 2025-11-22 03:42:19.90751693 +0000 UTC m=+1.226909216 container remove b63d014255b314583c792e467eb88682a3061afd050f833a240bf52ed7c6b126 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:42:19 compute-0 systemd[1]: libpod-conmon-b63d014255b314583c792e467eb88682a3061afd050f833a240bf52ed7c6b126.scope: Deactivated successfully.
Nov 22 03:42:20 compute-0 python3.9[232870]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:20 compute-0 sudo[232868]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:20 compute-0 podman[232878]: 2025-11-22 03:42:20.100650823 +0000 UTC m=+0.070976723 container create 3686330bee3c596efc244d2289f34af793b21aa016b7c63020a93c3259a3b90a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hugle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:42:20 compute-0 podman[232878]: 2025-11-22 03:42:20.061901656 +0000 UTC m=+0.032227606 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:42:20 compute-0 systemd[1]: Started libpod-conmon-3686330bee3c596efc244d2289f34af793b21aa016b7c63020a93c3259a3b90a.scope.
Nov 22 03:42:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68062c63cc11cf97e3e99bf27324abab926a4e5ef44bdf6c879f8dea8ff8337/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68062c63cc11cf97e3e99bf27324abab926a4e5ef44bdf6c879f8dea8ff8337/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68062c63cc11cf97e3e99bf27324abab926a4e5ef44bdf6c879f8dea8ff8337/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c68062c63cc11cf97e3e99bf27324abab926a4e5ef44bdf6c879f8dea8ff8337/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:20 compute-0 podman[232878]: 2025-11-22 03:42:20.321566469 +0000 UTC m=+0.291892349 container init 3686330bee3c596efc244d2289f34af793b21aa016b7c63020a93c3259a3b90a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hugle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:42:20 compute-0 podman[232878]: 2025-11-22 03:42:20.328299554 +0000 UTC m=+0.298625414 container start 3686330bee3c596efc244d2289f34af793b21aa016b7c63020a93c3259a3b90a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hugle, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:42:20 compute-0 podman[232878]: 2025-11-22 03:42:20.420834558 +0000 UTC m=+0.391160468 container attach 3686330bee3c596efc244d2289f34af793b21aa016b7c63020a93c3259a3b90a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hugle, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 22 03:42:20 compute-0 sudo[233048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvojhngekontbjgcewlmqqbclkatbysi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782940.2931786-485-111467175340114/AnsiballZ_stat.py'
Nov 22 03:42:20 compute-0 sudo[233048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:20 compute-0 sudo[233048]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]: {
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:     "0": [
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:         {
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "devices": [
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "/dev/loop3"
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             ],
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_name": "ceph_lv0",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_size": "21470642176",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "name": "ceph_lv0",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "tags": {
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.cluster_name": "ceph",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.crush_device_class": "",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.encrypted": "0",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.osd_id": "0",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.type": "block",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.vdo": "0"
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             },
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "type": "block",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "vg_name": "ceph_vg0"
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:         }
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:     ],
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:     "1": [
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:         {
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "devices": [
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "/dev/loop4"
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             ],
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_name": "ceph_lv1",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_size": "21470642176",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "name": "ceph_lv1",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "tags": {
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.cluster_name": "ceph",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.crush_device_class": "",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.encrypted": "0",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.osd_id": "1",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.type": "block",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.vdo": "0"
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             },
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "type": "block",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "vg_name": "ceph_vg1"
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:         }
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:     ],
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:     "2": [
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:         {
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "devices": [
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "/dev/loop5"
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             ],
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_name": "ceph_lv2",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_size": "21470642176",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "name": "ceph_lv2",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "tags": {
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.cluster_name": "ceph",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.crush_device_class": "",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.encrypted": "0",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.osd_id": "2",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.type": "block",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:                 "ceph.vdo": "0"
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             },
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "type": "block",
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:             "vg_name": "ceph_vg2"
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:         }
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]:     ]
Nov 22 03:42:21 compute-0 quizzical_hugle[232918]: }
Nov 22 03:42:21 compute-0 systemd[1]: libpod-3686330bee3c596efc244d2289f34af793b21aa016b7c63020a93c3259a3b90a.scope: Deactivated successfully.
Nov 22 03:42:21 compute-0 podman[232878]: 2025-11-22 03:42:21.145966436 +0000 UTC m=+1.116292296 container died 3686330bee3c596efc244d2289f34af793b21aa016b7c63020a93c3259a3b90a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hugle, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:42:21 compute-0 sudo[233175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brvrtcifluhsqbsenmwyhesvsdpdznpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782940.2931786-485-111467175340114/AnsiballZ_copy.py'
Nov 22 03:42:21 compute-0 sudo[233175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c68062c63cc11cf97e3e99bf27324abab926a4e5ef44bdf6c879f8dea8ff8337-merged.mount: Deactivated successfully.
Nov 22 03:42:21 compute-0 podman[232878]: 2025-11-22 03:42:21.346567203 +0000 UTC m=+1.316893103 container remove 3686330bee3c596efc244d2289f34af793b21aa016b7c63020a93c3259a3b90a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_hugle, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:42:21 compute-0 systemd[1]: libpod-conmon-3686330bee3c596efc244d2289f34af793b21aa016b7c63020a93c3259a3b90a.scope: Deactivated successfully.
Nov 22 03:42:21 compute-0 sudo[232414]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:21 compute-0 sudo[233175]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:21 compute-0 sudo[233192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:42:21 compute-0 sudo[233192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:21 compute-0 sudo[233192]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:21 compute-0 sudo[233241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:42:21 compute-0 sudo[233241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:21 compute-0 sudo[233241]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:21 compute-0 sudo[233266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:42:21 compute-0 sudo[233266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:21 compute-0 sudo[233266]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:21 compute-0 sudo[233291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:42:21 compute-0 sudo[233291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:21 compute-0 ceph-mon[75011]: pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:22 compute-0 podman[233409]: 2025-11-22 03:42:22.023279052 +0000 UTC m=+0.054245009 container create 121bb557da3abf5141a66a102bdf568a664561f42636b9e9c891539ddb143f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:42:22 compute-0 systemd[1]: Started libpod-conmon-121bb557da3abf5141a66a102bdf568a664561f42636b9e9c891539ddb143f6a.scope.
Nov 22 03:42:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:42:22 compute-0 podman[233409]: 2025-11-22 03:42:22.094009715 +0000 UTC m=+0.124975672 container init 121bb557da3abf5141a66a102bdf568a664561f42636b9e9c891539ddb143f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 03:42:22 compute-0 podman[233409]: 2025-11-22 03:42:22.002855742 +0000 UTC m=+0.033821719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:42:22 compute-0 podman[233409]: 2025-11-22 03:42:22.102996937 +0000 UTC m=+0.133962884 container start 121bb557da3abf5141a66a102bdf568a664561f42636b9e9c891539ddb143f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:42:22 compute-0 podman[233409]: 2025-11-22 03:42:22.106525873 +0000 UTC m=+0.137491820 container attach 121bb557da3abf5141a66a102bdf568a664561f42636b9e9c891539ddb143f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banzai, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:42:22 compute-0 naughty_banzai[233449]: 167 167
Nov 22 03:42:22 compute-0 systemd[1]: libpod-121bb557da3abf5141a66a102bdf568a664561f42636b9e9c891539ddb143f6a.scope: Deactivated successfully.
Nov 22 03:42:22 compute-0 podman[233409]: 2025-11-22 03:42:22.109599656 +0000 UTC m=+0.140565613 container died 121bb557da3abf5141a66a102bdf568a664561f42636b9e9c891539ddb143f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:42:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-abb4d46192d7ff06cb71b26bfe38006595f7ba3a5e12e3e893f978cc7bc5bb9d-merged.mount: Deactivated successfully.
Nov 22 03:42:22 compute-0 podman[233409]: 2025-11-22 03:42:22.152441844 +0000 UTC m=+0.183407781 container remove 121bb557da3abf5141a66a102bdf568a664561f42636b9e9c891539ddb143f6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_banzai, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:42:22 compute-0 systemd[1]: libpod-conmon-121bb557da3abf5141a66a102bdf568a664561f42636b9e9c891539ddb143f6a.scope: Deactivated successfully.
Nov 22 03:42:22 compute-0 podman[233446]: 2025-11-22 03:42:22.172384077 +0000 UTC m=+0.113799856 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Nov 22 03:42:22 compute-0 sudo[233561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zljgrnpqljntpyiyusakbbwnjrthkitf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782941.7670965-502-54248204736733/AnsiballZ_container_config_data.py'
Nov 22 03:42:22 compute-0 sudo[233561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:22 compute-0 podman[233530]: 2025-11-22 03:42:22.33006196 +0000 UTC m=+0.040184689 container create 0571a91eac0c790d1350559c6494f8272f6a8d543b50886a0265665a7e70f35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldwasser, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:42:22 compute-0 systemd[1]: Started libpod-conmon-0571a91eac0c790d1350559c6494f8272f6a8d543b50886a0265665a7e70f35c.scope.
Nov 22 03:42:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad3a23abda7e00fb806e0d7d6e359f6e7a99bf1d5284f2788d45ef87b9f10474/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad3a23abda7e00fb806e0d7d6e359f6e7a99bf1d5284f2788d45ef87b9f10474/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad3a23abda7e00fb806e0d7d6e359f6e7a99bf1d5284f2788d45ef87b9f10474/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad3a23abda7e00fb806e0d7d6e359f6e7a99bf1d5284f2788d45ef87b9f10474/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:22 compute-0 podman[233530]: 2025-11-22 03:42:22.312545499 +0000 UTC m=+0.022668238 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:42:22 compute-0 podman[233530]: 2025-11-22 03:42:22.420162701 +0000 UTC m=+0.130285450 container init 0571a91eac0c790d1350559c6494f8272f6a8d543b50886a0265665a7e70f35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldwasser, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:42:22 compute-0 podman[233530]: 2025-11-22 03:42:22.429051603 +0000 UTC m=+0.139174332 container start 0571a91eac0c790d1350559c6494f8272f6a8d543b50886a0265665a7e70f35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldwasser, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:42:22 compute-0 podman[233530]: 2025-11-22 03:42:22.431911973 +0000 UTC m=+0.142034702 container attach 0571a91eac0c790d1350559c6494f8272f6a8d543b50886a0265665a7e70f35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldwasser, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:42:22 compute-0 python3.9[233565]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 22 03:42:22 compute-0 sudo[233561]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:22 compute-0 ceph-mon[75011]: pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:42:22.995 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:42:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:42:22.996 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:42:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:42:22.997 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:42:23 compute-0 sudo[233730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzsycuhkpfzphofivlfferfkplxcyqqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782942.7562313-511-233011516037263/AnsiballZ_container_config_hash.py'
Nov 22 03:42:23 compute-0 sudo[233730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:23 compute-0 python3.9[233732]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 03:42:23 compute-0 sudo[233730]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]: {
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "osd_id": 1,
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "type": "bluestore"
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:     },
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "osd_id": 0,
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "type": "bluestore"
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:     },
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "osd_id": 2,
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:         "type": "bluestore"
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]:     }
Nov 22 03:42:23 compute-0 jovial_goldwasser[233568]: }
Nov 22 03:42:23 compute-0 systemd[1]: libpod-0571a91eac0c790d1350559c6494f8272f6a8d543b50886a0265665a7e70f35c.scope: Deactivated successfully.
Nov 22 03:42:23 compute-0 podman[233530]: 2025-11-22 03:42:23.4017919 +0000 UTC m=+1.111914619 container died 0571a91eac0c790d1350559c6494f8272f6a8d543b50886a0265665a7e70f35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 03:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad3a23abda7e00fb806e0d7d6e359f6e7a99bf1d5284f2788d45ef87b9f10474-merged.mount: Deactivated successfully.
Nov 22 03:42:23 compute-0 podman[233530]: 2025-11-22 03:42:23.475818571 +0000 UTC m=+1.185941320 container remove 0571a91eac0c790d1350559c6494f8272f6a8d543b50886a0265665a7e70f35c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldwasser, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:42:23 compute-0 systemd[1]: libpod-conmon-0571a91eac0c790d1350559c6494f8272f6a8d543b50886a0265665a7e70f35c.scope: Deactivated successfully.
Nov 22 03:42:23 compute-0 sudo[233291]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:42:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:42:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:42:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:42:23 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ec3b947f-a44d-42b6-aed6-28352769c1de does not exist
Nov 22 03:42:23 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 963990a7-dd3f-4884-b7fa-232a7c2fece1 does not exist
Nov 22 03:42:23 compute-0 podman[233789]: 2025-11-22 03:42:23.543322639 +0000 UTC m=+0.065277238 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 03:42:23 compute-0 sudo[233806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:42:23 compute-0 sudo[233806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:23 compute-0 sudo[233806]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:23 compute-0 sudo[233854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:42:23 compute-0 sudo[233854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:42:23 compute-0 sudo[233854]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:24 compute-0 sudo[233981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exozegeojlpjodhmttxrefeuyimjucaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782943.6025004-520-255445488029773/AnsiballZ_podman_container_info.py'
Nov 22 03:42:24 compute-0 sudo[233981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:42:24 compute-0 python3.9[233983]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 22 03:42:24 compute-0 sudo[233981]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:42:24 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:42:25 compute-0 ceph-mon[75011]: pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:25 compute-0 sudo[234158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjjsvluslbaoysrtkcgvcjvtualvjwws ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763782945.1089516-533-236409842573466/AnsiballZ_edpm_container_manage.py'
Nov 22 03:42:25 compute-0 sudo[234158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:25 compute-0 python3[234160]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 03:42:27 compute-0 podman[234174]: 2025-11-22 03:42:27.033690375 +0000 UTC m=+1.045398790 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 22 03:42:27 compute-0 podman[234229]: 2025-11-22 03:42:27.21450725 +0000 UTC m=+0.079026655 container create 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:42:27 compute-0 podman[234229]: 2025-11-22 03:42:27.168691551 +0000 UTC m=+0.033211016 image pull 5a87eb2d1bea5c4c3bce654551fc0b05a96cf5556b36110e17bddeee8189b072 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 22 03:42:27 compute-0 python3[234160]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 22 03:42:27 compute-0 sudo[234158]: pam_unix(sudo:session): session closed for user root
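
The PODMAN-CONTAINER-DEBUG line is the useful part of this sequence: it shows edpm_container_manage flattening the config_data dict from the container create event into a podman create command (one --volume per list entry, --env from 'environment', the healthcheck 'test' as --healthcheck-command, net/privileged/labels passed through). A hypothetical sketch of that translation; render_podman_create is my name, not the module's, and only a subset of the real flags is handled:

    # Sketch: flatten an edpm config_data dict (as logged above) into
    # "podman create" arguments. Hypothetical; not the module's code.
    import shlex

    def render_podman_create(name: str, cfg: dict) -> str:
        args = ["podman", "create", "--name", name,
                "--label", f"container_name={name}"]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        if "net" in cfg:
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args += ["--privileged=True"]          # matches the logged form
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        return shlex.join(args)

    cfg = {
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "healthcheck": {"test": "/openstack/healthcheck"},
        "image": "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified",
        "net": "host",
        "privileged": True,
        "volumes": ["/etc/multipath.conf:/etc/multipath.conf:ro"],
    }
    print(render_podman_create("multipathd", cfg))
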
Nov 22 03:42:27 compute-0 ceph-mon[75011]: pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:27 compute-0 sudo[234415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pefznkzlsgdzyasbfbvoejefxnubksaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782947.583155-541-10479653692269/AnsiballZ_stat.py'
Nov 22 03:42:27 compute-0 sudo[234415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:28 compute-0 python3.9[234417]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:42:28 compute-0 sudo[234415]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:28 compute-0 sudo[234569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdwzjwbdsbugkwqrrohzcqstcofkugte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782948.402923-550-177177281515364/AnsiballZ_file.py'
Nov 22 03:42:28 compute-0 sudo[234569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:28 compute-0 python3.9[234571]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:28 compute-0 sudo[234569]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:42:29 compute-0 sudo[234645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoegzzypeanwzkdonoeihlxeqbztrayt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782948.402923-550-177177281515364/AnsiballZ_stat.py'
Nov 22 03:42:29 compute-0 sudo[234645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:29 compute-0 python3.9[234647]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:42:29 compute-0 sudo[234645]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:29 compute-0 ceph-mon[75011]: pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:30 compute-0 sudo[234796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jknbobnsqjfnddfipuwngcjlignmdsiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782949.5764506-550-132574155333606/AnsiballZ_copy.py'
Nov 22 03:42:30 compute-0 sudo[234796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:30 compute-0 python3.9[234798]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763782949.5764506-550-132574155333606/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:30 compute-0 sudo[234796]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:30 compute-0 sudo[234872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdtnrkppniovqrzewwaonagvbkkgdrox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782949.5764506-550-132574155333606/AnsiballZ_systemd.py'
Nov 22 03:42:30 compute-0 sudo[234872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:30 compute-0 python3.9[234874]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:42:30 compute-0 systemd[1]: Reloading.
Nov 22 03:42:31 compute-0 systemd-rc-local-generator[234899]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:42:31 compute-0 systemd-sysv-generator[234902]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:42:31 compute-0 sudo[234872]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:31 compute-0 ceph-mon[75011]: pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:31 compute-0 sudo[234983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwvumuivzbrbmisbprgwvjexxrsmddmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782949.5764506-550-132574155333606/AnsiballZ_systemd.py'
Nov 22 03:42:31 compute-0 sudo[234983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:31 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 22 03:42:32 compute-0 python3.9[234985]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:42:32 compute-0 systemd[1]: Reloading.
Nov 22 03:42:32 compute-0 systemd-rc-local-generator[235008]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:42:32 compute-0 systemd-sysv-generator[235014]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:42:32 compute-0 systemd[1]: Starting multipathd container...
Nov 22 03:42:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c624e8a87a7c34bfe3bad4bc9079af7166193ffc558906b5e862783a4e952bd3/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c624e8a87a7c34bfe3bad4bc9079af7166193ffc558906b5e862783a4e952bd3/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:32 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706.
Nov 22 03:42:32 compute-0 podman[235025]: 2025-11-22 03:42:32.561569041 +0000 UTC m=+0.139483070 container init 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, managed_by=edpm_ansible)
Nov 22 03:42:32 compute-0 multipathd[235041]: + sudo -E kolla_set_configs
Nov 22 03:42:32 compute-0 podman[235025]: 2025-11-22 03:42:32.598234736 +0000 UTC m=+0.176148745 container start 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 03:42:32 compute-0 podman[235025]: multipathd
Nov 22 03:42:32 compute-0 systemd[1]: Started multipathd container.
Nov 22 03:42:32 compute-0 sudo[235047]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 22 03:42:32 compute-0 sudo[235047]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 03:42:32 compute-0 sudo[235047]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 03:42:32 compute-0 sudo[234983]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:32 compute-0 multipathd[235041]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 03:42:32 compute-0 multipathd[235041]: INFO:__main__:Validating config file
Nov 22 03:42:32 compute-0 multipathd[235041]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 03:42:32 compute-0 multipathd[235041]: INFO:__main__:Writing out command to execute
Nov 22 03:42:32 compute-0 sudo[235047]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:32 compute-0 multipathd[235041]: ++ cat /run_command
Nov 22 03:42:32 compute-0 multipathd[235041]: + CMD='/usr/sbin/multipathd -d'
Nov 22 03:42:32 compute-0 multipathd[235041]: + ARGS=
Nov 22 03:42:32 compute-0 multipathd[235041]: + sudo kolla_copy_cacerts
Nov 22 03:42:32 compute-0 podman[235048]: 2025-11-22 03:42:32.691443253 +0000 UTC m=+0.082498158 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:42:32 compute-0 systemd[1]: 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706-32e12f5a7ca3fb28.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 03:42:32 compute-0 systemd[1]: 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706-32e12f5a7ca3fb28.service: Failed with result 'exit-code'.
Nov 22 03:42:32 compute-0 sudo[235077]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 22 03:42:32 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:42:32 compute-0 sudo[235077]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 03:42:32 compute-0 sudo[235077]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 03:42:32 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:42:32 compute-0 sudo[235077]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:32 compute-0 multipathd[235041]: + [[ ! -n '' ]]
Nov 22 03:42:32 compute-0 multipathd[235041]: + . kolla_extend_start
Nov 22 03:42:32 compute-0 multipathd[235041]: Running command: '/usr/sbin/multipathd -d'
Nov 22 03:42:32 compute-0 multipathd[235041]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 22 03:42:32 compute-0 multipathd[235041]: + umask 0022
Nov 22 03:42:32 compute-0 multipathd[235041]: + exec /usr/sbin/multipathd -d
Nov 22 03:42:32 compute-0 multipathd[235041]: 3205.385117 | --------start up--------
Nov 22 03:42:32 compute-0 multipathd[235041]: 3205.385134 | read /etc/multipath.conf
Nov 22 03:42:32 compute-0 multipathd[235041]: 3205.391425 | path checkers start up
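
The xtrace lines from inside the container spell out the kolla start contract: kolla_set_configs loads and validates /var/lib/kolla/config_files/config.json, copies files per the COPY_ALWAYS strategy, and writes the service command to /run_command; kolla_copy_cacerts installs CA material; the entrypoint then execs the command. A condensed Python rendering of that final exec stage (a sketch under those assumptions, not kolla's actual shell code):

    # Final stage of the kolla entrypoint traced above: read the command
    # that kolla_set_configs wrote to /run_command, then exec it so the
    # service replaces the entrypoint as the container's main process.
    import os, shlex

    def kolla_exec(run_command_path: str = "/run_command") -> None:
        with open(run_command_path) as f:
            cmd = f.read().strip()        # e.g. "/usr/sbin/multipathd -d"
        print(f"Running command: '{cmd}'")
        os.umask(0o022)                   # matches the "umask 0022" trace
        argv = shlex.split(cmd)
        os.execvp(argv[0], argv)
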
Nov 22 03:42:33 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 22 03:42:33 compute-0 python3.9[235231]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:42:33 compute-0 ceph-mon[75011]: pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:33 compute-0 sudo[235384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcylfgsqsyojdxhnrgekspigiqzqixoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782953.5441017-586-175818719923529/AnsiballZ_command.py'
Nov 22 03:42:33 compute-0 sudo[235384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:34 compute-0 python3.9[235386]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:42:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:42:34 compute-0 sudo[235384]: pam_unix(sudo:session): session closed for user root
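
This podman ps --filter volume=/etc/multipath.conf call is the restart gate: it names exactly the containers that bind-mount the multipath config, so a config change restarts only those. An equivalent sketch, assuming nothing beyond the flags visible in the logged command; the edpm_<name>.service mapping is inferred from the restart that follows in this log:

    # List containers that bind-mount a given file, then restart their
    # systemd wrappers, mirroring the gate + restart seen in this log.
    import subprocess

    def containers_mounting(path: str) -> list[str]:
        out = subprocess.run(
            ["podman", "ps", "--filter", f"volume={path}",
             "--format", "{{.Names}}"],
            check=True, capture_output=True, text=True,
        ).stdout
        return [n for n in out.splitlines() if n]

    for name in containers_mounting("/etc/multipath.conf"):
        # Naming inferred from this log (multipathd -> edpm_multipathd.service)
        subprocess.run(["systemctl", "restart", f"edpm_{name}.service"],
                       check=True)
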
Nov 22 03:42:34 compute-0 sudo[235549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opujjhgyimrvutirgqpjoudoobcbitds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782954.419147-594-6877111409862/AnsiballZ_systemd.py'
Nov 22 03:42:34 compute-0 sudo[235549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:35 compute-0 python3.9[235551]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:42:35 compute-0 systemd[1]: Stopping multipathd container...
Nov 22 03:42:35 compute-0 multipathd[235041]: 3207.895040 | exit (signal)
Nov 22 03:42:35 compute-0 multipathd[235041]: 3207.895805 | --------shut down-------
Nov 22 03:42:35 compute-0 systemd[1]: libpod-66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706.scope: Deactivated successfully.
Nov 22 03:42:35 compute-0 podman[235555]: 2025-11-22 03:42:35.274211127 +0000 UTC m=+0.083190466 container died 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:42:35 compute-0 systemd[1]: 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706-32e12f5a7ca3fb28.timer: Deactivated successfully.
Nov 22 03:42:35 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706.
Nov 22 03:42:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706-userdata-shm.mount: Deactivated successfully.
Nov 22 03:42:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c624e8a87a7c34bfe3bad4bc9079af7166193ffc558906b5e862783a4e952bd3-merged.mount: Deactivated successfully.
Nov 22 03:42:35 compute-0 podman[235555]: 2025-11-22 03:42:35.320842346 +0000 UTC m=+0.129821665 container cleanup 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:42:35 compute-0 podman[235555]: multipathd
Nov 22 03:42:35 compute-0 podman[235580]: multipathd
Nov 22 03:42:35 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 22 03:42:35 compute-0 systemd[1]: Stopped multipathd container.
Nov 22 03:42:35 compute-0 systemd[1]: Starting multipathd container...
Nov 22 03:42:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c624e8a87a7c34bfe3bad4bc9079af7166193ffc558906b5e862783a4e952bd3/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c624e8a87a7c34bfe3bad4bc9079af7166193ffc558906b5e862783a4e952bd3/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 03:42:35 compute-0 ceph-mon[75011]: pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:35 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706.
Nov 22 03:42:35 compute-0 podman[235593]: 2025-11-22 03:42:35.616013199 +0000 UTC m=+0.168048605 container init 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 03:42:35 compute-0 multipathd[235609]: + sudo -E kolla_set_configs
Nov 22 03:42:35 compute-0 sudo[235615]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Nov 22 03:42:35 compute-0 podman[235593]: 2025-11-22 03:42:35.657726225 +0000 UTC m=+0.209761640 container start 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 03:42:35 compute-0 sudo[235615]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 03:42:35 compute-0 sudo[235615]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 03:42:35 compute-0 podman[235593]: multipathd
Nov 22 03:42:35 compute-0 systemd[1]: Started multipathd container.
Nov 22 03:42:35 compute-0 sudo[235549]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:35 compute-0 multipathd[235609]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 03:42:35 compute-0 multipathd[235609]: INFO:__main__:Validating config file
Nov 22 03:42:35 compute-0 multipathd[235609]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 03:42:35 compute-0 multipathd[235609]: INFO:__main__:Writing out command to execute
Nov 22 03:42:35 compute-0 sudo[235615]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:35 compute-0 multipathd[235609]: ++ cat /run_command
Nov 22 03:42:35 compute-0 multipathd[235609]: + CMD='/usr/sbin/multipathd -d'
Nov 22 03:42:35 compute-0 multipathd[235609]: + ARGS=
Nov 22 03:42:35 compute-0 multipathd[235609]: + sudo kolla_copy_cacerts
Nov 22 03:42:35 compute-0 sudo[235635]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Nov 22 03:42:35 compute-0 sudo[235635]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Nov 22 03:42:35 compute-0 sudo[235635]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Nov 22 03:42:35 compute-0 sudo[235635]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:35 compute-0 multipathd[235609]: + [[ ! -n '' ]]
Nov 22 03:42:35 compute-0 multipathd[235609]: + . kolla_extend_start
Nov 22 03:42:35 compute-0 multipathd[235609]: Running command: '/usr/sbin/multipathd -d'
Nov 22 03:42:35 compute-0 multipathd[235609]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 22 03:42:35 compute-0 multipathd[235609]: + umask 0022
Nov 22 03:42:35 compute-0 multipathd[235609]: + exec /usr/sbin/multipathd -d
Nov 22 03:42:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:35 compute-0 multipathd[235609]: 3208.426818 | --------start up--------
Nov 22 03:42:35 compute-0 multipathd[235609]: 3208.426833 | read /etc/multipath.conf
Nov 22 03:42:35 compute-0 multipathd[235609]: 3208.432995 | path checkers start up
Nov 22 03:42:35 compute-0 podman[235616]: 2025-11-22 03:42:35.780323605 +0000 UTC m=+0.103617625 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 03:42:35 compute-0 systemd[1]: 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706-2ff57d7c25deae49.service: Main process exited, code=exited, status=1/FAILURE
Nov 22 03:42:35 compute-0 systemd[1]: 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706-2ff57d7c25deae49.service: Failed with result 'exit-code'.
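
The two FAILURE lines above are expected noise, not a broken service: the transient 66b4...-2ff57d7c25deae49.service unit is one periodic '/usr/bin/podman healthcheck run' invocation, and it exits non-zero because the just-restarted container is still in health_status=starting (failing_streak=1 in the preceding event). Once /openstack/healthcheck passes, the streak resets. A small polling sketch built only on that exit-code contract (0 = healthy):

    # Re-run a container's healthcheck until it passes, matching the
    # "podman healthcheck run <id>" transient units in this log.
    import subprocess, time

    def wait_healthy(container: str, attempts: int = 10,
                     delay: float = 3.0) -> bool:
        for _ in range(attempts):
            rc = subprocess.run(["podman", "healthcheck", "run", container],
                                capture_output=True).returncode
            if rc == 0:        # healthcheck command exited 0 -> healthy
                return True
            time.sleep(delay)  # still starting (or unhealthy); retry
        return False
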
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:42:36
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['cephfs.cephfs.data', 'volumes', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'backups', 'images', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'default.rgw.control']
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:42:36 compute-0 sudo[235797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjiywhxjnmdosfrbeknfzfdyqjuxsiry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782955.8727765-602-240790540938318/AnsiballZ_file.py'
Nov 22 03:42:36 compute-0 sudo[235797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:42:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:42:36 compute-0 python3.9[235799]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:36 compute-0 sudo[235797]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:37 compute-0 sudo[235949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tulkhrxffsjbtynhnxfswbfzzmhbijni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782956.882574-614-275156314074322/AnsiballZ_file.py'
Nov 22 03:42:37 compute-0 sudo[235949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:37 compute-0 python3.9[235951]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 22 03:42:37 compute-0 sudo[235949]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:37 compute-0 ceph-mon[75011]: pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:37 compute-0 sudo[236101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlogtxfvxeundpixrlwtwtaloxoksgfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782957.640427-622-114028647000406/AnsiballZ_modprobe.py'
Nov 22 03:42:37 compute-0 sudo[236101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:38 compute-0 python3.9[236103]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 22 03:42:38 compute-0 kernel: Key type psk registered
Nov 22 03:42:38 compute-0 sudo[236101]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:38 compute-0 sudo[236263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpqxsqkpdblqqzyucmousbubmlwucegw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782958.3810365-630-80537953702119/AnsiballZ_stat.py'
Nov 22 03:42:38 compute-0 sudo[236263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:38 compute-0 python3.9[236265]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:42:38 compute-0 sudo[236263]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:42:39 compute-0 sudo[236386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrcbkcjozttbpxacmjmnwyvwpgzzzjqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782958.3810365-630-80537953702119/AnsiballZ_copy.py'
Nov 22 03:42:39 compute-0 sudo[236386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:39 compute-0 python3.9[236388]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1763782958.3810365-630-80537953702119/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:39 compute-0 sudo[236386]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:39 compute-0 ceph-mon[75011]: pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:39 compute-0 sudo[236538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlscrctnvvydfptlxtwllkflyvwnbrnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782959.6498637-646-200866314734669/AnsiballZ_lineinfile.py'
Nov 22 03:42:39 compute-0 sudo[236538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:40 compute-0 python3.9[236540]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:40 compute-0 sudo[236538]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:40 compute-0 sudo[236690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tinxrgdichpqwydcricugikxmnxxkcom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782960.3689382-654-72020755787217/AnsiballZ_systemd.py'
Nov 22 03:42:40 compute-0 sudo[236690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:41 compute-0 python3.9[236692]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:42:41 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 22 03:42:41 compute-0 systemd[1]: Stopped Load Kernel Modules.
Nov 22 03:42:41 compute-0 systemd[1]: Stopping Load Kernel Modules...
Nov 22 03:42:41 compute-0 systemd[1]: Starting Load Kernel Modules...
Nov 22 03:42:41 compute-0 systemd[1]: Finished Load Kernel Modules.
Nov 22 03:42:41 compute-0 sudo[236690]: pam_unix(sudo:session): session closed for user root
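
Taken together, these tasks implement load-now-and-persist for nvme-fabrics: modprobe loads it into the running kernel (the 'Key type psk registered' line is likely a side effect of TLS/PSK support pulled in by the module chain), the nvme-fabrics.conf drop-in under /etc/modules-load.d plus the /etc/modules line persist it, and restarting systemd-modules-load.service verifies the persisted config parses now rather than failing at next boot. A condensed sketch of the same sequence (illustrative; the deployment does this via the Ansible modules logged above):

    # Load a kernel module immediately and persist it across reboots.
    import subprocess
    from pathlib import Path

    def ensure_module(name: str) -> None:
        subprocess.run(["modprobe", name], check=True)       # load now
        conf = Path("/etc/modules-load.d") / f"{name}.conf"
        conf.parent.mkdir(mode=0o755, exist_ok=True)
        conf.write_text(name + "\n")                         # load at boot
        conf.chmod(0o644)
        # Restart the loader so a bad drop-in fails here, not at boot time.
        subprocess.run(["systemctl", "restart",
                        "systemd-modules-load.service"], check=True)

    ensure_module("nvme-fabrics")
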
Nov 22 03:42:41 compute-0 ceph-mon[75011]: pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:41 compute-0 sudo[236846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omrhnislsnikhwjzxdryvnyixtppzfli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782961.3894708-662-181464618065494/AnsiballZ_dnf.py'
Nov 22 03:42:41 compute-0 sudo[236846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:42 compute-0 python3.9[236848]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 22 03:42:43 compute-0 ceph-mon[75011]: pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:43 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 22 03:42:43 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 22 03:42:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:42:44 compute-0 systemd[1]: Reloading.
Nov 22 03:42:44 compute-0 systemd-sysv-generator[236885]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:42:44 compute-0 systemd-rc-local-generator[236881]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:42:44 compute-0 systemd[1]: Reloading.
Nov 22 03:42:44 compute-0 systemd-rc-local-generator[236918]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:42:44 compute-0 systemd-sysv-generator[236922]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:42:44 compute-0 ceph-mon[75011]: pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:45 compute-0 systemd-logind[799]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 22 03:42:45 compute-0 systemd-logind[799]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 22 03:42:45 compute-0 lvm[236969]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 03:42:45 compute-0 lvm[236969]: VG ceph_vg2 finished
Nov 22 03:42:45 compute-0 lvm[236967]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 03:42:45 compute-0 lvm[236967]: VG ceph_vg0 finished
Nov 22 03:42:45 compute-0 lvm[236966]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 03:42:45 compute-0 lvm[236966]: VG ceph_vg1 finished
Nov 22 03:42:45 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 22 03:42:45 compute-0 systemd[1]: Starting man-db-cache-update.service...
Nov 22 03:42:45 compute-0 systemd[1]: Reloading.
Nov 22 03:42:45 compute-0 systemd-rc-local-generator[237018]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:42:45 compute-0 systemd-sysv-generator[237022]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:42:45 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 22 03:42:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:42:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
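
The pg_autoscaler numbers are internally consistent and worth decoding once: each pool's raw PG target is its share of raw space times its bias times the cluster PG budget. Taking the budget P as N_OSD x mon_target_pg_per_osd (apparently 3 OSDs x the default 100 = 300 here, consistent with the three ceph_vg* volume groups and the 60 GiB total), every line above reproduces exactly:

    \text{pg\_target} = \text{space\_ratio} \times \text{bias} \times P,
    \qquad P = N_{\text{OSD}} \times \text{mon\_target\_pg\_per\_osd} = 3 \times 100 = 300

    \text{.mgr:}\quad 7.185749983720779\times10^{-6} \times 1.0 \times 300 = 0.0021557249951162337

    \text{cephfs.cephfs.meta:}\quad 5.087256625643029\times10^{-7} \times 4.0 \times 300 = 0.0006104707950771635

The 'quantized to N' step then rounds to the autoscaler's allowed PG counts, and every pool stays at its current pg_num (32, 16, or 1 here) because none of the raw targets comes anywhere near the threshold that would trigger a change.
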
Nov 22 03:42:46 compute-0 sudo[236846]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:46 compute-0 sudo[238137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxdxeahdnneyvnzcwzuibyzbjgxppios ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782966.4453192-670-114082384257071/AnsiballZ_systemd_service.py'
Nov 22 03:42:46 compute-0 sudo[238137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:46 compute-0 ceph-mon[75011]: pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:47 compute-0 python3.9[238162]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:42:47 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 22 03:42:47 compute-0 systemd[1]: Finished man-db-cache-update.service.
Nov 22 03:42:47 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.668s CPU time.
Nov 22 03:42:47 compute-0 systemd[1]: run-r90260378f5a2464793327400ad910899.service: Deactivated successfully.
Nov 22 03:42:47 compute-0 systemd[1]: Stopping Open-iSCSI...
Nov 22 03:42:47 compute-0 iscsid[225768]: iscsid shutting down.
Nov 22 03:42:47 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Nov 22 03:42:47 compute-0 systemd[1]: Stopped Open-iSCSI.
Nov 22 03:42:47 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 22 03:42:47 compute-0 systemd[1]: Starting Open-iSCSI...
Nov 22 03:42:47 compute-0 systemd[1]: Started Open-iSCSI.
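[editor's note] The stop/start pair above is the ansible.builtin.systemd_service call at 03:42:47 (name=iscsid state=restarted) taking effect. The "unmet condition check" line is standard systemd behaviour: a leading '!' in ConditionPathExists inverts the test, so the one-time iscsi.service configuration runs only when /etc/iscsi/initiatorname.iscsi is absent. A small sketch of that condition logic (illustrative, not systemd's code):

    import os

    def condition_path_exists(expr: str) -> bool:
        # systemd's ConditionPathExists=; a leading '!' negates the check.
        negate = expr.startswith("!")
        exists = os.path.exists(expr.lstrip("!"))
        return not exists if negate else exists

    # On this host the initiator name file exists, so the unit is skipped.
    print(condition_path_exists("!/etc/iscsi/initiatorname.iscsi"))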
Nov 22 03:42:47 compute-0 sudo[238137]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:48 compute-0 python3.9[238467]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 22 03:42:48 compute-0 ceph-mon[75011]: pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:48 compute-0 sudo[238621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgjesbzjfhkuwzxuiymyvwjtdapcrtdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782968.563659-688-263822820690991/AnsiballZ_file.py'
Nov 22 03:42:48 compute-0 sudo[238621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:49 compute-0 python3.9[238623]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:42:49 compute-0 sudo[238621]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:42:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:49 compute-0 sudo[238773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tokdcqraafxqybfjpbfgotyhfdivldoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782969.48834-699-67360323379144/AnsiballZ_systemd_service.py'
Nov 22 03:42:49 compute-0 sudo[238773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:50 compute-0 python3.9[238775]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:42:50 compute-0 systemd[1]: Reloading.
Nov 22 03:42:50 compute-0 systemd-rc-local-generator[238799]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:42:50 compute-0 systemd-sysv-generator[238803]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:42:50 compute-0 sudo[238773]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:50 compute-0 ceph-mon[75011]: pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:51 compute-0 python3.9[238961]: ansible-ansible.builtin.service_facts Invoked
Nov 22 03:42:51 compute-0 network[238978]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 22 03:42:51 compute-0 network[238979]: 'network-scripts' will be removed from distribution in near future.
Nov 22 03:42:51 compute-0 network[238980]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 22 03:42:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:52 compute-0 podman[238991]: 2025-11-22 03:42:52.335340096 +0000 UTC m=+0.093245551 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
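[editor's note] The podman record above is a periodic health probe of the ovn_controller container; health_status=healthy and health_failing_streak=0 mean its configured test ('/openstack/healthcheck') exited 0. Running the same check by hand looks roughly like this, assuming podman on PATH and the container name from the log:

    import subprocess

    # `podman healthcheck run NAME` executes the container's configured
    # healthcheck command and exits 0 when healthy, non-zero otherwise.
    rc = subprocess.run(["podman", "healthcheck", "run",
                         "ovn_controller"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")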
Nov 22 03:42:52 compute-0 ceph-mon[75011]: pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:53 compute-0 podman[239073]: 2025-11-22 03:42:53.679981713 +0000 UTC m=+0.088395061 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:42:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:42:54 compute-0 ceph-mon[75011]: pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:55 compute-0 sudo[239295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sytasgefxlpbguemuhahelqrfwqseddg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782974.8684368-718-105105121649341/AnsiballZ_systemd_service.py'
Nov 22 03:42:55 compute-0 sudo[239295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:55 compute-0 python3.9[239297]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:42:55 compute-0 sudo[239295]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:55 compute-0 sudo[239448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgvkovdrudppmjixwyqhfgthnxsvjkck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782975.6370614-718-113002657194383/AnsiballZ_systemd_service.py'
Nov 22 03:42:55 compute-0 sudo[239448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:56 compute-0 python3.9[239450]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:42:56 compute-0 sudo[239448]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:56 compute-0 sudo[239601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yudhbgvgtggpteeowdusjczfxufnopds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782976.3767533-718-224845844035720/AnsiballZ_systemd_service.py'
Nov 22 03:42:56 compute-0 sudo[239601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:56 compute-0 ceph-mon[75011]: pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:56 compute-0 python3.9[239603]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:42:57 compute-0 sudo[239601]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:57 compute-0 sudo[239754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrgufyellvvwzzaswmpnveufkbdfxnlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782977.0981166-718-82928492103760/AnsiballZ_systemd_service.py'
Nov 22 03:42:57 compute-0 sudo[239754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:57 compute-0 python3.9[239756]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:42:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:57 compute-0 sudo[239754]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:58 compute-0 sudo[239907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xojdajnglmhfosppnqlrdcibsodytxql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782977.879086-718-216588218009412/AnsiballZ_systemd_service.py'
Nov 22 03:42:58 compute-0 sudo[239907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:58 compute-0 python3.9[239909]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:42:58 compute-0 sudo[239907]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:58 compute-0 sudo[240060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkbymxokjjtscwgakxewpetfloeldrpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782978.620304-718-149706251058156/AnsiballZ_systemd_service.py'
Nov 22 03:42:58 compute-0 sudo[240060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:42:59 compute-0 python3.9[240062]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:42:59 compute-0 ceph-mon[75011]: pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:42:59 compute-0 sudo[240060]: pam_unix(sudo:session): session closed for user root
Nov 22 03:42:59 compute-0 sudo[240213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvuvoboceftpoybbnlutkcnebbdfxdki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782979.381049-718-205915549831100/AnsiballZ_systemd_service.py'
Nov 22 03:42:59 compute-0 sudo[240213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:42:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:00 compute-0 python3.9[240215]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:43:00 compute-0 sudo[240213]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:00 compute-0 sudo[240366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ribyelyrqduwsnrwgiujobyiilcrdiys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782980.1577299-718-250078619671552/AnsiballZ_systemd_service.py'
Nov 22 03:43:00 compute-0 sudo[240366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:00 compute-0 python3.9[240368]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:43:00 compute-0 sudo[240366]: pam_unix(sudo:session): session closed for user root
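[editor's note] The eight sudo/AnsiballZ_systemd_service pairs from 03:42:55 to 03:43:00 walk the tripleo_nova_* units one by one with state=stopped enabled=False, part of the TripleO-to-EDPM adoption cleanup. A hedged sketch of the equivalent loop; the unit names come from the log, while the subprocess mapping is an assumption about what the module does underneath:

    import subprocess

    SUFFIXES = ["compute", "migration_target", "api_cron", "api",
                "conductor", "metadata", "scheduler", "vnc_proxy"]

    for s in SUFFIXES:
        unit = f"tripleo_nova_{s}.service"
        # state=stopped, enabled=False roughly map to these two calls.
        subprocess.run(["systemctl", "stop", unit], check=False)
        subprocess.run(["systemctl", "disable", unit], check=False)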
Nov 22 03:43:01 compute-0 ceph-mon[75011]: pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:01 compute-0 sudo[240519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzxauaqwuonholfhbfjhtvryvoiwsoxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782981.1081457-777-39271233319951/AnsiballZ_file.py'
Nov 22 03:43:01 compute-0 sudo[240519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:01 compute-0 python3.9[240521]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:01 compute-0 sudo[240519]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:02 compute-0 sudo[240671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abkiczjjwjgfuwbgtcblhdixerribfyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782981.7307253-777-265948791294912/AnsiballZ_file.py'
Nov 22 03:43:02 compute-0 sudo[240671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:02 compute-0 python3.9[240673]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:02 compute-0 sudo[240671]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:02 compute-0 sudo[240823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhoxjdidiocrfusupzdflelwwmyjuxyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782982.5241442-777-232563727740423/AnsiballZ_file.py'
Nov 22 03:43:02 compute-0 sudo[240823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:03 compute-0 python3.9[240825]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:03 compute-0 sudo[240823]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:03 compute-0 ceph-mon[75011]: pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:03 compute-0 sudo[240975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwdjeflhmwpeimbjhxomiuksrahjlqkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782983.1647954-777-272236300830245/AnsiballZ_file.py'
Nov 22 03:43:03 compute-0 sudo[240975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:03 compute-0 python3.9[240977]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:03 compute-0 sudo[240975]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:04 compute-0 sudo[241127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbmeqiufoutszaucexjkiqvpbexfkljm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782983.8179016-777-150552675640822/AnsiballZ_file.py'
Nov 22 03:43:04 compute-0 sudo[241127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:43:04 compute-0 python3.9[241129]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:04 compute-0 sudo[241127]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:04 compute-0 sudo[241279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrqcdozainnpffbibcrvqnetilgqtwiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782984.464772-777-161079705907879/AnsiballZ_file.py'
Nov 22 03:43:04 compute-0 sudo[241279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:05 compute-0 python3.9[241281]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:05 compute-0 sudo[241279]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:05 compute-0 ceph-mon[75011]: pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:05 compute-0 sudo[241431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayfuycunujalogxihrlejqystzrcsaih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782985.1412516-777-84886103130948/AnsiballZ_file.py'
Nov 22 03:43:05 compute-0 sudo[241431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:05 compute-0 python3.9[241433]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:05 compute-0 sudo[241431]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:43:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:43:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:43:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:43:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:43:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:43:06 compute-0 sudo[241596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svqegtvdualrhnuhayivopzmxprwfurx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782985.8553019-777-101046088498735/AnsiballZ_file.py'
Nov 22 03:43:06 compute-0 sudo[241596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:06 compute-0 podman[241557]: 2025-11-22 03:43:06.237310648 +0000 UTC m=+0.070395275 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 03:43:06 compute-0 python3.9[241604]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:06 compute-0 sudo[241596]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:06 compute-0 sudo[241755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtndxpclitrmzwfvpaacqoknwpxrkllj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782986.5659883-834-16557337734143/AnsiballZ_file.py'
Nov 22 03:43:06 compute-0 sudo[241755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:07 compute-0 python3.9[241757]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:07 compute-0 sudo[241755]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:07 compute-0 ceph-mon[75011]: pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:07 compute-0 sudo[241907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gubzimkeqozucfkxmzczlkyzntvqbrzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782987.2046566-834-15605518814170/AnsiballZ_file.py'
Nov 22 03:43:07 compute-0 sudo[241907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:07 compute-0 python3.9[241909]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:07 compute-0 sudo[241907]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:08 compute-0 sudo[242059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkmsnvpmnzonxcieaaqsrwpujnntfsxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782987.8670034-834-147919320572949/AnsiballZ_file.py'
Nov 22 03:43:08 compute-0 sudo[242059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:08 compute-0 python3.9[242061]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:08 compute-0 sudo[242059]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:08 compute-0 sudo[242211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjvgvzrfiwosgtiuvnszgscjijnvvhgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782988.4717932-834-120491602644525/AnsiballZ_file.py'
Nov 22 03:43:08 compute-0 sudo[242211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:08 compute-0 python3.9[242213]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:08 compute-0 sudo[242211]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:43:09 compute-0 ceph-mon[75011]: pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:09 compute-0 sudo[242363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbmpaqbjlkgnlnhwesttcxfvfnufddzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782989.0726705-834-117633808782326/AnsiballZ_file.py'
Nov 22 03:43:09 compute-0 sudo[242363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:09 compute-0 python3.9[242365]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:09 compute-0 sudo[242363]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:09 compute-0 sudo[242515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rotyiofmuhgvxeqiwdaxokudrnrwcvvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782989.6673634-834-160868900585630/AnsiballZ_file.py'
Nov 22 03:43:09 compute-0 sudo[242515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:10 compute-0 python3.9[242517]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:10 compute-0 sudo[242515]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:10 compute-0 sudo[242667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fprqzrtsfkkjhojoiosyhfdwfruvzuqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782990.256563-834-166274387255792/AnsiballZ_file.py'
Nov 22 03:43:10 compute-0 sudo[242667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:10 compute-0 python3.9[242669]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:10 compute-0 sudo[242667]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:11 compute-0 sudo[242819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svjaaiatrfohfnabcamkpmkdmqtyglis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782990.8046138-834-265530427719967/AnsiballZ_file.py'
Nov 22 03:43:11 compute-0 sudo[242819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:11 compute-0 python3.9[242821]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:11 compute-0 sudo[242819]: pam_unix(sudo:session): session closed for user root
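[editor's note] Between 03:43:01 and 03:43:11 the play deletes each unit's file twice, first under /usr/lib/systemd/system and then under /etc/systemd/system, via ansible.builtin.file state=absent; the daemon_reload at 03:43:14 below picks up the removals. A compact sketch of the same sweep (paths and unit names from the log):

    from pathlib import Path

    SUFFIXES = ["compute", "migration_target", "api_cron", "api",
                "conductor", "metadata", "scheduler", "vnc_proxy"]

    for s in SUFFIXES:
        for root in ("/usr/lib/systemd/system", "/etc/systemd/system"):
            # file state=absent: remove if present, succeed if already gone.
            Path(root, f"tripleo_nova_{s}.service").unlink(missing_ok=True)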
Nov 22 03:43:11 compute-0 ceph-mon[75011]: pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:11 compute-0 sudo[242971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxbxsktzrfvvhsyvvcttjpeeqxugdwki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782991.6018186-892-91216751695502/AnsiballZ_command.py'
Nov 22 03:43:11 compute-0 sudo[242971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:12 compute-0 python3.9[242973]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:43:12 compute-0 sudo[242971]: pam_unix(sudo:session): session closed for user root
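[editor's note] The shell fragment logged above disables certmonger only if it is currently active, and masks it only when no local unit file exists under /etc/systemd/system. The same guard logic translated to Python, purely as an illustration of the logged command:

    import os
    import subprocess

    def is_active(unit: str) -> bool:
        return subprocess.run(["systemctl", "is-active", unit],
                              capture_output=True).returncode == 0

    if is_active("certmonger.service"):
        subprocess.run(["systemctl", "disable", "--now",
                        "certmonger.service"])
        # Mask only when there is no local override unit, mirroring
        # the `test -f ... || systemctl mask ...` in the log.
        if not os.path.exists("/etc/systemd/system/certmonger.service"):
            subprocess.run(["systemctl", "mask", "certmonger.service"])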
Nov 22 03:43:12 compute-0 python3.9[243125]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 22 03:43:13 compute-0 ceph-mon[75011]: pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:13 compute-0 sudo[243275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzarrldnhtogmisnkejcgehbmwhqeilj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782993.2242768-910-221896511010614/AnsiballZ_systemd_service.py'
Nov 22 03:43:13 compute-0 sudo[243275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:14 compute-0 python3.9[243277]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:43:14 compute-0 systemd[1]: Reloading.
Nov 22 03:43:14 compute-0 systemd-sysv-generator[243308]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:43:14 compute-0 systemd-rc-local-generator[243304]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:43:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:43:14 compute-0 sudo[243275]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:14 compute-0 ceph-mon[75011]: pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:15 compute-0 sudo[243462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sapjholwhglerfgzistlblmfaiorbczx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782994.584739-918-83851745428982/AnsiballZ_command.py'
Nov 22 03:43:15 compute-0 sudo[243462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:15 compute-0 python3.9[243464]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:43:15 compute-0 sudo[243462]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:15 compute-0 sudo[243615]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgmqpnwrohfnadbkdugaczxipkoslung ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782995.3486836-918-163774200382248/AnsiballZ_command.py'
Nov 22 03:43:15 compute-0 sudo[243615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:15 compute-0 python3.9[243617]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:43:16 compute-0 sudo[243615]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:16 compute-0 sudo[243768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfjusodyluucrpatxilaraxpnhxxlqvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782996.1324542-918-258824650521732/AnsiballZ_command.py'
Nov 22 03:43:16 compute-0 sudo[243768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:16 compute-0 python3.9[243770]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:43:16 compute-0 sudo[243768]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:16 compute-0 ceph-mon[75011]: pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:17 compute-0 sudo[243921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzbyakajsdwoasxnktskkegcpflhavyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782996.751582-918-223664748986994/AnsiballZ_command.py'
Nov 22 03:43:17 compute-0 sudo[243921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:17 compute-0 python3.9[243923]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:43:17 compute-0 sudo[243921]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:17 compute-0 sudo[244074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrbcmoytdulcnmoblqhgtwqfmxyovpwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782997.3969421-918-185696992239446/AnsiballZ_command.py'
Nov 22 03:43:17 compute-0 sudo[244074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:17 compute-0 python3.9[244076]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:43:17 compute-0 sudo[244074]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:18 compute-0 sudo[244227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oesjfnavuclderqqjjhzhebweiklwagn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782998.045628-918-117567205476675/AnsiballZ_command.py'
Nov 22 03:43:18 compute-0 sudo[244227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:18 compute-0 python3.9[244229]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:43:18 compute-0 sudo[244227]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:18 compute-0 ceph-mon[75011]: pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:19 compute-0 sudo[244380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qugghrwloykeiehcrihzujgqomdzjxvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782998.6751115-918-22527480304730/AnsiballZ_command.py'
Nov 22 03:43:19 compute-0 sudo[244380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:43:19 compute-0 python3.9[244382]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:43:19 compute-0 sudo[244380]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:19 compute-0 sudo[244533]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jurgwlrniqnylkgosgabnexyhjyigfae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763782999.3279927-918-249686691106427/AnsiballZ_command.py'
Nov 22 03:43:19 compute-0 sudo[244533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:19 compute-0 python3.9[244535]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 22 03:43:19 compute-0 sudo[244533]: pam_unix(sudo:session): session closed for user root
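[editor's note] With the unit files gone, the play calls systemctl reset-failed on each name (03:43:15 to 03:43:19) so stale "failed" state for the removed tripleo_nova_* units does not linger in systemd's view. The same sweep as a sketch:

    import subprocess

    for s in ["compute", "migration_target", "api_cron", "api",
              "conductor", "metadata", "scheduler", "vnc_proxy"]:
        # check=False keeps the loop going even if a name is already
        # unknown to systemd after the daemon-reload above.
        subprocess.run(["/usr/bin/systemctl", "reset-failed",
                        f"tripleo_nova_{s}.service"], check=False)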
Nov 22 03:43:20 compute-0 ceph-mon[75011]: pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:21 compute-0 sudo[244686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuxsjyvhlpgyrravhgmuqpeholmlptoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783000.7170815-997-120544189234895/AnsiballZ_file.py'
Nov 22 03:43:21 compute-0 sudo[244686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:21 compute-0 python3.9[244688]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:21 compute-0 sudo[244686]: pam_unix(sudo:session): session closed for user root
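[editor's note] The file task above creates /var/lib/openstack/config/nova owned by zuul, mode 0755, with SELinux type container_file_t so containers may read it; the tasks that follow repeat this for the other config directories. A rough out-of-Ansible equivalent, using chcon as a stand-in for the module's setype handling:

    import os
    import shutil
    import subprocess

    path = "/var/lib/openstack/config/nova"
    os.makedirs(path, mode=0o755, exist_ok=True)
    shutil.chown(path, user="zuul", group="zuul")
    # setype=container_file_t relabels the directory for container access.
    subprocess.run(["chcon", "-t", "container_file_t", path], check=False)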
Nov 22 03:43:21 compute-0 sudo[244838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwxmwxgiywjrdojuoakafswjvrlvausq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783001.3849137-997-35640352246780/AnsiballZ_file.py'
Nov 22 03:43:21 compute-0 sudo[244838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:21 compute-0 python3.9[244840]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:21 compute-0 sudo[244838]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:22 compute-0 sudo[244990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwcddnkqdcqbwrbwrvqfhkegkwcojpea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783002.049656-997-21029255576082/AnsiballZ_file.py'
Nov 22 03:43:22 compute-0 sudo[244990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:22 compute-0 podman[244992]: 2025-11-22 03:43:22.541302847 +0000 UTC m=+0.136654124 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 03:43:22 compute-0 python3.9[244993]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:22 compute-0 sudo[244990]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:43:22.996 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:43:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:43:22.996 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:43:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:43:22.997 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
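[editor's note] The three ovn_metadata_agent lines above are oslo.concurrency's standard acquire/held/released debug trio around ProcessMonitor._check_child_processes; the "inner" frame in lockutils.py points at the synchronized decorator. A minimal sketch of the pattern that produces this logging (the body is a placeholder, not neutron's code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # neutron's monitor checks and respawns its haproxy children here;
        # entering and leaving emits the acquire/held/released debug lines.
        pass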
Nov 22 03:43:23 compute-0 sudo[245166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wantoizprhnuitzibdniircbtrwplhve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783002.7682216-1019-37707627055010/AnsiballZ_file.py'
Nov 22 03:43:23 compute-0 sudo[245166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:23 compute-0 ceph-mon[75011]: pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:23 compute-0 python3.9[245168]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:23 compute-0 sudo[245166]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:23 compute-0 sudo[245245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:43:23 compute-0 sudo[245245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:23 compute-0 sudo[245245]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:23 compute-0 sudo[245293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:43:23 compute-0 sudo[245293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:23 compute-0 sudo[245293]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:23 compute-0 sudo[245405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhqbmnqqptrqubxbfetgvvnshpeumavc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783003.529876-1019-229726270162797/AnsiballZ_file.py'
Nov 22 03:43:23 compute-0 sudo[245349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:43:23 compute-0 sudo[245405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:23 compute-0 sudo[245349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:23 compute-0 sudo[245349]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:23 compute-0 podman[245337]: 2025-11-22 03:43:23.858312934 +0000 UTC m=+0.078843534 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 03:43:23 compute-0 sudo[245412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:43:23 compute-0 sudo[245412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:24 compute-0 python3.9[245410]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:24 compute-0 sudo[245405]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:43:24 compute-0 sudo[245412]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:43:24 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:43:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:43:24 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:43:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:43:24 compute-0 sudo[245617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btxbewncrkshliwmumgkxcabjhatvyja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783004.1087255-1019-100044746802099/AnsiballZ_file.py'
Nov 22 03:43:24 compute-0 sudo[245617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:24 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:43:24 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 543b8675-6613-4093-8152-f535728a7fff does not exist
Nov 22 03:43:24 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 4fb0dbd6-7385-4260-a5b6-d1946e0823dc does not exist
Nov 22 03:43:24 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev d2b89a7c-e3b3-4f70-abd7-8a1884a8ac7e does not exist
Nov 22 03:43:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:43:24 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:43:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:43:24 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:43:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:43:24 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:43:24 compute-0 sudo[245620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:43:24 compute-0 sudo[245620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:24 compute-0 sudo[245620]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:24 compute-0 sudo[245645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:43:24 compute-0 sudo[245645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:24 compute-0 sudo[245645]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:24 compute-0 python3.9[245619]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:24 compute-0 sudo[245617]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:24 compute-0 sudo[245670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:43:24 compute-0 sudo[245670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:24 compute-0 sudo[245670]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:24 compute-0 sudo[245715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:43:24 compute-0 sudo[245715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:25 compute-0 sudo[245921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olsbjotkezxclrczefphucancusujrgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783004.780412-1019-76988175059460/AnsiballZ_file.py'
Nov 22 03:43:25 compute-0 sudo[245921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:25 compute-0 podman[245894]: 2025-11-22 03:43:25.180411012 +0000 UTC m=+0.113573470 container create 5773e7e148010d70a9e5033dfd1f4faf9d29bef7c4ddcd0e6af2bf6e26c6cb6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_haibt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:43:25 compute-0 podman[245894]: 2025-11-22 03:43:25.088758076 +0000 UTC m=+0.021920554 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:43:25 compute-0 systemd[1]: Started libpod-conmon-5773e7e148010d70a9e5033dfd1f4faf9d29bef7c4ddcd0e6af2bf6e26c6cb6d.scope.
Nov 22 03:43:25 compute-0 python3.9[245927]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:43:25 compute-0 sudo[245921]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:25 compute-0 ceph-mon[75011]: pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:25 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:43:25 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:43:25 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:43:25 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:43:25 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:43:25 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:43:25 compute-0 podman[245894]: 2025-11-22 03:43:25.457255493 +0000 UTC m=+0.390418031 container init 5773e7e148010d70a9e5033dfd1f4faf9d29bef7c4ddcd0e6af2bf6e26c6cb6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:43:25 compute-0 podman[245894]: 2025-11-22 03:43:25.469049306 +0000 UTC m=+0.402211754 container start 5773e7e148010d70a9e5033dfd1f4faf9d29bef7c4ddcd0e6af2bf6e26c6cb6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_haibt, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:43:25 compute-0 optimistic_haibt[245930]: 167 167
Nov 22 03:43:25 compute-0 systemd[1]: libpod-5773e7e148010d70a9e5033dfd1f4faf9d29bef7c4ddcd0e6af2bf6e26c6cb6d.scope: Deactivated successfully.
Nov 22 03:43:25 compute-0 podman[245894]: 2025-11-22 03:43:25.531023628 +0000 UTC m=+0.464186166 container attach 5773e7e148010d70a9e5033dfd1f4faf9d29bef7c4ddcd0e6af2bf6e26c6cb6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:43:25 compute-0 podman[245894]: 2025-11-22 03:43:25.531573817 +0000 UTC m=+0.464736295 container died 5773e7e148010d70a9e5033dfd1f4faf9d29bef7c4ddcd0e6af2bf6e26c6cb6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:43:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-665a1f30d7ae24405b2916e9800685fb7eafe58905bdbcec617cd54e7215bba9-merged.mount: Deactivated successfully.
Nov 22 03:43:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:25 compute-0 podman[245894]: 2025-11-22 03:43:25.823616699 +0000 UTC m=+0.756779177 container remove 5773e7e148010d70a9e5033dfd1f4faf9d29bef7c4ddcd0e6af2bf6e26c6cb6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_haibt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:43:25 compute-0 sudo[246098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxhrhbjcxgnwiilfzdvxvwasoxhkpsmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783005.4721718-1019-113002892422661/AnsiballZ_file.py'
Nov 22 03:43:25 compute-0 sudo[246098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:25 compute-0 systemd[1]: libpod-conmon-5773e7e148010d70a9e5033dfd1f4faf9d29bef7c4ddcd0e6af2bf6e26c6cb6d.scope: Deactivated successfully.
Nov 22 03:43:26 compute-0 python3.9[246100]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:26 compute-0 sudo[246098]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:26 compute-0 podman[246108]: 2025-11-22 03:43:26.068932328 +0000 UTC m=+0.059208195 container create ec7b04cb8727eb8508ba36f99b580c9b017c4f6d5a72c5279debcd60e2ceafd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:43:26 compute-0 systemd[1]: Started libpod-conmon-ec7b04cb8727eb8508ba36f99b580c9b017c4f6d5a72c5279debcd60e2ceafd9.scope.
Nov 22 03:43:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b361bcaf05b3f7d9296a594bc2f9bbe19287008604e4b03d67dde7f4bfa12b6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b361bcaf05b3f7d9296a594bc2f9bbe19287008604e4b03d67dde7f4bfa12b6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b361bcaf05b3f7d9296a594bc2f9bbe19287008604e4b03d67dde7f4bfa12b6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b361bcaf05b3f7d9296a594bc2f9bbe19287008604e4b03d67dde7f4bfa12b6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b361bcaf05b3f7d9296a594bc2f9bbe19287008604e4b03d67dde7f4bfa12b6a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:26 compute-0 podman[246108]: 2025-11-22 03:43:26.052811289 +0000 UTC m=+0.043087186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:43:26 compute-0 podman[246108]: 2025-11-22 03:43:26.152507044 +0000 UTC m=+0.142782911 container init ec7b04cb8727eb8508ba36f99b580c9b017c4f6d5a72c5279debcd60e2ceafd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:43:26 compute-0 podman[246108]: 2025-11-22 03:43:26.160733735 +0000 UTC m=+0.151009602 container start ec7b04cb8727eb8508ba36f99b580c9b017c4f6d5a72c5279debcd60e2ceafd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:43:26 compute-0 podman[246108]: 2025-11-22 03:43:26.165253523 +0000 UTC m=+0.155529420 container attach ec7b04cb8727eb8508ba36f99b580c9b017c4f6d5a72c5279debcd60e2ceafd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:43:26 compute-0 sudo[246278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iydvgmtidxtusowfqhdhkzcqgnvcodtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783006.115722-1019-45220102688085/AnsiballZ_file.py'
Nov 22 03:43:26 compute-0 sudo[246278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:26 compute-0 python3.9[246280]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:26 compute-0 sudo[246278]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:27 compute-0 sudo[246444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tezvxntmdkrlqenromancytvqtzepvjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783006.7274218-1019-27684170988703/AnsiballZ_file.py'
Nov 22 03:43:27 compute-0 sudo[246444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:27 compute-0 reverent_shannon[246148]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:43:27 compute-0 reverent_shannon[246148]: --> relative data size: 1.0
Nov 22 03:43:27 compute-0 reverent_shannon[246148]: --> All data devices are unavailable
Nov 22 03:43:27 compute-0 systemd[1]: libpod-ec7b04cb8727eb8508ba36f99b580c9b017c4f6d5a72c5279debcd60e2ceafd9.scope: Deactivated successfully.
Nov 22 03:43:27 compute-0 podman[246108]: 2025-11-22 03:43:27.206741523 +0000 UTC m=+1.197017410 container died ec7b04cb8727eb8508ba36f99b580c9b017c4f6d5a72c5279debcd60e2ceafd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:43:27 compute-0 python3.9[246448]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b361bcaf05b3f7d9296a594bc2f9bbe19287008604e4b03d67dde7f4bfa12b6a-merged.mount: Deactivated successfully.
Nov 22 03:43:27 compute-0 sudo[246444]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:27 compute-0 podman[246108]: 2025-11-22 03:43:27.267016228 +0000 UTC m=+1.257292095 container remove ec7b04cb8727eb8508ba36f99b580c9b017c4f6d5a72c5279debcd60e2ceafd9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_shannon, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:43:27 compute-0 systemd[1]: libpod-conmon-ec7b04cb8727eb8508ba36f99b580c9b017c4f6d5a72c5279debcd60e2ceafd9.scope: Deactivated successfully.
Nov 22 03:43:27 compute-0 sudo[245715]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:27 compute-0 sudo[246490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:43:27 compute-0 sudo[246490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:27 compute-0 sudo[246490]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:27 compute-0 ceph-mon[75011]: pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:27 compute-0 sudo[246517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:43:27 compute-0 sudo[246517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:27 compute-0 sudo[246517]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:27 compute-0 sudo[246542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:43:27 compute-0 sudo[246542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:27 compute-0 sudo[246542]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:27 compute-0 sudo[246567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:43:27 compute-0 sudo[246567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:28 compute-0 podman[246633]: 2025-11-22 03:43:28.014658716 +0000 UTC m=+0.067969050 container create 3b5172d97af5fa04d078e6dc5c4e3d36e983d75f9253839f4a083456ff78d754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:43:28 compute-0 systemd[1]: Started libpod-conmon-3b5172d97af5fa04d078e6dc5c4e3d36e983d75f9253839f4a083456ff78d754.scope.
Nov 22 03:43:28 compute-0 podman[246633]: 2025-11-22 03:43:27.969229034 +0000 UTC m=+0.022539388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:43:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:43:28 compute-0 podman[246633]: 2025-11-22 03:43:28.126382074 +0000 UTC m=+0.179692428 container init 3b5172d97af5fa04d078e6dc5c4e3d36e983d75f9253839f4a083456ff78d754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:43:28 compute-0 podman[246633]: 2025-11-22 03:43:28.134050564 +0000 UTC m=+0.187360898 container start 3b5172d97af5fa04d078e6dc5c4e3d36e983d75f9253839f4a083456ff78d754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:43:28 compute-0 zen_montalcini[246649]: 167 167
Nov 22 03:43:28 compute-0 systemd[1]: libpod-3b5172d97af5fa04d078e6dc5c4e3d36e983d75f9253839f4a083456ff78d754.scope: Deactivated successfully.
Nov 22 03:43:28 compute-0 podman[246633]: 2025-11-22 03:43:28.142336087 +0000 UTC m=+0.195646441 container attach 3b5172d97af5fa04d078e6dc5c4e3d36e983d75f9253839f4a083456ff78d754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:43:28 compute-0 podman[246633]: 2025-11-22 03:43:28.143140421 +0000 UTC m=+0.196450765 container died 3b5172d97af5fa04d078e6dc5c4e3d36e983d75f9253839f4a083456ff78d754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:43:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-818af3695a8a27d1a2bb170c85e46450878779801bd780719e9ffb284035f31e-merged.mount: Deactivated successfully.
Nov 22 03:43:28 compute-0 podman[246633]: 2025-11-22 03:43:28.482612777 +0000 UTC m=+0.535923141 container remove 3b5172d97af5fa04d078e6dc5c4e3d36e983d75f9253839f4a083456ff78d754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:43:28 compute-0 systemd[1]: libpod-conmon-3b5172d97af5fa04d078e6dc5c4e3d36e983d75f9253839f4a083456ff78d754.scope: Deactivated successfully.
Nov 22 03:43:28 compute-0 podman[246674]: 2025-11-22 03:43:28.761013586 +0000 UTC m=+0.103397277 container create dc5ae1d56b91e31bc8c41d910ddedf50459fa10dadfa84dec559651367f681d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:43:28 compute-0 podman[246674]: 2025-11-22 03:43:28.698083207 +0000 UTC m=+0.040466938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:43:28 compute-0 systemd[1]: Started libpod-conmon-dc5ae1d56b91e31bc8c41d910ddedf50459fa10dadfa84dec559651367f681d0.scope.
Nov 22 03:43:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1526163801a6898b288bdf38ac4a84d38dd8cb7ac3cd8b5b344db18a71f6742/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1526163801a6898b288bdf38ac4a84d38dd8cb7ac3cd8b5b344db18a71f6742/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1526163801a6898b288bdf38ac4a84d38dd8cb7ac3cd8b5b344db18a71f6742/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1526163801a6898b288bdf38ac4a84d38dd8cb7ac3cd8b5b344db18a71f6742/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:28 compute-0 podman[246674]: 2025-11-22 03:43:28.902899202 +0000 UTC m=+0.245282943 container init dc5ae1d56b91e31bc8c41d910ddedf50459fa10dadfa84dec559651367f681d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brahmagupta, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:43:28 compute-0 podman[246674]: 2025-11-22 03:43:28.914135623 +0000 UTC m=+0.256519314 container start dc5ae1d56b91e31bc8c41d910ddedf50459fa10dadfa84dec559651367f681d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brahmagupta, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 03:43:28 compute-0 podman[246674]: 2025-11-22 03:43:28.923595011 +0000 UTC m=+0.265978702 container attach dc5ae1d56b91e31bc8c41d910ddedf50459fa10dadfa84dec559651367f681d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:43:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:43:29 compute-0 ceph-mon[75011]: pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]: {
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:     "0": [
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:         {
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "devices": [
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "/dev/loop3"
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             ],
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_name": "ceph_lv0",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_size": "21470642176",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "name": "ceph_lv0",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "tags": {
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.cluster_name": "ceph",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.crush_device_class": "",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.encrypted": "0",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.osd_id": "0",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.type": "block",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.vdo": "0"
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             },
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "type": "block",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "vg_name": "ceph_vg0"
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:         }
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:     ],
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:     "1": [
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:         {
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "devices": [
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "/dev/loop4"
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             ],
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_name": "ceph_lv1",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_size": "21470642176",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "name": "ceph_lv1",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "tags": {
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.cluster_name": "ceph",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.crush_device_class": "",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.encrypted": "0",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.osd_id": "1",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.type": "block",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.vdo": "0"
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             },
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "type": "block",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "vg_name": "ceph_vg1"
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:         }
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:     ],
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:     "2": [
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:         {
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "devices": [
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "/dev/loop5"
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             ],
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_name": "ceph_lv2",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_size": "21470642176",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "name": "ceph_lv2",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "tags": {
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.cluster_name": "ceph",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.crush_device_class": "",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.encrypted": "0",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.osd_id": "2",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.type": "block",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:                 "ceph.vdo": "0"
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             },
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "type": "block",
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:             "vg_name": "ceph_vg2"
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:         }
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]:     ]
Nov 22 03:43:29 compute-0 thirsty_brahmagupta[246691]: }
Nov 22 03:43:29 compute-0 systemd[1]: libpod-dc5ae1d56b91e31bc8c41d910ddedf50459fa10dadfa84dec559651367f681d0.scope: Deactivated successfully.
Nov 22 03:43:29 compute-0 podman[246674]: 2025-11-22 03:43:29.678708076 +0000 UTC m=+1.021091817 container died dc5ae1d56b91e31bc8c41d910ddedf50459fa10dadfa84dec559651367f681d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:43:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1526163801a6898b288bdf38ac4a84d38dd8cb7ac3cd8b5b344db18a71f6742-merged.mount: Deactivated successfully.
Nov 22 03:43:29 compute-0 podman[246674]: 2025-11-22 03:43:29.739647073 +0000 UTC m=+1.082030734 container remove dc5ae1d56b91e31bc8c41d910ddedf50459fa10dadfa84dec559651367f681d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 03:43:29 compute-0 systemd[1]: libpod-conmon-dc5ae1d56b91e31bc8c41d910ddedf50459fa10dadfa84dec559651367f681d0.scope: Deactivated successfully.
Nov 22 03:43:29 compute-0 sudo[246567]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:29 compute-0 sudo[246712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:43:29 compute-0 sudo[246712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:29 compute-0 sudo[246712]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:29 compute-0 sudo[246737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:43:29 compute-0 sudo[246737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:29 compute-0 sudo[246737]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:30 compute-0 sudo[246762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:43:30 compute-0 sudo[246762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:30 compute-0 sudo[246762]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:30 compute-0 sudo[246787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:43:30 compute-0 sudo[246787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:30 compute-0 podman[246851]: 2025-11-22 03:43:30.541163942 +0000 UTC m=+0.064938035 container create 44b4793d8983ba5714ccf68a920912d3367bce1334964372210058019ea183e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_edison, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:43:30 compute-0 systemd[1]: Started libpod-conmon-44b4793d8983ba5714ccf68a920912d3367bce1334964372210058019ea183e0.scope.
Nov 22 03:43:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:43:30 compute-0 podman[246851]: 2025-11-22 03:43:30.516763861 +0000 UTC m=+0.040538014 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:43:30 compute-0 podman[246851]: 2025-11-22 03:43:30.612263368 +0000 UTC m=+0.136037461 container init 44b4793d8983ba5714ccf68a920912d3367bce1334964372210058019ea183e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_edison, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:43:30 compute-0 podman[246851]: 2025-11-22 03:43:30.618584586 +0000 UTC m=+0.142358679 container start 44b4793d8983ba5714ccf68a920912d3367bce1334964372210058019ea183e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_edison, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:43:30 compute-0 podman[246851]: 2025-11-22 03:43:30.621831849 +0000 UTC m=+0.145605992 container attach 44b4793d8983ba5714ccf68a920912d3367bce1334964372210058019ea183e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_edison, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:43:30 compute-0 beautiful_edison[246867]: 167 167
Nov 22 03:43:30 compute-0 systemd[1]: libpod-44b4793d8983ba5714ccf68a920912d3367bce1334964372210058019ea183e0.scope: Deactivated successfully.
Nov 22 03:43:30 compute-0 podman[246851]: 2025-11-22 03:43:30.623888284 +0000 UTC m=+0.147662387 container died 44b4793d8983ba5714ccf68a920912d3367bce1334964372210058019ea183e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_edison, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:43:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-453555e77be5c8a665b8f5d1afd16fc7795e6542f950aa2eeb56b898d91f9ca8-merged.mount: Deactivated successfully.
Nov 22 03:43:30 compute-0 podman[246851]: 2025-11-22 03:43:30.663513349 +0000 UTC m=+0.187287452 container remove 44b4793d8983ba5714ccf68a920912d3367bce1334964372210058019ea183e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_edison, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:43:30 compute-0 systemd[1]: libpod-conmon-44b4793d8983ba5714ccf68a920912d3367bce1334964372210058019ea183e0.scope: Deactivated successfully.
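The short-lived beautiful_edison container above exists only to answer one question: the "167 167" it printed is the uid/gid pair cephadm infers for the ceph user inside the image (167 is the ceph uid/gid in the official images). A sketch of an equivalent probe, with the caveat that the exact path cephadm stats is an implementation detail:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Print "<uid> <gid>" of a ceph-owned path inside the image,
    # matching the "167 167" emitted by the container above.
    subprocess.run(["podman", "run", "--rm", image,
                    "stat", "-c", "%u %g", "/var/lib/ceph"], check=True)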
Nov 22 03:43:30 compute-0 podman[246891]: 2025-11-22 03:43:30.853412476 +0000 UTC m=+0.044085112 container create 80cbcf8d02eba0009787f5d53f27a7a9940153561d1fef73832bd8f1713e44fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:43:30 compute-0 systemd[1]: Started libpod-conmon-80cbcf8d02eba0009787f5d53f27a7a9940153561d1fef73832bd8f1713e44fc.scope.
Nov 22 03:43:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82121ce60e5ea740d5500de50ac7edbd293d3a0eccabd092e19f5ea22d2449d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82121ce60e5ea740d5500de50ac7edbd293d3a0eccabd092e19f5ea22d2449d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82121ce60e5ea740d5500de50ac7edbd293d3a0eccabd092e19f5ea22d2449d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82121ce60e5ea740d5500de50ac7edbd293d3a0eccabd092e19f5ea22d2449d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:43:30 compute-0 podman[246891]: 2025-11-22 03:43:30.834792076 +0000 UTC m=+0.025464722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:43:30 compute-0 podman[246891]: 2025-11-22 03:43:30.941412921 +0000 UTC m=+0.132085557 container init 80cbcf8d02eba0009787f5d53f27a7a9940153561d1fef73832bd8f1713e44fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:43:30 compute-0 podman[246891]: 2025-11-22 03:43:30.952493838 +0000 UTC m=+0.143166444 container start 80cbcf8d02eba0009787f5d53f27a7a9940153561d1fef73832bd8f1713e44fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:43:30 compute-0 podman[246891]: 2025-11-22 03:43:30.955320009 +0000 UTC m=+0.145992615 container attach 80cbcf8d02eba0009787f5d53f27a7a9940153561d1fef73832bd8f1713e44fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 03:43:31 compute-0 ceph-mon[75011]: pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:31 compute-0 great_hellman[246908]: {
Nov 22 03:43:31 compute-0 great_hellman[246908]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "osd_id": 1,
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "type": "bluestore"
Nov 22 03:43:31 compute-0 great_hellman[246908]:     },
Nov 22 03:43:31 compute-0 great_hellman[246908]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "osd_id": 0,
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "type": "bluestore"
Nov 22 03:43:31 compute-0 great_hellman[246908]:     },
Nov 22 03:43:31 compute-0 great_hellman[246908]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "osd_id": 2,
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:43:31 compute-0 great_hellman[246908]:         "type": "bluestore"
Nov 22 03:43:31 compute-0 great_hellman[246908]:     }
Nov 22 03:43:31 compute-0 great_hellman[246908]: }
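The JSON block printed by great_hellman is the complete `ceph-volume raw list` inventory for this node: three BlueStore OSDs on LVM devices, keyed by osd_uuid, all under ceph_fsid 7adcc38b-6484-5de6-b879-33a0309153df. A minimal sketch of replaying the same query and walking that shape (assumes cephadm is installed and run as root; the field names mirror the output above):

    import json
    import subprocess

    # Re-issue the inventory query cephadm ran above (fsid copied from the log).
    out = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "7adcc38b-6484-5de6-b879-33a0309153df",
         "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    # The payload is a dict keyed by osd_uuid, one entry per OSD.
    for osd_uuid, meta in json.loads(out).items():
        print(f"osd.{meta['osd_id']}: {meta['device']} ({meta['type']})")

The mgr then persists this device report through the config-key set commands logged just after the sudo session closes.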
Nov 22 03:43:31 compute-0 systemd[1]: libpod-80cbcf8d02eba0009787f5d53f27a7a9940153561d1fef73832bd8f1713e44fc.scope: Deactivated successfully.
Nov 22 03:43:31 compute-0 podman[246891]: 2025-11-22 03:43:31.944986665 +0000 UTC m=+1.135659261 container died 80cbcf8d02eba0009787f5d53f27a7a9940153561d1fef73832bd8f1713e44fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-82121ce60e5ea740d5500de50ac7edbd293d3a0eccabd092e19f5ea22d2449d8-merged.mount: Deactivated successfully.
Nov 22 03:43:32 compute-0 podman[246891]: 2025-11-22 03:43:32.038611557 +0000 UTC m=+1.229284153 container remove 80cbcf8d02eba0009787f5d53f27a7a9940153561d1fef73832bd8f1713e44fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:43:32 compute-0 systemd[1]: libpod-conmon-80cbcf8d02eba0009787f5d53f27a7a9940153561d1fef73832bd8f1713e44fc.scope: Deactivated successfully.
Nov 22 03:43:32 compute-0 sudo[246787]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:43:32 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:43:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:43:32 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:43:32 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 859084ac-422c-49da-83cf-05441d5d0522 does not exist
Nov 22 03:43:32 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 07063cab-5a2b-4730-b705-75b1ecd31088 does not exist
Nov 22 03:43:32 compute-0 sudo[247007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:43:32 compute-0 sudo[247007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:32 compute-0 sudo[247007]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:32 compute-0 sudo[247032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:43:32 compute-0 sudo[247032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:43:32 compute-0 sudo[247032]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:32 compute-0 sudo[247130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgxuukcvzwtrhxfehjlzvtyibpnhvhat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783011.9206095-1208-255293827027033/AnsiballZ_getent.py'
Nov 22 03:43:32 compute-0 sudo[247130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:32 compute-0 python3.9[247132]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 22 03:43:32 compute-0 sudo[247130]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:33 compute-0 ceph-mon[75011]: pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:43:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:43:33 compute-0 sudo[247283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whnrizucckxqbgwwtsvkymfxglkkwnww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783012.748105-1216-246794909883805/AnsiballZ_group.py'
Nov 22 03:43:33 compute-0 sudo[247283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:33 compute-0 python3.9[247285]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 22 03:43:33 compute-0 groupadd[247286]: group added to /etc/group: name=nova, GID=42436
Nov 22 03:43:33 compute-0 groupadd[247286]: group added to /etc/gshadow: name=nova
Nov 22 03:43:33 compute-0 groupadd[247286]: new group: name=nova, GID=42436
Nov 22 03:43:33 compute-0 sudo[247283]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:43:34 compute-0 sudo[247441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvpgakckgnnkiazsahngijilxbsvjzjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783013.7494078-1224-130764455056552/AnsiballZ_user.py'
Nov 22 03:43:34 compute-0 sudo[247441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:34 compute-0 python3.9[247443]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 22 03:43:34 compute-0 useradd[247445]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Nov 22 03:43:34 compute-0 useradd[247445]: add 'nova' to group 'libvirt'
Nov 22 03:43:34 compute-0 useradd[247445]: add 'nova' to shadow group 'libvirt'
Nov 22 03:43:34 compute-0 sudo[247441]: pam_unix(sudo:session): session closed for user root
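The group/user pair created here (GID/UID 42436, supplementary group libvirt, shell /bin/sh, home created) was driven by ansible.builtin.group and ansible.builtin.user as logged above. A rough shell-level equivalent of those two module runs, sketched in Python for illustration only:

    import subprocess

    def run(*cmd):
        # Echo, then execute; both commands need root, as in the sudo session above.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run("groupadd", "--gid", "42436", "nova")
    run("useradd", "--uid", "42436", "--gid", "42436",
        "--groups", "libvirt", "--comment", "nova user",
        "--shell", "/bin/sh", "--create-home", "nova")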
Nov 22 03:43:35 compute-0 ceph-mon[75011]: pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:35 compute-0 sshd-session[247476]: Accepted publickey for zuul from 192.168.122.30 port 41262 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 03:43:35 compute-0 systemd-logind[799]: New session 51 of user zuul.
Nov 22 03:43:35 compute-0 systemd[1]: Started Session 51 of User zuul.
Nov 22 03:43:35 compute-0 sshd-session[247476]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 03:43:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:35 compute-0 sshd-session[247479]: Received disconnect from 192.168.122.30 port 41262:11: disconnected by user
Nov 22 03:43:35 compute-0 sshd-session[247479]: Disconnected from user zuul 192.168.122.30 port 41262
Nov 22 03:43:35 compute-0 sshd-session[247476]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:43:35 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Nov 22 03:43:35 compute-0 systemd-logind[799]: Session 51 logged out. Waiting for processes to exit.
Nov 22 03:43:35 compute-0 systemd-logind[799]: Removed session 51.
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:43:36
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'vms', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'backups', '.rgw.root', 'images']
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:43:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:43:36 compute-0 podman[247579]: 2025-11-22 03:43:36.425550965 +0000 UTC m=+0.091345727 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 03:43:36 compute-0 python3.9[247650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:37 compute-0 ceph-mon[75011]: pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:37 compute-0 python3.9[247771]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763783016.0196235-1249-87465708630737/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:38 compute-0 python3.9[247921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:38 compute-0 python3.9[247997]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:43:39 compute-0 python3.9[248147]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:39 compute-0 ceph-mon[75011]: pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:39 compute-0 python3.9[248268]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763783018.596238-1249-31553439440960/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:40 compute-0 python3.9[248418]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:41 compute-0 python3.9[248539]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763783019.9806764-1249-251080359549678/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:41 compute-0 ceph-mon[75011]: pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:41 compute-0 python3.9[248689]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:42 compute-0 python3.9[248810]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763783021.3113086-1249-186882539710380/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:43 compute-0 python3.9[248960]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:43 compute-0 ceph-mon[75011]: pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:43 compute-0 python3.9[249081]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763783022.5160174-1249-216350458166284/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:43:44 compute-0 sudo[249231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avrqmzjrpglcydjdddwnsrgwzasujdbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783023.9061596-1332-260854892454978/AnsiballZ_file.py'
Nov 22 03:43:44 compute-0 sudo[249231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:44 compute-0 python3.9[249233]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:44 compute-0 sudo[249231]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:45 compute-0 sudo[249383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yekhsodqenknotztbzdezaouhorukuab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783024.6939452-1340-87015299341895/AnsiballZ_copy.py'
Nov 22 03:43:45 compute-0 sudo[249383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:45 compute-0 ceph-mon[75011]: pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:45 compute-0 python3.9[249385]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:43:45 compute-0 sudo[249383]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:45 compute-0 sudo[249535]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loknwlhpypwrefsaftssjjtsbpkropkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783025.4679623-1348-183197906443974/AnsiballZ_stat.py'
Nov 22 03:43:45 compute-0 sudo[249535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:43:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:43:46 compute-0 python3.9[249537]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:43:46 compute-0 sudo[249535]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:46 compute-0 sudo[249687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azhndwxfgsihgfvibizygjwbkbecxxch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783026.276742-1356-112457509355119/AnsiballZ_stat.py'
Nov 22 03:43:46 compute-0 sudo[249687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:46 compute-0 python3.9[249689]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:46 compute-0 sudo[249687]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:47 compute-0 ceph-mon[75011]: pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:47 compute-0 sudo[249810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgbxdlofqlsejvikupgpmeetgzrolsce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783026.276742-1356-112457509355119/AnsiballZ_copy.py'
Nov 22 03:43:47 compute-0 sudo[249810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:47 compute-0 python3.9[249812]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1763783026.276742-1356-112457509355119/.source _original_basename=.k6iwtj_w follow=False checksum=408433926ccbe78147268cf49f2d0e686da43a67 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Nov 22 03:43:47 compute-0 sudo[249810]: pam_unix(sudo:session): session closed for user root
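Note the attributes=+i on this copy: after writing /var/lib/nova/compute_id (owner/group nova, mode 0400) the module sets the filesystem immutable bit so the stable compute UUID cannot be changed or removed out from under the service. A sketch of the same sequence; the staged source path below is hypothetical, and ansible's attributes parameter is applied via chattr:

    import os
    import shutil
    import subprocess

    dst = "/var/lib/nova/compute_id"
    shutil.copy("/tmp/compute_id.source", dst)      # hypothetical staged source
    shutil.chown(dst, user="nova", group="nova")
    os.chmod(dst, 0o400)
    subprocess.run(["chattr", "+i", dst], check=True)  # attributes=+i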
Nov 22 03:43:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:48 compute-0 python3.9[249964]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:43:49 compute-0 python3.9[250116]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:43:49 compute-0 ceph-mon[75011]: pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:49 compute-0 python3.9[250237]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763783028.4572709-1382-192492890966639/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:50 compute-0 python3.9[250387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 22 03:43:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:43:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3239 writes, 14K keys, 3239 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3239 writes, 3239 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1270 writes, 5526 keys, 1270 commit groups, 1.0 writes per commit group, ingest: 8.41 MB, 0.01 MB/s
                                           Interval WAL: 1270 writes, 1270 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     92.6      0.16              0.05         6    0.026       0      0       0.0       0.0
                                             L6      1/0    7.23 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4    125.4    103.7      0.34              0.10         5    0.068     19K   2193       0.0       0.0
                                            Sum      1/0    7.23 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     85.8    100.2      0.50              0.15        11    0.045     19K   2193       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6     97.4     99.1      0.27              0.08         6    0.046     12K   1455       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    125.4    103.7      0.34              0.10         5    0.068     19K   2193       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     93.7      0.16              0.05         5    0.031       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.014, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.04 MB/s read, 0.5 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.04 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5574942991f0#2 capacity: 308.00 MB usage: 1.38 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(86,1.19 MB,0.386904%) FilterBlock(12,63.11 KB,0.0200098%) IndexBlock(12,130.14 KB,0.0412631%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
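The ten-minute rocksdb stats dump above is routine mon housekeeping: 1200 s uptime, ~3.2k WAL-backed writes, zero stalls, and a single 7.23 MB L6 file. If such a dump is captured to a file, the headline counters can be pulled out with a small regex; the capture path here is hypothetical:

    import re

    # Hypothetical capture of the "DUMPING STATS" block logged above.
    stats = open("mon-rocksdb-dump.txt").read()
    m = re.search(
        r"Cumulative writes: (\d+) writes, (\w+) keys.*?ingest: ([\d.]+) GB",
        stats,
    )
    if m:
        writes, keys, ingest_gb = m.groups()
        print(f"{writes} writes, {keys} keys, {ingest_gb} GB ingested")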
Nov 22 03:43:50 compute-0 python3.9[250508]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1763783029.7771044-1397-121940484955300/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 22 03:43:51 compute-0 ceph-mon[75011]: pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:51 compute-0 sudo[250658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axnmwvngaalqxdsdhvkhmgkmhrfcujyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783031.287065-1414-118238894194274/AnsiballZ_container_config_data.py'
Nov 22 03:43:51 compute-0 sudo[250658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:51 compute-0 python3.9[250660]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 22 03:43:51 compute-0 sudo[250658]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:52 compute-0 sudo[250810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssmobiudcqmwipxtrzpfoisjvxuonnfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783032.1867678-1423-265565680491555/AnsiballZ_container_config_hash.py'
Nov 22 03:43:52 compute-0 sudo[250810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:52 compute-0 python3.9[250812]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 22 03:43:52 compute-0 sudo[250810]: pam_unix(sudo:session): session closed for user root
Nov 22 03:43:53 compute-0 podman[250912]: 2025-11-22 03:43:53.422627544 +0000 UTC m=+0.098791298 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118)
Nov 22 03:43:53 compute-0 sudo[250987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmxwzrkswkiusvmumtzbqczryltdwpss ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763783033.06452-1433-96058614426665/AnsiballZ_edpm_container_manage.py'
Nov 22 03:43:53 compute-0 sudo[250987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:43:53 compute-0 ceph-mon[75011]: pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:53 compute-0 python3[250989]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 03:43:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:43:54 compute-0 podman[251015]: 2025-11-22 03:43:54.3947607 +0000 UTC m=+0.061198721 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
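The health_status=healthy events for multipathd, ovn_controller and ovn_metadata_agent are podman's periodic healthchecks executing the /openstack/healthcheck script mounted into each container, per the config_data shown. The same probe can be triggered on demand; a one-line sketch using the container name from the log:

    import subprocess

    # Exit code 0 means the check passed, mirroring health_status=healthy above.
    subprocess.run(["podman", "healthcheck", "run", "ovn_controller"], check=True)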
Nov 22 03:43:55 compute-0 ceph-mon[75011]: pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:57 compute-0 ceph-mon[75011]: pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:59 compute-0 ceph-mon[75011]: pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:43:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:43:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:02 compute-0 ceph-mon[75011]: pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:03 compute-0 ceph-mon[75011]: pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:44:04 compute-0 podman[251002]: 2025-11-22 03:44:04.470241195 +0000 UTC m=+10.605055782 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 22 03:44:04 compute-0 podman[251103]: 2025-11-22 03:44:04.653466246 +0000 UTC m=+0.085165639 container create c31d790b78d583c424d5e481b25a4679507529d097401cb64ea6cc943d6acbd5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=nova_compute_init, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Nov 22 03:44:04 compute-0 podman[251103]: 2025-11-22 03:44:04.607726916 +0000 UTC m=+0.039426379 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 22 03:44:04 compute-0 python3[250989]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
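
[editor's note] The PODMAN-CONTAINER-DEBUG line shows how ansible-edpm_container_manage flattens the config_data dict into a `podman create` command line. A rough sketch of that mapping, with field names taken from the dict logged above; the real module covers many more options, also attaches the full config_data as a label, and handles command quoting properly:

    def podman_create_argv(name, conf):
        # env -> --env, net -> --network, volumes -> --volume, plus journald
        # logging and the config_id/container_name/managed_by labels in the log.
        argv = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, val in conf.get("environment", {}).items():
            argv += ["--env", f"{key}={val}"]
        for label in ("config_id=edpm", f"container_name={name}",
                      "managed_by=edpm_ansible"):
            argv += ["--label", label]
        argv += ["--log-driver", "journald", "--log-level", "info",
                 "--network", conf.get("net", "none"),
                 f"--privileged={conf.get('privileged', False)}",
                 "--user", conf.get("user", "root")]
        for opt in conf.get("security_opt", []):
            argv += ["--security-opt", opt]
        for vol in conf.get("volumes", []):
            argv += ["--volume", vol]
        argv.append(conf["image"])
        argv += conf.get("command", "").split()  # naive split; fine for a sketch
        return argv
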
Nov 22 03:44:04 compute-0 ceph-mon[75011]: pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:04 compute-0 sudo[250987]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:05 compute-0 sudo[251290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yemlbmbmtbwyzvadgzojufahrpnbqrxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783044.9696925-1441-188907432835741/AnsiballZ_stat.py'
Nov 22 03:44:05 compute-0 sudo[251290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:44:05 compute-0 python3.9[251292]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
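
[editor's note] The stat call above requests a sha1 checksum of /etc/sysconfig/podman_drop_in with follow=False. A minimal equivalent of what that returns (ansible's module reports many more fields):

    import hashlib
    import os

    def stat_sketch(path):
        # follow=False in the logged call means symlinks are not followed -> lstat.
        try:
            st = os.lstat(path)
        except FileNotFoundError:
            return {"exists": False}  # what ansible reports for a missing drop-in
        checksum = None
        if os.path.isfile(path):  # checksum_algorithm=sha1, regular files only
            with open(path, "rb") as fh:
                checksum = hashlib.sha1(fh.read()).hexdigest()
        return {"exists": True, "size": st.st_size,
                "mode": oct(st.st_mode & 0o7777), "checksum": checksum}

    print(stat_sketch("/etc/sysconfig/podman_drop_in"))
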
Nov 22 03:44:05 compute-0 sudo[251290]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:44:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:44:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:44:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:44:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:44:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:44:06 compute-0 sudo[251444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmffgydhoapxwthovvrvorvbnrpoxdom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783045.9372265-1453-145888557243339/AnsiballZ_container_config_data.py'
Nov 22 03:44:06 compute-0 sudo[251444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:44:06 compute-0 python3.9[251446]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
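
[editor's note] container_config_data gathers the JSON container definitions matching config_pattern under config_path. A minimal equivalent, with the path and pattern taken from the invocation above:

    import glob
    import json
    import os

    config_path = "/var/lib/openstack/config/containers"
    config_pattern = "nova_compute.json"

    configs = {}
    for path in glob.glob(os.path.join(config_path, config_pattern)):
        with open(path) as fh:
            configs[os.path.basename(path)] = json.load(fh)
    # -> {'nova_compute.json': {'image': ..., 'volumes': [...], ...}},
    #    i.e. the config_data that feeds the podman create further down.
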
Nov 22 03:44:06 compute-0 sudo[251444]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:06 compute-0 ceph-mon[75011]: pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:07 compute-0 sudo[251608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfseipuvldtomjligvpsdxioybgdchbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783046.8576617-1462-192503713329081/AnsiballZ_container_config_hash.py'
Nov 22 03:44:07 compute-0 sudo[251608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:44:07 compute-0 podman[251570]: 2025-11-22 03:44:07.380748929 +0000 UTC m=+0.110349486 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 03:44:07 compute-0 python3.9[251616]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
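
[editor's note] container_config_hash feeds the restart-on-change logic: a digest over each config volume under config_vol_prefix=/var/lib/config-data (compare the EDPM_CONFIG_HASH value injected into ovn_metadata_agent earlier in this log). The exact recipe belongs to the module; the following is only one representative content-hash scheme, stated as an assumption:

    import hashlib
    import os

    def dir_content_hash(root):
        # Assumption: hash file paths and contents in sorted order, so any
        # change under the config volume changes the digest deterministically.
        h = hashlib.sha256()
        for dirpath, _dirs, files in sorted(os.walk(root)):
            for name in sorted(files):
                full = os.path.join(dirpath, name)
                h.update(full.encode())
                with open(full, "rb") as fh:
                    h.update(fh.read())
        return h.hexdigest()

    print(dir_content_hash(
        "/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent"))
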
Nov 22 03:44:07 compute-0 sudo[251608]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:08 compute-0 sudo[251767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhwxqzucnregtlblipxizlqbjbzifvya ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1763783047.9488895-1472-253453262688043/AnsiballZ_edpm_container_manage.py'
Nov 22 03:44:08 compute-0 sudo[251767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:44:08 compute-0 python3[251769]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 22 03:44:08 compute-0 podman[251808]: 2025-11-22 03:44:08.808305131 +0000 UTC m=+0.049656810 container create e98ab054d4d67ffd17609366ae0a12c515f6d3f4257c5865dc3b9eb0e600c32a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251118, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:44:08 compute-0 podman[251808]: 2025-11-22 03:44:08.779940128 +0000 UTC m=+0.021291787 image pull 8e31b7b83c8d26bacd9598fdae1b287d27f8fa7d1d3cf4270dd8e435ff2f6a66 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 22 03:44:08 compute-0 python3[251769]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Nov 22 03:44:08 compute-0 ceph-mon[75011]: pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:08 compute-0 sudo[251767]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:44:09 compute-0 sudo[251996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvvguwnahjstiwziwaapwwwxursxhyhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783049.1063218-1480-209620997757416/AnsiballZ_stat.py'
Nov 22 03:44:09 compute-0 sudo[251996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:44:09 compute-0 python3.9[251998]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:44:09 compute-0 sudo[251996]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:10 compute-0 sudo[252150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sizqyeejydfavfjhttpvmoptybresvrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783049.933202-1489-185357283200169/AnsiballZ_file.py'
Nov 22 03:44:10 compute-0 sudo[252150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:44:10 compute-0 python3.9[252152]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 22 03:44:10 compute-0 sudo[252150]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:10 compute-0 ceph-mon[75011]: pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:11 compute-0 sudo[252301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwiinytoyspjsibtidxouqrwuhsnoxxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783050.5636587-1489-167771946652635/AnsiballZ_copy.py'
Nov 22 03:44:11 compute-0 sudo[252301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:44:11 compute-0 python3.9[252303]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1763783050.5636587-1489-167771946652635/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
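
[editor's note] ansible-copy installs the rendered unit file as root:root 0644. The module is far more elaborate (backups, validation, SELinux contexts); the core of a safe install is write-then-rename, so readers never see a partial unit file, roughly:

    import os
    import shutil
    import tempfile

    def install_file(src, dest, mode=0o644, uid=0, gid=0):
        # Write a sibling temp file, fix ownership/mode, then atomically rename.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest))
        try:
            with os.fdopen(fd, "wb") as out, open(src, "rb") as inp:
                shutil.copyfileobj(inp, out)
            os.chmod(tmp, mode)
            os.chown(tmp, uid, gid)
            os.replace(tmp, dest)
        except BaseException:
            os.unlink(tmp)
            raise
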
Nov 22 03:44:11 compute-0 sudo[252301]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:11 compute-0 sudo[252377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjhvhclgxiydcpdtoomfdqusnewoesrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783050.5636587-1489-167771946652635/AnsiballZ_systemd.py'
Nov 22 03:44:11 compute-0 sudo[252377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:44:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:11 compute-0 python3.9[252379]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 22 03:44:11 compute-0 systemd[1]: Reloading.
Nov 22 03:44:12 compute-0 systemd-rc-local-generator[252405]: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 22 03:44:12 compute-0 systemd-sysv-generator[252410]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:44:12 compute-0 sudo[252377]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:12 compute-0 sudo[252489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kurykfqjmagsreygxjpiellfxrjzjyrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783050.5636587-1489-167771946652635/AnsiballZ_systemd.py'
Nov 22 03:44:12 compute-0 sudo[252489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:44:12 compute-0 ceph-mon[75011]: pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:12 compute-0 python3.9[252491]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 22 03:44:13 compute-0 systemd[1]: Reloading.
Nov 22 03:44:13 compute-0 systemd-sysv-generator[252519]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 22 03:44:13 compute-0 systemd-rc-local-generator[252515]: /etc/rc.d/rc.local is not marked executable, skipping.
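
[editor's note] Two "Reloading." passes bracket the unit installation: one from the explicit daemon_reload=True task, one from ansible-systemd enabling the freshly created edpm_nova_compute.service (which is also why the sysv-generator warning about /etc/rc.d/init.d/network repeats: generators rerun on every reload). The module talks to systemd over D-Bus, but the effect matches these CLI calls:

    import subprocess

    def systemctl(*args):
        subprocess.run(["systemctl", *args], check=True)

    systemctl("daemon-reload")                         # daemon_reload=True
    systemctl("enable", "edpm_nova_compute.service")   # enabled=True
    systemctl("restart", "edpm_nova_compute.service")  # state=restarted
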
Nov 22 03:44:13 compute-0 systemd[1]: Starting nova_compute container...
Nov 22 03:44:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96ae4e3b25d927c09f1a64c460008d54d21570a398643a44180def3f659cb7c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96ae4e3b25d927c09f1a64c460008d54d21570a398643a44180def3f659cb7c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96ae4e3b25d927c09f1a64c460008d54d21570a398643a44180def3f659cb7c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96ae4e3b25d927c09f1a64c460008d54d21570a398643a44180def3f659cb7c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96ae4e3b25d927c09f1a64c460008d54d21570a398643a44180def3f659cb7c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:13 compute-0 podman[252531]: 2025-11-22 03:44:13.52251031 +0000 UTC m=+0.139613959 container init e98ab054d4d67ffd17609366ae0a12c515f6d3f4257c5865dc3b9eb0e600c32a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, managed_by=edpm_ansible)
Nov 22 03:44:13 compute-0 podman[252531]: 2025-11-22 03:44:13.539720542 +0000 UTC m=+0.156824141 container start e98ab054d4d67ffd17609366ae0a12c515f6d3f4257c5865dc3b9eb0e600c32a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 03:44:13 compute-0 podman[252531]: nova_compute
Nov 22 03:44:13 compute-0 nova_compute[252546]: + sudo -E kolla_set_configs
Nov 22 03:44:13 compute-0 systemd[1]: Started nova_compute container.
Nov 22 03:44:13 compute-0 sudo[252489]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Validating config file
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Copying service configuration files
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Deleting /etc/ceph
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Creating directory /etc/ceph
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /etc/ceph
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Writing out command to execute
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:44:13 compute-0 nova_compute[252546]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 03:44:13 compute-0 nova_compute[252546]: ++ cat /run_command
Nov 22 03:44:13 compute-0 nova_compute[252546]: + CMD=nova-compute
Nov 22 03:44:13 compute-0 nova_compute[252546]: + ARGS=
Nov 22 03:44:13 compute-0 nova_compute[252546]: + sudo kolla_copy_cacerts
Nov 22 03:44:13 compute-0 nova_compute[252546]: + [[ ! -n '' ]]
Nov 22 03:44:13 compute-0 nova_compute[252546]: + . kolla_extend_start
Nov 22 03:44:13 compute-0 nova_compute[252546]: + echo 'Running command: '\''nova-compute'\'''
Nov 22 03:44:13 compute-0 nova_compute[252546]: Running command: 'nova-compute'
Nov 22 03:44:13 compute-0 nova_compute[252546]: + umask 0022
Nov 22 03:44:13 compute-0 nova_compute[252546]: + exec nova-compute
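
[editor's note] The xtrace above is kolla's entrypoint in two phases: kolla_set_configs copies every entry of /var/lib/kolla/config_files/config.json into place (KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, hence the unconditional Deleting/Copying pairs), then the command recorded in /run_command is exec'd. A stripped-down sketch of the copy phase; real kolla additionally expands globs, merges files, and applies the owner/perm of each entry:

    import json
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as fh:
        cfg = json.load(fh)

    for entry in cfg.get("config_files", []):
        # COPY_ALWAYS: replace the destination on every container start.
        shutil.copy(entry["source"], entry["dest"])

    print("Running command:", cfg["command"])  # written to /run_command, then exec'd
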
Nov 22 03:44:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:44:14 compute-0 python3.9[252707]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:44:14 compute-0 ceph-mon[75011]: pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:15 compute-0 python3.9[252858]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:44:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:15 compute-0 nova_compute[252546]: 2025-11-22 03:44:15.958 252550 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 03:44:15 compute-0 nova_compute[252546]: 2025-11-22 03:44:15.958 252550 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 03:44:15 compute-0 nova_compute[252546]: 2025-11-22 03:44:15.958 252550 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 03:44:15 compute-0 nova_compute[252546]: 2025-11-22 03:44:15.959 252550 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
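
[editor's note] os_vif discovers its VIF plugins through stevedore entry points, which is what produces the three "Loaded VIF plugin class" lines. A quick way to list the same plugins, assuming os-vif and stevedore are installed and that 'os_vif' is the entry-point namespace the library scans:

    from stevedore import extension

    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    print(sorted(mgr.names()))  # expect ['linux_bridge', 'noop', 'ovs'], as logged
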
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.117 252550 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.146 252550 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.147 252550 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
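
[editor's note] The "failed. Not Retrying." line is benign: nova probes whether the iscsiadm wrapper mentions node.session.scan by grepping the file, and grep exits 1 when the pattern is merely absent (only codes above 1 signal a real error). The same probe in Python:

    import subprocess

    res = subprocess.run(["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
                         capture_output=True, text=True)
    # grep convention: 0 = match, 1 = no match, >1 = error (unreadable file, etc.)
    supports_manual_scan = res.returncode == 0
    print(supports_manual_scan)
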
Nov 22 03:44:16 compute-0 python3.9[253010]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.760 252550 INFO nova.virt.driver [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.915 252550 INFO nova.compute.provider_config [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.940 252550 DEBUG oslo_concurrency.lockutils [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.941 252550 DEBUG oslo_concurrency.lockutils [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.941 252550 DEBUG oslo_concurrency.lockutils [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.941 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.941 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.942 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.942 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.942 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.942 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
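
[editor's note] Everything from the asterisk banner to the closing ==== line, plus the long run of option lines that follows, is oslo.config's log_opt_values() dump, emitted once at startup because debug=True. Any oslo.config consumer can reproduce the format; a tiny sketch, assuming oslo.config is installed (the two options shown are illustrative):

    import logging

    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([cfg.BoolOpt("debug", default=False),
                        cfg.IntOpt("long_rpc_timeout", default=1800)])
    CONF(args=[], default_config_files=[])  # no CLI args, no config files

    logging.basicConfig(level=logging.DEBUG)
    CONF.log_opt_values(logging.getLogger("demo"), logging.DEBUG)
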
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.942 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.942 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.943 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.943 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.943 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.943 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.943 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.943 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.943 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.944 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.944 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.944 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.944 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.944 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.944 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.945 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.945 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.945 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.945 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.945 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.946 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.946 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.946 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.946 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.946 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.947 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.947 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.947 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.947 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.947 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.948 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.948 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.948 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.948 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.948 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.949 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.949 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.949 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.949 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.949 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.950 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.950 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.950 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.950 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.950 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.951 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.951 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.951 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.951 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.951 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.952 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.952 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.952 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.952 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.952 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.953 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.953 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.953 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 ceph-mon[75011]: pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.953 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.953 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.953 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.954 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.954 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.954 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.954 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.954 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.955 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.955 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.955 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.955 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.955 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.955 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.956 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.956 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.956 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.956 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.956 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.957 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.957 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.957 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.957 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.957 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.958 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.958 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.958 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.958 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.958 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.959 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.959 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.959 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.959 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.959 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.959 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.960 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.960 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.960 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.960 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.960 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.961 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.961 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.961 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.961 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.961 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.962 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.962 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.962 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.962 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.962 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.962 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.963 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.963 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.963 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.963 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.963 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.964 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.964 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.964 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.964 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.964 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.964 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.965 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.965 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.965 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.965 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.965 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.966 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.966 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.966 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.966 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.966 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.966 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.967 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.967 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.967 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.967 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.967 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.968 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.968 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.968 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.968 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.968 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.969 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.969 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.969 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.969 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.969 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.969 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.970 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.970 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.970 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.970 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.970 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.970 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.971 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.971 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.971 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.971 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.971 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.972 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.972 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.972 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.972 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.972 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.972 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.973 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.973 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.973 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.973 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.973 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.973 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.974 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.974 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.974 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.974 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.974 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.975 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.975 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.975 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.975 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.975 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.976 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.976 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.976 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.976 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.976 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.976 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.977 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.977 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.977 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.977 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.977 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.977 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.978 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.978 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.978 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.978 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.978 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.978 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.978 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.979 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.979 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.979 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.979 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.979 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.979 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.980 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.980 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.980 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.980 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.980 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.980 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.980 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.981 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.981 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.981 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.981 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.981 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.981 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.981 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.982 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.982 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.982 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.982 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.982 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.983 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.983 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.983 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.983 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.983 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.984 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.984 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.984 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.984 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.984 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.985 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.985 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.985 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.985 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.985 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.985 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.986 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.986 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.986 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.986 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.986 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.986 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.987 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.987 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.987 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.987 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.987 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.988 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.988 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.988 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.988 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.988 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.989 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.989 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.989 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.989 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.989 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.989 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.990 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.990 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.990 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.990 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.990 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.990 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.991 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.991 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.991 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.991 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.991 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.991 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.991 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.992 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.992 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.992 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.992 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.992 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.992 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.993 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.993 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.993 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.993 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.993 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.993 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.994 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.994 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.994 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.994 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.994 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.994 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.995 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.995 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.995 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.995 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.995 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.995 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.995 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.996 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.996 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.996 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.996 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.996 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.996 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.996 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.997 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.997 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.997 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.997 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.997 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.997 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.998 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.998 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.998 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.998 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.998 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.998 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.999 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.999 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.999 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.999 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.999 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:16 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.999 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:16.999 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.000 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.000 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.000 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.000 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.000 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.000 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.000 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.001 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.001 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.001 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.001 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.001 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.001 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.002 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.002 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.002 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.002 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.002 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.002 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.002 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.003 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.003 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.003 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.003 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.003 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.004 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.004 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.004 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.004 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.004 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.004 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.005 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.005 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.005 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.005 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.005 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.005 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.005 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.006 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.006 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.006 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.006 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.006 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.006 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.007 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.007 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.007 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.007 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.007 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.007 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.008 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.008 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.008 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.008 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.008 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.008 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.009 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.009 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.009 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.009 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.009 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.010 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.010 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.010 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.010 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.010 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.010 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.011 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.011 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.011 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.011 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.011 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.011 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.012 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.012 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.012 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.012 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.012 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.012 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.012 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.013 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.013 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.013 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.013 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.013 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.013 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.013 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.014 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.014 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.014 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.014 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.014 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.014 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.014 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.015 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.015 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.015 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.015 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.015 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.015 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.015 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.016 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.016 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.016 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.016 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.016 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.016 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.016 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.017 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.017 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.017 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.017 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.017 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.017 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.018 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.018 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.018 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.018 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.018 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.018 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.019 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.019 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.019 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.019 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.019 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.019 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.020 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.020 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.020 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.020 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.020 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.020 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.020 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.021 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.021 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.021 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.021 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.021 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.021 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.021 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.022 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.022 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.022 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.022 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.022 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.022 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.022 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.023 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.023 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.023 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.023 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.023 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.023 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.023 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.024 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.024 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.024 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.024 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.024 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.024 252550 WARNING oslo_config.cfg [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 22 03:44:17 compute-0 nova_compute[252546]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 22 03:44:17 compute-0 nova_compute[252546]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Nov 22 03:44:17 compute-0 nova_compute[252546]: and ``live_migration_inbound_addr`` respectively.
Nov 22 03:44:17 compute-0 nova_compute[252546]: ).  Its value may be silently ignored in the future.
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.025 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.025 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.025 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.025 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.025 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.025 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.026 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.026 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.026 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.026 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.026 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.026 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.027 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.027 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.027 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.027 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.027 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.027 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.027 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.rbd_secret_uuid        = 7adcc38b-6484-5de6-b879-33a0309153df log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.028 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.028 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.028 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.028 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.028 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.028 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.029 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.029 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.029 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.029 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.029 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.029 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.030 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.030 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.030 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.030 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.030 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.031 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.031 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.031 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.031 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.031 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.031 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.031 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.032 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.032 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.032 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.032 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.032 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.032 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.032 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.033 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.033 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.033 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.033 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.033 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.033 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.034 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.034 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.034 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.034 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.034 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.034 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.034 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.035 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.035 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.035 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.035 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.035 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.035 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.036 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.036 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.036 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.036 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.036 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.036 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.037 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.037 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.037 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.037 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.037 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.037 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.037 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.038 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.038 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.038 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.038 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.038 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.038 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.039 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.039 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.039 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.039 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.039 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.039 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.039 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.040 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.040 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.040 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.040 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.040 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.041 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.041 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.041 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.041 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.041 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.041 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.041 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.042 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.042 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.042 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.042 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.042 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.042 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.042 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.043 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.043 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.043 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.043 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.043 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.043 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.043 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.044 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.044 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.044 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.044 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.044 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.044 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.045 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.045 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.045 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.045 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.045 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.045 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.045 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.046 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.046 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.046 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.046 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.046 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.046 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.047 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.047 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.047 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.047 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.047 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.047 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.048 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.048 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.048 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.048 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.048 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.048 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.048 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.049 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.049 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.049 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.049 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.049 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.050 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.050 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.050 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.050 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.050 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.050 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.050 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.051 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.051 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.051 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.051 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.051 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 sudo[253162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esznephbpfsjrdahdukpehqdhooogagm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783056.3921654-1549-17313909523850/AnsiballZ_podman_container.py'
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.051 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.051 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.052 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.052 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.052 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.052 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.052 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.052 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.053 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.053 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.053 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.053 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.053 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.053 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.054 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.054 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.054 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.054 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.054 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.054 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.054 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.055 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.055 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.055 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.055 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.055 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.055 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 sudo[253162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.055 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.056 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.056 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.056 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.056 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.056 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.056 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.057 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.057 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.057 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.057 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.057 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.057 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.057 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.058 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.058 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.058 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.058 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.058 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.058 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.058 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.059 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.059 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.059 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.059 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.059 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.059 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.059 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.060 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.060 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.060 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.060 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.060 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.060 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.060 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.061 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.061 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.061 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.061 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.061 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.061 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.061 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.062 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.062 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.062 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.062 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.062 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.062 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.062 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.063 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.063 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.063 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.063 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.063 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.063 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.064 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.064 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.064 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.064 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.064 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.064 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.064 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.065 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.065 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.065 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.065 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.065 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.065 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.065 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.066 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.066 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.066 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.066 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.066 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.066 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.066 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.067 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.067 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.067 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.067 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.067 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.067 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.067 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.068 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.068 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.068 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.068 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.068 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.068 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.069 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.069 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.069 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.069 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.069 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.069 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.069 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.070 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.070 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.070 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.070 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.070 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.070 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.070 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.071 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.071 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.071 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.071 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.071 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.071 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.072 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.072 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.072 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.072 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.072 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.072 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.072 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.073 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.073 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.073 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.073 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.073 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.073 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.073 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.074 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.074 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.074 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.074 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.074 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.074 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.074 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.075 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.075 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.075 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.075 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.075 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.075 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.076 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.076 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.076 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.076 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.076 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.076 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.076 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.077 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.077 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.077 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.077 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.077 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.077 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.077 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.078 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.078 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.078 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.078 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.078 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.078 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.079 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.079 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.079 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.079 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.079 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.079 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.079 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.080 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.080 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.080 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.080 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.080 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.080 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.080 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.081 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.081 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.081 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.081 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.081 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.081 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.081 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.082 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.082 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.082 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.082 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.082 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.082 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.082 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.083 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.083 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.083 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.083 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.083 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.083 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.083 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.084 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.084 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.084 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.084 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.084 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.085 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.085 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.085 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.085 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.085 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.085 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.086 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.086 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.086 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.086 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.086 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.086 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.086 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.087 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.087 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.087 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.087 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.087 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.087 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.087 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.088 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.088 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.088 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.088 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.088 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.088 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.089 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.089 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.089 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.089 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.089 252550 DEBUG oslo_service.service [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.090 252550 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.132 252550 DEBUG nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.133 252550 DEBUG nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.133 252550 DEBUG nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.134 252550 DEBUG nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 22 03:44:17 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Nov 22 03:44:17 compute-0 systemd[1]: Started libvirt QEMU daemon.
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.208 252550 DEBUG nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7faafa688fa0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.212 252550 DEBUG nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7faafa688fa0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.213 252550 INFO nova.virt.libvirt.driver [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Connection event '1' reason 'None'
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.228 252550 WARNING nova.virt.libvirt.driver [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 22 03:44:17 compute-0 nova_compute[252546]: 2025-11-22 03:44:17.229 252550 DEBUG nova.virt.libvirt.volume.mount [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 22 03:44:17 compute-0 python3.9[253164]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 22 03:44:17 compute-0 sudo[253162]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:17 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:44:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:18 compute-0 sudo[253397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kghbdybdfbzxmxrdwfuzouvrofxgihdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783057.6537702-1557-50462620934689/AnsiballZ_systemd.py'
Nov 22 03:44:18 compute-0 sudo[253397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.096 252550 INFO nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Libvirt host capabilities <capabilities>
Nov 22 03:44:18 compute-0 nova_compute[252546]: 
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <host>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <uuid>cc28b99b-cca8-4899-a39d-03c6a80b1444</uuid>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <cpu>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <arch>x86_64</arch>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model>EPYC-Rome-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <vendor>AMD</vendor>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <microcode version='16777317'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <signature family='23' model='49' stepping='0'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='x2apic'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='tsc-deadline'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='osxsave'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='hypervisor'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='tsc_adjust'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='spec-ctrl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='stibp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='arch-capabilities'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='cmp_legacy'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='topoext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='virt-ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='lbrv'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='tsc-scale'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='vmcb-clean'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='pause-filter'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='pfthreshold'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='svme-addr-chk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='rdctl-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='skip-l1dfl-vmentry'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='mds-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature name='pschange-mc-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <pages unit='KiB' size='4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <pages unit='KiB' size='2048'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <pages unit='KiB' size='1048576'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </cpu>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <power_management>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <suspend_mem/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </power_management>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <iommu support='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <migration_features>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <live/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <uri_transports>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <uri_transport>tcp</uri_transport>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <uri_transport>rdma</uri_transport>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </uri_transports>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </migration_features>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <topology>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <cells num='1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <cell id='0'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:           <memory unit='KiB'>7864308</memory>
Nov 22 03:44:18 compute-0 nova_compute[252546]:           <pages unit='KiB' size='4'>1966077</pages>
Nov 22 03:44:18 compute-0 nova_compute[252546]:           <pages unit='KiB' size='2048'>0</pages>
Nov 22 03:44:18 compute-0 nova_compute[252546]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 22 03:44:18 compute-0 nova_compute[252546]:           <distances>
Nov 22 03:44:18 compute-0 nova_compute[252546]:             <sibling id='0' value='10'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:           </distances>
Nov 22 03:44:18 compute-0 nova_compute[252546]:           <cpus num='8'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:           </cpus>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         </cell>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </cells>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </topology>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <cache>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </cache>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <secmodel>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model>selinux</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <doi>0</doi>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </secmodel>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <secmodel>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model>dac</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <doi>0</doi>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </secmodel>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </host>
Nov 22 03:44:18 compute-0 nova_compute[252546]: 
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <guest>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <os_type>hvm</os_type>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <arch name='i686'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <wordsize>32</wordsize>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <domain type='qemu'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <domain type='kvm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </arch>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <features>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <pae/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <nonpae/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <acpi default='on' toggle='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <apic default='on' toggle='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <cpuselection/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <deviceboot/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <disksnapshot default='on' toggle='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <externalSnapshot/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </features>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </guest>
Nov 22 03:44:18 compute-0 nova_compute[252546]: 
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <guest>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <os_type>hvm</os_type>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <arch name='x86_64'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <wordsize>64</wordsize>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <domain type='qemu'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <domain type='kvm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </arch>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <features>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <acpi default='on' toggle='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <apic default='on' toggle='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <cpuselection/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <deviceboot/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <disksnapshot default='on' toggle='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <externalSnapshot/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </features>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </guest>
Nov 22 03:44:18 compute-0 nova_compute[252546]: 
Nov 22 03:44:18 compute-0 nova_compute[252546]: </capabilities>
Nov 22 03:44:18 compute-0 nova_compute[252546]: 
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.105 252550 DEBUG nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.125 252550 DEBUG nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 22 03:44:18 compute-0 nova_compute[252546]: <domainCapabilities>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <domain>kvm</domain>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <arch>i686</arch>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <vcpu max='4096'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <iothreads supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <os supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <enum name='firmware'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <loader supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>rom</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pflash</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='readonly'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>yes</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>no</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='secure'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>no</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </loader>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </os>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <cpu>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='host-passthrough' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='hostPassthroughMigratable'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>on</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>off</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='maximum' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='maximumMigratable'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>on</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>off</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='host-model' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <vendor>AMD</vendor>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='x2apic'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='hypervisor'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='stibp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='overflow-recov'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='succor'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='ibrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='lbrv'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='tsc-scale'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='flushbyasid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='pause-filter'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='pfthreshold'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='disable' name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='custom' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cooperlake'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cooperlake-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cooperlake-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Dhyana-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Genoa'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amd-psfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='auto-ibrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='stibp-always-on'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amd-psfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='auto-ibrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='stibp-always-on'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Milan'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Milan-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Milan-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amd-psfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='stibp-always-on'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='GraniteRapids'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='prefetchiti'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='GraniteRapids-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='prefetchiti'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='GraniteRapids-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10-128'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10-256'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10-512'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='prefetchiti'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v6'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v7'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='KnightsMill'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512er'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512pf'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='KnightsMill-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512er'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512pf'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G4-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tbm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G5-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tbm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SierraForest'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cmpccxadd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SierraForest-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cmpccxadd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='athlon'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='athlon-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='core2duo'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='core2duo-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='coreduo'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='coreduo-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='n270'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='n270-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='phenom'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='phenom-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </cpu>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <memoryBacking supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <enum name='sourceType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>file</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>anonymous</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>memfd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </memoryBacking>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <devices>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <disk supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='diskDevice'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>disk</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>cdrom</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>floppy</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>lun</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='bus'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>fdc</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>scsi</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>usb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>sata</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-non-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </disk>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <graphics supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vnc</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>egl-headless</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>dbus</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </graphics>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <video supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='modelType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vga</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>cirrus</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>none</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>bochs</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>ramfb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </video>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <hostdev supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='mode'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>subsystem</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='startupPolicy'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>default</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>mandatory</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>requisite</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>optional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='subsysType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>usb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pci</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>scsi</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='capsType'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='pciBackend'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </hostdev>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <rng supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-non-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendModel'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>random</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>egd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>builtin</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </rng>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <filesystem supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='driverType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>path</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>handle</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtiofs</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </filesystem>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <tpm supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tpm-tis</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tpm-crb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendModel'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>emulator</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>external</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendVersion'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>2.0</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </tpm>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <redirdev supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='bus'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>usb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </redirdev>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <channel supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pty</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>unix</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </channel>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <crypto supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>qemu</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendModel'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>builtin</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </crypto>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <interface supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>default</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>passt</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </interface>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <panic supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>isa</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>hyperv</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </panic>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <console supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>null</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vc</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pty</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>dev</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>file</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pipe</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>stdio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>udp</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tcp</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>unix</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>qemu-vdagent</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>dbus</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </console>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </devices>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <features>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <gic supported='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <vmcoreinfo supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <genid supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <backingStoreInput supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <backup supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <async-teardown supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <ps2 supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <sev supported='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <sgx supported='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <hyperv supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='features'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>relaxed</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vapic</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>spinlocks</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vpindex</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>runtime</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>synic</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>stimer</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>reset</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vendor_id</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>frequencies</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>reenlightenment</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tlbflush</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>ipi</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>avic</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>emsr_bitmap</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>xmm_input</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <defaults>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <spinlocks>4095</spinlocks>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <stimer_direct>on</stimer_direct>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </defaults>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </hyperv>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <launchSecurity supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='sectype'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tdx</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </launchSecurity>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </features>
Nov 22 03:44:18 compute-0 nova_compute[252546]: </domainCapabilities>
Nov 22 03:44:18 compute-0 nova_compute[252546]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.151 252550 DEBUG nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 22 03:44:18 compute-0 nova_compute[252546]: <domainCapabilities>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <domain>kvm</domain>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <arch>i686</arch>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <vcpu max='240'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <iothreads supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <os supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <enum name='firmware'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <loader supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>rom</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pflash</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='readonly'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>yes</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>no</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='secure'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>no</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </loader>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </os>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <cpu>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='host-passthrough' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='hostPassthroughMigratable'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>on</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>off</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='maximum' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='maximumMigratable'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>on</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>off</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='host-model' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <vendor>AMD</vendor>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='x2apic'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='hypervisor'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='stibp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='overflow-recov'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='succor'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='ibrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='lbrv'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='tsc-scale'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='flushbyasid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='pause-filter'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='pfthreshold'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='disable' name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='custom' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cooperlake'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cooperlake-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cooperlake-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Dhyana-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Genoa'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amd-psfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='auto-ibrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='stibp-always-on'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amd-psfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='auto-ibrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='stibp-always-on'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Milan'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Milan-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Milan-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amd-psfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='stibp-always-on'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='GraniteRapids'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='prefetchiti'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='GraniteRapids-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='prefetchiti'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='GraniteRapids-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10-128'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10-256'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10-512'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='prefetchiti'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v6'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v7'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='KnightsMill'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512er'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512pf'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='KnightsMill-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512er'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512pf'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G4-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tbm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G5-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tbm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SierraForest'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cmpccxadd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SierraForest-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cmpccxadd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='athlon'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='athlon-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='core2duo'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='core2duo-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='coreduo'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='coreduo-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='n270'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='n270-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='phenom'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='phenom-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </cpu>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <memoryBacking supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <enum name='sourceType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>file</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>anonymous</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>memfd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </memoryBacking>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <devices>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <disk supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='diskDevice'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>disk</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>cdrom</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>floppy</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>lun</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='bus'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>ide</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>fdc</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>scsi</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>usb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>sata</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-non-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </disk>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <graphics supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vnc</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>egl-headless</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>dbus</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </graphics>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <video supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='modelType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vga</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>cirrus</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>none</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>bochs</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>ramfb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </video>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <hostdev supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='mode'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>subsystem</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='startupPolicy'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>default</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>mandatory</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>requisite</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>optional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='subsysType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>usb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pci</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>scsi</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='capsType'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='pciBackend'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </hostdev>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <rng supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-non-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendModel'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>random</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>egd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>builtin</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </rng>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <filesystem supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='driverType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>path</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>handle</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtiofs</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </filesystem>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <tpm supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tpm-tis</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tpm-crb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendModel'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>emulator</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>external</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendVersion'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>2.0</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </tpm>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <redirdev supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='bus'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>usb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </redirdev>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <channel supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pty</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>unix</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </channel>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <crypto supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>qemu</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendModel'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>builtin</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </crypto>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <interface supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>default</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>passt</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </interface>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <panic supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>isa</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>hyperv</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </panic>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <console supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>null</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vc</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pty</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>dev</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>file</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pipe</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>stdio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>udp</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tcp</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>unix</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>qemu-vdagent</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>dbus</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </console>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </devices>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <features>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <gic supported='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <vmcoreinfo supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <genid supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <backingStoreInput supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <backup supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <async-teardown supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <ps2 supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <sev supported='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <sgx supported='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <hyperv supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='features'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>relaxed</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vapic</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>spinlocks</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vpindex</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>runtime</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>synic</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>stimer</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>reset</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vendor_id</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>frequencies</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>reenlightenment</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tlbflush</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>ipi</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>avic</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>emsr_bitmap</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>xmm_input</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <defaults>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <spinlocks>4095</spinlocks>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <stimer_direct>on</stimer_direct>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </defaults>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </hyperv>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <launchSecurity supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='sectype'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tdx</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </launchSecurity>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </features>
Nov 22 03:44:18 compute-0 nova_compute[252546]: </domainCapabilities>
Nov 22 03:44:18 compute-0 nova_compute[252546]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.204 252550 DEBUG nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.209 252550 DEBUG nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 22 03:44:18 compute-0 nova_compute[252546]: <domainCapabilities>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <domain>kvm</domain>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <arch>x86_64</arch>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <vcpu max='4096'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <iothreads supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <os supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <enum name='firmware'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>efi</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <loader supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>rom</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pflash</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='readonly'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>yes</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>no</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='secure'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>yes</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>no</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </loader>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </os>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <cpu>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='host-passthrough' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='hostPassthroughMigratable'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>on</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>off</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='maximum' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='maximumMigratable'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>on</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>off</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='host-model' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <vendor>AMD</vendor>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='x2apic'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='hypervisor'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='stibp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='overflow-recov'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='succor'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='ibrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='lbrv'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='tsc-scale'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='flushbyasid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='pause-filter'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='pfthreshold'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='disable' name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='custom' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cooperlake'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cooperlake-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cooperlake-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Dhyana-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Genoa'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amd-psfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='auto-ibrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='stibp-always-on'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amd-psfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='auto-ibrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='stibp-always-on'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Milan'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Milan-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Milan-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amd-psfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='stibp-always-on'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='GraniteRapids'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='prefetchiti'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='GraniteRapids-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='prefetchiti'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='GraniteRapids-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10-128'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10-256'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10-512'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='prefetchiti'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v6'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v7'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='KnightsMill'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512er'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512pf'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='KnightsMill-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512er'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512pf'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G4-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tbm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G5-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tbm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SierraForest'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cmpccxadd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SierraForest-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cmpccxadd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='athlon'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='athlon-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='core2duo'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='core2duo-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='coreduo'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='coreduo-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='n270'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='n270-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='phenom'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='phenom-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </cpu>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <memoryBacking supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <enum name='sourceType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>file</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>anonymous</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>memfd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </memoryBacking>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <devices>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <disk supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='diskDevice'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>disk</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>cdrom</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>floppy</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>lun</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='bus'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>fdc</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>scsi</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>usb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>sata</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-non-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </disk>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <graphics supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vnc</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>egl-headless</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>dbus</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </graphics>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <video supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='modelType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vga</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>cirrus</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>none</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>bochs</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>ramfb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </video>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <hostdev supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='mode'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>subsystem</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='startupPolicy'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>default</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>mandatory</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>requisite</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>optional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='subsysType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>usb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pci</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>scsi</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='capsType'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='pciBackend'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </hostdev>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <rng supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-non-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendModel'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>random</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>egd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>builtin</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </rng>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <filesystem supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='driverType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>path</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>handle</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtiofs</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </filesystem>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <tpm supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tpm-tis</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tpm-crb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendModel'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>emulator</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>external</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendVersion'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>2.0</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </tpm>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <redirdev supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='bus'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>usb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </redirdev>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <channel supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pty</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>unix</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </channel>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <crypto supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>qemu</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendModel'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>builtin</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </crypto>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <interface supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>default</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>passt</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </interface>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <panic supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>isa</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>hyperv</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </panic>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <console supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>null</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vc</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pty</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>dev</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>file</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pipe</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>stdio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>udp</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tcp</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>unix</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>qemu-vdagent</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>dbus</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </console>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </devices>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <features>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <gic supported='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <vmcoreinfo supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <genid supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <backingStoreInput supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <backup supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <async-teardown supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <ps2 supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <sev supported='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <sgx supported='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <hyperv supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='features'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>relaxed</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vapic</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>spinlocks</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vpindex</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>runtime</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>synic</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>stimer</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>reset</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vendor_id</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>frequencies</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>reenlightenment</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tlbflush</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>ipi</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>avic</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>emsr_bitmap</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>xmm_input</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <defaults>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <spinlocks>4095</spinlocks>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <stimer_direct>on</stimer_direct>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </defaults>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </hyperv>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <launchSecurity supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='sectype'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tdx</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </launchSecurity>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </features>
Nov 22 03:44:18 compute-0 nova_compute[252546]: </domainCapabilities>
Nov 22 03:44:18 compute-0 nova_compute[252546]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.286 252550 DEBUG nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 22 03:44:18 compute-0 nova_compute[252546]: <domainCapabilities>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <domain>kvm</domain>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <arch>x86_64</arch>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <vcpu max='240'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <iothreads supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <os supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <enum name='firmware'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <loader supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>rom</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pflash</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='readonly'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>yes</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>no</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='secure'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>no</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </loader>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </os>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <cpu>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='host-passthrough' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='hostPassthroughMigratable'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>on</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>off</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='maximum' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='maximumMigratable'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>on</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>off</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='host-model' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <vendor>AMD</vendor>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='x2apic'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='hypervisor'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='stibp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='overflow-recov'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='succor'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='ibrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='lbrv'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='tsc-scale'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='flushbyasid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='pause-filter'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='pfthreshold'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <feature policy='disable' name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <mode name='custom' supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Broadwell-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cooperlake'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cooperlake-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Cooperlake-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Denverton-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Dhyana-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Genoa'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amd-psfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='auto-ibrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='stibp-always-on'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amd-psfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='auto-ibrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='stibp-always-on'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Milan'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Milan-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Milan-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amd-psfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='stibp-always-on'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-Rome-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='EPYC-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='GraniteRapids'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='prefetchiti'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='GraniteRapids-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='prefetchiti'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='GraniteRapids-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10-128'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10-256'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx10-512'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='prefetchiti'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Haswell-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 python3.9[253399]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v6'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Icelake-Server-v7'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='IvyBridge-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='KnightsMill'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512er'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512pf'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='KnightsMill-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512er'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512pf'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G4-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tbm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Opteron_G5-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fma4'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tbm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xop'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SapphireRapids-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='amx-tile'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-bf16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-fp16'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bitalg'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrc'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fzrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='la57'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='taa-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xfd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SierraForest'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cmpccxadd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='SierraForest-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ifma'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cmpccxadd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fbsdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='fsrs'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ibrs-all'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mcdt-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pbrsb-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='psdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='serialize'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vaes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Client-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='hle'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='rtm'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Skylake-Server-v5'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512bw'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512cd'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512dq'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512f'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='avx512vl'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='invpcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pcid'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='pku'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='mpx'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v2'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v3'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='core-capability'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='split-lock-detect'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='Snowridge-v4'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='cldemote'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='erms'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='gfni'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdir64b'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='movdiri'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='xsaves'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='athlon'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='athlon-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='core2duo'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='core2duo-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='coreduo'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='coreduo-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='n270'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='n270-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='ss'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='phenom'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <blockers model='phenom-v1'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnow'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <feature name='3dnowext'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </blockers>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </mode>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </cpu>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <memoryBacking supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <enum name='sourceType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>file</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>anonymous</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <value>memfd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </memoryBacking>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <devices>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <disk supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='diskDevice'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>disk</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>cdrom</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>floppy</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>lun</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='bus'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>ide</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>fdc</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>scsi</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>usb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>sata</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-non-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </disk>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <graphics supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vnc</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>egl-headless</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>dbus</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </graphics>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <video supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='modelType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vga</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>cirrus</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>none</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>bochs</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>ramfb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </video>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <hostdev supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='mode'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>subsystem</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='startupPolicy'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>default</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>mandatory</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>requisite</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>optional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='subsysType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>usb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pci</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>scsi</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='capsType'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='pciBackend'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </hostdev>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <rng supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtio-non-transitional</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendModel'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>random</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>egd</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>builtin</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </rng>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <filesystem supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='driverType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>path</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>handle</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>virtiofs</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </filesystem>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <tpm supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tpm-tis</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tpm-crb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendModel'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>emulator</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>external</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendVersion'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>2.0</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </tpm>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <redirdev supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='bus'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>usb</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </redirdev>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <channel supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pty</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>unix</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </channel>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <crypto supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>qemu</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendModel'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>builtin</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </crypto>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <interface supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='backendType'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>default</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>passt</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </interface>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <panic supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='model'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>isa</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>hyperv</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </panic>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <console supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='type'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>null</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vc</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pty</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>dev</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>file</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>pipe</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>stdio</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>udp</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tcp</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>unix</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>qemu-vdagent</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>dbus</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </console>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </devices>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   <features>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <gic supported='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <vmcoreinfo supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <genid supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <backingStoreInput supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <backup supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <async-teardown supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <ps2 supported='yes'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <sev supported='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <sgx supported='no'/>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <hyperv supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='features'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>relaxed</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vapic</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>spinlocks</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vpindex</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>runtime</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>synic</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>stimer</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>reset</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>vendor_id</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>frequencies</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>reenlightenment</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tlbflush</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>ipi</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>avic</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>emsr_bitmap</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>xmm_input</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <defaults>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <spinlocks>4095</spinlocks>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <stimer_direct>on</stimer_direct>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </defaults>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </hyperv>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     <launchSecurity supported='yes'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       <enum name='sectype'>
Nov 22 03:44:18 compute-0 nova_compute[252546]:         <value>tdx</value>
Nov 22 03:44:18 compute-0 nova_compute[252546]:       </enum>
Nov 22 03:44:18 compute-0 nova_compute[252546]:     </launchSecurity>
Nov 22 03:44:18 compute-0 nova_compute[252546]:   </features>
Nov 22 03:44:18 compute-0 nova_compute[252546]: </domainCapabilities>
Nov 22 03:44:18 compute-0 nova_compute[252546]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
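The XML dump above is libvirt's domainCapabilities document, which nova fetches at startup to learn what the chosen emulator, machine type and virt type support. A minimal sketch of the same query through the libvirt-python binding, assuming the qemu:///system URI and the q35 machine type (neither is stated in this log), and checking the emulated-TPM support that nova's _check_vtpm_support relies on:

    import libvirt
    import xml.etree.ElementTree as ET

    # Connect to the local system libvirt daemon (URI is an assumption).
    conn = libvirt.open('qemu:///system')

    # Passing None for the emulator binary lets libvirt pick its default;
    # arch/machine/virttype mirror the x86_64 KVM guest described above.
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', 'q35', 'kvm', 0)
    root = ET.fromstring(caps_xml)

    # Example: is an emulated TPM available, and with which backends?
    tpm = root.find('./devices/tpm')
    print('tpm supported:', tpm is not None and tpm.get('supported') == 'yes')
    print('tpm backends:', [v.text for v in root.findall(
        "./devices/tpm/enum[@name='backendModel']/value")])

    conn.close()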
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.347 252550 DEBUG nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.348 252550 INFO nova.virt.libvirt.host [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Secure Boot support detected
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.351 252550 INFO nova.virt.libvirt.driver [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.363 252550 DEBUG nova.virt.libvirt.driver [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.438 252550 INFO nova.virt.node [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Determined node identity 62e18608-eaef-4f09-8e92-06d41e51d580 from /var/lib/nova/compute_id
Nov 22 03:44:18 compute-0 systemd[1]: Stopping nova_compute container...
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.458 252550 WARNING nova.compute.manager [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Compute nodes ['62e18608-eaef-4f09-8e92-06d41e51d580'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.516 252550 INFO nova.compute.manager [None req-139a4850-2b5d-4c48-9182-97993c5153b2 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.534 252550 DEBUG oslo_concurrency.lockutils [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.534 252550 DEBUG oslo_concurrency.lockutils [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:44:18 compute-0 nova_compute[252546]: 2025-11-22 03:44:18.534 252550 DEBUG oslo_concurrency.lockutils [None req-aa7995bf-138b-4dc7-b555-aaddcd182b77 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
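The Acquiring/Acquired/Releasing triple around "singleton_lock" is oslo.concurrency's lockutils tracing a named lock. A minimal sketch of the two usual ways such a lock is taken (in-process semantics, no external lock directory assumed):

    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    # Context-manager form: entering and leaving produces the
    # Acquiring/Acquired/Releasing DEBUG lines seen in the log.
    with lockutils.lock('singleton_lock'):
        pass  # critical section

    # Decorator form: guards the whole function with the same named lock.
    @lockutils.synchronized('singleton_lock')
    def register_singleton():
        pass

    register_singleton()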
Nov 22 03:44:18 compute-0 virtqemud[253186]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 22 03:44:18 compute-0 virtqemud[253186]: hostname: compute-0
Nov 22 03:44:18 compute-0 virtqemud[253186]: End of file while reading data: Input/output error
Nov 22 03:44:18 compute-0 systemd[1]: libpod-e98ab054d4d67ffd17609366ae0a12c515f6d3f4257c5865dc3b9eb0e600c32a.scope: Deactivated successfully.
Nov 22 03:44:18 compute-0 systemd[1]: libpod-e98ab054d4d67ffd17609366ae0a12c515f6d3f4257c5865dc3b9eb0e600c32a.scope: Consumed 3.035s CPU time.
Nov 22 03:44:18 compute-0 podman[253407]: 2025-11-22 03:44:18.931698158 +0000 UTC m=+0.464590266 container died e98ab054d4d67ffd17609366ae0a12c515f6d3f4257c5865dc3b9eb0e600c32a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 03:44:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e98ab054d4d67ffd17609366ae0a12c515f6d3f4257c5865dc3b9eb0e600c32a-userdata-shm.mount: Deactivated successfully.
Nov 22 03:44:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-b96ae4e3b25d927c09f1a64c460008d54d21570a398643a44180def3f659cb7c-merged.mount: Deactivated successfully.
Nov 22 03:44:18 compute-0 ceph-mon[75011]: pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.546397) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783059546499, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1504, "num_deletes": 505, "total_data_size": 1937929, "memory_usage": 1972960, "flush_reason": "Manual Compaction"}
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Nov 22 03:44:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783059860195, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1908687, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13512, "largest_seqno": 15015, "table_properties": {"data_size": 1902112, "index_size": 3266, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 15978, "raw_average_key_size": 18, "raw_value_size": 1887008, "raw_average_value_size": 2141, "num_data_blocks": 150, "num_entries": 881, "num_filter_entries": 881, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763782929, "oldest_key_time": 1763782929, "file_creation_time": 1763783059, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 313954 microseconds, and 6392 cpu microseconds.
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.860395) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1908687 bytes OK
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.860514) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.894901) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.894954) EVENT_LOG_v1 {"time_micros": 1763783059894940, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.894982) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1930280, prev total WAL file size 1930280, number of live WAL files 2.
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.896824) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1863KB)], [32(7399KB)]
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783059896908, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9485512, "oldest_snapshot_seqno": -1}
Nov 22 03:44:19 compute-0 podman[253407]: 2025-11-22 03:44:19.907768143 +0000 UTC m=+1.440660261 container cleanup e98ab054d4d67ffd17609366ae0a12c515f6d3f4257c5865dc3b9eb0e600c32a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:44:19 compute-0 podman[253407]: nova_compute
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3879 keys, 7484446 bytes, temperature: kUnknown
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783059949063, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7484446, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7456355, "index_size": 17293, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9733, "raw_key_size": 95032, "raw_average_key_size": 24, "raw_value_size": 7383920, "raw_average_value_size": 1903, "num_data_blocks": 732, "num_entries": 3879, "num_filter_entries": 3879, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763783059, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.949349) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7484446 bytes
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.950763) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.5 rd, 143.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.2 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(8.9) write-amplify(3.9) OK, records in: 4902, records dropped: 1023 output_compression: NoCompression
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.950784) EVENT_LOG_v1 {"time_micros": 1763783059950774, "job": 14, "event": "compaction_finished", "compaction_time_micros": 52273, "compaction_time_cpu_micros": 27806, "output_level": 6, "num_output_files": 1, "total_output_size": 7484446, "num_input_records": 4902, "num_output_records": 3879, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783059951509, "job": 14, "event": "table_file_deletion", "file_number": 34}
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783059953188, "job": 14, "event": "table_file_deletion", "file_number": 32}
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.896591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.953341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.953351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.953355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.953359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:44:19 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:44:19.953363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
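The compaction summary logged at 03:44:19.950763 reports MB/sec: 181.5 rd, 143.2 wr, write-amplify(3.9) and read-write-amplify(8.9). All four figures follow from the EVENT_LOG_v1 records above; a short check using the logged byte counts:

    # Byte counts copied from the rocksdb events for job 14 above.
    l0_in = 1908687     # table #34, the freshly flushed L0 input
    total_in = 9485512  # input_data_size: L0 table #34 + L6 table #32
    out = 7484446       # total_output_size: the new L6 table #35
    micros = 52273      # compaction_time_micros

    write_amplify = out / l0_in                    # ~3.9, as logged
    read_write_amplify = (total_in + out) / l0_in  # ~8.9, as logged
    rd = total_in / micros                         # ~181.5 MB/sec read
    wr = out / micros                              # ~143.2 MB/sec written
    print(f'{write_amplify:.1f} {read_write_amplify:.1f} {rd:.1f} {wr:.1f}')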
Nov 22 03:44:19 compute-0 podman[253433]: nova_compute
Nov 22 03:44:19 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 22 03:44:19 compute-0 systemd[1]: Stopped nova_compute container.
Nov 22 03:44:19 compute-0 systemd[1]: Starting nova_compute container...
Nov 22 03:44:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96ae4e3b25d927c09f1a64c460008d54d21570a398643a44180def3f659cb7c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96ae4e3b25d927c09f1a64c460008d54d21570a398643a44180def3f659cb7c/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96ae4e3b25d927c09f1a64c460008d54d21570a398643a44180def3f659cb7c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96ae4e3b25d927c09f1a64c460008d54d21570a398643a44180def3f659cb7c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96ae4e3b25d927c09f1a64c460008d54d21570a398643a44180def3f659cb7c/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:20 compute-0 podman[253446]: 2025-11-22 03:44:20.1073917 +0000 UTC m=+0.101551840 container init e98ab054d4d67ffd17609366ae0a12c515f6d3f4257c5865dc3b9eb0e600c32a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 03:44:20 compute-0 podman[253446]: 2025-11-22 03:44:20.114530902 +0000 UTC m=+0.108690972 container start e98ab054d4d67ffd17609366ae0a12c515f6d3f4257c5865dc3b9eb0e600c32a (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm)
Nov 22 03:44:20 compute-0 podman[253446]: nova_compute
Nov 22 03:44:20 compute-0 nova_compute[253461]: + sudo -E kolla_set_configs
Nov 22 03:44:20 compute-0 systemd[1]: Started nova_compute container.
Nov 22 03:44:20 compute-0 sudo[253397]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Validating config file
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Copying service configuration files
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Deleting /etc/ceph
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Creating directory /etc/ceph
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /etc/ceph
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Writing out command to execute
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:44:20 compute-0 nova_compute[253461]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 22 03:44:20 compute-0 nova_compute[253461]: ++ cat /run_command
Nov 22 03:44:20 compute-0 nova_compute[253461]: + CMD=nova-compute
Nov 22 03:44:20 compute-0 nova_compute[253461]: + ARGS=
Nov 22 03:44:20 compute-0 nova_compute[253461]: + sudo kolla_copy_cacerts
Nov 22 03:44:20 compute-0 nova_compute[253461]: + [[ ! -n '' ]]
Nov 22 03:44:20 compute-0 nova_compute[253461]: + . kolla_extend_start
Nov 22 03:44:20 compute-0 nova_compute[253461]: Running command: 'nova-compute'
Nov 22 03:44:20 compute-0 nova_compute[253461]: + echo 'Running command: '\''nova-compute'\'''
Nov 22 03:44:20 compute-0 nova_compute[253461]: + umask 0022
Nov 22 03:44:20 compute-0 nova_compute[253461]: + exec nova-compute
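kolla_set_configs drives the whole copy sequence above from /var/lib/kolla/config_files/config.json: with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS every start re-copies the listed files, after which kolla_start execs the command read from /run_command. A trimmed sketch of that file's shape for this container, written as a Python dict (the paths come from the log; the owner and perm values are illustrative assumptions):

    # Illustrative shape only; owner/perm are assumptions, not logged values.
    KOLLA_CONFIG = {
        "command": "nova-compute",
        "config_files": [
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf",
             "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/ceph/ceph.conf",
             "dest": "/etc/ceph/ceph.conf",
             "owner": "nova", "perm": "0600"},
        ],
    }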
Nov 22 03:44:20 compute-0 sudo[253622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhmqhnsyglnsfcupoxctuiraivylqblk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1763783060.2996628-1566-273459735425040/AnsiballZ_podman_container.py'
Nov 22 03:44:20 compute-0 sudo[253622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 03:44:20 compute-0 python3.9[253624]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 22 03:44:21 compute-0 systemd[1]: Started libpod-conmon-c31d790b78d583c424d5e481b25a4679507529d097401cb64ea6cc943d6acbd5.scope.
Nov 22 03:44:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0763dbe5a8a5bb7f8106a192b6a1d5de9a4f1c11a0a9aaec5846debd78b70e97/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0763dbe5a8a5bb7f8106a192b6a1d5de9a4f1c11a0a9aaec5846debd78b70e97/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0763dbe5a8a5bb7f8106a192b6a1d5de9a4f1c11a0a9aaec5846debd78b70e97/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:21 compute-0 podman[253650]: 2025-11-22 03:44:21.175204636 +0000 UTC m=+0.140396022 container init c31d790b78d583c424d5e481b25a4679507529d097401cb64ea6cc943d6acbd5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true)
Nov 22 03:44:21 compute-0 podman[253650]: 2025-11-22 03:44:21.182769674 +0000 UTC m=+0.147961030 container start c31d790b78d583c424d5e481b25a4679507529d097401cb64ea6cc943d6acbd5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 03:44:21 compute-0 python3.9[253624]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Applying nova statedir ownership
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 22 03:44:21 compute-0 nova_compute_init[253671]: INFO:nova_statedir:Nova statedir ownership complete
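nova_compute_init exists to re-own /var/lib/nova for the in-container nova uid/gid (42436 here) while leaving anything named in NOVA_STATEDIR_OWNERSHIP_SKIP alone, such as the compute_id file. A compressed sketch of that walk-and-chown pass, assuming a colon-separated skip list (the real script also restores SELinux contexts, elided here):

    import os

    TARGET_UID = TARGET_GID = 42436
    # Paths that must keep their current ownership (e.g. compute_id).
    SKIP = {p for p in os.environ.get(
        'NOVA_STATEDIR_OWNERSHIP_SKIP', '').split(':') if p}

    def apply_ownership(statedir='/var/lib/nova'):
        for dirpath, _dirnames, filenames in os.walk(statedir):
            for path in [dirpath] + [os.path.join(dirpath, f)
                                     for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    print(f'Changing ownership of {path} from '
                          f'{st.st_uid}:{st.st_gid} '
                          f'to {TARGET_UID}:{TARGET_GID}')
                    os.lchown(path, TARGET_UID, TARGET_GID)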
Nov 22 03:44:21 compute-0 systemd[1]: libpod-c31d790b78d583c424d5e481b25a4679507529d097401cb64ea6cc943d6acbd5.scope: Deactivated successfully.
Nov 22 03:44:21 compute-0 podman[253686]: 2025-11-22 03:44:21.292799754 +0000 UTC m=+0.026306894 container died c31d790b78d583c424d5e481b25a4679507529d097401cb64ea6cc943d6acbd5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 03:44:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c31d790b78d583c424d5e481b25a4679507529d097401cb64ea6cc943d6acbd5-userdata-shm.mount: Deactivated successfully.
Nov 22 03:44:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0763dbe5a8a5bb7f8106a192b6a1d5de9a4f1c11a0a9aaec5846debd78b70e97-merged.mount: Deactivated successfully.
Nov 22 03:44:21 compute-0 podman[253686]: 2025-11-22 03:44:21.325125605 +0000 UTC m=+0.058632725 container cleanup c31d790b78d583c424d5e481b25a4679507529d097401cb64ea6cc943d6acbd5 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.license=GPLv2)
Nov 22 03:44:21 compute-0 sudo[253622]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:21 compute-0 systemd[1]: libpod-conmon-c31d790b78d583c424d5e481b25a4679507529d097401cb64ea6cc943d6acbd5.scope: Deactivated successfully.
Nov 22 03:44:21 compute-0 ceph-mon[75011]: pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:21 compute-0 sshd-session[223507]: Connection closed by 192.168.122.30 port 33180
Nov 22 03:44:21 compute-0 sshd-session[223504]: pam_unix(sshd:session): session closed for user zuul
Nov 22 03:44:21 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Nov 22 03:44:21 compute-0 systemd[1]: session-50.scope: Consumed 2min 29.362s CPU time.
Nov 22 03:44:21 compute-0 systemd-logind[799]: Session 50 logged out. Waiting for processes to exit.
Nov 22 03:44:21 compute-0 systemd-logind[799]: Removed session 50.
Nov 22 03:44:22 compute-0 nova_compute[253461]: 2025-11-22 03:44:22.255 253465 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 03:44:22 compute-0 nova_compute[253461]: 2025-11-22 03:44:22.255 253465 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 03:44:22 compute-0 nova_compute[253461]: 2025-11-22 03:44:22.256 253465 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 22 03:44:22 compute-0 nova_compute[253461]: 2025-11-22 03:44:22.256 253465 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
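The three "Loaded VIF plugin class" lines show os_vif pulling its plugins in through Python entry points. A minimal sketch with stevedore, the plugin loader the OpenStack libraries use for this, assuming 'os_vif' is the entry-point namespace:

    from stevedore import extension

    # Enumerate plugins registered under the namespace without
    # instantiating them.
    mgr = extension.ExtensionManager(namespace='os_vif',
                                     invoke_on_load=False)
    print('Loaded VIF plugins:', ', '.join(sorted(mgr.names())))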
Nov 22 03:44:22 compute-0 nova_compute[253461]: 2025-11-22 03:44:22.391 253465 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:44:22 compute-0 nova_compute[253461]: 2025-11-22 03:44:22.419 253465 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:44:22 compute-0 nova_compute[253461]: 2025-11-22 03:44:22.420 253465 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
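The grep -F probe above is a capability check, not a failure: exit status 0 would mean the installed iscsiadm mentions node.session.scan, while status 1 (as here) means it does not, so the caller records the feature as absent rather than raising. A small sketch of the same probe via oslo.concurrency, treating both exit codes as acceptable:

    from oslo_concurrency import processutils

    # grep exits 0 when the pattern is found and 1 when it is not;
    # accepting both turns the call into a feature probe.
    out, _err = processutils.execute(
        'grep', '-F', 'node.session.scan', '/sbin/iscsiadm',
        check_exit_code=[0, 1])
    print('node.session.scan supported:', bool(out))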
Nov 22 03:44:22 compute-0 nova_compute[253461]: 2025-11-22 03:44:22.923 253465 INFO nova.virt.driver [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 22 03:44:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:44:22.997 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:44:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:44:22.998 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:44:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:44:22.998 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.060 253465 INFO nova.compute.provider_config [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.083 253465 DEBUG oslo_concurrency.lockutils [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.084 253465 DEBUG oslo_concurrency.lockutils [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.084 253465 DEBUG oslo_concurrency.lockutils [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.084 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.084 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.085 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.085 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.085 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.085 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.085 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.085 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.085 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.086 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.086 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.086 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.086 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.086 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.086 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.086 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.087 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.087 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.087 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.087 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.087 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.087 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.088 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.088 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.088 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.088 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.088 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.088 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.088 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.089 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.089 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.089 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.089 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.089 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.089 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.089 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.090 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.090 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.090 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.090 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.090 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.090 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.091 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.091 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.091 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.091 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.091 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.091 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.092 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.092 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.092 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.092 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.092 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.092 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.093 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.093 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.093 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.093 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.093 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.093 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.094 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.094 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.094 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.094 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.094 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.094 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.094 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.095 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.095 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.095 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.095 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.095 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.095 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.095 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.096 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.096 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.096 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.096 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.096 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.096 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.096 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.097 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.097 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.097 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.097 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.097 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.097 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.097 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.098 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.098 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.098 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.098 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.098 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.098 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.098 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.099 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.099 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.099 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.099 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.099 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.099 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.099 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.099 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.100 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.100 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.100 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.100 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.100 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.100 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.100 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.101 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.101 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.101 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.101 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.101 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.101 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.101 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.102 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.102 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.102 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.102 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.102 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.103 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.103 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.103 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.103 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.103 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.103 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.104 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.104 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.104 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.104 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.104 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.104 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.104 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.105 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.105 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.105 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.105 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.105 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.105 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.105 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.106 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.106 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.106 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.106 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.106 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.106 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.106 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.107 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.107 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.107 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.107 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.107 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.107 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.107 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.108 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.108 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.108 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.108 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.108 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.108 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.109 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.109 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.109 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.109 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.109 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.109 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.110 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.110 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.110 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.110 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.110 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.110 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.111 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.111 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.111 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.111 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.111 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.112 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.112 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.112 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.112 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.112 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.113 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.113 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.113 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.113 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.113 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.114 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.114 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.114 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.114 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.114 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.115 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.115 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.115 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.115 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.115 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.116 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.116 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.116 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.116 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.116 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.117 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.117 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.117 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.117 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.117 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.117 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.118 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.118 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.118 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.118 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.118 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.119 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.119 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.119 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.119 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.119 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.120 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.120 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.120 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.120 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.120 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.120 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.121 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.121 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.121 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.121 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.121 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.121 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.122 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.122 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.122 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.122 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.122 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.123 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.123 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.123 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.123 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.123 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.123 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.124 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.124 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.124 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.124 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.124 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.124 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.124 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.125 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.125 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.125 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.125 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.125 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.125 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.125 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.126 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.126 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.126 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.126 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.126 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.126 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.127 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.127 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.127 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.127 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.127 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.127 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.128 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.128 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.128 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.128 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.128 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.128 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.129 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.129 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.129 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.129 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.129 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.129 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.129 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.130 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.130 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.130 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.130 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.130 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.130 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.130 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.131 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.131 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.131 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.131 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.131 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.131 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.131 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.132 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.132 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.132 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.132 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.132 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.132 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.132 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.133 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.133 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.133 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.133 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.133 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.133 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.133 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.134 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.134 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.134 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.134 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.134 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.134 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.135 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.135 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.135 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.135 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.136 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.136 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.136 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.136 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.136 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.137 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.137 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.137 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.137 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.137 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.137 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.138 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.138 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.138 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.138 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.138 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.138 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.138 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.139 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.139 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.139 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.139 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.139 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.139 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.139 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.140 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.140 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.140 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.140 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.140 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.140 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.141 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.141 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.141 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.141 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.141 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.142 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.142 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.142 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.142 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.142 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.142 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.142 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.143 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.143 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.143 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.143 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.143 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.143 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.143 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.144 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.144 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.144 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.144 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.144 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.144 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.144 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.145 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.145 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.145 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.145 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.145 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.145 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.145 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.145 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.146 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.146 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.146 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.146 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.146 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.146 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.146 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.147 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.147 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.147 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.147 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.147 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.147 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.147 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.147 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.148 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.148 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.148 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.148 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.148 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.148 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.148 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.149 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.149 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.149 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.149 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.149 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.149 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.149 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.149 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.150 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.150 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.150 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.150 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.150 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.150 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.150 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.151 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.151 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.151 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.151 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.151 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.151 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.151 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.151 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.152 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.152 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.152 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.152 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.152 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.152 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.152 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.152 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.153 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.153 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.153 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.153 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.153 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.153 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.154 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.154 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.154 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.154 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.154 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.154 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.154 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.154 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.155 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.155 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.155 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.155 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.155 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.155 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.155 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.156 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.156 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.156 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.156 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.156 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.156 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.156 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.156 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.157 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.157 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.157 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.157 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.157 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.157 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.157 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.158 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.158 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.158 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.158 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.158 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.158 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.158 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.158 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.159 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.159 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.159 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.159 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.159 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.159 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.159 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.160 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.160 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.160 253465 WARNING oslo_config.cfg [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 22 03:44:23 compute-0 nova_compute[253461]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 22 03:44:23 compute-0 nova_compute[253461]: allow changing the live migration scheme and target URI: ``live_migration_scheme``
Nov 22 03:44:23 compute-0 nova_compute[253461]: and ``live_migration_inbound_addr`` respectively.
Nov 22 03:44:23 compute-0 nova_compute[253461]: ).  Its value may be silently ignored in the future.
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.160 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
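[Editor's note: the deprecation warning above names the supported replacements for the logged live_migration_uri = qemu+tls://%s/system. A minimal nova.conf sketch of the equivalent settings follows, assuming the deployment wants to keep the qemu+tls:// transport; the inbound address shown is a hypothetical placeholder, not taken from this log.]

    [libvirt]
    # 'tls' reproduces the qemu+tls:// scheme that live_migration_uri encoded above
    live_migration_scheme = tls
    # hypothetical value; set to this host's migration-network IP or hostname
    live_migration_inbound_addr = compute-0.internalapi.example.com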
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.160 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.160 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.161 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.161 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.161 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.161 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.161 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.161 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.161 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.162 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.162 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.162 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.162 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.162 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.162 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.162 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.163 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.163 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.rbd_secret_uuid        = 7adcc38b-6484-5de6-b879-33a0309153df log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.163 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.163 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.163 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.163 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.163 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.164 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.164 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.164 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.164 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.164 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.164 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.164 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.165 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.165 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.165 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.165 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.165 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.165 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.165 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.166 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.166 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.166 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.166 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.166 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.166 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.166 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.166 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.167 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.167 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.167 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.167 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.167 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.167 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.167 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.168 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.168 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.168 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.168 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.168 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.168 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.168 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.169 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.169 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.169 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.169 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.169 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.169 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.169 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.169 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.170 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.170 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.170 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.170 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.170 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.170 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.170 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.171 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.171 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.171 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.171 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.171 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.171 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.171 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.171 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.172 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.172 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.172 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.172 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.172 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.172 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.172 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.173 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.173 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.173 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.173 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.173 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.173 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.173 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.173 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.174 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.174 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.174 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.174 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.174 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.174 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.175 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.175 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.175 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.175 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.175 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.176 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.176 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.176 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.176 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.176 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.177 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.177 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.177 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.177 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.177 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.178 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.178 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.178 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.178 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.178 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.179 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.179 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.179 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.179 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.180 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.180 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.180 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.180 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.180 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.180 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.181 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.181 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.181 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.181 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.182 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.182 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.182 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.182 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.183 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.183 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.183 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.183 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.183 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.184 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.184 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.184 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.184 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.185 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.185 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.185 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.185 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.186 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.186 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.186 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.186 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.187 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.187 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.187 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.187 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.187 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.188 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.188 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.188 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.188 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.188 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.189 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.189 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.189 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.189 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.189 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.190 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.190 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.190 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.190 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.191 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.191 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.191 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.191 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.191 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.192 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.192 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.192 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.192 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.193 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.193 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.193 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.193 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.193 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.194 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.194 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.194 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.194 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.194 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.195 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.195 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.195 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.195 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.196 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.196 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.196 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.196 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.197 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.197 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.197 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.197 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.197 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.197 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.198 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.198 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.198 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.198 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.198 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.199 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.199 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.199 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.199 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.199 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.200 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.200 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.200 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.200 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.201 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.201 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.201 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.201 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.201 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.201 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.202 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.202 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.202 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.202 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.203 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.203 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.203 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.203 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.203 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.203 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.204 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.204 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.204 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.204 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.205 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.205 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.205 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.205 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.205 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.206 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.206 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.206 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.206 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.207 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.207 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.207 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.207 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.207 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.208 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.208 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.208 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.208 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.209 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.209 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.209 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.209 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.209 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.209 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.210 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.210 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.210 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.210 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.210 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.211 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.211 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.211 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.211 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.211 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.212 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.212 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.212 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.212 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.213 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.213 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.213 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.213 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.213 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.214 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.214 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.214 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.214 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.214 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.215 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.215 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.215 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.215 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.215 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.216 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.216 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.216 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.216 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.217 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.217 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.217 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.218 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.218 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.218 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.218 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.219 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.219 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.219 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.219 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.219 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.220 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.220 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.220 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.220 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.220 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.221 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.221 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.221 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.221 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.222 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.222 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.222 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.222 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.223 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.223 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.223 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.223 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.223 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.224 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.224 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.224 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.224 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.224 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.225 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.225 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.225 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.225 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.226 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.226 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.226 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.226 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.227 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.227 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.227 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.227 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.227 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.228 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.228 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.228 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.228 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.229 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.229 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.229 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.229 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.230 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.230 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.230 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.230 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.231 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.231 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.231 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.232 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.232 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.232 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.232 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.233 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.233 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.233 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.234 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.234 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.234 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.235 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.235 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.235 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.235 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.236 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.236 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.237 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.237 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.237 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.237 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.238 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.238 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.238 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.238 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.239 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.239 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.240 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.240 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.240 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.240 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.241 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.241 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.241 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.241 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.242 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.242 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.242 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.243 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.243 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.243 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.244 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.244 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.245 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.245 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.246 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.246 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.246 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.246 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.247 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.247 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.247 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.247 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.247 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.248 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.248 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.248 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.248 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.248 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.249 253465 DEBUG oslo_service.service [None req-47ad97d9-5a12-434c-88ef-834c5562d25d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.250 253465 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.267 253465 INFO nova.virt.node [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Determined node identity 62e18608-eaef-4f09-8e92-06d41e51d580 from /var/lib/nova/compute_id
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.268 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.269 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.270 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.270 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.284 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7effd8ecab80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.287 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7effd8ecab80> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.288 253465 INFO nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Connection event '1' reason 'None'
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.295 253465 INFO nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Libvirt host capabilities <capabilities>
Nov 22 03:44:23 compute-0 nova_compute[253461]: 
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <host>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <uuid>cc28b99b-cca8-4899-a39d-03c6a80b1444</uuid>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <cpu>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <arch>x86_64</arch>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model>EPYC-Rome-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <vendor>AMD</vendor>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <microcode version='16777317'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <signature family='23' model='49' stepping='0'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <maxphysaddr mode='emulate' bits='40'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='x2apic'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='tsc-deadline'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='osxsave'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='hypervisor'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='tsc_adjust'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='spec-ctrl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='stibp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='arch-capabilities'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='cmp_legacy'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='topoext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='virt-ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='lbrv'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='tsc-scale'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='vmcb-clean'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='pause-filter'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='pfthreshold'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='svme-addr-chk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='rdctl-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='skip-l1dfl-vmentry'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='mds-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature name='pschange-mc-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <pages unit='KiB' size='4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <pages unit='KiB' size='2048'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <pages unit='KiB' size='1048576'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </cpu>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <power_management>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <suspend_mem/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </power_management>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <iommu support='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <migration_features>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <live/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <uri_transports>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <uri_transport>tcp</uri_transport>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <uri_transport>rdma</uri_transport>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </uri_transports>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </migration_features>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <topology>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <cells num='1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <cell id='0'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:           <memory unit='KiB'>7864308</memory>
Nov 22 03:44:23 compute-0 nova_compute[253461]:           <pages unit='KiB' size='4'>1966077</pages>
Nov 22 03:44:23 compute-0 nova_compute[253461]:           <pages unit='KiB' size='2048'>0</pages>
Nov 22 03:44:23 compute-0 nova_compute[253461]:           <pages unit='KiB' size='1048576'>0</pages>
Nov 22 03:44:23 compute-0 nova_compute[253461]:           <distances>
Nov 22 03:44:23 compute-0 nova_compute[253461]:             <sibling id='0' value='10'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:           </distances>
Nov 22 03:44:23 compute-0 nova_compute[253461]:           <cpus num='8'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:           </cpus>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         </cell>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </cells>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </topology>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <cache>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </cache>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <secmodel>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model>selinux</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <doi>0</doi>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </secmodel>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <secmodel>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model>dac</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <doi>0</doi>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <baselabel type='kvm'>+107:+107</baselabel>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <baselabel type='qemu'>+107:+107</baselabel>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </secmodel>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </host>
Nov 22 03:44:23 compute-0 nova_compute[253461]: 
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <guest>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <os_type>hvm</os_type>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <arch name='i686'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <wordsize>32</wordsize>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <domain type='qemu'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <domain type='kvm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </arch>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <features>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <pae/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <nonpae/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <acpi default='on' toggle='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <apic default='on' toggle='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <cpuselection/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <deviceboot/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <disksnapshot default='on' toggle='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <externalSnapshot/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </features>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </guest>
Nov 22 03:44:23 compute-0 nova_compute[253461]: 
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <guest>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <os_type>hvm</os_type>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <arch name='x86_64'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <wordsize>64</wordsize>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <domain type='qemu'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <domain type='kvm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </arch>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <features>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <acpi default='on' toggle='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <apic default='on' toggle='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <cpuselection/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <deviceboot/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <disksnapshot default='on' toggle='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <externalSnapshot/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </features>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </guest>
Nov 22 03:44:23 compute-0 nova_compute[253461]: 
Nov 22 03:44:23 compute-0 nova_compute[253461]: </capabilities>
Nov 22 03:44:23 compute-0 nova_compute[253461]: 
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.306 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.308 253465 DEBUG nova.virt.libvirt.volume.mount [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.313 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 22 03:44:23 compute-0 nova_compute[253461]: <domainCapabilities>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <domain>kvm</domain>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <arch>i686</arch>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <vcpu max='4096'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <iothreads supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <os supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <enum name='firmware'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <loader supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>rom</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pflash</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='readonly'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>yes</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>no</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='secure'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>no</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </loader>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </os>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <cpu>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='host-passthrough' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='hostPassthroughMigratable'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>on</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>off</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='maximum' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='maximumMigratable'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>on</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>off</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='host-model' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <vendor>AMD</vendor>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='x2apic'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='hypervisor'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='stibp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='overflow-recov'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='succor'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='ibrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='lbrv'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='tsc-scale'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='flushbyasid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='pause-filter'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='pfthreshold'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='disable' name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='custom' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cooperlake'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cooperlake-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cooperlake-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Dhyana-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Genoa'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amd-psfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='auto-ibrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='stibp-always-on'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amd-psfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='auto-ibrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='stibp-always-on'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Milan'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Milan-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Milan-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amd-psfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='stibp-always-on'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='GraniteRapids'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='prefetchiti'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='GraniteRapids-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='prefetchiti'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='GraniteRapids-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10-128'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10-256'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10-512'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='prefetchiti'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v6'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v7'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='KnightsMill'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512er'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512pf'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='KnightsMill-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512er'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512pf'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G4-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tbm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G5-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tbm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SierraForest'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cmpccxadd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SierraForest-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cmpccxadd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='athlon'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='athlon-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='core2duo'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='core2duo-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='coreduo'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='coreduo-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='n270'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='n270-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='phenom'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='phenom-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </cpu>
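The <cpu> block that closes above is libvirt's domain-capabilities report as logged by nova-compute: each <model usable='no'> entry is paired with a <blockers model='...'> list naming the CPU features that model requires but this host cannot provide. A minimal sketch of pulling the same verdicts out of libvirt directly, assuming the libvirt-python bindings are installed and a local qemu:///system connection is reachable (both assumptions, not shown in the log):

import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open('qemu:///system')
# Same XML document nova-compute is logging here.
caps = ET.fromstring(conn.getDomainCapabilities())

# The per-model usability verdicts live under <cpu><mode name='custom'>.
custom = caps.find(".//cpu/mode[@name='custom']")
for model in custom.findall('model'):
    if model.get('usable') != 'no':
        continue
    # <blockers> is keyed by the model's display name.
    blockers = custom.find("blockers[@model='%s']" % model.text)
    missing = ([f.get('name') for f in blockers.findall('feature')]
               if blockers is not None else [])
    print('%s blocked by: %s' % (model.text, ', '.join(missing)))

conn.close()

Against the report above, this would print, for example, "IvyBridge blocked by: erms", matching the IvyBridge entry logged earlier in the section.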
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <memoryBacking supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <enum name='sourceType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>file</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>anonymous</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>memfd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </memoryBacking>
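The <memoryBacking> element just logged advertises which guest-RAM backing sources this host's QEMU supports: file, anonymous, and memfd. A hedged helper for feature-gating on that enum, given the same domain-capabilities XML string as above (the function name and its use are illustrative assumptions, not anything nova-compute does verbatim):

import xml.etree.ElementTree as ET

def memfd_supported(domcaps_xml: str) -> bool:
    # True when <memoryBacking> advertises the 'memfd' source type,
    # i.e. guest RAM can be shared without a filesystem-backed file.
    caps = ET.fromstring(domcaps_xml)
    values = caps.findall(".//memoryBacking/enum[@name='sourceType']/value")
    return any(v.text == 'memfd' for v in values)

On this host the helper would return True, since 'memfd' appears in the sourceType enum above.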
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <disk supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='diskDevice'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>disk</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>cdrom</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>floppy</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>lun</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='bus'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>fdc</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>scsi</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>usb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>sata</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-non-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <graphics supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vnc</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>egl-headless</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>dbus</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </graphics>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <video supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='modelType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vga</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>cirrus</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>none</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>bochs</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>ramfb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </video>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <hostdev supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='mode'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>subsystem</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='startupPolicy'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>default</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>mandatory</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>requisite</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>optional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='subsysType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>usb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pci</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>scsi</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='capsType'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='pciBackend'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </hostdev>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <rng supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-non-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendModel'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>random</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>egd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>builtin</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <filesystem supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='driverType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>path</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>handle</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtiofs</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </filesystem>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <tpm supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tpm-tis</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tpm-crb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendModel'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>emulator</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>external</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendVersion'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>2.0</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </tpm>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <redirdev supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='bus'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>usb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </redirdev>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <channel supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pty</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>unix</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </channel>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <crypto supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>qemu</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendModel'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>builtin</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </crypto>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <interface supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>default</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>passt</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <panic supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>isa</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>hyperv</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </panic>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <console supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>null</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vc</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pty</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>dev</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>file</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pipe</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>stdio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>udp</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tcp</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>unix</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>qemu-vdagent</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>dbus</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </console>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <features>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <gic supported='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <vmcoreinfo supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <genid supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <backingStoreInput supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <backup supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <async-teardown supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <ps2 supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <sev supported='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <sgx supported='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <hyperv supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='features'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>relaxed</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vapic</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>spinlocks</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vpindex</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>runtime</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>synic</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>stimer</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>reset</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vendor_id</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>frequencies</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>reenlightenment</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tlbflush</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>ipi</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>avic</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>emsr_bitmap</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>xmm_input</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <defaults>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <spinlocks>4095</spinlocks>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <stimer_direct>on</stimer_direct>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </defaults>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </hyperv>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <launchSecurity supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='sectype'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tdx</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </launchSecurity>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </features>
Nov 22 03:44:23 compute-0 nova_compute[253461]: </domainCapabilities>
Nov 22 03:44:23 compute-0 nova_compute[253461]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
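The block above is the verbatim domainCapabilities XML that nova's _get_domain_capabilities obtained from libvirt (the virConnectGetDomainCapabilities API) for one emulator/arch/machine-type combination; the record that follows repeats the query for arch=i686 and machine_type=pc. A minimal sketch, not nova's code, of issuing the same query with libvirt-python and listing the usable CPU models, assuming the qemu:///system URI and the parameter values visible in the next record (/usr/libexec/qemu-kvm, i686, pc, kvm):

import libvirt
import xml.etree.ElementTree as ET

# Sketch only: fetch the same domainCapabilities document nova logs here.
# URI, emulator path, arch, machine type and virt type are taken from the
# log record below; adjust them for other hosts.
conn = libvirt.open('qemu:///system')
caps_xml = conn.getDomainCapabilities(
    '/usr/libexec/qemu-kvm',  # emulatorbin, cf. <path> in the XML
    'i686',                   # arch, cf. <arch>i686</arch>
    'pc',                     # machine type alias
    'kvm',                    # virttype, cf. <domain>kvm</domain>
    0)                        # flags
root = ET.fromstring(caps_xml)
# Print the custom-mode CPU models the host can actually run, i.e. the
# entries carrying usable='yes' in the <mode name='custom'> section above.
for model in root.findall(".//cpu/mode[@name='custom']/model"):
    if model.get('usable') == 'yes':
        tag = ' (deprecated)' if model.get('deprecated') == 'yes' else ''
        print(model.text + tag)
conn.close()

The usable='no' models each carry a <blockers> list, as seen throughout this dump: the host CPU (an EPYC-Rome without xsaves exposed, per the host-model section) lacks those named features, so libvirt refuses the model unless the missing features are explicitly disabled.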
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.322 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 22 03:44:23 compute-0 nova_compute[253461]: <domainCapabilities>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <domain>kvm</domain>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <arch>i686</arch>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <vcpu max='240'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <iothreads supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <os supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <enum name='firmware'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <loader supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>rom</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pflash</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='readonly'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>yes</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>no</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='secure'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>no</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </loader>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </os>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <cpu>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='host-passthrough' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='hostPassthroughMigratable'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>on</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>off</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='maximum' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='maximumMigratable'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>on</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>off</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='host-model' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <vendor>AMD</vendor>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='x2apic'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='hypervisor'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='stibp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='overflow-recov'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='succor'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='ibrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='lbrv'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='tsc-scale'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='flushbyasid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='pause-filter'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='pfthreshold'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='disable' name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='custom' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cooperlake'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cooperlake-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cooperlake-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Dhyana-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Genoa'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amd-psfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='auto-ibrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='stibp-always-on'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amd-psfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='auto-ibrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='stibp-always-on'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Milan'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Milan-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Milan-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amd-psfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='stibp-always-on'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='GraniteRapids'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='prefetchiti'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='GraniteRapids-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='prefetchiti'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='GraniteRapids-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10-128'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10-256'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10-512'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='prefetchiti'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v6'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v7'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='KnightsMill'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512er'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512pf'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='KnightsMill-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512er'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512pf'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G4-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tbm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G5-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tbm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SierraForest'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cmpccxadd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SierraForest-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cmpccxadd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='athlon'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='athlon-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='core2duo'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='core2duo-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='coreduo'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='coreduo-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='n270'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='n270-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='phenom'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='phenom-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <memoryBacking supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <enum name='sourceType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>file</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>anonymous</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>memfd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </memoryBacking>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <disk supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='diskDevice'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>disk</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>cdrom</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>floppy</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>lun</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='bus'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>ide</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>fdc</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>scsi</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>usb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>sata</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-non-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <graphics supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vnc</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>egl-headless</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>dbus</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </graphics>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <video supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='modelType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vga</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>cirrus</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>none</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>bochs</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>ramfb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </video>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <hostdev supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='mode'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>subsystem</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='startupPolicy'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>default</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>mandatory</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>requisite</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>optional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='subsysType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>usb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pci</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>scsi</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='capsType'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='pciBackend'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </hostdev>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <rng supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-non-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendModel'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>random</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>egd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>builtin</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <filesystem supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='driverType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>path</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>handle</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtiofs</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </filesystem>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <tpm supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tpm-tis</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tpm-crb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendModel'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>emulator</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>external</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendVersion'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>2.0</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </tpm>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <redirdev supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='bus'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>usb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </redirdev>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <channel supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pty</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>unix</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </channel>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <crypto supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>qemu</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendModel'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>builtin</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </crypto>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <interface supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>default</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>passt</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <panic supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>isa</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>hyperv</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </panic>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <console supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>null</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vc</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pty</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>dev</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>file</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pipe</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>stdio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>udp</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tcp</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>unix</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>qemu-vdagent</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>dbus</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </console>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <features>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <gic supported='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <vmcoreinfo supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <genid supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <backingStoreInput supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <backup supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <async-teardown supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <ps2 supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <sev supported='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <sgx supported='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <hyperv supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='features'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>relaxed</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vapic</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>spinlocks</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vpindex</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>runtime</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>synic</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>stimer</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>reset</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vendor_id</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>frequencies</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>reenlightenment</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tlbflush</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>ipi</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>avic</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>emsr_bitmap</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>xmm_input</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <defaults>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <spinlocks>4095</spinlocks>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <stimer_direct>on</stimer_direct>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </defaults>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </hyperv>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <launchSecurity supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='sectype'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tdx</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </launchSecurity>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </features>
Nov 22 03:44:23 compute-0 nova_compute[253461]: </domainCapabilities>
Nov 22 03:44:23 compute-0 nova_compute[253461]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.372 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.385 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 22 03:44:23 compute-0 nova_compute[253461]: <domainCapabilities>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <domain>kvm</domain>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <machine>pc-q35-rhel9.8.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <arch>x86_64</arch>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <vcpu max='4096'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <iothreads supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <os supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <enum name='firmware'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>efi</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <loader supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>rom</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pflash</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='readonly'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>yes</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>no</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='secure'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>yes</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>no</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </loader>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </os>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <cpu>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='host-passthrough' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='hostPassthroughMigratable'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>on</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>off</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='maximum' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='maximumMigratable'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>on</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>off</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='host-model' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <vendor>AMD</vendor>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='x2apic'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='hypervisor'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='stibp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='overflow-recov'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='succor'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='ibrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='lbrv'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='tsc-scale'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='flushbyasid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='pause-filter'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='pfthreshold'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='disable' name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='custom' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cooperlake'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cooperlake-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cooperlake-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Dhyana-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Genoa'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amd-psfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='auto-ibrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='stibp-always-on'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amd-psfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='auto-ibrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='stibp-always-on'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Milan'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Milan-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Milan-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amd-psfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='stibp-always-on'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='GraniteRapids'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='prefetchiti'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='GraniteRapids-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='prefetchiti'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='GraniteRapids-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10-128'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10-256'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10-512'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='prefetchiti'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v6'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v7'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='KnightsMill'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512er'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512pf'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='KnightsMill-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512er'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512pf'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G4-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tbm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G5-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tbm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SierraForest'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cmpccxadd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SierraForest-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cmpccxadd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='athlon'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='athlon-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='core2duo'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='core2duo-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='coreduo'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='coreduo-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='n270'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='n270-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='phenom'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='phenom-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <memoryBacking supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <enum name='sourceType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>file</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>anonymous</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>memfd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </memoryBacking>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <disk supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='diskDevice'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>disk</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>cdrom</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>floppy</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>lun</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='bus'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>fdc</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>scsi</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>usb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>sata</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-non-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <graphics supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vnc</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>egl-headless</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>dbus</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </graphics>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <video supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='modelType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vga</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>cirrus</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>none</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>bochs</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>ramfb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </video>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <hostdev supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='mode'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>subsystem</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='startupPolicy'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>default</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>mandatory</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>requisite</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>optional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='subsysType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>usb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pci</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>scsi</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='capsType'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='pciBackend'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </hostdev>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <rng supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-non-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendModel'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>random</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>egd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>builtin</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <filesystem supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='driverType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>path</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>handle</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtiofs</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </filesystem>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <tpm supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tpm-tis</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tpm-crb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendModel'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>emulator</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>external</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendVersion'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>2.0</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </tpm>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <redirdev supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='bus'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>usb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </redirdev>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <channel supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pty</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>unix</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </channel>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <crypto supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>qemu</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendModel'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>builtin</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </crypto>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <interface supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>default</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>passt</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <panic supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>isa</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>hyperv</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </panic>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <console supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>null</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vc</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pty</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>dev</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>file</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pipe</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>stdio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>udp</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tcp</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>unix</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>qemu-vdagent</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>dbus</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </console>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <features>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <gic supported='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <vmcoreinfo supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <genid supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <backingStoreInput supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <backup supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <async-teardown supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <ps2 supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <sev supported='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <sgx supported='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <hyperv supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='features'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>relaxed</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vapic</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>spinlocks</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vpindex</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>runtime</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>synic</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>stimer</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>reset</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vendor_id</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>frequencies</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>reenlightenment</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tlbflush</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>ipi</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>avic</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>emsr_bitmap</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>xmm_input</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <defaults>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <spinlocks>4095</spinlocks>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <stimer_direct>on</stimer_direct>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </defaults>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </hyperv>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <launchSecurity supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='sectype'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tdx</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </launchSecurity>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </features>
Nov 22 03:44:23 compute-0 nova_compute[253461]: </domainCapabilities>
Nov 22 03:44:23 compute-0 nova_compute[253461]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.492 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 22 03:44:23 compute-0 nova_compute[253461]: <domainCapabilities>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <path>/usr/libexec/qemu-kvm</path>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <domain>kvm</domain>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <machine>pc-i440fx-rhel7.6.0</machine>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <arch>x86_64</arch>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <vcpu max='240'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <iothreads supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <os supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <enum name='firmware'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <loader supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>rom</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pflash</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='readonly'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>yes</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>no</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='secure'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>no</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </loader>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </os>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <cpu>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='host-passthrough' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='hostPassthroughMigratable'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>on</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>off</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='maximum' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='maximumMigratable'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>on</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>off</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='host-model' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model fallback='forbid'>EPYC-Rome</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <vendor>AMD</vendor>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <maxphysaddr mode='passthrough' limit='40'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='x2apic'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='tsc-deadline'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='hypervisor'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='tsc_adjust'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='spec-ctrl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='stibp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='cmp_legacy'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='overflow-recov'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='succor'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='ibrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='amd-ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='virt-ssbd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='lbrv'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='tsc-scale'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='vmcb-clean'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='flushbyasid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='pause-filter'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='pfthreshold'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='svme-addr-chk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='require' name='lfence-always-serializing'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <feature policy='disable' name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <mode name='custom' supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 ceph-mon[75011]: pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Broadwell-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cascadelake-Server-v5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cooperlake'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cooperlake-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Cooperlake-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Denverton-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Dhyana-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Genoa'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amd-psfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='auto-ibrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='stibp-always-on'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Genoa-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amd-psfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='auto-ibrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='stibp-always-on'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Milan'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Milan-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Milan-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amd-psfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='no-nested-data-bp'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='null-sel-clr-base'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='stibp-always-on'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-Rome-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='EPYC-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='GraniteRapids'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='prefetchiti'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='GraniteRapids-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='prefetchiti'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='GraniteRapids-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10-128'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10-256'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx10-512'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='prefetchiti'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Haswell-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-noTSX'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v6'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Icelake-Server-v7'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='IvyBridge-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='KnightsMill'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512er'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512pf'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='KnightsMill-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4fmaps'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-4vnniw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512er'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512pf'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G4-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tbm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Opteron_G5-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fma4'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tbm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xop'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SapphireRapids-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='amx-tile'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-bf16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-fp16'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512-vpopcntdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bitalg'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vbmi2'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrc'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fzrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='la57'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='taa-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='tsx-ldtrk'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xfd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SierraForest'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cmpccxadd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='SierraForest-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ifma'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-ne-convert'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx-vnni-int8'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='bus-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cmpccxadd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fbsdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='fsrs'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ibrs-all'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mcdt-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pbrsb-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='psdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='sbdr-ssdp-no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='serialize'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vaes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='vpclmulqdq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Client-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='hle'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='rtm'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Skylake-Server-v5'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512bw'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512cd'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512dq'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512f'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='avx512vl'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='invpcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pcid'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='pku'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='mpx'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v2'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v3'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='core-capability'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='split-lock-detect'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='Snowridge-v4'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='cldemote'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='erms'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='gfni'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdir64b'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='movdiri'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='xsaves'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='athlon'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='athlon-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='core2duo'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='core2duo-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='coreduo'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='coreduo-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='n270'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='n270-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='ss'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='phenom'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <blockers model='phenom-v1'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnow'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <feature name='3dnowext'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </blockers>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </mode>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <memoryBacking supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <enum name='sourceType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>file</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>anonymous</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <value>memfd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </memoryBacking>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <disk supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='diskDevice'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>disk</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>cdrom</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>floppy</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>lun</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='bus'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>ide</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>fdc</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>scsi</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>usb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>sata</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-non-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <graphics supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vnc</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>egl-headless</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>dbus</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </graphics>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <video supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='modelType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vga</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>cirrus</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>none</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>bochs</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>ramfb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </video>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <hostdev supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='mode'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>subsystem</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='startupPolicy'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>default</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>mandatory</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>requisite</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>optional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='subsysType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>usb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pci</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>scsi</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='capsType'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='pciBackend'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </hostdev>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <rng supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtio-non-transitional</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendModel'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>random</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>egd</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>builtin</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <filesystem supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='driverType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>path</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>handle</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>virtiofs</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </filesystem>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <tpm supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tpm-tis</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tpm-crb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendModel'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>emulator</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>external</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendVersion'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>2.0</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </tpm>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <redirdev supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='bus'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>usb</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </redirdev>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <channel supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pty</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>unix</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </channel>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <crypto supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>qemu</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendModel'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>builtin</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </crypto>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <interface supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='backendType'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>default</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>passt</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <panic supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='model'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>isa</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>hyperv</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </panic>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <console supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='type'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>null</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vc</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pty</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>dev</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>file</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>pipe</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>stdio</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>udp</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tcp</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>unix</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>qemu-vdagent</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>dbus</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </console>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   <features>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <gic supported='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <vmcoreinfo supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <genid supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <backingStoreInput supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <backup supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <async-teardown supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <ps2 supported='yes'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <sev supported='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <sgx supported='no'/>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <hyperv supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='features'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>relaxed</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vapic</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>spinlocks</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vpindex</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>runtime</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>synic</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>stimer</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>reset</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>vendor_id</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>frequencies</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>reenlightenment</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tlbflush</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>ipi</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>avic</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>emsr_bitmap</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>xmm_input</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <defaults>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <spinlocks>4095</spinlocks>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <stimer_direct>on</stimer_direct>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <tlbflush_direct>on</tlbflush_direct>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <tlbflush_extended>on</tlbflush_extended>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <vendor_id>Linux KVM Hv</vendor_id>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </defaults>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </hyperv>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     <launchSecurity supported='yes'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       <enum name='sectype'>
Nov 22 03:44:23 compute-0 nova_compute[253461]:         <value>tdx</value>
Nov 22 03:44:23 compute-0 nova_compute[253461]:       </enum>
Nov 22 03:44:23 compute-0 nova_compute[253461]:     </launchSecurity>
Nov 22 03:44:23 compute-0 nova_compute[253461]:   </features>
Nov 22 03:44:23 compute-0 nova_compute[253461]: </domainCapabilities>
Nov 22 03:44:23 compute-0 nova_compute[253461]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.552 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.552 253465 INFO nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Secure Boot support detected
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.554 253465 INFO nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.554 253465 INFO nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.562 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.580 253465 INFO nova.virt.node [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Determined node identity 62e18608-eaef-4f09-8e92-06d41e51d580 from /var/lib/nova/compute_id
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.596 253465 WARNING nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Compute nodes ['62e18608-eaef-4f09-8e92-06d41e51d580'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.626 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.647 253465 WARNING nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.647 253465 DEBUG oslo_concurrency.lockutils [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.647 253465 DEBUG oslo_concurrency.lockutils [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.647 253465 DEBUG oslo_concurrency.lockutils [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.647 253465 DEBUG nova.compute.resource_tracker [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:44:23 compute-0 nova_compute[253461]: 2025-11-22 03:44:23.648 253465 DEBUG oslo_concurrency.processutils [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:44:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:44:24 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979190805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:44:24 compute-0 nova_compute[253461]: 2025-11-22 03:44:24.197 253465 DEBUG oslo_concurrency.processutils [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:44:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:44:24 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Nov 22 03:44:24 compute-0 systemd[1]: Started libvirt nodedev daemon.
Nov 22 03:44:24 compute-0 podman[253780]: 2025-11-22 03:44:24.405999046 +0000 UTC m=+0.148392036 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 03:44:24 compute-0 podman[253826]: 2025-11-22 03:44:24.523053085 +0000 UTC m=+0.073429972 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent)
Nov 22 03:44:24 compute-0 nova_compute[253461]: 2025-11-22 03:44:24.562 253465 WARNING nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:44:24 compute-0 nova_compute[253461]: 2025-11-22 03:44:24.564 253465 DEBUG nova.compute.resource_tracker [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5176MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:44:24 compute-0 nova_compute[253461]: 2025-11-22 03:44:24.564 253465 DEBUG oslo_concurrency.lockutils [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:44:24 compute-0 nova_compute[253461]: 2025-11-22 03:44:24.564 253465 DEBUG oslo_concurrency.lockutils [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:44:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2979190805' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:44:24 compute-0 nova_compute[253461]: 2025-11-22 03:44:24.589 253465 WARNING nova.compute.resource_tracker [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] No compute node record for compute-0.ctlplane.example.com:62e18608-eaef-4f09-8e92-06d41e51d580: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 62e18608-eaef-4f09-8e92-06d41e51d580 could not be found.
Nov 22 03:44:24 compute-0 nova_compute[253461]: 2025-11-22 03:44:24.616 253465 INFO nova.compute.resource_tracker [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 62e18608-eaef-4f09-8e92-06d41e51d580
Nov 22 03:44:24 compute-0 nova_compute[253461]: 2025-11-22 03:44:24.685 253465 DEBUG nova.compute.resource_tracker [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:44:24 compute-0 nova_compute[253461]: 2025-11-22 03:44:24.685 253465 DEBUG nova.compute.resource_tracker [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:44:25 compute-0 nova_compute[253461]: 2025-11-22 03:44:25.564 253465 INFO nova.scheduler.client.report [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [req-dcad19bd-4ff1-4d39-882e-f858c7220bfa] Created resource provider record via placement API for resource provider with UUID 62e18608-eaef-4f09-8e92-06d41e51d580 and name compute-0.ctlplane.example.com.
Nov 22 03:44:25 compute-0 ceph-mon[75011]: pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:25 compute-0 nova_compute[253461]: 2025-11-22 03:44:25.949 253465 DEBUG oslo_concurrency.processutils [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:44:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:44:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1070412966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.430 253465 DEBUG oslo_concurrency.processutils [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.438 253465 DEBUG nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 22 03:44:26 compute-0 nova_compute[253461]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.438 253465 INFO nova.virt.libvirt.host [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] kernel doesn't support AMD SEV
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.440 253465 DEBUG nova.compute.provider_tree [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Updating inventory in ProviderTree for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.441 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.508 253465 DEBUG nova.scheduler.client.report [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Updated inventory for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.509 253465 DEBUG nova.compute.provider_tree [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Updating resource provider 62e18608-eaef-4f09-8e92-06d41e51d580 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.509 253465 DEBUG nova.compute.provider_tree [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Updating inventory in ProviderTree for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 03:44:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1070412966' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.633 253465 DEBUG nova.compute.provider_tree [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Updating resource provider 62e18608-eaef-4f09-8e92-06d41e51d580 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.676 253465 DEBUG nova.compute.resource_tracker [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.676 253465 DEBUG oslo_concurrency.lockutils [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.676 253465 DEBUG nova.service [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.765 253465 DEBUG nova.service [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Nov 22 03:44:26 compute-0 nova_compute[253461]: 2025-11-22 03:44:26.766 253465 DEBUG nova.servicegroup.drivers.db [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Nov 22 03:44:27 compute-0 ceph-mon[75011]: pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:44:29 compute-0 ceph-mon[75011]: pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:31 compute-0 ceph-mon[75011]: pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:32 compute-0 sudo[253868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:44:32 compute-0 sudo[253868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:32 compute-0 sudo[253868]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:32 compute-0 sudo[253893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:44:32 compute-0 sudo[253893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:32 compute-0 sudo[253893]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:32 compute-0 sudo[253918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:44:32 compute-0 sudo[253918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:32 compute-0 sudo[253918]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:32 compute-0 sudo[253943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:44:32 compute-0 sudo[253943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:33 compute-0 sudo[253943]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:44:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:44:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:44:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:44:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:44:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:44:33 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 575fbf2b-cf2d-48af-b4ec-a0e71d9fa98c does not exist
Nov 22 03:44:33 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev bef4e5ce-9e81-4f69-869d-4effca1c671b does not exist
Nov 22 03:44:33 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev b20d5330-c816-46f0-b768-94c6b7de619f does not exist
Nov 22 03:44:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:44:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:44:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:44:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:44:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:44:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:44:33 compute-0 sudo[253998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:44:33 compute-0 sudo[253998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:33 compute-0 sudo[253998]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:33 compute-0 sudo[254023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:44:33 compute-0 sudo[254023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:33 compute-0 sudo[254023]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:33 compute-0 sudo[254048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:44:33 compute-0 sudo[254048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:33 compute-0 sudo[254048]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:33 compute-0 sudo[254073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:44:33 compute-0 sudo[254073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:33 compute-0 ceph-mon[75011]: pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:44:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:44:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:44:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:44:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:44:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:44:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:33 compute-0 podman[254139]: 2025-11-22 03:44:33.854837095 +0000 UTC m=+0.059336589 container create 4710a9af3e782b4f0797806d51e7dca0f767d160e4d995689052849337fc5f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:44:33 compute-0 systemd[1]: Started libpod-conmon-4710a9af3e782b4f0797806d51e7dca0f767d160e4d995689052849337fc5f4f.scope.
Nov 22 03:44:33 compute-0 podman[254139]: 2025-11-22 03:44:33.835377649 +0000 UTC m=+0.039877173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:44:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:44:33 compute-0 podman[254139]: 2025-11-22 03:44:33.970313611 +0000 UTC m=+0.174813185 container init 4710a9af3e782b4f0797806d51e7dca0f767d160e4d995689052849337fc5f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:44:33 compute-0 podman[254139]: 2025-11-22 03:44:33.981400885 +0000 UTC m=+0.185900409 container start 4710a9af3e782b4f0797806d51e7dca0f767d160e4d995689052849337fc5f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:44:33 compute-0 podman[254139]: 2025-11-22 03:44:33.98864949 +0000 UTC m=+0.193149064 container attach 4710a9af3e782b4f0797806d51e7dca0f767d160e4d995689052849337fc5f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:44:33 compute-0 musing_jones[254155]: 167 167
Nov 22 03:44:33 compute-0 podman[254139]: 2025-11-22 03:44:33.991527347 +0000 UTC m=+0.196026901 container died 4710a9af3e782b4f0797806d51e7dca0f767d160e4d995689052849337fc5f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:44:33 compute-0 systemd[1]: libpod-4710a9af3e782b4f0797806d51e7dca0f767d160e4d995689052849337fc5f4f.scope: Deactivated successfully.
Nov 22 03:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-51c97191b5636380cf5f8445b992c116343aa46587280b9e2f5696434a38ba3f-merged.mount: Deactivated successfully.
Nov 22 03:44:34 compute-0 podman[254139]: 2025-11-22 03:44:34.035539636 +0000 UTC m=+0.240039170 container remove 4710a9af3e782b4f0797806d51e7dca0f767d160e4d995689052849337fc5f4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_jones, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:44:34 compute-0 systemd[1]: libpod-conmon-4710a9af3e782b4f0797806d51e7dca0f767d160e4d995689052849337fc5f4f.scope: Deactivated successfully.
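The create/init/start/attach/died/remove burst above is one short-lived cephadm helper container (musing_jones); its entire stdout is the "167 167" line, which looks like the ceph uid/gid probed from the image. The same lifecycle events can be followed live with 'podman events'; a minimal sketch, assuming podman is available on the host and that its default event output matches the create/died lines in this log:

    import subprocess

    # Stream container lifecycle events as podman records them.
    proc = subprocess.Popen(
        ["podman", "events", "--filter", "event=create", "--filter", "event=died"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        print(line, end="")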
Nov 22 03:44:34 compute-0 podman[254180]: 2025-11-22 03:44:34.220884349 +0000 UTC m=+0.063486291 container create 0fe50ba5ed0d70c2d8b6968beb6b340ddde4aeb5a2ab6e72c470be84cfb05d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_villani, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:44:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:44:34 compute-0 systemd[1]: Started libpod-conmon-0fe50ba5ed0d70c2d8b6968beb6b340ddde4aeb5a2ab6e72c470be84cfb05d36.scope.
Nov 22 03:44:34 compute-0 podman[254180]: 2025-11-22 03:44:34.195822829 +0000 UTC m=+0.038424801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:44:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcad103293197eedab7d1024aa2e8416fac0e9301501ed4c713d66e960e6fb07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcad103293197eedab7d1024aa2e8416fac0e9301501ed4c713d66e960e6fb07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcad103293197eedab7d1024aa2e8416fac0e9301501ed4c713d66e960e6fb07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcad103293197eedab7d1024aa2e8416fac0e9301501ed4c713d66e960e6fb07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcad103293197eedab7d1024aa2e8416fac0e9301501ed4c713d66e960e6fb07/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
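These kernel lines are informational: the overlay bind mounts live on an XFS filesystem formatted without the bigtime feature, so its inode timestamps saturate at 2038-01-19 (0x7fffffff). Whether a given filesystem has bigtime can be checked with xfs_info; a minimal sketch, assuming a recent xfsprogs that reports the flag and taking /var/lib/containers as the mount to probe:

    import subprocess

    # Recent xfsprogs print 'bigtime=0' or 'bigtime=1' in the meta-data section.
    info = subprocess.run(["xfs_info", "/var/lib/containers"],
                          check=True, capture_output=True, text=True).stdout
    print("timestamps OK past 2038" if "bigtime=1" in info
          else "timestamps limited to 2038")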
Nov 22 03:44:34 compute-0 podman[254180]: 2025-11-22 03:44:34.316531116 +0000 UTC m=+0.159133098 container init 0fe50ba5ed0d70c2d8b6968beb6b340ddde4aeb5a2ab6e72c470be84cfb05d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_villani, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 03:44:34 compute-0 podman[254180]: 2025-11-22 03:44:34.333030152 +0000 UTC m=+0.175632094 container start 0fe50ba5ed0d70c2d8b6968beb6b340ddde4aeb5a2ab6e72c470be84cfb05d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:44:34 compute-0 podman[254180]: 2025-11-22 03:44:34.337358733 +0000 UTC m=+0.179960675 container attach 0fe50ba5ed0d70c2d8b6968beb6b340ddde4aeb5a2ab6e72c470be84cfb05d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:44:34 compute-0 ceph-mon[75011]: pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:35 compute-0 silly_villani[254197]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:44:35 compute-0 silly_villani[254197]: --> relative data size: 1.0
Nov 22 03:44:35 compute-0 silly_villani[254197]: --> All data devices are unavailable
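The three ceph-volume lines above are the OSD drive-group evaluation: cephadm passed 3 LVM data devices and ceph-volume rejected them all, which is expected here because each LV already carries an OSD (see the lvm list output later in this log), so no new OSDs are created. Per-device rejection reasons can be inspected with 'ceph-volume inventory'; a minimal sketch, assuming it runs inside a cephadm shell and that stdout is clean JSON:

    import json
    import subprocess

    # List devices and why ceph-volume considers them unavailable.
    out = subprocess.run(
        ["cephadm", "shell", "--", "ceph-volume", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    for dev in json.loads(out):
        if not dev.get("available", False):
            print(dev["path"], "rejected:", ", ".join(dev.get("rejected_reasons", [])))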
Nov 22 03:44:35 compute-0 systemd[1]: libpod-0fe50ba5ed0d70c2d8b6968beb6b340ddde4aeb5a2ab6e72c470be84cfb05d36.scope: Deactivated successfully.
Nov 22 03:44:35 compute-0 podman[254180]: 2025-11-22 03:44:35.426030067 +0000 UTC m=+1.268632009 container died 0fe50ba5ed0d70c2d8b6968beb6b340ddde4aeb5a2ab6e72c470be84cfb05d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 03:44:35 compute-0 systemd[1]: libpod-0fe50ba5ed0d70c2d8b6968beb6b340ddde4aeb5a2ab6e72c470be84cfb05d36.scope: Consumed 1.038s CPU time.
Nov 22 03:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcad103293197eedab7d1024aa2e8416fac0e9301501ed4c713d66e960e6fb07-merged.mount: Deactivated successfully.
Nov 22 03:44:35 compute-0 podman[254180]: 2025-11-22 03:44:35.490887879 +0000 UTC m=+1.333489821 container remove 0fe50ba5ed0d70c2d8b6968beb6b340ddde4aeb5a2ab6e72c470be84cfb05d36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_villani, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:44:35 compute-0 systemd[1]: libpod-conmon-0fe50ba5ed0d70c2d8b6968beb6b340ddde4aeb5a2ab6e72c470be84cfb05d36.scope: Deactivated successfully.
Nov 22 03:44:35 compute-0 sudo[254073]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:35 compute-0 sudo[254240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:44:35 compute-0 sudo[254240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:35 compute-0 sudo[254240]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:35 compute-0 sudo[254265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:44:35 compute-0 sudo[254265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:35 compute-0 sudo[254265]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:35 compute-0 sudo[254290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:44:35 compute-0 sudo[254290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:35 compute-0 sudo[254290]: pam_unix(sudo:session): session closed for user root
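The sudo triplets above are cephadm's host probe pattern: a passwordless-sudo check (/bin/true), locating the interpreter (which python3), then another /bin/true before the real cephadm invocation that follows. The sequence is easy to reproduce when debugging host access; a minimal sketch, assuming SSH key access as the ceph-admin user (the host name is a placeholder):

    import subprocess

    def probe_host(host, user="ceph-admin"):
        # Mirror the probe sequence in the sudo lines above.
        base = ["ssh", f"{user}@{host}", "sudo"]
        subprocess.run(base + ["/bin/true"], check=True)           # passwordless sudo?
        which = subprocess.run(base + ["/bin/which", "python3"],   # interpreter path
                               check=True, capture_output=True, text=True)
        return which.stdout.strip()

    print(probe_host("compute-0"))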
Nov 22 03:44:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:35 compute-0 sudo[254315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:44:35 compute-0 sudo[254315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:44:36
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['cephfs.cephfs.data', '.mgr', 'backups', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'images', 'vms', '.rgw.root', 'volumes']
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
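The balancer pass above ran in upmap mode with a 5% misplaced ceiling and prepared 0 of 10 possible changes, i.e. the 305 PGs are already balanced. The module's current state can be queried directly; a minimal sketch, assuming a working admin keyring on the host:

    import json
    import subprocess

    # 'ceph balancer status' reports the same mode/activity the mgr logs above.
    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "-f", "json"],
        check=True, capture_output=True, text=True).stdout)
    print("active:", status.get("active"), "mode:", status.get("mode"))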
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:44:36 compute-0 podman[254381]: 2025-11-22 03:44:36.266036644 +0000 UTC m=+0.052955922 container create b5754ba6aa717728a9fead21c428f017b2e0d1eaab368a69272d6370ad436c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:44:36 compute-0 systemd[1]: Started libpod-conmon-b5754ba6aa717728a9fead21c428f017b2e0d1eaab368a69272d6370ad436c0e.scope.
Nov 22 03:44:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:44:36 compute-0 podman[254381]: 2025-11-22 03:44:36.248318814 +0000 UTC m=+0.035238072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:44:36 compute-0 podman[254381]: 2025-11-22 03:44:36.358185011 +0000 UTC m=+0.145104269 container init b5754ba6aa717728a9fead21c428f017b2e0d1eaab368a69272d6370ad436c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:44:36 compute-0 podman[254381]: 2025-11-22 03:44:36.369530007 +0000 UTC m=+0.156449295 container start b5754ba6aa717728a9fead21c428f017b2e0d1eaab368a69272d6370ad436c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:44:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
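The doubled load_schedules lines come from the rbd_support module's two handlers (MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler) each reloading schedules for the rbd pools (vms, volumes, backups, images). The configured schedules can be listed from the CLI; a minimal sketch, assuming the rbd client and an admin keyring are available:

    import subprocess

    # List the schedules the two handlers above just loaded; empty output
    # means none are configured.
    for cmd in (["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"],
                ["rbd", "trash", "purge", "schedule", "ls", "--recursive"]):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=False)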
Nov 22 03:44:36 compute-0 podman[254381]: 2025-11-22 03:44:36.375057472 +0000 UTC m=+0.161976750 container attach b5754ba6aa717728a9fead21c428f017b2e0d1eaab368a69272d6370ad436c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:44:36 compute-0 pensive_varahamihira[254397]: 167 167
Nov 22 03:44:36 compute-0 systemd[1]: libpod-b5754ba6aa717728a9fead21c428f017b2e0d1eaab368a69272d6370ad436c0e.scope: Deactivated successfully.
Nov 22 03:44:36 compute-0 podman[254381]: 2025-11-22 03:44:36.377972745 +0000 UTC m=+0.164891993 container died b5754ba6aa717728a9fead21c428f017b2e0d1eaab368a69272d6370ad436c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:44:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a526130365f77c6e9b8d636bddcccf9267a6d341e9bb6d9b14e9c6ccf9e79d9-merged.mount: Deactivated successfully.
Nov 22 03:44:36 compute-0 podman[254381]: 2025-11-22 03:44:36.411889314 +0000 UTC m=+0.198808552 container remove b5754ba6aa717728a9fead21c428f017b2e0d1eaab368a69272d6370ad436c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_varahamihira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:44:36 compute-0 systemd[1]: libpod-conmon-b5754ba6aa717728a9fead21c428f017b2e0d1eaab368a69272d6370ad436c0e.scope: Deactivated successfully.
Nov 22 03:44:36 compute-0 podman[254420]: 2025-11-22 03:44:36.58294875 +0000 UTC m=+0.041625013 container create 68ed7b58d5f0dc9fa7124552708d98ac8a3529bb7843c7ddbc1a98268a9601ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:44:36 compute-0 systemd[1]: Started libpod-conmon-68ed7b58d5f0dc9fa7124552708d98ac8a3529bb7843c7ddbc1a98268a9601ad.scope.
Nov 22 03:44:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22591a372ddf2dbbff7a6f9a2aacef276fe369c31b4ff21111f4804cee02e90/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22591a372ddf2dbbff7a6f9a2aacef276fe369c31b4ff21111f4804cee02e90/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22591a372ddf2dbbff7a6f9a2aacef276fe369c31b4ff21111f4804cee02e90/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a22591a372ddf2dbbff7a6f9a2aacef276fe369c31b4ff21111f4804cee02e90/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:36 compute-0 podman[254420]: 2025-11-22 03:44:36.564608889 +0000 UTC m=+0.023285122 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:44:36 compute-0 podman[254420]: 2025-11-22 03:44:36.662038508 +0000 UTC m=+0.120714741 container init 68ed7b58d5f0dc9fa7124552708d98ac8a3529bb7843c7ddbc1a98268a9601ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:44:36 compute-0 podman[254420]: 2025-11-22 03:44:36.669049019 +0000 UTC m=+0.127725252 container start 68ed7b58d5f0dc9fa7124552708d98ac8a3529bb7843c7ddbc1a98268a9601ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:44:36 compute-0 podman[254420]: 2025-11-22 03:44:36.673256843 +0000 UTC m=+0.131933056 container attach 68ed7b58d5f0dc9fa7124552708d98ac8a3529bb7843c7ddbc1a98268a9601ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:44:36 compute-0 ceph-mon[75011]: pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:37 compute-0 objective_wilson[254436]: {
Nov 22 03:44:37 compute-0 objective_wilson[254436]:     "0": [
Nov 22 03:44:37 compute-0 objective_wilson[254436]:         {
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "devices": [
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "/dev/loop3"
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             ],
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_name": "ceph_lv0",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_size": "21470642176",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "name": "ceph_lv0",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "tags": {
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.cluster_name": "ceph",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.crush_device_class": "",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.encrypted": "0",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.osd_id": "0",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.type": "block",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.vdo": "0"
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             },
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "type": "block",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "vg_name": "ceph_vg0"
Nov 22 03:44:37 compute-0 objective_wilson[254436]:         }
Nov 22 03:44:37 compute-0 objective_wilson[254436]:     ],
Nov 22 03:44:37 compute-0 objective_wilson[254436]:     "1": [
Nov 22 03:44:37 compute-0 objective_wilson[254436]:         {
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "devices": [
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "/dev/loop4"
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             ],
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_name": "ceph_lv1",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_size": "21470642176",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "name": "ceph_lv1",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "tags": {
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.cluster_name": "ceph",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.crush_device_class": "",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.encrypted": "0",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.osd_id": "1",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.type": "block",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.vdo": "0"
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             },
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "type": "block",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "vg_name": "ceph_vg1"
Nov 22 03:44:37 compute-0 objective_wilson[254436]:         }
Nov 22 03:44:37 compute-0 objective_wilson[254436]:     ],
Nov 22 03:44:37 compute-0 objective_wilson[254436]:     "2": [
Nov 22 03:44:37 compute-0 objective_wilson[254436]:         {
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "devices": [
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "/dev/loop5"
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             ],
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_name": "ceph_lv2",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_size": "21470642176",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "name": "ceph_lv2",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "tags": {
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.cluster_name": "ceph",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.crush_device_class": "",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.encrypted": "0",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.osd_id": "2",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.type": "block",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:                 "ceph.vdo": "0"
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             },
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "type": "block",
Nov 22 03:44:37 compute-0 objective_wilson[254436]:             "vg_name": "ceph_vg2"
Nov 22 03:44:37 compute-0 objective_wilson[254436]:         }
Nov 22 03:44:37 compute-0 objective_wilson[254436]:     ]
Nov 22 03:44:37 compute-0 objective_wilson[254436]: }
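The JSON above is the output of the 'ceph-volume lvm list --format json' call requested at 03:44:35: a map of osd_id to the LVs backing it, with the OSD metadata carried as LV tags. A minimal sketch that condenses it to one line per OSD, assuming the payload was saved to a hypothetical lvm_list.json:

    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)

    # One line per OSD: id, LV path, physical device, osd fsid.
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} pv={lv['devices'][0]} "
                  f"osd_fsid={lv['tags']['ceph.osd_fsid']}")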
Nov 22 03:44:37 compute-0 systemd[1]: libpod-68ed7b58d5f0dc9fa7124552708d98ac8a3529bb7843c7ddbc1a98268a9601ad.scope: Deactivated successfully.
Nov 22 03:44:37 compute-0 podman[254420]: 2025-11-22 03:44:37.446956686 +0000 UTC m=+0.905632899 container died 68ed7b58d5f0dc9fa7124552708d98ac8a3529bb7843c7ddbc1a98268a9601ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilson, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 03:44:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-a22591a372ddf2dbbff7a6f9a2aacef276fe369c31b4ff21111f4804cee02e90-merged.mount: Deactivated successfully.
Nov 22 03:44:37 compute-0 podman[254420]: 2025-11-22 03:44:37.518633695 +0000 UTC m=+0.977309948 container remove 68ed7b58d5f0dc9fa7124552708d98ac8a3529bb7843c7ddbc1a98268a9601ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:44:37 compute-0 systemd[1]: libpod-conmon-68ed7b58d5f0dc9fa7124552708d98ac8a3529bb7843c7ddbc1a98268a9601ad.scope: Deactivated successfully.
Nov 22 03:44:37 compute-0 sudo[254315]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:37 compute-0 podman[254445]: 2025-11-22 03:44:37.580332351 +0000 UTC m=+0.092713249 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
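This health_status event records the periodic podman healthcheck for the multipathd container passing (health_failing_streak=0); per the embedded config_data, the test is the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/multipathd. The check can be re-run by hand; a minimal sketch, assuming podman on the host:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test and
    # exits 0 when healthy.
    rc = subprocess.run(["podman", "healthcheck", "run", "multipathd"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")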
Nov 22 03:44:37 compute-0 sudo[254473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:44:37 compute-0 sudo[254473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:37 compute-0 sudo[254473]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:37 compute-0 sudo[254499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:44:37 compute-0 sudo[254499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:37 compute-0 sudo[254499]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:37 compute-0 sudo[254524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:44:37 compute-0 sudo[254524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:37 compute-0 sudo[254524]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:37 compute-0 sudo[254549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:44:37 compute-0 sudo[254549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:38 compute-0 podman[254615]: 2025-11-22 03:44:38.175507641 +0000 UTC m=+0.041211836 container create 6c55cafddec1a95313594a283d45dac9e09eb31cb0220a11c441bce32e34fe6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:44:38 compute-0 systemd[1]: Started libpod-conmon-6c55cafddec1a95313594a283d45dac9e09eb31cb0220a11c441bce32e34fe6b.scope.
Nov 22 03:44:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:44:38 compute-0 podman[254615]: 2025-11-22 03:44:38.156277604 +0000 UTC m=+0.021981789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:44:38 compute-0 podman[254615]: 2025-11-22 03:44:38.255058643 +0000 UTC m=+0.120762808 container init 6c55cafddec1a95313594a283d45dac9e09eb31cb0220a11c441bce32e34fe6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:44:38 compute-0 podman[254615]: 2025-11-22 03:44:38.262984305 +0000 UTC m=+0.128688470 container start 6c55cafddec1a95313594a283d45dac9e09eb31cb0220a11c441bce32e34fe6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_booth, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:44:38 compute-0 podman[254615]: 2025-11-22 03:44:38.266544984 +0000 UTC m=+0.132249149 container attach 6c55cafddec1a95313594a283d45dac9e09eb31cb0220a11c441bce32e34fe6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_booth, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:44:38 compute-0 wonderful_booth[254631]: 167 167
Nov 22 03:44:38 compute-0 systemd[1]: libpod-6c55cafddec1a95313594a283d45dac9e09eb31cb0220a11c441bce32e34fe6b.scope: Deactivated successfully.
Nov 22 03:44:38 compute-0 podman[254615]: 2025-11-22 03:44:38.268065397 +0000 UTC m=+0.133769572 container died 6c55cafddec1a95313594a283d45dac9e09eb31cb0220a11c441bce32e34fe6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_booth, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ef291ad309d2c7ecc304f4a5335cd71e66ba80e9ecf9b91377604f67fd5417f-merged.mount: Deactivated successfully.
Nov 22 03:44:38 compute-0 podman[254615]: 2025-11-22 03:44:38.313997456 +0000 UTC m=+0.179701661 container remove 6c55cafddec1a95313594a283d45dac9e09eb31cb0220a11c441bce32e34fe6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:44:38 compute-0 systemd[1]: libpod-conmon-6c55cafddec1a95313594a283d45dac9e09eb31cb0220a11c441bce32e34fe6b.scope: Deactivated successfully.
Nov 22 03:44:38 compute-0 podman[254654]: 2025-11-22 03:44:38.506148417 +0000 UTC m=+0.062224329 container create eeac34ede9c088e91a7c7a9f25aa8a6c75303e6c10f188cb4e7dd2f0b877a50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dhawan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:44:38 compute-0 systemd[1]: Started libpod-conmon-eeac34ede9c088e91a7c7a9f25aa8a6c75303e6c10f188cb4e7dd2f0b877a50a.scope.
Nov 22 03:44:38 compute-0 podman[254654]: 2025-11-22 03:44:38.484668933 +0000 UTC m=+0.040744855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:44:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a0151a54010e91b38c415a3de2a20c2e2a9c0c7ddbccd80cd8c2bcec31f2d6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a0151a54010e91b38c415a3de2a20c2e2a9c0c7ddbccd80cd8c2bcec31f2d6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a0151a54010e91b38c415a3de2a20c2e2a9c0c7ddbccd80cd8c2bcec31f2d6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a0151a54010e91b38c415a3de2a20c2e2a9c0c7ddbccd80cd8c2bcec31f2d6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:44:38 compute-0 podman[254654]: 2025-11-22 03:44:38.619758922 +0000 UTC m=+0.175834834 container init eeac34ede9c088e91a7c7a9f25aa8a6c75303e6c10f188cb4e7dd2f0b877a50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dhawan, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:44:38 compute-0 podman[254654]: 2025-11-22 03:44:38.636787062 +0000 UTC m=+0.192862984 container start eeac34ede9c088e91a7c7a9f25aa8a6c75303e6c10f188cb4e7dd2f0b877a50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dhawan, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:44:38 compute-0 podman[254654]: 2025-11-22 03:44:38.641465759 +0000 UTC m=+0.197541712 container attach eeac34ede9c088e91a7c7a9f25aa8a6c75303e6c10f188cb4e7dd2f0b877a50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dhawan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:44:38 compute-0 ceph-mon[75011]: pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]: {
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "osd_id": 1,
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "type": "bluestore"
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:     },
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "osd_id": 0,
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "type": "bluestore"
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:     },
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "osd_id": 2,
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:         "type": "bluestore"
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]:     }
Nov 22 03:44:39 compute-0 crazy_dhawan[254670]: }
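The JSON block above is the device inventory cephadm gathers by running ceph-volume inside a short-lived container (here named crazy_dhawan by podman); the mon_command entries a few lines below show the mgr persisting the result under the mgr/cephadm/host.compute-0.devices.0 config-key. A minimal sketch of consuming that shape in Python, trimmed to one OSD, with field names exactly as logged:

    import json

    # One entry of the osd_uuid -> metadata mapping printed above.
    payload = json.loads('''{
      "8bea6992-7a26-4e04-a61e-1d348ad79289": {
        "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
        "type": "bluestore"
      }
    }''')

    # Map osd_id -> backing LV, e.g. {0: '/dev/mapper/ceph_vg0-ceph_lv0'}.
    devices = {e["osd_id"]: e["device"] for e in payload.values()}
    print(devices)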
Nov 22 03:44:39 compute-0 systemd[1]: libpod-eeac34ede9c088e91a7c7a9f25aa8a6c75303e6c10f188cb4e7dd2f0b877a50a.scope: Deactivated successfully.
Nov 22 03:44:39 compute-0 podman[254654]: 2025-11-22 03:44:39.670588724 +0000 UTC m=+1.226664626 container died eeac34ede9c088e91a7c7a9f25aa8a6c75303e6c10f188cb4e7dd2f0b877a50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dhawan, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:44:39 compute-0 systemd[1]: libpod-eeac34ede9c088e91a7c7a9f25aa8a6c75303e6c10f188cb4e7dd2f0b877a50a.scope: Consumed 1.036s CPU time.
Nov 22 03:44:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a0151a54010e91b38c415a3de2a20c2e2a9c0c7ddbccd80cd8c2bcec31f2d6c-merged.mount: Deactivated successfully.
Nov 22 03:44:39 compute-0 podman[254654]: 2025-11-22 03:44:39.734866202 +0000 UTC m=+1.290942084 container remove eeac34ede9c088e91a7c7a9f25aa8a6c75303e6c10f188cb4e7dd2f0b877a50a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 03:44:39 compute-0 systemd[1]: libpod-conmon-eeac34ede9c088e91a7c7a9f25aa8a6c75303e6c10f188cb4e7dd2f0b877a50a.scope: Deactivated successfully.
Nov 22 03:44:39 compute-0 sudo[254549]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:44:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:44:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:44:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:44:39 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 4abbea7a-a2ac-4310-9c67-e7bc50ebb6da does not exist
Nov 22 03:44:39 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev a17b854a-8159-43e2-bf4c-5e56d35c9a7e does not exist
Nov 22 03:44:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:39 compute-0 sudo[254714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:44:39 compute-0 sudo[254714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:39 compute-0 sudo[254714]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:39 compute-0 sudo[254739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:44:39 compute-0 sudo[254739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:44:39 compute-0 sudo[254739]: pam_unix(sudo:session): session closed for user root
Nov 22 03:44:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:44:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:44:40 compute-0 ceph-mon[75011]: pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:42 compute-0 ceph-mon[75011]: pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:44:44 compute-0 ceph-mon[75011]: pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:44:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1758541557' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:44:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:44:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1758541557' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:44:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:44:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3164335226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:44:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:44:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3164335226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:44:45 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1758541557' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:44:45 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1758541557' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:44:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
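The pg_autoscaler pass above follows a simple rule: each pool's pg target is its usage ratio times its bias times the cluster-wide PG budget, then quantized to a power of two and left alone unless it diverges from the current pg_num by more than the configured threshold — hence every pool reporting "quantized to N (current N)" with no change. The budget here works out to 300, consistent with the default mon_target_pg_per_osd of 100 and the three OSDs listed earlier in this log. A quick check against the logged numbers:

    # pg_target = usage_ratio * bias * (mon_target_pg_per_osd * num_osds)
    pg_per_osd, num_osds = 100, 3   # assumed default, and the OSD count from this log

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * pg_per_osd * num_osds

    print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557... ('.mgr', as logged)
    print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104... ('cephfs.cephfs.meta')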
Nov 22 03:44:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:44:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3640418896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:44:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:44:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3640418896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:44:46 compute-0 ceph-mon[75011]: pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3164335226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:44:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3164335226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:44:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3640418896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:44:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3640418896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:44:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:48 compute-0 ceph-mon[75011]: pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:44:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:50 compute-0 ceph-mon[75011]: pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:52 compute-0 ceph-mon[75011]: pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:44:54 compute-0 ceph-mon[75011]: pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:55 compute-0 podman[254764]: 2025-11-22 03:44:55.413472661 +0000 UTC m=+0.086663686 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 03:44:55 compute-0 podman[254765]: 2025-11-22 03:44:55.458482275 +0000 UTC m=+0.129478532 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Nov 22 03:44:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:57 compute-0 ceph-mon[75011]: pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:59 compute-0 ceph-mon[75011]: pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:44:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:44:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:01 compute-0 ceph-mon[75011]: pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:03 compute-0 ceph-mon[75011]: pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:45:05 compute-0 ceph-mon[75011]: pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:45:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:45:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:45:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:45:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:45:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:45:07 compute-0 ceph-mon[75011]: pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:08 compute-0 podman[254807]: 2025-11-22 03:45:08.380861825 +0000 UTC m=+0.057886336 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 22 03:45:09 compute-0 ceph-mon[75011]: pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:45:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:11 compute-0 ceph-mon[75011]: pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:11 compute-0 nova_compute[253461]: 2025-11-22 03:45:11.767 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:45:11 compute-0 nova_compute[253461]: 2025-11-22 03:45:11.786 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:45:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:13 compute-0 ceph-mon[75011]: pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:45:15 compute-0 ceph-mon[75011]: pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:17 compute-0 ceph-mon[75011]: pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:45:19 compute-0 ceph-mon[75011]: pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:21 compute-0 ceph-mon[75011]: pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.431 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.431 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.431 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.452 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.453 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.453 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.454 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.454 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.454 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.455 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.455 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.455 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.494 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.494 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.495 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.495 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:45:22 compute-0 nova_compute[253461]: 2025-11-22 03:45:22.495 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:45:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:45:22.998 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:45:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:45:22.998 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:45:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:45:22.999 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:45:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:45:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1928608974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:45:23 compute-0 nova_compute[253461]: 2025-11-22 03:45:23.027 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
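This ceph df probe (and the matching handle_command/audit dispatch on the mon) is how nova's RBD image backend measures cluster capacity for the resource tracker. Roughly equivalent as a sketch — the top-level JSON field names are an assumption based on current Ceph output, not taken from this log:

    import json, subprocess

    # Same invocation nova-compute logs above.
    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])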
Nov 22 03:45:23 compute-0 nova_compute[253461]: 2025-11-22 03:45:23.196 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:45:23 compute-0 nova_compute[253461]: 2025-11-22 03:45:23.197 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5196MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:45:23 compute-0 nova_compute[253461]: 2025-11-22 03:45:23.197 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:45:23 compute-0 nova_compute[253461]: 2025-11-22 03:45:23.198 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:45:23 compute-0 ceph-mon[75011]: pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:23 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1928608974' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:45:23 compute-0 nova_compute[253461]: 2025-11-22 03:45:23.292 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:45:23 compute-0 nova_compute[253461]: 2025-11-22 03:45:23.292 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:45:23 compute-0 nova_compute[253461]: 2025-11-22 03:45:23.311 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:45:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:45:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3280487363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:45:23 compute-0 nova_compute[253461]: 2025-11-22 03:45:23.739 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:45:23 compute-0 nova_compute[253461]: 2025-11-22 03:45:23.746 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:45:23 compute-0 nova_compute[253461]: 2025-11-22 03:45:23.829 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:45:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:23 compute-0 nova_compute[253461]: 2025-11-22 03:45:23.895 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:45:23 compute-0 nova_compute[253461]: 2025-11-22 03:45:23.895 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
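The inventory reported unchanged at 03:45:23.829 is what Placement schedules against: per resource class, capacity is (total - reserved) * allocation_ratio. Worked out from the logged values:

    # Inventory exactly as logged for provider 62e18608-eaef-4f09-8e92-06d41e51d580.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB ~53.1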
Nov 22 03:45:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:45:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3280487363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:45:25 compute-0 ceph-mon[75011]: pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 22 03:45:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3434746483' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 03:45:25 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14343 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 03:45:25 compute-0 ceph-mgr[75294]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 03:45:25 compute-0 ceph-mgr[75294]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 03:45:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3434746483' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 03:45:26 compute-0 ceph-mon[75011]: from='client.14343 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 03:45:26 compute-0 podman[254871]: 2025-11-22 03:45:26.387256702 +0000 UTC m=+0.068254535 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 22 03:45:26 compute-0 podman[254872]: 2025-11-22 03:45:26.477194422 +0000 UTC m=+0.147067275 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:45:27 compute-0 ceph-mon[75011]: pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:45:29 compute-0 ceph-mon[75011]: pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:31 compute-0 ceph-mon[75011]: pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:33 compute-0 ceph-mon[75011]: pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:45:35 compute-0 ceph-mon[75011]: pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:45:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5758 writes, 24K keys, 5758 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5758 writes, 932 syncs, 6.18 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56036d49f1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 03:45:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:45:36
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', 'default.rgw.meta', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root']
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:45:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:45:37 compute-0 ceph-mon[75011]: pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:45:39 compute-0 ceph-mon[75011]: pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:39 compute-0 podman[254914]: 2025-11-22 03:45:39.395532664 +0000 UTC m=+0.069042799 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 03:45:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:39 compute-0 sudo[254934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:45:39 compute-0 sudo[254934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:39 compute-0 sudo[254934]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:40 compute-0 sudo[254959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:45:40 compute-0 sudo[254959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:40 compute-0 sudo[254959]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:40 compute-0 sudo[254984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:45:40 compute-0 sudo[254984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:40 compute-0 sudo[254984]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:40 compute-0 sudo[255009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 03:45:40 compute-0 sudo[255009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:40 compute-0 sudo[255009]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:45:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:45:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:45:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:45:40 compute-0 sudo[255054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:45:40 compute-0 sudo[255054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:40 compute-0 sudo[255054]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:40 compute-0 sudo[255079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:45:40 compute-0 sudo[255079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:40 compute-0 sudo[255079]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:40 compute-0 sudo[255104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:45:40 compute-0 sudo[255104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:40 compute-0 sudo[255104]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:40 compute-0 sudo[255129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:45:40 compute-0 sudo[255129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:45:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 6986 writes, 29K keys, 6986 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6986 writes, 1221 syncs, 5.72 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 271 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70430#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x56094de70dd0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Nov 22 03:45:41 compute-0 sudo[255129]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:45:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:45:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:45:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:45:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:45:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:45:41 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 0d056a0c-ed37-4fd1-a7d0-405e20483da3 does not exist
Nov 22 03:45:41 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 174041ce-abe8-4f10-8ff8-95cdd028a1a7 does not exist
Nov 22 03:45:41 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 8d62f19d-bef9-465c-a300-3e6db0534da9 does not exist
Nov 22 03:45:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:45:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:45:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Nov 22 03:45:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3001789333' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 03:45:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:45:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:45:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:45:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:45:41 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.14345 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 03:45:41 compute-0 ceph-mgr[75294]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 03:45:41 compute-0 ceph-mgr[75294]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Nov 22 03:45:41 compute-0 sudo[255185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:45:41 compute-0 ceph-mon[75011]: pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:45:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:45:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:45:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:45:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:45:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:45:41 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3001789333' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Nov 22 03:45:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:45:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:45:41 compute-0 sudo[255185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:41 compute-0 sudo[255185]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:41 compute-0 sudo[255210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:45:41 compute-0 sudo[255210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:41 compute-0 sudo[255210]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:41 compute-0 sudo[255235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:45:41 compute-0 sudo[255235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:41 compute-0 sudo[255235]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:41 compute-0 sudo[255260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:45:41 compute-0 sudo[255260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:42 compute-0 podman[255324]: 2025-11-22 03:45:42.159338604 +0000 UTC m=+0.058522040 container create f130d8e7195e7c21b307767c88762b8bf72aa562db280055ebb5e1fbf2eac1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shannon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:45:42 compute-0 systemd[1]: Started libpod-conmon-f130d8e7195e7c21b307767c88762b8bf72aa562db280055ebb5e1fbf2eac1b3.scope.
Nov 22 03:45:42 compute-0 podman[255324]: 2025-11-22 03:45:42.140173508 +0000 UTC m=+0.039356934 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:45:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:45:42 compute-0 podman[255324]: 2025-11-22 03:45:42.26237556 +0000 UTC m=+0.161559046 container init f130d8e7195e7c21b307767c88762b8bf72aa562db280055ebb5e1fbf2eac1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shannon, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:45:42 compute-0 podman[255324]: 2025-11-22 03:45:42.27301294 +0000 UTC m=+0.172196336 container start f130d8e7195e7c21b307767c88762b8bf72aa562db280055ebb5e1fbf2eac1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shannon, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:45:42 compute-0 hopeful_shannon[255341]: 167 167
Nov 22 03:45:42 compute-0 systemd[1]: libpod-f130d8e7195e7c21b307767c88762b8bf72aa562db280055ebb5e1fbf2eac1b3.scope: Deactivated successfully.
Nov 22 03:45:42 compute-0 conmon[255341]: conmon f130d8e7195e7c21b307 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f130d8e7195e7c21b307767c88762b8bf72aa562db280055ebb5e1fbf2eac1b3.scope/container/memory.events
Nov 22 03:45:42 compute-0 podman[255324]: 2025-11-22 03:45:42.288369818 +0000 UTC m=+0.187553324 container attach f130d8e7195e7c21b307767c88762b8bf72aa562db280055ebb5e1fbf2eac1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shannon, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:45:42 compute-0 podman[255324]: 2025-11-22 03:45:42.289011727 +0000 UTC m=+0.188195173 container died f130d8e7195e7c21b307767c88762b8bf72aa562db280055ebb5e1fbf2eac1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9bf98129b4e8c9ae1654b4ea08847365dd366c38e739c66175cf6a4fdf84af25-merged.mount: Deactivated successfully.
Nov 22 03:45:42 compute-0 podman[255324]: 2025-11-22 03:45:42.324449111 +0000 UTC m=+0.223632507 container remove f130d8e7195e7c21b307767c88762b8bf72aa562db280055ebb5e1fbf2eac1b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:45:42 compute-0 systemd[1]: libpod-conmon-f130d8e7195e7c21b307767c88762b8bf72aa562db280055ebb5e1fbf2eac1b3.scope: Deactivated successfully.
Nov 22 03:45:42 compute-0 ceph-mon[75011]: from='client.14345 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Nov 22 03:45:42 compute-0 podman[255365]: 2025-11-22 03:45:42.531510994 +0000 UTC m=+0.071131185 container create 410ea2e09257bfc18b8acd04f618ae5eec7ce8fa2e80a3ca3a3d146fb44e1710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 03:45:42 compute-0 systemd[1]: Started libpod-conmon-410ea2e09257bfc18b8acd04f618ae5eec7ce8fa2e80a3ca3a3d146fb44e1710.scope.
Nov 22 03:45:42 compute-0 podman[255365]: 2025-11-22 03:45:42.50026483 +0000 UTC m=+0.039885061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:45:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b57692f6fa6497ca64e26614e3cba82607d1f022e206273bb6024f3a2044074/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b57692f6fa6497ca64e26614e3cba82607d1f022e206273bb6024f3a2044074/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b57692f6fa6497ca64e26614e3cba82607d1f022e206273bb6024f3a2044074/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b57692f6fa6497ca64e26614e3cba82607d1f022e206273bb6024f3a2044074/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b57692f6fa6497ca64e26614e3cba82607d1f022e206273bb6024f3a2044074/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:42 compute-0 podman[255365]: 2025-11-22 03:45:42.658819195 +0000 UTC m=+0.198439436 container init 410ea2e09257bfc18b8acd04f618ae5eec7ce8fa2e80a3ca3a3d146fb44e1710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:45:42 compute-0 podman[255365]: 2025-11-22 03:45:42.667760055 +0000 UTC m=+0.207380236 container start 410ea2e09257bfc18b8acd04f618ae5eec7ce8fa2e80a3ca3a3d146fb44e1710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:45:42 compute-0 podman[255365]: 2025-11-22 03:45:42.671499453 +0000 UTC m=+0.211119704 container attach 410ea2e09257bfc18b8acd04f618ae5eec7ce8fa2e80a3ca3a3d146fb44e1710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 03:45:43 compute-0 ceph-mon[75011]: pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:43 compute-0 gifted_meninsky[255381]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:45:43 compute-0 gifted_meninsky[255381]: --> relative data size: 1.0
Nov 22 03:45:43 compute-0 gifted_meninsky[255381]: --> All data devices are unavailable
Nov 22 03:45:43 compute-0 systemd[1]: libpod-410ea2e09257bfc18b8acd04f618ae5eec7ce8fa2e80a3ca3a3d146fb44e1710.scope: Deactivated successfully.
Nov 22 03:45:43 compute-0 systemd[1]: libpod-410ea2e09257bfc18b8acd04f618ae5eec7ce8fa2e80a3ca3a3d146fb44e1710.scope: Consumed 1.203s CPU time.
Nov 22 03:45:43 compute-0 podman[255410]: 2025-11-22 03:45:43.95211652 +0000 UTC m=+0.027209758 container died 410ea2e09257bfc18b8acd04f618ae5eec7ce8fa2e80a3ca3a3d146fb44e1710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b57692f6fa6497ca64e26614e3cba82607d1f022e206273bb6024f3a2044074-merged.mount: Deactivated successfully.
Nov 22 03:45:44 compute-0 podman[255410]: 2025-11-22 03:45:44.01660804 +0000 UTC m=+0.091701258 container remove 410ea2e09257bfc18b8acd04f618ae5eec7ce8fa2e80a3ca3a3d146fb44e1710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:45:44 compute-0 systemd[1]: libpod-conmon-410ea2e09257bfc18b8acd04f618ae5eec7ce8fa2e80a3ca3a3d146fb44e1710.scope: Deactivated successfully.
Nov 22 03:45:44 compute-0 sudo[255260]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:44 compute-0 sudo[255425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:45:44 compute-0 sudo[255425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:44 compute-0 sudo[255425]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:45:44 compute-0 sudo[255450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:45:44 compute-0 sudo[255450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:44 compute-0 sudo[255450]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:44 compute-0 sudo[255475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:45:44 compute-0 sudo[255475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:44 compute-0 sudo[255475]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:44 compute-0 sudo[255500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:45:44 compute-0 sudo[255500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:44 compute-0 podman[255564]: 2025-11-22 03:45:44.826680094 +0000 UTC m=+0.085076930 container create f10c9f2e3bac52f0b046bd2bfaa87675cebc17e48af0ef300a90f0b55b0e0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:45:44 compute-0 podman[255564]: 2025-11-22 03:45:44.784806215 +0000 UTC m=+0.043203111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:45:44 compute-0 systemd[1]: Started libpod-conmon-f10c9f2e3bac52f0b046bd2bfaa87675cebc17e48af0ef300a90f0b55b0e0956.scope.
Nov 22 03:45:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:45:44 compute-0 podman[255564]: 2025-11-22 03:45:44.940330417 +0000 UTC m=+0.198727303 container init f10c9f2e3bac52f0b046bd2bfaa87675cebc17e48af0ef300a90f0b55b0e0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_elbakyan, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:45:44 compute-0 podman[255564]: 2025-11-22 03:45:44.952074201 +0000 UTC m=+0.210471037 container start f10c9f2e3bac52f0b046bd2bfaa87675cebc17e48af0ef300a90f0b55b0e0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:45:44 compute-0 podman[255564]: 2025-11-22 03:45:44.958551169 +0000 UTC m=+0.216948005 container attach f10c9f2e3bac52f0b046bd2bfaa87675cebc17e48af0ef300a90f0b55b0e0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_elbakyan, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:45:44 compute-0 infallible_elbakyan[255581]: 167 167
Nov 22 03:45:44 compute-0 systemd[1]: libpod-f10c9f2e3bac52f0b046bd2bfaa87675cebc17e48af0ef300a90f0b55b0e0956.scope: Deactivated successfully.
Nov 22 03:45:44 compute-0 podman[255564]: 2025-11-22 03:45:44.962125706 +0000 UTC m=+0.220522582 container died f10c9f2e3bac52f0b046bd2bfaa87675cebc17e48af0ef300a90f0b55b0e0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_elbakyan, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:45:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-6201c2e081de000c2c3a5ce027219f19c108c91c72838f092643160c73f4fce7-merged.mount: Deactivated successfully.
Nov 22 03:45:45 compute-0 podman[255564]: 2025-11-22 03:45:45.018255329 +0000 UTC m=+0.276652135 container remove f10c9f2e3bac52f0b046bd2bfaa87675cebc17e48af0ef300a90f0b55b0e0956 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_elbakyan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:45:45 compute-0 systemd[1]: libpod-conmon-f10c9f2e3bac52f0b046bd2bfaa87675cebc17e48af0ef300a90f0b55b0e0956.scope: Deactivated successfully.
Nov 22 03:45:45 compute-0 podman[255607]: 2025-11-22 03:45:45.246744074 +0000 UTC m=+0.081806089 container create c2c341078a343ffb247964ed2332057ab670b1d586f0d652ac8ac5de8f87c262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:45:45 compute-0 systemd[1]: Started libpod-conmon-c2c341078a343ffb247964ed2332057ab670b1d586f0d652ac8ac5de8f87c262.scope.
Nov 22 03:45:45 compute-0 podman[255607]: 2025-11-22 03:45:45.215572838 +0000 UTC m=+0.050634893 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:45:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4a24c0c6b6a95f3b2dd23f737a8c3d28cb85a058639fcf49c3a61f1fd6d2d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4a24c0c6b6a95f3b2dd23f737a8c3d28cb85a058639fcf49c3a61f1fd6d2d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4a24c0c6b6a95f3b2dd23f737a8c3d28cb85a058639fcf49c3a61f1fd6d2d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4a24c0c6b6a95f3b2dd23f737a8c3d28cb85a058639fcf49c3a61f1fd6d2d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:45 compute-0 podman[255607]: 2025-11-22 03:45:45.346825977 +0000 UTC m=+0.181887962 container init c2c341078a343ffb247964ed2332057ab670b1d586f0d652ac8ac5de8f87c262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:45:45 compute-0 podman[255607]: 2025-11-22 03:45:45.365015486 +0000 UTC m=+0.200077471 container start c2c341078a343ffb247964ed2332057ab670b1d586f0d652ac8ac5de8f87c262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:45:45 compute-0 podman[255607]: 2025-11-22 03:45:45.369094332 +0000 UTC m=+0.204156317 container attach c2c341078a343ffb247964ed2332057ab670b1d586f0d652ac8ac5de8f87c262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:45:45 compute-0 ceph-mon[75011]: pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:45:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5636 writes, 24K keys, 5636 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5636 writes, 869 syncs, 6.49 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.018       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.025       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.025       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.027       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a3090#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55936b0a31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 6.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
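[editor's note] The per-column-family dumps above end with identical BinnedLRUCache lines, and the "portion" percentages in the entry-stats line are just entry size over cache capacity. A minimal sketch reproducing that arithmetic from the logged values, assuming RocksDB's KB/GB are binary (1024-based) units as it normally prints them:

    # Reproduce the block-cache "portion" percentages from the lines above.
    KiB, GiB = 1024, 1024**3

    capacity   = 1.12 * GiB   # "capacity: 1.12 GB"
    data_block = 1.42 * KiB   # "DataBlock(3,1.42 KB,...)"

    portion = data_block / capacity * 100
    print(f"{portion:.6g}%")  # ~0.00012%, matching the logged 0.000120534%
                              # up to display rounding of the inputs

    # Sanity check: the per-type sizes sum to the reported usage.
    assert abs((1.42 + 0.33 + 0.34) - 2.09) < 1e-9  # "usage: 2.09 KB"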
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:45:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
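[editor's note] The pg_autoscaler lines above are reproducible arithmetic: each pool's raw PG target is its capacity ratio times its bias times a cluster-wide PG budget, after which the value is quantized to a power of two and small changes are suppressed (which is why a target of 0.0006 still reads "quantized to" the current 16 or 32). A hedged sketch of the raw-target step; the budget of 300 is an inference from the logged numbers (it equals the default mon_target_pg_per_osd of 100 times this host's three OSDs), not something the log states:

    # Reproduce the "pg target" values logged above.
    # (capacity_ratio, bias) pairs are copied from the log lines;
    # PG_BUDGET = 300 is an assumption, see note above.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
        ".rgw.root":          (2.5436283128215145e-07, 1.0),
    }
    PG_BUDGET = 300

    for name, (capacity_ratio, bias) in pools.items():
        raw_target = capacity_ratio * bias * PG_BUDGET
        print(f"{name}: pg target {raw_target}")
    # .mgr               -> 0.0021557249951162337 (matches the log)
    # cephfs.cephfs.meta -> 0.0006104707950771635 (matches the log)
    # .rgw.root          -> 7.63088493846454e-05  (matches, last digit
    #                       may differ from float rounding order)
    # The logged "quantized to N (current N)" then comes from rounding
    # to a power of two and keeping the current pg_num for small deltas.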
Nov 22 03:45:46 compute-0 silly_joliot[255623]: {
Nov 22 03:45:46 compute-0 silly_joliot[255623]:     "0": [
Nov 22 03:45:46 compute-0 silly_joliot[255623]:         {
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "devices": [
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "/dev/loop3"
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             ],
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_name": "ceph_lv0",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_size": "21470642176",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "name": "ceph_lv0",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "tags": {
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.cluster_name": "ceph",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.crush_device_class": "",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.encrypted": "0",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.osd_id": "0",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.type": "block",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.vdo": "0"
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             },
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "type": "block",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "vg_name": "ceph_vg0"
Nov 22 03:45:46 compute-0 silly_joliot[255623]:         }
Nov 22 03:45:46 compute-0 silly_joliot[255623]:     ],
Nov 22 03:45:46 compute-0 silly_joliot[255623]:     "1": [
Nov 22 03:45:46 compute-0 silly_joliot[255623]:         {
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "devices": [
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "/dev/loop4"
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             ],
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_name": "ceph_lv1",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_size": "21470642176",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "name": "ceph_lv1",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "tags": {
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.cluster_name": "ceph",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.crush_device_class": "",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.encrypted": "0",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.osd_id": "1",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.type": "block",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.vdo": "0"
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             },
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "type": "block",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "vg_name": "ceph_vg1"
Nov 22 03:45:46 compute-0 silly_joliot[255623]:         }
Nov 22 03:45:46 compute-0 silly_joliot[255623]:     ],
Nov 22 03:45:46 compute-0 silly_joliot[255623]:     "2": [
Nov 22 03:45:46 compute-0 silly_joliot[255623]:         {
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "devices": [
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "/dev/loop5"
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             ],
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_name": "ceph_lv2",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_size": "21470642176",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "name": "ceph_lv2",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "tags": {
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.cluster_name": "ceph",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.crush_device_class": "",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.encrypted": "0",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.osd_id": "2",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.type": "block",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:                 "ceph.vdo": "0"
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             },
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "type": "block",
Nov 22 03:45:46 compute-0 silly_joliot[255623]:             "vg_name": "ceph_vg2"
Nov 22 03:45:46 compute-0 silly_joliot[255623]:         }
Nov 22 03:45:46 compute-0 silly_joliot[255623]:     ]
Nov 22 03:45:46 compute-0 silly_joliot[255623]: }
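[editor's note] The JSON above is `ceph-volume lvm list --format json` output, keyed by OSD id; the container name silly_joliot is just podman's random name for the short-lived cephadm helper that produced it. If you need the OSD-to-LV mapping programmatically, a minimal parsing sketch (the filename osd_lvm.json is a placeholder for wherever you captured this output):

    import json

    # Placeholder path: a file holding the JSON document shown above.
    with open("osd_lvm.json") as f:
        lvm = json.load(f)

    for osd_id, lvs in lvm.items():
        for lv in lvs:
            # lv_tags is a comma-separated key=value string; split into a dict.
            tags = dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
            print(osd_id, lv["lv_path"], lv["devices"], tags["ceph.osd_fsid"])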
Nov 22 03:45:46 compute-0 systemd[1]: libpod-c2c341078a343ffb247964ed2332057ab670b1d586f0d652ac8ac5de8f87c262.scope: Deactivated successfully.
Nov 22 03:45:46 compute-0 podman[255607]: 2025-11-22 03:45:46.214310454 +0000 UTC m=+1.049372479 container died c2c341078a343ffb247964ed2332057ab670b1d586f0d652ac8ac5de8f87c262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e4a24c0c6b6a95f3b2dd23f737a8c3d28cb85a058639fcf49c3a61f1fd6d2d3-merged.mount: Deactivated successfully.
Nov 22 03:45:46 compute-0 podman[255607]: 2025-11-22 03:45:46.2902352 +0000 UTC m=+1.125297175 container remove c2c341078a343ffb247964ed2332057ab670b1d586f0d652ac8ac5de8f87c262 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:45:46 compute-0 systemd[1]: libpod-conmon-c2c341078a343ffb247964ed2332057ab670b1d586f0d652ac8ac5de8f87c262.scope: Deactivated successfully.
Nov 22 03:45:46 compute-0 sudo[255500]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:46 compute-0 sudo[255644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:45:46 compute-0 sudo[255644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:46 compute-0 sudo[255644]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:46 compute-0 sudo[255669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:45:46 compute-0 sudo[255669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:46 compute-0 sudo[255669]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:46 compute-0 sudo[255694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:45:46 compute-0 sudo[255694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:46 compute-0 sudo[255694]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:46 compute-0 sudo[255719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:45:46 compute-0 sudo[255719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:46 compute-0 podman[255785]: 2025-11-22 03:45:46.925768382 +0000 UTC m=+0.036541447 container create 1cde0c73296b98a1eef9e33d3092d982e7c7eb5cef46aa239252274d5581cc45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chaum, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:45:46 compute-0 systemd[1]: Started libpod-conmon-1cde0c73296b98a1eef9e33d3092d982e7c7eb5cef46aa239252274d5581cc45.scope.
Nov 22 03:45:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:45:47 compute-0 podman[255785]: 2025-11-22 03:45:46.909640785 +0000 UTC m=+0.020413840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:45:47 compute-0 podman[255785]: 2025-11-22 03:45:47.006823849 +0000 UTC m=+0.117596914 container init 1cde0c73296b98a1eef9e33d3092d982e7c7eb5cef46aa239252274d5581cc45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chaum, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:45:47 compute-0 podman[255785]: 2025-11-22 03:45:47.017340286 +0000 UTC m=+0.128113341 container start 1cde0c73296b98a1eef9e33d3092d982e7c7eb5cef46aa239252274d5581cc45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chaum, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 03:45:47 compute-0 epic_chaum[255801]: 167 167
Nov 22 03:45:47 compute-0 systemd[1]: libpod-1cde0c73296b98a1eef9e33d3092d982e7c7eb5cef46aa239252274d5581cc45.scope: Deactivated successfully.
Nov 22 03:45:47 compute-0 podman[255785]: 2025-11-22 03:45:47.021931165 +0000 UTC m=+0.132704220 container attach 1cde0c73296b98a1eef9e33d3092d982e7c7eb5cef46aa239252274d5581cc45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:45:47 compute-0 podman[255785]: 2025-11-22 03:45:47.022222045 +0000 UTC m=+0.132995100 container died 1cde0c73296b98a1eef9e33d3092d982e7c7eb5cef46aa239252274d5581cc45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chaum, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:45:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cce2c33194d5e44a7122c6fee30afe8d83f7fd6d5e1019e431dc6058cbe5628-merged.mount: Deactivated successfully.
Nov 22 03:45:47 compute-0 podman[255785]: 2025-11-22 03:45:47.061243706 +0000 UTC m=+0.172016761 container remove 1cde0c73296b98a1eef9e33d3092d982e7c7eb5cef46aa239252274d5581cc45 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_chaum, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:45:47 compute-0 systemd[1]: libpod-conmon-1cde0c73296b98a1eef9e33d3092d982e7c7eb5cef46aa239252274d5581cc45.scope: Deactivated successfully.
Nov 22 03:45:47 compute-0 podman[255823]: 2025-11-22 03:45:47.286159313 +0000 UTC m=+0.066899126 container create 50640969e89eb3552c08955ab772ff8dedd8a8903464b043429f7abd8c698fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:45:47 compute-0 systemd[1]: Started libpod-conmon-50640969e89eb3552c08955ab772ff8dedd8a8903464b043429f7abd8c698fbc.scope.
Nov 22 03:45:47 compute-0 podman[255823]: 2025-11-22 03:45:47.258927616 +0000 UTC m=+0.039667429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:45:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab67ec5935c0dc3220d2334ee513358f9f6e3a5fbc476674387f00af5593490/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab67ec5935c0dc3220d2334ee513358f9f6e3a5fbc476674387f00af5593490/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab67ec5935c0dc3220d2334ee513358f9f6e3a5fbc476674387f00af5593490/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab67ec5935c0dc3220d2334ee513358f9f6e3a5fbc476674387f00af5593490/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:45:47 compute-0 podman[255823]: 2025-11-22 03:45:47.370736817 +0000 UTC m=+0.151476590 container init 50640969e89eb3552c08955ab772ff8dedd8a8903464b043429f7abd8c698fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kepler, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:45:47 compute-0 podman[255823]: 2025-11-22 03:45:47.386252801 +0000 UTC m=+0.166992584 container start 50640969e89eb3552c08955ab772ff8dedd8a8903464b043429f7abd8c698fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kepler, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:45:47 compute-0 podman[255823]: 2025-11-22 03:45:47.390165428 +0000 UTC m=+0.170905221 container attach 50640969e89eb3552c08955ab772ff8dedd8a8903464b043429f7abd8c698fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kepler, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:45:47 compute-0 ceph-mon[75011]: pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:48 compute-0 objective_kepler[255840]: {
Nov 22 03:45:48 compute-0 objective_kepler[255840]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "osd_id": 1,
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "type": "bluestore"
Nov 22 03:45:48 compute-0 objective_kepler[255840]:     },
Nov 22 03:45:48 compute-0 objective_kepler[255840]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "osd_id": 0,
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "type": "bluestore"
Nov 22 03:45:48 compute-0 objective_kepler[255840]:     },
Nov 22 03:45:48 compute-0 objective_kepler[255840]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "osd_id": 2,
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:45:48 compute-0 objective_kepler[255840]:         "type": "bluestore"
Nov 22 03:45:48 compute-0 objective_kepler[255840]:     }
Nov 22 03:45:48 compute-0 objective_kepler[255840]: }
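[editor's note] This second JSON document is `ceph-volume raw list --format json` (the cephadm command logged by sudo just before it), keyed by OSD fsid rather than OSD id. Cross-checking it against the lvm listing above is a quick consistency test, since each osd_uuid here should equal the ceph.osd_fsid tag of the matching LV; a sketch under the same placeholder-filename assumption:

    import json

    with open("osd_lvm.json") as f:   # `lvm list` output (keyed by osd id)
        lvm = json.load(f)
    with open("osd_raw.json") as f:   # `raw list` output (keyed by osd fsid)
        raw = json.load(f)

    for fsid, dev in raw.items():
        lv = lvm[str(dev["osd_id"])][0]
        tags = dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
        # osd_uuid (raw list) and ceph.osd_fsid (lvm tags) must agree.
        assert tags["ceph.osd_fsid"] == fsid == dev["osd_uuid"]
        print(f"osd.{dev['osd_id']}: {dev['device']} ({dev['type']}) ok")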
Nov 22 03:45:48 compute-0 systemd[1]: libpod-50640969e89eb3552c08955ab772ff8dedd8a8903464b043429f7abd8c698fbc.scope: Deactivated successfully.
Nov 22 03:45:48 compute-0 podman[255823]: 2025-11-22 03:45:48.475968586 +0000 UTC m=+1.256708369 container died 50640969e89eb3552c08955ab772ff8dedd8a8903464b043429f7abd8c698fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 03:45:48 compute-0 systemd[1]: libpod-50640969e89eb3552c08955ab772ff8dedd8a8903464b043429f7abd8c698fbc.scope: Consumed 1.086s CPU time.
Nov 22 03:45:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ab67ec5935c0dc3220d2334ee513358f9f6e3a5fbc476674387f00af5593490-merged.mount: Deactivated successfully.
Nov 22 03:45:48 compute-0 podman[255823]: 2025-11-22 03:45:48.556075181 +0000 UTC m=+1.336814994 container remove 50640969e89eb3552c08955ab772ff8dedd8a8903464b043429f7abd8c698fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:45:48 compute-0 systemd[1]: libpod-conmon-50640969e89eb3552c08955ab772ff8dedd8a8903464b043429f7abd8c698fbc.scope: Deactivated successfully.
Nov 22 03:45:48 compute-0 sudo[255719]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:45:48 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:45:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:45:48 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:45:48 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 75f85d1a-5467-481f-b8e7-127d8937163e does not exist
Nov 22 03:45:48 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 6dde9a39-86f8-4a78-bcdf-531a9fc7b62e does not exist
Nov 22 03:45:48 compute-0 sudo[255885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:45:48 compute-0 sudo[255885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:48 compute-0 sudo[255885]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:48 compute-0 sudo[255910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:45:48 compute-0 sudo[255910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:45:48 compute-0 sudo[255910]: pam_unix(sudo:session): session closed for user root
Nov 22 03:45:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:45:49 compute-0 ceph-mon[75011]: pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:45:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:45:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:51 compute-0 ceph-mon[75011]: pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:51 compute-0 ceph-mgr[75294]: [devicehealth INFO root] Check health
Nov 22 03:45:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:53 compute-0 ceph-mon[75011]: pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:45:55 compute-0 ceph-mon[75011]: pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:57 compute-0 podman[255935]: 2025-11-22 03:45:57.443837944 +0000 UTC m=+0.113828436 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 22 03:45:57 compute-0 podman[255936]: 2025-11-22 03:45:57.463557456 +0000 UTC m=+0.133386571 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
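[editor's note] The health_status events above come from podman's periodic health checks (the healthcheck stanza in config_data mounts /var/lib/openstack/healthchecks/<name> into the container and runs /openstack/healthcheck; on systemd hosts the checks are typically driven by transient timer units). To re-run a check on demand, `podman healthcheck run` exits 0 for healthy; a small wrapper sketch:

    import subprocess

    def is_healthy(container: str) -> bool:
        """Re-run a container's configured healthcheck; exit code 0 = healthy."""
        result = subprocess.run(
            ["podman", "healthcheck", "run", container],
            capture_output=True, text=True,
        )
        return result.returncode == 0

    for name in ("ovn_metadata_agent", "ovn_controller"):
        print(name, "healthy" if is_healthy(name) else "unhealthy")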
Nov 22 03:45:57 compute-0 ceph-mon[75011]: pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:45:59 compute-0 ceph-mon[75011]: pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:45:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:46:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/422552639' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:46:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:46:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/422552639' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:46:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/422552639' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:46:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/422552639' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:46:01 compute-0 ceph-mon[75011]: pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:03 compute-0 ceph-mon[75011]: pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:46:05 compute-0 ceph-mon[75011]: pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:46:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:46:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:46:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:46:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:46:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:46:06 compute-0 ceph-mon[75011]: pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:08 compute-0 ceph-mon[75011]: pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:46:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:10 compute-0 podman[255977]: 2025-11-22 03:46:10.445844759 +0000 UTC m=+0.119931619 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:46:10 compute-0 ceph-mon[75011]: pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:12 compute-0 ceph-mon[75011]: pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:46:14 compute-0 ceph-mon[75011]: pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:16 compute-0 ceph-mon[75011]: pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:18 compute-0 ceph-mon[75011]: pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:46:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:20 compute-0 ceph-mon[75011]: pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:46:22.501 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:46:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:46:22.502 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 03:46:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:46:22.503 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:46:22 compute-0 ceph-mon[75011]: pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:46:22.999 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:46:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:46:23.000 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:46:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:46:23.000 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:46:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:23 compute-0 nova_compute[253461]: 2025-11-22 03:46:23.890 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:46:23 compute-0 nova_compute[253461]: 2025-11-22 03:46:23.891 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:46:23 compute-0 nova_compute[253461]: 2025-11-22 03:46:23.911 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:46:23 compute-0 nova_compute[253461]: 2025-11-22 03:46:23.911 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:46:23 compute-0 nova_compute[253461]: 2025-11-22 03:46:23.912 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:46:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.428 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.447 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.447 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.447 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.447 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.448 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.448 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
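
[note] Each "Running periodic task ComputeManager._..." line is oslo.service walking its registered periodic tasks; a task can still short-circuit on its own config, which is why _reclaim_queued_deletes logs "CONF.reclaim_instance_interval <= 0, skipping..." above. A hedged sketch of how such tasks are declared (manager class and spacing value are illustrative assumptions):

# Sketch of the oslo.service periodic-task machinery driving the lines above;
# the manager class and the spacing value are assumptions for illustration.
from oslo_service import periodic_task


class ComputeManager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _poll_volume_usage(self, context):
        # Invoked by run_periodic_tasks() roughly every `spacing` seconds;
        # each invocation is what produces one DEBUG line above.
        pass
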
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.472 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.472 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.472 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.472 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.473 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:46:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:46:24 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3085473252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:46:24 compute-0 nova_compute[253461]: 2025-11-22 03:46:24.910 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:46:24 compute-0 ceph-mon[75011]: pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3085473252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
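
[note] The resource tracker shells out to the ceph CLI through oslo.concurrency's processutils, which logs the command and its runtime (0.437s here); the ceph-mon audit lines are the same request arriving on the monitor side. A minimal equivalent, assuming the client.openstack keyring and conf path from the log are in place:

# Minimal equivalent of the "Running cmd (subprocess)" line above, assuming
# the client.openstack keyring and /etc/ceph/ceph.conf exist on this host.
import json

from oslo_concurrency import processutils

stdout, _stderr = processutils.execute(
    'ceph', 'df', '--format=json', '--id', 'openstack',
    '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(stdout)
print(stats['stats']['total_avail_bytes'])   # per-pool detail is under 'pools'
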
Nov 22 03:46:25 compute-0 nova_compute[253461]: 2025-11-22 03:46:25.096 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:46:25 compute-0 nova_compute[253461]: 2025-11-22 03:46:25.097 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5174MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:46:25 compute-0 nova_compute[253461]: 2025-11-22 03:46:25.098 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:46:25 compute-0 nova_compute[253461]: 2025-11-22 03:46:25.098 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:46:25 compute-0 nova_compute[253461]: 2025-11-22 03:46:25.179 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:46:25 compute-0 nova_compute[253461]: 2025-11-22 03:46:25.179 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:46:25 compute-0 nova_compute[253461]: 2025-11-22 03:46:25.207 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:46:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:46:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131917043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:46:25 compute-0 nova_compute[253461]: 2025-11-22 03:46:25.646 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:46:25 compute-0 nova_compute[253461]: 2025-11-22 03:46:25.653 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:46:25 compute-0 nova_compute[253461]: 2025-11-22 03:46:25.680 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:46:25 compute-0 nova_compute[253461]: 2025-11-22 03:46:25.682 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:46:25 compute-0 nova_compute[253461]: 2025-11-22 03:46:25.683 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
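
[note] The inventory dict logged at 03:46:25.680 is what placement actually schedules against: reserved capacity is subtracted from the total and each resource class is scaled by its allocation ratio. Working through the numbers in this log:

# Effective schedulable capacity implied by the inventory logged above.
inventory = {
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)
# MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 53.1 -- placement's usable capacity
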
Nov 22 03:46:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/131917043' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:46:27 compute-0 ceph-mon[75011]: pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:28 compute-0 podman[256041]: 2025-11-22 03:46:28.367184757 +0000 UTC m=+0.051239513 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:46:28 compute-0 podman[256042]: 2025-11-22 03:46:28.495688128 +0000 UTC m=+0.166975837 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true)
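
[note] The podman health_status events above come from each container's configured healthcheck (the mounted /openstack/healthcheck script visible in config_data). The same check can be triggered by hand; a sketch, using the container names taken from the log:

# Run a container's configured healthcheck on demand (exit code 0 == healthy).
# Container names are copied from the podman events above.
import subprocess

for name in ('ovn_metadata_agent', 'ovn_controller'):
    result = subprocess.run(['podman', 'healthcheck', 'run', name])
    print(name, 'healthy' if result.returncode == 0 else 'unhealthy')
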
Nov 22 03:46:29 compute-0 ceph-mon[75011]: pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:46:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:31 compute-0 ceph-mon[75011]: pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:33 compute-0 ceph-mon[75011]: pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:46:35 compute-0 ceph-mon[75011]: pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:46:36
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'volumes', '.mgr', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'images']
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:46:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:46:37 compute-0 ceph-mon[75011]: pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:39 compute-0 ceph-mon[75011]: pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:46:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:41 compute-0 ceph-mon[75011]: pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:41 compute-0 podman[256086]: 2025-11-22 03:46:41.391918913 +0000 UTC m=+0.076336847 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Nov 22 03:46:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:43 compute-0 ceph-mon[75011]: pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:46:45 compute-0 ceph-mon[75011]: pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:46:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:46:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
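
[note] The pg_autoscaler lines follow a simple rule: each pool's share of raw space, times its bias, times the cluster-wide PG budget, then quantized. The logged targets are reproduced exactly by assuming 3 OSDs at the default mon_target_pg_per_osd of 100 (both assumptions, inferred from the 60 GiB cluster, not stated in the log):

# Reproduce the pg_autoscaler "pg target" values logged above. Assumptions
# (inferred, not in the log): 3 OSDs, default mon_target_pg_per_osd = 100,
# so a cluster PG budget of 300.
pg_budget = 100 * 3

pools = {                       # (usage_ratio, bias) copied from the log
    '.mgr':               (7.185749983720779e-06, 1.0),
    'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),
    '.rgw.root':          (2.5436283128215145e-07, 1.0),
    'default.rgw.log':    (2.1620840658982875e-06, 1.0),
    'default.rgw.meta':   (1.2718141564107572e-07, 4.0),
}
for pool, (ratio, bias) in pools.items():
    print(pool, ratio * bias * pg_budget)   # matches the logged "pg target"
# The raw target is then quantized to a power of two, and pg_num only moves
# when it differs from the current value by more than the autoscaler's
# threshold -- which is why tiny targets stay "quantized to" 1 or 32 above.
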
Nov 22 03:46:47 compute-0 ceph-mon[75011]: pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.126539) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783207126576, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1436, "num_deletes": 251, "total_data_size": 2240757, "memory_usage": 2270864, "flush_reason": "Manual Compaction"}
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783207147522, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2197757, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15016, "largest_seqno": 16451, "table_properties": {"data_size": 2191126, "index_size": 3766, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13869, "raw_average_key_size": 19, "raw_value_size": 2177741, "raw_average_value_size": 3093, "num_data_blocks": 173, "num_entries": 704, "num_filter_entries": 704, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763783060, "oldest_key_time": 1763783060, "file_creation_time": 1763783207, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 21041 microseconds, and 7422 cpu microseconds.
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.147576) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2197757 bytes OK
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.147600) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.149381) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.149397) EVENT_LOG_v1 {"time_micros": 1763783207149392, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.149414) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2234414, prev total WAL file size 2234414, number of live WAL files 2.
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.150258) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2146KB)], [35(7309KB)]
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783207150288, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9682203, "oldest_snapshot_seqno": -1}
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4069 keys, 7893649 bytes, temperature: kUnknown
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783207207722, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7893649, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7864002, "index_size": 18401, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 99474, "raw_average_key_size": 24, "raw_value_size": 7787879, "raw_average_value_size": 1913, "num_data_blocks": 777, "num_entries": 4069, "num_filter_entries": 4069, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763783207, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.208067) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7893649 bytes
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.209904) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.3 rd, 137.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.1 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(8.0) write-amplify(3.6) OK, records in: 4583, records dropped: 514 output_compression: NoCompression
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.209937) EVENT_LOG_v1 {"time_micros": 1763783207209922, "job": 16, "event": "compaction_finished", "compaction_time_micros": 57536, "compaction_time_cpu_micros": 24128, "output_level": 6, "num_output_files": 1, "total_output_size": 7893649, "num_input_records": 4583, "num_output_records": 4069, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783207210886, "job": 16, "event": "table_file_deletion", "file_number": 37}
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783207213605, "job": 16, "event": "table_file_deletion", "file_number": 35}
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.150195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.213726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.213734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.213737) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.213740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:46:47 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:46:47.213743) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
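
[note] The compaction summary at 03:46:47.209904 carries its own amplification arithmetic: JOB 16 read a 2.1 MB L0 file plus a 7.1 MB L6 file and wrote 7.5 MB back to L6. A quick check of the logged factors:

# Verify the amplification factors from the JOB 16 summary above.
l0_in, l6_in, out = 2.1, 7.1, 7.5        # MB, as logged: in(2.1, 7.1) out(7.5)
write_amplify = out / l0_in                          # 7.5 / 2.1  ~= 3.6
read_write_amplify = (l0_in + l6_in + out) / l0_in   # 16.7 / 2.1 ~= 8.0
print(round(write_amplify, 1), round(read_write_amplify, 1))
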
Nov 22 03:46:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:46:48 compute-0 sudo[256106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:46:48 compute-0 sudo[256106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:48 compute-0 sudo[256106]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:48 compute-0 sudo[256131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:46:49 compute-0 sudo[256131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:49 compute-0 sudo[256131]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:49 compute-0 sudo[256156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:46:49 compute-0 sudo[256156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:49 compute-0 sudo[256156]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:49 compute-0 sudo[256181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:46:49 compute-0 sudo[256181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:46:49 compute-0 ceph-mon[75011]: pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:46:49 compute-0 sudo[256181]: pam_unix(sudo:session): session closed for user root
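
[note] The sudo[...] sequence above is cephadm's orchestrator probing the host: a no-op sudo /bin/true to confirm passwordless sudo, which python3 to locate an interpreter, then the pushed cephadm binary with gather-facts. A sketch of the same pre-flight, using the paths from the log (the long cephadm file name is the content-hash copy specific to this cluster):

# Sketch of cephadm's host pre-flight seen in the sudo lines above; the
# cephadm path is copied from the log and is specific to this cluster/host.
import subprocess

subprocess.run(['sudo', '/bin/true'], check=True)       # passwordless sudo?
py = subprocess.run(['sudo', '/bin/which', 'python3'],
                    check=True, capture_output=True, text=True).stdout.strip()
subprocess.run(['sudo', py,
                '/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/'
                'cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d',
                '--timeout', '895', 'gather-facts'], check=True)
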
Nov 22 03:46:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 03:46:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:46:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:46:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:46:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:46:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:46:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:46:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:46:49 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 88288b44-7ee7-4e27-a8e9-7944d0b69d41 does not exist
Nov 22 03:46:49 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev f2baf38a-b65b-4531-a553-f8bdb35b74fd does not exist
Nov 22 03:46:49 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev c6b2f4e4-effb-4ff9-b37c-10fbc334e747 does not exist
Nov 22 03:46:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:46:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:46:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:46:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:46:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:46:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
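
[note] Every handle_command/audit pair above is a monitor command arriving as a JSON blob. The same interface is exposed to clients through librados; a minimal sketch using the python-rados binding (shown with the harmless "df" command already seen in this log; the config rm / auth get calls above run with mgr-level caps):

# Issue a mon command as JSON, the same path the handle_command lines record.
# Assumes the client.openstack keyring used elsewhere in this log.
import json

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()
ret, outbuf, outs = cluster.mon_command(
    json.dumps({'prefix': 'df', 'format': 'json'}), b'')
assert ret == 0, outs
print(json.loads(outbuf)['stats']['total_avail_bytes'])
cluster.shutdown()
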
Nov 22 03:46:49 compute-0 sudo[256239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:46:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:46:49 compute-0 sudo[256239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:49 compute-0 sudo[256239]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:49 compute-0 sudo[256264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:46:49 compute-0 sudo[256264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:49 compute-0 sudo[256264]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:50 compute-0 sudo[256289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:46:50 compute-0 sudo[256289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:50 compute-0 sudo[256289]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:50 compute-0 sudo[256314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:46:50 compute-0 sudo[256314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:46:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:46:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:46:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:46:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:46:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:46:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:46:50 compute-0 podman[256379]: 2025-11-22 03:46:50.534131412 +0000 UTC m=+0.055546808 container create cf584ad3a40336585a2cc0fc100a950a239c6e0ccebcbeefc5975c69b2851b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bhaskara, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:46:50 compute-0 systemd[1]: Started libpod-conmon-cf584ad3a40336585a2cc0fc100a950a239c6e0ccebcbeefc5975c69b2851b63.scope.
Nov 22 03:46:50 compute-0 podman[256379]: 2025-11-22 03:46:50.502929931 +0000 UTC m=+0.024345397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:46:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:46:50 compute-0 podman[256379]: 2025-11-22 03:46:50.628966143 +0000 UTC m=+0.150381599 container init cf584ad3a40336585a2cc0fc100a950a239c6e0ccebcbeefc5975c69b2851b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:46:50 compute-0 podman[256379]: 2025-11-22 03:46:50.641790876 +0000 UTC m=+0.163206272 container start cf584ad3a40336585a2cc0fc100a950a239c6e0ccebcbeefc5975c69b2851b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Nov 22 03:46:50 compute-0 podman[256379]: 2025-11-22 03:46:50.646055785 +0000 UTC m=+0.167471241 container attach cf584ad3a40336585a2cc0fc100a950a239c6e0ccebcbeefc5975c69b2851b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:46:50 compute-0 sleepy_bhaskara[256395]: 167 167
Nov 22 03:46:50 compute-0 systemd[1]: libpod-cf584ad3a40336585a2cc0fc100a950a239c6e0ccebcbeefc5975c69b2851b63.scope: Deactivated successfully.
Nov 22 03:46:50 compute-0 podman[256379]: 2025-11-22 03:46:50.651880838 +0000 UTC m=+0.173296244 container died cf584ad3a40336585a2cc0fc100a950a239c6e0ccebcbeefc5975c69b2851b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:46:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-aac033f2e0e89f9656e5d6a3deb1d267cd0e31b9eda059582acf1069b36c47f6-merged.mount: Deactivated successfully.
Nov 22 03:46:50 compute-0 podman[256379]: 2025-11-22 03:46:50.706712859 +0000 UTC m=+0.228128255 container remove cf584ad3a40336585a2cc0fc100a950a239c6e0ccebcbeefc5975c69b2851b63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 03:46:50 compute-0 systemd[1]: libpod-conmon-cf584ad3a40336585a2cc0fc100a950a239c6e0ccebcbeefc5975c69b2851b63.scope: Deactivated successfully.
Nov 22 03:46:50 compute-0 podman[256419]: 2025-11-22 03:46:50.892384606 +0000 UTC m=+0.059997830 container create 062bdac435d45f9d47f839b5326778557d93d63632c24d3baf0d4dbd78820b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dijkstra, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:46:50 compute-0 systemd[1]: Started libpod-conmon-062bdac435d45f9d47f839b5326778557d93d63632c24d3baf0d4dbd78820b27.scope.
Nov 22 03:46:50 compute-0 podman[256419]: 2025-11-22 03:46:50.873112645 +0000 UTC m=+0.040725869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:46:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc5ea139b7f3824f783f4db73ec111e59b958544f16ee7a6e280423cf154cda3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc5ea139b7f3824f783f4db73ec111e59b958544f16ee7a6e280423cf154cda3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc5ea139b7f3824f783f4db73ec111e59b958544f16ee7a6e280423cf154cda3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc5ea139b7f3824f783f4db73ec111e59b958544f16ee7a6e280423cf154cda3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:46:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc5ea139b7f3824f783f4db73ec111e59b958544f16ee7a6e280423cf154cda3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:46:51 compute-0 podman[256419]: 2025-11-22 03:46:51.017868943 +0000 UTC m=+0.185482177 container init 062bdac435d45f9d47f839b5326778557d93d63632c24d3baf0d4dbd78820b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dijkstra, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:46:51 compute-0 podman[256419]: 2025-11-22 03:46:51.025941626 +0000 UTC m=+0.193554830 container start 062bdac435d45f9d47f839b5326778557d93d63632c24d3baf0d4dbd78820b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dijkstra, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:46:51 compute-0 podman[256419]: 2025-11-22 03:46:51.029624082 +0000 UTC m=+0.197237286 container attach 062bdac435d45f9d47f839b5326778557d93d63632c24d3baf0d4dbd78820b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dijkstra, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:46:51 compute-0 ceph-mon[75011]: pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:46:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:46:52 compute-0 strange_dijkstra[256436]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:46:52 compute-0 strange_dijkstra[256436]: --> relative data size: 1.0
Nov 22 03:46:52 compute-0 strange_dijkstra[256436]: --> All data devices are unavailable
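
The three strange_dijkstra lines above are ceph-volume's drive-group evaluation: the spec matched 3 LVM data devices, all already consumed, so none remain available for new OSDs. A sketch of how one might list rejection reasons from the host — an assumption, not taken from this log — mirroring the `cephadm ceph-volume -- <subcommand> --format json` invocation pattern seen below and using ceph-volume's inventory fields `available`/`rejected_reasons`:

    import json
    import subprocess

    # Assumption: executed on compute-0 as root, with cephadm on PATH.
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for dev in json.loads(out):
        if not dev.get("available", False):
            print(dev.get("path"), "rejected:", "; ".join(dev.get("rejected_reasons", [])))
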
Nov 22 03:46:52 compute-0 systemd[1]: libpod-062bdac435d45f9d47f839b5326778557d93d63632c24d3baf0d4dbd78820b27.scope: Deactivated successfully.
Nov 22 03:46:52 compute-0 systemd[1]: libpod-062bdac435d45f9d47f839b5326778557d93d63632c24d3baf0d4dbd78820b27.scope: Consumed 1.149s CPU time.
Nov 22 03:46:52 compute-0 podman[256419]: 2025-11-22 03:46:52.214705953 +0000 UTC m=+1.382319167 container died 062bdac435d45f9d47f839b5326778557d93d63632c24d3baf0d4dbd78820b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dijkstra, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc5ea139b7f3824f783f4db73ec111e59b958544f16ee7a6e280423cf154cda3-merged.mount: Deactivated successfully.
Nov 22 03:46:52 compute-0 podman[256419]: 2025-11-22 03:46:52.280888302 +0000 UTC m=+1.448501496 container remove 062bdac435d45f9d47f839b5326778557d93d63632c24d3baf0d4dbd78820b27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:46:52 compute-0 systemd[1]: libpod-conmon-062bdac435d45f9d47f839b5326778557d93d63632c24d3baf0d4dbd78820b27.scope: Deactivated successfully.
Nov 22 03:46:52 compute-0 sudo[256314]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:52 compute-0 sudo[256479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:46:52 compute-0 sudo[256479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:52 compute-0 sudo[256479]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:52 compute-0 sudo[256504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:46:52 compute-0 sudo[256504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:52 compute-0 sudo[256504]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:52 compute-0 sudo[256529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:46:52 compute-0 sudo[256529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:52 compute-0 sudo[256529]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:52 compute-0 sudo[256554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:46:52 compute-0 sudo[256554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:52 compute-0 podman[256621]: 2025-11-22 03:46:52.942643659 +0000 UTC m=+0.044636189 container create 8326e21522493da817cdb951fa39069a287f97feee95a1c0b5ad3e18ddda986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chebyshev, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:46:52 compute-0 systemd[1]: Started libpod-conmon-8326e21522493da817cdb951fa39069a287f97feee95a1c0b5ad3e18ddda986a.scope.
Nov 22 03:46:53 compute-0 podman[256621]: 2025-11-22 03:46:52.922398832 +0000 UTC m=+0.024391352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:46:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:46:53 compute-0 podman[256621]: 2025-11-22 03:46:53.055953465 +0000 UTC m=+0.157946045 container init 8326e21522493da817cdb951fa39069a287f97feee95a1c0b5ad3e18ddda986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chebyshev, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:46:53 compute-0 podman[256621]: 2025-11-22 03:46:53.065209169 +0000 UTC m=+0.167201708 container start 8326e21522493da817cdb951fa39069a287f97feee95a1c0b5ad3e18ddda986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chebyshev, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:46:53 compute-0 podman[256621]: 2025-11-22 03:46:53.069968375 +0000 UTC m=+0.171960915 container attach 8326e21522493da817cdb951fa39069a287f97feee95a1c0b5ad3e18ddda986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:46:53 compute-0 charming_chebyshev[256638]: 167 167
Nov 22 03:46:53 compute-0 systemd[1]: libpod-8326e21522493da817cdb951fa39069a287f97feee95a1c0b5ad3e18ddda986a.scope: Deactivated successfully.
Nov 22 03:46:53 compute-0 podman[256621]: 2025-11-22 03:46:53.073391401 +0000 UTC m=+0.175383911 container died 8326e21522493da817cdb951fa39069a287f97feee95a1c0b5ad3e18ddda986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 03:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-16826b4278b957723b29c1fe24cdb30445210e310346bd273deb64800d04df3c-merged.mount: Deactivated successfully.
Nov 22 03:46:53 compute-0 podman[256621]: 2025-11-22 03:46:53.115836557 +0000 UTC m=+0.217829057 container remove 8326e21522493da817cdb951fa39069a287f97feee95a1c0b5ad3e18ddda986a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chebyshev, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:46:53 compute-0 systemd[1]: libpod-conmon-8326e21522493da817cdb951fa39069a287f97feee95a1c0b5ad3e18ddda986a.scope: Deactivated successfully.
Nov 22 03:46:53 compute-0 ceph-mon[75011]: pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:46:53 compute-0 podman[256660]: 2025-11-22 03:46:53.38272812 +0000 UTC m=+0.061363167 container create bbff81ad6c855369f5ac159bc98c59a764616eb157f90fd3eda1ba0a0d963321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 03:46:53 compute-0 systemd[1]: Started libpod-conmon-bbff81ad6c855369f5ac159bc98c59a764616eb157f90fd3eda1ba0a0d963321.scope.
Nov 22 03:46:53 compute-0 podman[256660]: 2025-11-22 03:46:53.353823962 +0000 UTC m=+0.032459049 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:46:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b37148d68ff0c627b7d7ed9eed527ade8ffb22fb49eeac998e18b9eef6897f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b37148d68ff0c627b7d7ed9eed527ade8ffb22fb49eeac998e18b9eef6897f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b37148d68ff0c627b7d7ed9eed527ade8ffb22fb49eeac998e18b9eef6897f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4b37148d68ff0c627b7d7ed9eed527ade8ffb22fb49eeac998e18b9eef6897f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:46:53 compute-0 podman[256660]: 2025-11-22 03:46:53.484797399 +0000 UTC m=+0.163432496 container init bbff81ad6c855369f5ac159bc98c59a764616eb157f90fd3eda1ba0a0d963321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:46:53 compute-0 podman[256660]: 2025-11-22 03:46:53.496124682 +0000 UTC m=+0.174759729 container start bbff81ad6c855369f5ac159bc98c59a764616eb157f90fd3eda1ba0a0d963321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rhodes, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:46:53 compute-0 podman[256660]: 2025-11-22 03:46:53.499990624 +0000 UTC m=+0.178625751 container attach bbff81ad6c855369f5ac159bc98c59a764616eb157f90fd3eda1ba0a0d963321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rhodes, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:46:53 compute-0 rsyslogd[1007]: imjournal from <np0005531666:podman>: begin to drop messages due to rate-limiting
Nov 22 03:46:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:46:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]: {
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:     "0": [
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:         {
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "devices": [
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "/dev/loop3"
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             ],
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_name": "ceph_lv0",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_size": "21470642176",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "name": "ceph_lv0",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "tags": {
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.cluster_name": "ceph",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.crush_device_class": "",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.encrypted": "0",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.osd_id": "0",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.type": "block",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.vdo": "0"
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             },
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "type": "block",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "vg_name": "ceph_vg0"
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:         }
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:     ],
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:     "1": [
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:         {
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "devices": [
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "/dev/loop4"
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             ],
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_name": "ceph_lv1",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_size": "21470642176",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "name": "ceph_lv1",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "tags": {
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.cluster_name": "ceph",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.crush_device_class": "",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.encrypted": "0",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.osd_id": "1",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.type": "block",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.vdo": "0"
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             },
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "type": "block",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "vg_name": "ceph_vg1"
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:         }
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:     ],
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:     "2": [
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:         {
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "devices": [
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "/dev/loop5"
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             ],
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_name": "ceph_lv2",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_size": "21470642176",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "name": "ceph_lv2",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "tags": {
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.cluster_name": "ceph",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.crush_device_class": "",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.encrypted": "0",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.osd_id": "2",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.type": "block",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:                 "ceph.vdo": "0"
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             },
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "type": "block",
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:             "vg_name": "ceph_vg2"
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:         }
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]:     ]
Nov 22 03:46:54 compute-0 wonderful_rhodes[256676]: }
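
The JSON block emitted by wonderful_rhodes is the `ceph-volume lvm list --format json` result requested at 03:46:52: a map of OSD id to the logical volumes backing it, with the ceph.* LV tags. A minimal parsing sketch, assuming the payload has been saved to a hypothetical file lvm_list.json:

    import json

    # Assumption: the JSON above was captured to lvm_list.json.
    with open("lvm_list.json") as f:
        lvm = json.load(f)
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid={lv['tags']['ceph.osd_fsid']})")
    # -> osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid=8bea6992-...)
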
Nov 22 03:46:54 compute-0 systemd[1]: libpod-bbff81ad6c855369f5ac159bc98c59a764616eb157f90fd3eda1ba0a0d963321.scope: Deactivated successfully.
Nov 22 03:46:54 compute-0 podman[256660]: 2025-11-22 03:46:54.301839372 +0000 UTC m=+0.980474439 container died bbff81ad6c855369f5ac159bc98c59a764616eb157f90fd3eda1ba0a0d963321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:46:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4b37148d68ff0c627b7d7ed9eed527ade8ffb22fb49eeac998e18b9eef6897f-merged.mount: Deactivated successfully.
Nov 22 03:46:54 compute-0 podman[256660]: 2025-11-22 03:46:54.362726198 +0000 UTC m=+1.041361235 container remove bbff81ad6c855369f5ac159bc98c59a764616eb157f90fd3eda1ba0a0d963321 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:46:54 compute-0 systemd[1]: libpod-conmon-bbff81ad6c855369f5ac159bc98c59a764616eb157f90fd3eda1ba0a0d963321.scope: Deactivated successfully.
Nov 22 03:46:54 compute-0 sudo[256554]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:54 compute-0 sudo[256699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:46:54 compute-0 sudo[256699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:54 compute-0 sudo[256699]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:54 compute-0 sudo[256724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:46:54 compute-0 sudo[256724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:54 compute-0 sudo[256724]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:54 compute-0 sudo[256749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:46:54 compute-0 sudo[256749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:54 compute-0 sudo[256749]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:54 compute-0 sudo[256774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:46:54 compute-0 sudo[256774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:55 compute-0 podman[256840]: 2025-11-22 03:46:55.082947797 +0000 UTC m=+0.096241388 container create 6bef7b65726d313bb3b01b7c14751d53f2ef62b046c16086517a7fc8f4b91032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:46:55 compute-0 podman[256840]: 2025-11-22 03:46:55.017370396 +0000 UTC m=+0.030664056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:46:55 compute-0 systemd[1]: Started libpod-conmon-6bef7b65726d313bb3b01b7c14751d53f2ef62b046c16086517a7fc8f4b91032.scope.
Nov 22 03:46:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:46:55 compute-0 podman[256840]: 2025-11-22 03:46:55.218032466 +0000 UTC m=+0.231326117 container init 6bef7b65726d313bb3b01b7c14751d53f2ef62b046c16086517a7fc8f4b91032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:46:55 compute-0 podman[256840]: 2025-11-22 03:46:55.232202872 +0000 UTC m=+0.245496483 container start 6bef7b65726d313bb3b01b7c14751d53f2ef62b046c16086517a7fc8f4b91032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_pare, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:46:55 compute-0 cranky_pare[256856]: 167 167
Nov 22 03:46:55 compute-0 systemd[1]: libpod-6bef7b65726d313bb3b01b7c14751d53f2ef62b046c16086517a7fc8f4b91032.scope: Deactivated successfully.
Nov 22 03:46:55 compute-0 podman[256840]: 2025-11-22 03:46:55.27633501 +0000 UTC m=+0.289628801 container attach 6bef7b65726d313bb3b01b7c14751d53f2ef62b046c16086517a7fc8f4b91032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_pare, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:46:55 compute-0 podman[256840]: 2025-11-22 03:46:55.277542912 +0000 UTC m=+0.290836503 container died 6bef7b65726d313bb3b01b7c14751d53f2ef62b046c16086517a7fc8f4b91032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 03:46:55 compute-0 ceph-mon[75011]: pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:46:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-501aa2f3ce7269a21bc3c80581b75de869a6e0690b2240e1d6da3575f24a53a9-merged.mount: Deactivated successfully.
Nov 22 03:46:55 compute-0 podman[256840]: 2025-11-22 03:46:55.372379881 +0000 UTC m=+0.385673462 container remove 6bef7b65726d313bb3b01b7c14751d53f2ef62b046c16086517a7fc8f4b91032 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_pare, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:46:55 compute-0 systemd[1]: libpod-conmon-6bef7b65726d313bb3b01b7c14751d53f2ef62b046c16086517a7fc8f4b91032.scope: Deactivated successfully.
Nov 22 03:46:55 compute-0 podman[256881]: 2025-11-22 03:46:55.604104425 +0000 UTC m=+0.076443845 container create 77a390bdd91ef804268673577c25986f9b02b913bd9a9b3c1513a36a21f70a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:46:55 compute-0 podman[256881]: 2025-11-22 03:46:55.559740358 +0000 UTC m=+0.032079808 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:46:55 compute-0 systemd[1]: Started libpod-conmon-77a390bdd91ef804268673577c25986f9b02b913bd9a9b3c1513a36a21f70a9a.scope.
Nov 22 03:46:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e37cb802ce98f5635727cf41b12937f265cb94c16b48a637aee9074596b7fa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e37cb802ce98f5635727cf41b12937f265cb94c16b48a637aee9074596b7fa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e37cb802ce98f5635727cf41b12937f265cb94c16b48a637aee9074596b7fa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:46:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e37cb802ce98f5635727cf41b12937f265cb94c16b48a637aee9074596b7fa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:46:55 compute-0 podman[256881]: 2025-11-22 03:46:55.735136639 +0000 UTC m=+0.207476149 container init 77a390bdd91ef804268673577c25986f9b02b913bd9a9b3c1513a36a21f70a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:46:55 compute-0 podman[256881]: 2025-11-22 03:46:55.741357991 +0000 UTC m=+0.213697411 container start 77a390bdd91ef804268673577c25986f9b02b913bd9a9b3c1513a36a21f70a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hermann, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:46:55 compute-0 podman[256881]: 2025-11-22 03:46:55.745545027 +0000 UTC m=+0.217884517 container attach 77a390bdd91ef804268673577c25986f9b02b913bd9a9b3c1513a36a21f70a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hermann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:46:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]: {
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "osd_id": 1,
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "type": "bluestore"
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:     },
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "osd_id": 0,
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "type": "bluestore"
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:     },
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "osd_id": 2,
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:         "type": "bluestore"
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]:     }
Nov 22 03:46:56 compute-0 eloquent_hermann[256897]: }
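
`ceph-volume raw list` keys its result by OSD UUID rather than OSD id, so the two listings can be cross-checked against each other. A sketch under the same assumption (both payloads saved to hypothetical files):

    import json

    # Assumptions: lvm_list.json and raw_list.json hold the two JSON blocks above.
    lvm = json.load(open("lvm_list.json"))
    raw = json.load(open("raw_list.json"))
    uuid_to_osd = {lv["tags"]["ceph.osd_fsid"]: osd_id
                   for osd_id, lvs in lvm.items() for lv in lvs}
    for uuid, entry in raw.items():
        # raw list reports osd_id as an int; lvm list keys are strings.
        assert uuid_to_osd.get(uuid) == str(entry["osd_id"]), uuid
        print(f"osd.{entry['osd_id']} ({entry['type']}) -> {entry['device']}")
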
Nov 22 03:46:56 compute-0 systemd[1]: libpod-77a390bdd91ef804268673577c25986f9b02b913bd9a9b3c1513a36a21f70a9a.scope: Deactivated successfully.
Nov 22 03:46:56 compute-0 systemd[1]: libpod-77a390bdd91ef804268673577c25986f9b02b913bd9a9b3c1513a36a21f70a9a.scope: Consumed 1.074s CPU time.
Nov 22 03:46:56 compute-0 podman[256881]: 2025-11-22 03:46:56.807916464 +0000 UTC m=+1.280255894 container died 77a390bdd91ef804268673577c25986f9b02b913bd9a9b3c1513a36a21f70a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hermann, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 03:46:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e37cb802ce98f5635727cf41b12937f265cb94c16b48a637aee9074596b7fa3-merged.mount: Deactivated successfully.
Nov 22 03:46:56 compute-0 podman[256881]: 2025-11-22 03:46:56.891210218 +0000 UTC m=+1.363549638 container remove 77a390bdd91ef804268673577c25986f9b02b913bd9a9b3c1513a36a21f70a9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hermann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:46:56 compute-0 systemd[1]: libpod-conmon-77a390bdd91ef804268673577c25986f9b02b913bd9a9b3c1513a36a21f70a9a.scope: Deactivated successfully.
Nov 22 03:46:56 compute-0 sudo[256774]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:46:56 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:46:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:46:56 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:46:56 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 6c744c3a-fe1b-4ea4-8bfb-114183bede18 does not exist
Nov 22 03:46:56 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 4162b60e-16aa-4fb5-bdad-41a8e39e6060 does not exist
Nov 22 03:46:57 compute-0 sudo[256944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:46:57 compute-0 sudo[256944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:57 compute-0 sudo[256944]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:57 compute-0 sudo[256969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:46:57 compute-0 sudo[256969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:46:57 compute-0 sudo[256969]: pam_unix(sudo:session): session closed for user root
Nov 22 03:46:57 compute-0 ceph-mon[75011]: pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Nov 22 03:46:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:46:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:46:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 22 03:46:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:46:59 compute-0 ceph-mon[75011]: pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Nov 22 03:46:59 compute-0 podman[256994]: 2025-11-22 03:46:59.400526582 +0000 UTC m=+0.070512829 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 03:46:59 compute-0 podman[256995]: 2025-11-22 03:46:59.424803458 +0000 UTC m=+0.095084788 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
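
The two health_status records are podman's periodic healthcheck timers firing for ovn_metadata_agent and ovn_controller; each runs the mounted /openstack/healthcheck test and reports health_status=healthy. The same status can be polled from the host — a sketch; the Go-template path assumes a podman release that populates .State.Health (older builds expose it as .State.Healthcheck):

    import subprocess

    def health_status(container: str) -> str:
        # Assumption: podman version where `.State.Health.Status` is populated.
        return subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", container],
            capture_output=True, text=True, check=True,
        ).stdout.strip()

    for name in ("ovn_metadata_agent", "ovn_controller"):
        print(name, health_status(name))
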
Nov 22 03:46:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:47:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2324377769' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:47:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:47:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2324377769' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:47:01 compute-0 ceph-mon[75011]: pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2324377769' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:47:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2324377769' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
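
The df and "osd pool get-quota" dispatches from client.openstack above are ordinary mon commands; the librados Python binding can issue the identical JSON commands. A sketch assuming /etc/ceph/ceph.conf and the client.openstack keyring are readable on the host (the rados module ships in the python3-rados package):

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", json.loads(out) if ret == 0 else errs)
    finally:
        cluster.shutdown()
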
Nov 22 03:47:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:03 compute-0 ceph-mon[75011]: pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:47:05 compute-0 ceph-mon[75011]: pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:47:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:47:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:47:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:47:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:47:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:47:07 compute-0 ceph-mon[75011]: pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:47:09 compute-0 ceph-mon[75011]: pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:11 compute-0 ceph-mon[75011]: pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:12 compute-0 podman[257037]: 2025-11-22 03:47:12.410231081 +0000 UTC m=+0.082812169 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 03:47:13 compute-0 ceph-mon[75011]: pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:47:15 compute-0 ceph-mon[75011]: pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:17 compute-0 ceph-mon[75011]: pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:18 compute-0 ceph-mon[75011]: pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:47:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:20 compute-0 ceph-mon[75011]: pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:22 compute-0 ceph-mon[75011]: pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:47:23.000 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:47:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:47:23.001 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:47:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:47:23.001 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:47:23 compute-0 nova_compute[253461]: 2025-11-22 03:47:23.665 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:47:23 compute-0 nova_compute[253461]: 2025-11-22 03:47:23.666 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:47:23 compute-0 nova_compute[253461]: 2025-11-22 03:47:23.666 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:47:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:47:24 compute-0 nova_compute[253461]: 2025-11-22 03:47:24.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:47:24 compute-0 nova_compute[253461]: 2025-11-22 03:47:24.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:47:24 compute-0 nova_compute[253461]: 2025-11-22 03:47:24.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:47:24 compute-0 nova_compute[253461]: 2025-11-22 03:47:24.577 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 03:47:24 compute-0 nova_compute[253461]: 2025-11-22 03:47:24.579 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:47:24 compute-0 nova_compute[253461]: 2025-11-22 03:47:24.579 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:47:24 compute-0 nova_compute[253461]: 2025-11-22 03:47:24.579 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:47:24 compute-0 nova_compute[253461]: 2025-11-22 03:47:24.579 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 03:47:24 compute-0 ceph-mon[75011]: pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:26 compute-0 nova_compute[253461]: 2025-11-22 03:47:26.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:47:26 compute-0 nova_compute[253461]: 2025-11-22 03:47:26.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:47:26 compute-0 nova_compute[253461]: 2025-11-22 03:47:26.455 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:47:26 compute-0 nova_compute[253461]: 2025-11-22 03:47:26.456 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:47:26 compute-0 nova_compute[253461]: 2025-11-22 03:47:26.456 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:47:26 compute-0 nova_compute[253461]: 2025-11-22 03:47:26.457 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:47:26 compute-0 nova_compute[253461]: 2025-11-22 03:47:26.457 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:47:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:47:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2742122690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:47:26 compute-0 nova_compute[253461]: 2025-11-22 03:47:26.908 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:47:26 compute-0 ceph-mon[75011]: pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2742122690' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:47:27 compute-0 nova_compute[253461]: 2025-11-22 03:47:27.080 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:47:27 compute-0 nova_compute[253461]: 2025-11-22 03:47:27.081 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5171MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:47:27 compute-0 nova_compute[253461]: 2025-11-22 03:47:27.082 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:47:27 compute-0 nova_compute[253461]: 2025-11-22 03:47:27.082 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:47:27 compute-0 nova_compute[253461]: 2025-11-22 03:47:27.153 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:47:27 compute-0 nova_compute[253461]: 2025-11-22 03:47:27.154 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:47:27 compute-0 nova_compute[253461]: 2025-11-22 03:47:27.174 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:47:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:47:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1095913673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:47:27 compute-0 nova_compute[253461]: 2025-11-22 03:47:27.605 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:47:27 compute-0 nova_compute[253461]: 2025-11-22 03:47:27.612 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:47:27 compute-0 nova_compute[253461]: 2025-11-22 03:47:27.633 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:47:27 compute-0 nova_compute[253461]: 2025-11-22 03:47:27.636 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:47:27 compute-0 nova_compute[253461]: 2025-11-22 03:47:27.637 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:47:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1095913673' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:47:29 compute-0 ceph-mon[75011]: pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:47:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:30 compute-0 podman[257102]: 2025-11-22 03:47:30.381466141 +0000 UTC m=+0.057214133 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 03:47:30 compute-0 podman[257103]: 2025-11-22 03:47:30.43364569 +0000 UTC m=+0.100925670 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:47:31 compute-0 ceph-mon[75011]: pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:33 compute-0 ceph-mon[75011]: pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:47:35 compute-0 ceph-mon[75011]: pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:47:36
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'volumes', 'vms', 'images', 'cephfs.cephfs.meta']
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:47:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:47:37 compute-0 ceph-mon[75011]: pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:39 compute-0 ceph-mon[75011]: pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:47:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:41 compute-0 ceph-mon[75011]: pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:43 compute-0 podman[257146]: 2025-11-22 03:47:43.393562448 +0000 UTC m=+0.074808700 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 03:47:43 compute-0 ceph-mon[75011]: pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:47:45 compute-0 ceph-mon[75011]: pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:47:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:47:47 compute-0 ceph-mon[75011]: pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:47:49 compute-0 ceph-mon[75011]: pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:51 compute-0 ceph-mon[75011]: pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:53 compute-0 ceph-mon[75011]: pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:47:55 compute-0 ceph-mon[75011]: pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:57 compute-0 sudo[257166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:47:57 compute-0 sudo[257166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:47:57 compute-0 sudo[257166]: pam_unix(sudo:session): session closed for user root
Nov 22 03:47:57 compute-0 sudo[257191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:47:57 compute-0 sudo[257191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:47:57 compute-0 sudo[257191]: pam_unix(sudo:session): session closed for user root
Nov 22 03:47:57 compute-0 sudo[257216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:47:57 compute-0 sudo[257216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:47:57 compute-0 sudo[257216]: pam_unix(sudo:session): session closed for user root
Nov 22 03:47:57 compute-0 sudo[257241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 03:47:57 compute-0 sudo[257241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:47:57 compute-0 ceph-mon[75011]: pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:57 compute-0 podman[257334]: 2025-11-22 03:47:57.859107608 +0000 UTC m=+0.066422115 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:47:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:58 compute-0 podman[257334]: 2025-11-22 03:47:58.005834903 +0000 UTC m=+0.213149360 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:47:58 compute-0 sudo[257241]: pam_unix(sudo:session): session closed for user root
Nov 22 03:47:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:47:58 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:47:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:47:58 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:47:58 compute-0 sudo[257496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:47:58 compute-0 sudo[257496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:47:58 compute-0 sudo[257496]: pam_unix(sudo:session): session closed for user root
Nov 22 03:47:58 compute-0 sudo[257521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:47:58 compute-0 sudo[257521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:47:58 compute-0 sudo[257521]: pam_unix(sudo:session): session closed for user root
Nov 22 03:47:58 compute-0 sudo[257546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:47:58 compute-0 sudo[257546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:47:58 compute-0 sudo[257546]: pam_unix(sudo:session): session closed for user root
Nov 22 03:47:58 compute-0 sudo[257571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:47:58 compute-0 sudo[257571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:47:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:47:59 compute-0 sudo[257571]: pam_unix(sudo:session): session closed for user root
Nov 22 03:47:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:47:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:47:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:47:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:47:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:47:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:47:59 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 7bc189b1-2085-4adb-9780-2af9ac110e9e does not exist
Nov 22 03:47:59 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev c3560e1a-5b1f-47a1-97d5-8408354349ab does not exist
Nov 22 03:47:59 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev bb206e30-ae98-4735-9495-f23ce1f978b2 does not exist
Nov 22 03:47:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:47:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:47:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:47:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:47:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:47:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:47:59 compute-0 sudo[257627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:47:59 compute-0 sudo[257627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:47:59 compute-0 sudo[257627]: pam_unix(sudo:session): session closed for user root
Nov 22 03:47:59 compute-0 sudo[257652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:47:59 compute-0 sudo[257652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:47:59 compute-0 sudo[257652]: pam_unix(sudo:session): session closed for user root
Nov 22 03:47:59 compute-0 sudo[257677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:47:59 compute-0 sudo[257677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:47:59 compute-0 sudo[257677]: pam_unix(sudo:session): session closed for user root
Nov 22 03:47:59 compute-0 sudo[257702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:47:59 compute-0 sudo[257702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:47:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:59 compute-0 ceph-mon[75011]: pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:47:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:47:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:47:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:47:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:47:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:47:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:47:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:47:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:48:00 compute-0 podman[257769]: 2025-11-22 03:48:00.166565495 +0000 UTC m=+0.078584085 container create 345172a6c925c1dc9032d6aecf190e49dcf13a6cedb3dced2b9448ff7b41c6b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 03:48:00 compute-0 podman[257769]: 2025-11-22 03:48:00.113283099 +0000 UTC m=+0.025301758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:48:00 compute-0 systemd[1]: Started libpod-conmon-345172a6c925c1dc9032d6aecf190e49dcf13a6cedb3dced2b9448ff7b41c6b8.scope.
Nov 22 03:48:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:48:00 compute-0 podman[257769]: 2025-11-22 03:48:00.310806942 +0000 UTC m=+0.222825542 container init 345172a6c925c1dc9032d6aecf190e49dcf13a6cedb3dced2b9448ff7b41c6b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wing, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:48:00 compute-0 podman[257769]: 2025-11-22 03:48:00.319761702 +0000 UTC m=+0.231780282 container start 345172a6c925c1dc9032d6aecf190e49dcf13a6cedb3dced2b9448ff7b41c6b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wing, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:48:00 compute-0 naughty_wing[257786]: 167 167
Nov 22 03:48:00 compute-0 systemd[1]: libpod-345172a6c925c1dc9032d6aecf190e49dcf13a6cedb3dced2b9448ff7b41c6b8.scope: Deactivated successfully.
Nov 22 03:48:00 compute-0 conmon[257786]: conmon 345172a6c925c1dc9032 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-345172a6c925c1dc9032d6aecf190e49dcf13a6cedb3dced2b9448ff7b41c6b8.scope/container/memory.events
Nov 22 03:48:00 compute-0 podman[257769]: 2025-11-22 03:48:00.33937359 +0000 UTC m=+0.251392190 container attach 345172a6c925c1dc9032d6aecf190e49dcf13a6cedb3dced2b9448ff7b41c6b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wing, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:48:00 compute-0 podman[257769]: 2025-11-22 03:48:00.339749055 +0000 UTC m=+0.251767635 container died 345172a6c925c1dc9032d6aecf190e49dcf13a6cedb3dced2b9448ff7b41c6b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wing, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 03:48:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:48:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1700208224' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:48:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:48:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1700208224' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:48:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f218995c7fc8a4acaaa50bac45da03eee1da23c574d52aa56b4f8635a6dc3740-merged.mount: Deactivated successfully.
Nov 22 03:48:00 compute-0 podman[257769]: 2025-11-22 03:48:00.555741073 +0000 UTC m=+0.467759653 container remove 345172a6c925c1dc9032d6aecf190e49dcf13a6cedb3dced2b9448ff7b41c6b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_wing, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:48:00 compute-0 systemd[1]: libpod-conmon-345172a6c925c1dc9032d6aecf190e49dcf13a6cedb3dced2b9448ff7b41c6b8.scope: Deactivated successfully.
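The create/start/attach/died/remove records above are podman container events for a short-lived cephadm helper (naughty_wing). A minimal sketch that follows the same stream directly instead of reading it back from the journal, assuming a local podman CLI; the parsed field names are an assumption about `podman events --format json` output, hence the defensive .get():

    import json
    import subprocess

    # One JSON object per line; --since bounds how much backlog is replayed.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json", "--since", "5m"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Status"), ev.get("Name"))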
Nov 22 03:48:00 compute-0 podman[257803]: 2025-11-22 03:48:00.627246096 +0000 UTC m=+0.154572705 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:48:00 compute-0 podman[257804]: 2025-11-22 03:48:00.658336174 +0000 UTC m=+0.186964058 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
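Both health_status events come from podman's own healthcheck timer; per the config_data labels, the configured test is the '/openstack/healthcheck' script bind-mounted from /var/lib/openstack/healthchecks/<service>. A minimal sketch that triggers the same check on demand, assuming the two containers named above are running locally:

    import subprocess

    def is_healthy(name):
        # `podman healthcheck run` executes the container's configured test
        # command and exits 0 when the container reports healthy.
        return subprocess.run(["podman", "healthcheck", "run", name]).returncode == 0

    for name in ("ovn_controller", "ovn_metadata_agent"):
        print(name, "healthy" if is_healthy(name) else "unhealthy")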
Nov 22 03:48:00 compute-0 podman[257853]: 2025-11-22 03:48:00.765950768 +0000 UTC m=+0.085268522 container create e29522a823ec95fab8f58f70d809c1d90c57a9eb350445fd0d85d751b4ec8baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:48:00 compute-0 podman[257853]: 2025-11-22 03:48:00.704262558 +0000 UTC m=+0.023580342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:48:00 compute-0 systemd[1]: Started libpod-conmon-e29522a823ec95fab8f58f70d809c1d90c57a9eb350445fd0d85d751b4ec8baa.scope.
Nov 22 03:48:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d51c89b08c246620fc5dc23ff86c5f931005dc84021f0c0053452f125c7f192/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d51c89b08c246620fc5dc23ff86c5f931005dc84021f0c0053452f125c7f192/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d51c89b08c246620fc5dc23ff86c5f931005dc84021f0c0053452f125c7f192/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d51c89b08c246620fc5dc23ff86c5f931005dc84021f0c0053452f125c7f192/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d51c89b08c246620fc5dc23ff86c5f931005dc84021f0c0053452f125c7f192/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:00 compute-0 podman[257853]: 2025-11-22 03:48:00.931902722 +0000 UTC m=+0.251220506 container init e29522a823ec95fab8f58f70d809c1d90c57a9eb350445fd0d85d751b4ec8baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_borg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:48:00 compute-0 podman[257853]: 2025-11-22 03:48:00.938889789 +0000 UTC m=+0.258207543 container start e29522a823ec95fab8f58f70d809c1d90c57a9eb350445fd0d85d751b4ec8baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:48:00 compute-0 ceph-mon[75011]: pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1700208224' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:48:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1700208224' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:48:00 compute-0 podman[257853]: 2025-11-22 03:48:00.966679873 +0000 UTC m=+0.285997627 container attach e29522a823ec95fab8f58f70d809c1d90c57a9eb350445fd0d85d751b4ec8baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_borg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 03:48:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:01 compute-0 gracious_borg[257869]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:48:01 compute-0 gracious_borg[257869]: --> relative data size: 1.0
Nov 22 03:48:01 compute-0 gracious_borg[257869]: --> All data devices are unavailable
Nov 22 03:48:01 compute-0 systemd[1]: libpod-e29522a823ec95fab8f58f70d809c1d90c57a9eb350445fd0d85d751b4ec8baa.scope: Deactivated successfully.
Nov 22 03:48:01 compute-0 podman[257853]: 2025-11-22 03:48:01.949575192 +0000 UTC m=+1.268892966 container died e29522a823ec95fab8f58f70d809c1d90c57a9eb350445fd0d85d751b4ec8baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_borg, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Nov 22 03:48:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d51c89b08c246620fc5dc23ff86c5f931005dc84021f0c0053452f125c7f192-merged.mount: Deactivated successfully.
Nov 22 03:48:02 compute-0 podman[257853]: 2025-11-22 03:48:02.016012257 +0000 UTC m=+1.335330011 container remove e29522a823ec95fab8f58f70d809c1d90c57a9eb350445fd0d85d751b4ec8baa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:48:02 compute-0 systemd[1]: libpod-conmon-e29522a823ec95fab8f58f70d809c1d90c57a9eb350445fd0d85d751b4ec8baa.scope: Deactivated successfully.
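gracious_borg looks like a ceph-volume batch report run from cephadm's OSD reconcile loop: it saw 0 physical and 3 LVM data devices, and since all three LVs already back OSDs 0-2 (see the lvm list output below), it reported "All data devices are unavailable" and exited without creating anything. A sketch for confirming that from the orchestrator side, assuming the cephadm mgr module (`ceph orch`) is enabled on this cluster; 'default_drive_group' is the osdspec name carried in the LV tags below:

    import subprocess

    # The applied OSD service spec(s), e.g. default_drive_group.
    print(subprocess.check_output(["ceph", "orch", "ls", "osd", "--export"], text=True))
    # Device view: devices already consumed by OSDs show as unavailable.
    print(subprocess.check_output(["ceph", "orch", "device", "ls"], text=True))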
Nov 22 03:48:02 compute-0 sudo[257702]: pam_unix(sudo:session): session closed for user root
Nov 22 03:48:02 compute-0 sudo[257909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:48:02 compute-0 sudo[257909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:48:02 compute-0 sudo[257909]: pam_unix(sudo:session): session closed for user root
Nov 22 03:48:02 compute-0 sudo[257934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:48:02 compute-0 sudo[257934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:48:02 compute-0 sudo[257934]: pam_unix(sudo:session): session closed for user root
Nov 22 03:48:02 compute-0 sudo[257959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:48:02 compute-0 sudo[257959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:48:02 compute-0 sudo[257959]: pam_unix(sudo:session): session closed for user root
Nov 22 03:48:02 compute-0 sudo[257984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:48:02 compute-0 sudo[257984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:48:02 compute-0 podman[258049]: 2025-11-22 03:48:02.695889588 +0000 UTC m=+0.060346765 container create c707325760dc651305df6a4d088954c60d25020a5a789dd4227515ce493e9a37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_faraday, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:48:02 compute-0 systemd[1]: Started libpod-conmon-c707325760dc651305df6a4d088954c60d25020a5a789dd4227515ce493e9a37.scope.
Nov 22 03:48:02 compute-0 podman[258049]: 2025-11-22 03:48:02.661073767 +0000 UTC m=+0.025530964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:48:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:48:02 compute-0 podman[258049]: 2025-11-22 03:48:02.787952901 +0000 UTC m=+0.152410108 container init c707325760dc651305df6a4d088954c60d25020a5a789dd4227515ce493e9a37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_faraday, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:48:02 compute-0 podman[258049]: 2025-11-22 03:48:02.800961552 +0000 UTC m=+0.165418729 container start c707325760dc651305df6a4d088954c60d25020a5a789dd4227515ce493e9a37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_faraday, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:48:02 compute-0 podman[258049]: 2025-11-22 03:48:02.805156942 +0000 UTC m=+0.169614269 container attach c707325760dc651305df6a4d088954c60d25020a5a789dd4227515ce493e9a37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_faraday, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:48:02 compute-0 gifted_faraday[258065]: 167 167
Nov 22 03:48:02 compute-0 systemd[1]: libpod-c707325760dc651305df6a4d088954c60d25020a5a789dd4227515ce493e9a37.scope: Deactivated successfully.
Nov 22 03:48:02 compute-0 podman[258049]: 2025-11-22 03:48:02.806781847 +0000 UTC m=+0.171239034 container died c707325760dc651305df6a4d088954c60d25020a5a789dd4227515ce493e9a37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_faraday, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:48:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-72bc0126bea8059ee580a532401cae4aa461926888a58e316cdd10930f1ff70e-merged.mount: Deactivated successfully.
Nov 22 03:48:02 compute-0 podman[258049]: 2025-11-22 03:48:02.843928373 +0000 UTC m=+0.208385570 container remove c707325760dc651305df6a4d088954c60d25020a5a789dd4227515ce493e9a37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:48:02 compute-0 systemd[1]: libpod-conmon-c707325760dc651305df6a4d088954c60d25020a5a789dd4227515ce493e9a37.scope: Deactivated successfully.
Nov 22 03:48:02 compute-0 ceph-mon[75011]: pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:03 compute-0 podman[258088]: 2025-11-22 03:48:03.039371218 +0000 UTC m=+0.038848583 container create eccfe409a0549c353dd87cbefb5021404dfe6202c49700542766f2232d805e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:48:03 compute-0 systemd[1]: Started libpod-conmon-eccfe409a0549c353dd87cbefb5021404dfe6202c49700542766f2232d805e8a.scope.
Nov 22 03:48:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e12f3d181bea19d82b30364f0acb62bddbf269c6a7fc13537e1b80a33686caa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e12f3d181bea19d82b30364f0acb62bddbf269c6a7fc13537e1b80a33686caa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e12f3d181bea19d82b30364f0acb62bddbf269c6a7fc13537e1b80a33686caa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e12f3d181bea19d82b30364f0acb62bddbf269c6a7fc13537e1b80a33686caa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:03 compute-0 podman[258088]: 2025-11-22 03:48:03.110516281 +0000 UTC m=+0.109993676 container init eccfe409a0549c353dd87cbefb5021404dfe6202c49700542766f2232d805e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:48:03 compute-0 podman[258088]: 2025-11-22 03:48:03.117457869 +0000 UTC m=+0.116935233 container start eccfe409a0549c353dd87cbefb5021404dfe6202c49700542766f2232d805e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Nov 22 03:48:03 compute-0 podman[258088]: 2025-11-22 03:48:03.024255791 +0000 UTC m=+0.023733186 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:48:03 compute-0 podman[258088]: 2025-11-22 03:48:03.121032777 +0000 UTC m=+0.120510172 container attach eccfe409a0549c353dd87cbefb5021404dfe6202c49700542766f2232d805e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 03:48:03 compute-0 cool_joliot[258105]: {
Nov 22 03:48:03 compute-0 cool_joliot[258105]:     "0": [
Nov 22 03:48:03 compute-0 cool_joliot[258105]:         {
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "devices": [
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "/dev/loop3"
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             ],
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_name": "ceph_lv0",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_size": "21470642176",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "name": "ceph_lv0",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "tags": {
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.cluster_name": "ceph",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.crush_device_class": "",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.encrypted": "0",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.osd_id": "0",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.type": "block",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.vdo": "0"
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             },
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "type": "block",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "vg_name": "ceph_vg0"
Nov 22 03:48:03 compute-0 cool_joliot[258105]:         }
Nov 22 03:48:03 compute-0 cool_joliot[258105]:     ],
Nov 22 03:48:03 compute-0 cool_joliot[258105]:     "1": [
Nov 22 03:48:03 compute-0 cool_joliot[258105]:         {
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "devices": [
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "/dev/loop4"
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             ],
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_name": "ceph_lv1",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_size": "21470642176",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "name": "ceph_lv1",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "tags": {
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.cluster_name": "ceph",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.crush_device_class": "",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.encrypted": "0",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.osd_id": "1",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.type": "block",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.vdo": "0"
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             },
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "type": "block",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "vg_name": "ceph_vg1"
Nov 22 03:48:03 compute-0 cool_joliot[258105]:         }
Nov 22 03:48:03 compute-0 cool_joliot[258105]:     ],
Nov 22 03:48:03 compute-0 cool_joliot[258105]:     "2": [
Nov 22 03:48:03 compute-0 cool_joliot[258105]:         {
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "devices": [
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "/dev/loop5"
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             ],
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_name": "ceph_lv2",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_size": "21470642176",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "name": "ceph_lv2",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "tags": {
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.cluster_name": "ceph",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.crush_device_class": "",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.encrypted": "0",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.osd_id": "2",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.type": "block",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:                 "ceph.vdo": "0"
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             },
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "type": "block",
Nov 22 03:48:03 compute-0 cool_joliot[258105]:             "vg_name": "ceph_vg2"
Nov 22 03:48:03 compute-0 cool_joliot[258105]:         }
Nov 22 03:48:03 compute-0 cool_joliot[258105]:     ]
Nov 22 03:48:03 compute-0 cool_joliot[258105]: }
Nov 22 03:48:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:03 compute-0 systemd[1]: libpod-eccfe409a0549c353dd87cbefb5021404dfe6202c49700542766f2232d805e8a.scope: Deactivated successfully.
Nov 22 03:48:03 compute-0 podman[258088]: 2025-11-22 03:48:03.923082995 +0000 UTC m=+0.922560370 container died eccfe409a0549c353dd87cbefb5021404dfe6202c49700542766f2232d805e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:48:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e12f3d181bea19d82b30364f0acb62bddbf269c6a7fc13537e1b80a33686caa-merged.mount: Deactivated successfully.
Nov 22 03:48:03 compute-0 podman[258088]: 2025-11-22 03:48:03.990590839 +0000 UTC m=+0.990068214 container remove eccfe409a0549c353dd87cbefb5021404dfe6202c49700542766f2232d805e8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_joliot, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:48:04 compute-0 systemd[1]: libpod-conmon-eccfe409a0549c353dd87cbefb5021404dfe6202c49700542766f2232d805e8a.scope: Deactivated successfully.
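The JSON block above (cool_joliot) is the output of the dispatched `ceph-volume ... lvm list --format json`: a map of osd_id to the LVs backing it, with the ceph.* metadata duplicated in lv_tags (flat string) and tags (parsed dict). A minimal sketch that reduces a capture of it to an osd_id / device / osd_fsid table; feed the saved JSON on stdin:

    import json
    import sys

    listing = json.load(sys.stdin)  # {"0": [<lv dict>, ...], "1": [...], "2": [...]}
    for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(osd_id, lv["lv_path"], tags["ceph.osd_fsid"], tags["ceph.type"])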
Nov 22 03:48:04 compute-0 sudo[257984]: pam_unix(sudo:session): session closed for user root
Nov 22 03:48:04 compute-0 sudo[258128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:48:04 compute-0 sudo[258128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:48:04 compute-0 sudo[258128]: pam_unix(sudo:session): session closed for user root
Nov 22 03:48:04 compute-0 sudo[258153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:48:04 compute-0 sudo[258153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:48:04 compute-0 sudo[258153]: pam_unix(sudo:session): session closed for user root
Nov 22 03:48:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:48:04 compute-0 sudo[258178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:48:04 compute-0 sudo[258178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:48:04 compute-0 sudo[258178]: pam_unix(sudo:session): session closed for user root
Nov 22 03:48:04 compute-0 sudo[258203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:48:04 compute-0 sudo[258203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:48:04 compute-0 podman[258268]: 2025-11-22 03:48:04.744462531 +0000 UTC m=+0.057382406 container create e4fd7ae8b501755c7959f0ace831040b326483cee516dab22020145a3a94ad13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_saha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:48:04 compute-0 systemd[1]: Started libpod-conmon-e4fd7ae8b501755c7959f0ace831040b326483cee516dab22020145a3a94ad13.scope.
Nov 22 03:48:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:48:04 compute-0 podman[258268]: 2025-11-22 03:48:04.725952736 +0000 UTC m=+0.038872651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:48:04 compute-0 podman[258268]: 2025-11-22 03:48:04.823583429 +0000 UTC m=+0.136503324 container init e4fd7ae8b501755c7959f0ace831040b326483cee516dab22020145a3a94ad13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_saha, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:48:04 compute-0 podman[258268]: 2025-11-22 03:48:04.834129001 +0000 UTC m=+0.147048866 container start e4fd7ae8b501755c7959f0ace831040b326483cee516dab22020145a3a94ad13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:48:04 compute-0 podman[258268]: 2025-11-22 03:48:04.837226076 +0000 UTC m=+0.150146051 container attach e4fd7ae8b501755c7959f0ace831040b326483cee516dab22020145a3a94ad13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_saha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:48:04 compute-0 interesting_saha[258284]: 167 167
Nov 22 03:48:04 compute-0 systemd[1]: libpod-e4fd7ae8b501755c7959f0ace831040b326483cee516dab22020145a3a94ad13.scope: Deactivated successfully.
Nov 22 03:48:04 compute-0 podman[258268]: 2025-11-22 03:48:04.838457485 +0000 UTC m=+0.151377350 container died e4fd7ae8b501755c7959f0ace831040b326483cee516dab22020145a3a94ad13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_saha, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Nov 22 03:48:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e8900565e76b7e81bac8fcf49d0bc96ba2e690f552c2aabc369635e669a8e4b-merged.mount: Deactivated successfully.
Nov 22 03:48:04 compute-0 podman[258268]: 2025-11-22 03:48:04.883996201 +0000 UTC m=+0.196916076 container remove e4fd7ae8b501755c7959f0ace831040b326483cee516dab22020145a3a94ad13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_saha, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:48:04 compute-0 systemd[1]: libpod-conmon-e4fd7ae8b501755c7959f0ace831040b326483cee516dab22020145a3a94ad13.scope: Deactivated successfully.
Nov 22 03:48:04 compute-0 ceph-mon[75011]: pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:05 compute-0 podman[258309]: 2025-11-22 03:48:05.067286181 +0000 UTC m=+0.045837399 container create 119dbc5c4ad16643420f5ea1b5d135f4035f174e8adb8248ab220812de414b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sanderson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 03:48:05 compute-0 systemd[1]: Started libpod-conmon-119dbc5c4ad16643420f5ea1b5d135f4035f174e8adb8248ab220812de414b00.scope.
Nov 22 03:48:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9298d16231f3209935568aa49ab243e8604d0476953068c0b6e285a2d10deb2a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9298d16231f3209935568aa49ab243e8604d0476953068c0b6e285a2d10deb2a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9298d16231f3209935568aa49ab243e8604d0476953068c0b6e285a2d10deb2a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9298d16231f3209935568aa49ab243e8604d0476953068c0b6e285a2d10deb2a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:48:05 compute-0 podman[258309]: 2025-11-22 03:48:05.046010798 +0000 UTC m=+0.024562046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:48:05 compute-0 podman[258309]: 2025-11-22 03:48:05.157625417 +0000 UTC m=+0.136176735 container init 119dbc5c4ad16643420f5ea1b5d135f4035f174e8adb8248ab220812de414b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sanderson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:48:05 compute-0 podman[258309]: 2025-11-22 03:48:05.171151253 +0000 UTC m=+0.149702501 container start 119dbc5c4ad16643420f5ea1b5d135f4035f174e8adb8248ab220812de414b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:48:05 compute-0 podman[258309]: 2025-11-22 03:48:05.175644835 +0000 UTC m=+0.154196153 container attach 119dbc5c4ad16643420f5ea1b5d135f4035f174e8adb8248ab220812de414b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sanderson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 03:48:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:06 compute-0 zen_sanderson[258326]: {
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "osd_id": 1,
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "type": "bluestore"
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:     },
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "osd_id": 0,
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "type": "bluestore"
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:     },
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "osd_id": 2,
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:         "type": "bluestore"
Nov 22 03:48:06 compute-0 zen_sanderson[258326]:     }
Nov 22 03:48:06 compute-0 zen_sanderson[258326]: }
Nov 22 03:48:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:48:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:48:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:48:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:48:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:48:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:48:06 compute-0 systemd[1]: libpod-119dbc5c4ad16643420f5ea1b5d135f4035f174e8adb8248ab220812de414b00.scope: Deactivated successfully.
Nov 22 03:48:06 compute-0 systemd[1]: libpod-119dbc5c4ad16643420f5ea1b5d135f4035f174e8adb8248ab220812de414b00.scope: Consumed 1.046s CPU time.
Nov 22 03:48:06 compute-0 podman[258309]: 2025-11-22 03:48:06.207695048 +0000 UTC m=+1.186246266 container died 119dbc5c4ad16643420f5ea1b5d135f4035f174e8adb8248ab220812de414b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sanderson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:48:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9298d16231f3209935568aa49ab243e8604d0476953068c0b6e285a2d10deb2a-merged.mount: Deactivated successfully.
Nov 22 03:48:06 compute-0 podman[258309]: 2025-11-22 03:48:06.358139679 +0000 UTC m=+1.336690897 container remove 119dbc5c4ad16643420f5ea1b5d135f4035f174e8adb8248ab220812de414b00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:48:06 compute-0 systemd[1]: libpod-conmon-119dbc5c4ad16643420f5ea1b5d135f4035f174e8adb8248ab220812de414b00.scope: Deactivated successfully.
Nov 22 03:48:06 compute-0 sudo[258203]: pam_unix(sudo:session): session closed for user root
Nov 22 03:48:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:48:06 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:48:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:48:06 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:48:06 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 4895eead-4bd8-4f61-94c2-9a7ed558b723 does not exist
Nov 22 03:48:06 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev a323e107-1239-4a5f-9fdd-cf27c0bd9e09 does not exist
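[annotation] The mon_command lines at 03:48:06 show the cephadm mgr module caching the device inventory it just gathered under config-key entries such as mgr/cephadm/host.compute-0.devices.0. A sketch for reading that cached value back with the stock CLI; the key name is taken from the audit log above, and treating the stored value as JSON is an assumption about cephadm's storage format:

    import json
    import subprocess

    # Read back the per-host device inventory cephadm just stored (key from the log).
    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        check=True, capture_output=True, text=True,
    ).stdout
    devices = json.loads(out)          # assumption: cephadm stores JSON here
    print(json.dumps(devices, indent=2)[:500])  # peek at the cached inventory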
Nov 22 03:48:06 compute-0 sudo[258373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:48:06 compute-0 sudo[258373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:48:06 compute-0 sudo[258373]: pam_unix(sudo:session): session closed for user root
Nov 22 03:48:06 compute-0 sudo[258398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:48:06 compute-0 sudo[258398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:48:06 compute-0 sudo[258398]: pam_unix(sudo:session): session closed for user root
Nov 22 03:48:07 compute-0 ceph-mon[75011]: pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:07 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:48:07 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:48:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:48:09 compute-0 ceph-mon[75011]: pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:11 compute-0 ceph-mon[75011]: pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:13 compute-0 ceph-mon[75011]: pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:48:14 compute-0 podman[258423]: 2025-11-22 03:48:14.402113242 +0000 UTC m=+0.080698937 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd)
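[annotation] The health_status=healthy events come from podman periodically running the configured test ('/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/multipathd per the config_data above). The same check can be fired by hand; a minimal sketch:

    import subprocess

    # Trigger the same healthcheck podman's timer runs; exit code 0 == healthy.
    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"])
    print("healthy" if result.returncode == 0 else f"unhealthy (rc={result.returncode})")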
Nov 22 03:48:15 compute-0 ceph-mon[75011]: pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:17 compute-0 ceph-mon[75011]: pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:48:19 compute-0 ceph-mon[75011]: pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:21 compute-0 ceph-mon[75011]: pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:48:23.001 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:48:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:48:23.001 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:48:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:48:23.002 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
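[annotation] The three-line Acquiring / acquired / released pattern is oslo_concurrency.lockutils DEBUG tracing around neutron's ProcessMonitor._check_child_processes (the "inner" suffix is the wrapper produced by the synchronized decorator). A minimal sketch reproducing the same log shape with the real library; logger setup is elided, and the lock name is the one from the log:

    from oslo_concurrency import lockutils

    # Context-manager form; neutron itself uses the equivalent
    # @lockutils.synchronized("_check_child_processes") decorator.
    with lockutils.lock("_check_child_processes"):
        pass  # critical section: walk the monitored child processes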
Nov 22 03:48:23 compute-0 ceph-mon[75011]: pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:48:24 compute-0 ceph-mon[75011]: pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:25 compute-0 nova_compute[253461]: 2025-11-22 03:48:25.633 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:48:25 compute-0 nova_compute[253461]: 2025-11-22 03:48:25.633 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:48:25 compute-0 nova_compute[253461]: 2025-11-22 03:48:25.655 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:48:25 compute-0 nova_compute[253461]: 2025-11-22 03:48:25.656 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:48:25 compute-0 nova_compute[253461]: 2025-11-22 03:48:25.656 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:48:25 compute-0 nova_compute[253461]: 2025-11-22 03:48:25.672 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 03:48:25 compute-0 nova_compute[253461]: 2025-11-22 03:48:25.672 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:48:25 compute-0 nova_compute[253461]: 2025-11-22 03:48:25.673 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:48:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:26 compute-0 nova_compute[253461]: 2025-11-22 03:48:26.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:48:26 compute-0 nova_compute[253461]: 2025-11-22 03:48:26.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:48:26 compute-0 nova_compute[253461]: 2025-11-22 03:48:26.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:48:26 compute-0 nova_compute[253461]: 2025-11-22 03:48:26.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:48:26 compute-0 nova_compute[253461]: 2025-11-22 03:48:26.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 03:48:26 compute-0 nova_compute[253461]: 2025-11-22 03:48:26.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
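[annotation] The burst of "Running periodic task ComputeManager.*" lines is oslo_service's periodic task runner iterating every decorated method on nova's ComputeManager each cycle. A minimal sketch of how such a task is declared with the real decorator; the class name and spacing are illustrative, not nova's actual code:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        # Each decorated method shows up as "Running periodic task ..." in the log.
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # illustrative body; nova refreshes network info caches here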
Nov 22 03:48:26 compute-0 nova_compute[253461]: 2025-11-22 03:48:26.454 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:48:26 compute-0 nova_compute[253461]: 2025-11-22 03:48:26.454 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:48:26 compute-0 nova_compute[253461]: 2025-11-22 03:48:26.454 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:48:26 compute-0 nova_compute[253461]: 2025-11-22 03:48:26.454 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:48:26 compute-0 nova_compute[253461]: 2025-11-22 03:48:26.455 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:48:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:48:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4092486049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:48:26 compute-0 nova_compute[253461]: 2025-11-22 03:48:26.859 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
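[annotation] Nova's resource tracker shells out to exactly the command shown (`ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`, 0.404s here) to size the RBD-backed disk pool. A sketch of the same call, reading the cluster-wide totals from the standard `ceph df` JSON layout (the 60 GiB figure echoed in the pgmap lines):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    df = json.loads(out)

    stats = df["stats"]  # cluster-wide totals
    print(stats["total_bytes"], stats["total_avail_bytes"], stats["total_used_bytes"])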
Nov 22 03:48:26 compute-0 ceph-mon[75011]: pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4092486049' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:48:27 compute-0 nova_compute[253461]: 2025-11-22 03:48:27.078 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:48:27 compute-0 nova_compute[253461]: 2025-11-22 03:48:27.079 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5187MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:48:27 compute-0 nova_compute[253461]: 2025-11-22 03:48:27.080 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:48:27 compute-0 nova_compute[253461]: 2025-11-22 03:48:27.080 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:48:27 compute-0 nova_compute[253461]: 2025-11-22 03:48:27.147 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:48:27 compute-0 nova_compute[253461]: 2025-11-22 03:48:27.148 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:48:27 compute-0 nova_compute[253461]: 2025-11-22 03:48:27.165 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:48:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:48:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452843680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:48:27 compute-0 nova_compute[253461]: 2025-11-22 03:48:27.570 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:48:27 compute-0 nova_compute[253461]: 2025-11-22 03:48:27.577 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:48:27 compute-0 nova_compute[253461]: 2025-11-22 03:48:27.597 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
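[annotation] The inventory dict nova reports to placement determines schedulable capacity as (total - reserved) * allocation_ratio per resource class. A worked check against the exact values in the log line above:

    inv = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    # Placement's effective capacity formula.
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)
    # -> MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 53.1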
Nov 22 03:48:27 compute-0 nova_compute[253461]: 2025-11-22 03:48:27.599 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:48:27 compute-0 nova_compute[253461]: 2025-11-22 03:48:27.600 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:48:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:28 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1452843680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:48:29 compute-0 ceph-mon[75011]: pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:48:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:31 compute-0 ceph-mon[75011]: pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:31 compute-0 podman[258489]: 2025-11-22 03:48:31.396799189 +0000 UTC m=+0.069900799 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:48:31 compute-0 podman[258490]: 2025-11-22 03:48:31.440748053 +0000 UTC m=+0.111784747 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 03:48:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:33 compute-0 ceph-mon[75011]: pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:48:35 compute-0 ceph-mon[75011]: pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:48:36
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'volumes', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'images']
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
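[annotation] The balancer pass ran in upmap mode with max misplaced 0.05 and prepared 0 of a possible 10 changes: with all 305 PGs active+clean on this small cluster there is nothing to move. The same state can be queried from the CLI; a sketch, assuming the status JSON carries the usual mode/active fields:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status", "--format=json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(status["mode"], status["active"])  # e.g. "upmap" True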
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:48:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
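[annotation] The rbd_support module reloads trash-purge and mirror-snapshot schedules per RBD pool (vms, volumes, backups, images); with none defined, each load is a no-op. Configured schedules can be listed with the rbd CLI; a sketch:

    import subprocess

    # List any schedules across all pools (empty output on this cluster).
    subprocess.run(["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"])
    subprocess.run(["rbd", "trash", "purge", "schedule", "ls", "--recursive"])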
Nov 22 03:48:37 compute-0 ceph-mon[75011]: pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:39 compute-0 ceph-mon[75011]: pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:48:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:41 compute-0 ceph-mon[75011]: pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:43 compute-0 ceph-mon[75011]: pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:48:45 compute-0 ceph-mon[75011]: pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:45 compute-0 podman[258533]: 2025-11-22 03:48:45.417777534 +0000 UTC m=+0.093747149 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:48:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:48:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
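[annotation] The pg_autoscaler figures above follow from capacity_ratio * bias * (target PGs per OSD * OSD count), then quantization to a power of two subject to per-pool minimums. Assuming the default mon_target_pg_per_osd of 100 and the 3 OSDs listed earlier, the '.mgr' and 'cephfs.cephfs.meta' lines reproduce to the printed precision; a worked check:

    # Assumed cluster shape from the log: 3 OSDs, default 100 target PGs per OSD.
    TARGET = 100 * 3

    for pool, ratio, bias in [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(pool, ratio * bias * TARGET)
    # -> .mgr ~0.0021557249951162337, cephfs.cephfs.meta ~0.0006104707950771635,
    #    matching the log; both then clamp to the pool's floor ("quantized to 1"/"16").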
Nov 22 03:48:47 compute-0 ceph-mon[75011]: pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:49 compute-0 ceph-mon[75011]: pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:48:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:51 compute-0 ceph-mon[75011]: pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:53 compute-0 ceph-mon[75011]: pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:48:55 compute-0 ceph-mon[75011]: pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:57 compute-0 ceph-mon[75011]: pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:59 compute-0 ceph-mon[75011]: pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:48:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:48:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:49:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/425955591' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:49:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:49:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/425955591' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:49:01 compute-0 ceph-mon[75011]: pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/425955591' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:49:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/425955591' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:49:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:02 compute-0 podman[258553]: 2025-11-22 03:49:02.403660217 +0000 UTC m=+0.075423569 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 03:49:02 compute-0 podman[258554]: 2025-11-22 03:49:02.514384441 +0000 UTC m=+0.181172668 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 03:49:03 compute-0 ceph-mon[75011]: pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:49:05 compute-0 ceph-mon[75011]: pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:49:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:49:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:49:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:49:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:49:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:49:06 compute-0 sudo[258595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:49:06 compute-0 sudo[258595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:06 compute-0 sudo[258595]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:06 compute-0 sudo[258620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:49:06 compute-0 sudo[258620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:06 compute-0 sudo[258620]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:06 compute-0 sudo[258645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:49:06 compute-0 sudo[258645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:06 compute-0 sudo[258645]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:06 compute-0 sudo[258670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:49:06 compute-0 sudo[258670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:07 compute-0 ceph-mon[75011]: pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:07 compute-0 sudo[258670]: pam_unix(sudo:session): session closed for user root
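[annotation] The sudo line at 03:49:06 shows the mgr running the fsid-scoped cephadm binary with `gather-facts`, which emits a JSON dump of host facts (hostname, kernel, memory, NICs, disks) that cephadm stores per host. Run directly it prints the same JSON; a sketch, with key names as produced by current cephadm builds:

    import json
    import subprocess

    # Same subcommand the mgr invokes over SSH (the log shows the copy under
    # /var/lib/ceph/<fsid>/; the bare "cephadm" entry point behaves the same).
    facts = json.loads(subprocess.run(
        ["cephadm", "gather-facts"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(facts["hostname"], facts["kernel"], facts["memory_total_kb"])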
Nov 22 03:49:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:49:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:49:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:49:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:49:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:49:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:49:07 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 6d21540a-e487-4e1c-bcdc-1e56bdae7f22 does not exist
Nov 22 03:49:07 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 6c88d20d-06a1-42a4-9732-8a1ac4e0d749 does not exist
Nov 22 03:49:07 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 33a8695f-2b91-4f38-9b71-21cd6b48d879 does not exist
Nov 22 03:49:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:49:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:49:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:49:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:49:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:49:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:49:07 compute-0 sudo[258726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:49:07 compute-0 sudo[258726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:07 compute-0 sudo[258726]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:07 compute-0 sudo[258751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:49:07 compute-0 sudo[258751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:07 compute-0 sudo[258751]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:07 compute-0 sudo[258776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:49:07 compute-0 sudo[258776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:07 compute-0 sudo[258776]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:07 compute-0 sudo[258801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:49:07 compute-0 sudo[258801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
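[annotation] The command at 03:49:07 is cephadm re-running `ceph-volume lvm batch --no-auto` over the three pre-built LVs with --yes --no-systemd (since all three OSDs already exist, this is idempotent; cephadm manages the systemd units itself). A batch call can be previewed without touching devices via --report; a sketch:

    import json
    import subprocess

    # Dry-run the same batch: --report prints the plan instead of applying it.
    plan = subprocess.run(
        ["ceph-volume", "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2",
         "--report", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.dumps(json.loads(plan), indent=2))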
Nov 22 03:49:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:08 compute-0 podman[258865]: 2025-11-22 03:49:08.03430651 +0000 UTC m=+0.054330155 container create 7e137b8914fb257397a8d5bfd3ee65dec79f91018be32e92a39e9a5520fdec76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:49:08 compute-0 systemd[1]: Started libpod-conmon-7e137b8914fb257397a8d5bfd3ee65dec79f91018be32e92a39e9a5520fdec76.scope.
Nov 22 03:49:08 compute-0 podman[258865]: 2025-11-22 03:49:08.003248648 +0000 UTC m=+0.023272333 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:49:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:49:08 compute-0 podman[258865]: 2025-11-22 03:49:08.137908533 +0000 UTC m=+0.157932238 container init 7e137b8914fb257397a8d5bfd3ee65dec79f91018be32e92a39e9a5520fdec76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:49:08 compute-0 podman[258865]: 2025-11-22 03:49:08.147078211 +0000 UTC m=+0.167101856 container start 7e137b8914fb257397a8d5bfd3ee65dec79f91018be32e92a39e9a5520fdec76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:49:08 compute-0 podman[258865]: 2025-11-22 03:49:08.151157436 +0000 UTC m=+0.171181081 container attach 7e137b8914fb257397a8d5bfd3ee65dec79f91018be32e92a39e9a5520fdec76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:49:08 compute-0 elastic_chebyshev[258882]: 167 167
Nov 22 03:49:08 compute-0 systemd[1]: libpod-7e137b8914fb257397a8d5bfd3ee65dec79f91018be32e92a39e9a5520fdec76.scope: Deactivated successfully.
Nov 22 03:49:08 compute-0 conmon[258882]: conmon 7e137b8914fb257397a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e137b8914fb257397a8d5bfd3ee65dec79f91018be32e92a39e9a5520fdec76.scope/container/memory.events
Nov 22 03:49:08 compute-0 podman[258865]: 2025-11-22 03:49:08.155364551 +0000 UTC m=+0.175388186 container died 7e137b8914fb257397a8d5bfd3ee65dec79f91018be32e92a39e9a5520fdec76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:49:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-28128db5383ee4a0ecdb22fb2b68cf895472eeadc4f057fdfeb69c49e87ce1ea-merged.mount: Deactivated successfully.
Nov 22 03:49:08 compute-0 podman[258865]: 2025-11-22 03:49:08.209692904 +0000 UTC m=+0.229716509 container remove 7e137b8914fb257397a8d5bfd3ee65dec79f91018be32e92a39e9a5520fdec76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_chebyshev, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:49:08 compute-0 systemd[1]: libpod-conmon-7e137b8914fb257397a8d5bfd3ee65dec79f91018be32e92a39e9a5520fdec76.scope: Deactivated successfully.
Nov 22 03:49:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:49:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:49:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:49:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:49:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:49:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:49:08 compute-0 podman[258906]: 2025-11-22 03:49:08.460903647 +0000 UTC m=+0.065350421 container create a9b710e23a23b52cdceac9c0984324ed89b6b4402471d696988e952e6f554459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:49:08 compute-0 systemd[1]: Started libpod-conmon-a9b710e23a23b52cdceac9c0984324ed89b6b4402471d696988e952e6f554459.scope.
Nov 22 03:49:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:49:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d1ff2e37d48f78f63ec42c1c314670d0c8153082c99b715c6432fc0aa49e52/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d1ff2e37d48f78f63ec42c1c314670d0c8153082c99b715c6432fc0aa49e52/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:08 compute-0 podman[258906]: 2025-11-22 03:49:08.438254454 +0000 UTC m=+0.042701308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:49:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d1ff2e37d48f78f63ec42c1c314670d0c8153082c99b715c6432fc0aa49e52/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d1ff2e37d48f78f63ec42c1c314670d0c8153082c99b715c6432fc0aa49e52/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d1ff2e37d48f78f63ec42c1c314670d0c8153082c99b715c6432fc0aa49e52/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:08 compute-0 podman[258906]: 2025-11-22 03:49:08.549380758 +0000 UTC m=+0.153827592 container init a9b710e23a23b52cdceac9c0984324ed89b6b4402471d696988e952e6f554459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:49:08 compute-0 podman[258906]: 2025-11-22 03:49:08.560307545 +0000 UTC m=+0.164754359 container start a9b710e23a23b52cdceac9c0984324ed89b6b4402471d696988e952e6f554459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:49:08 compute-0 podman[258906]: 2025-11-22 03:49:08.565710147 +0000 UTC m=+0.170157011 container attach a9b710e23a23b52cdceac9c0984324ed89b6b4402471d696988e952e6f554459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:49:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:49:09 compute-0 ceph-mon[75011]: pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:09 compute-0 confident_snyder[258922]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:49:09 compute-0 confident_snyder[258922]: --> relative data size: 1.0
Nov 22 03:49:09 compute-0 confident_snyder[258922]: --> All data devices are unavailable
Nov 22 03:49:09 compute-0 systemd[1]: libpod-a9b710e23a23b52cdceac9c0984324ed89b6b4402471d696988e952e6f554459.scope: Deactivated successfully.
Nov 22 03:49:09 compute-0 systemd[1]: libpod-a9b710e23a23b52cdceac9c0984324ed89b6b4402471d696988e952e6f554459.scope: Consumed 1.051s CPU time.
Nov 22 03:49:09 compute-0 podman[258951]: 2025-11-22 03:49:09.71297045 +0000 UTC m=+0.031579082 container died a9b710e23a23b52cdceac9c0984324ed89b6b4402471d696988e952e6f554459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:49:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-49d1ff2e37d48f78f63ec42c1c314670d0c8153082c99b715c6432fc0aa49e52-merged.mount: Deactivated successfully.
Nov 22 03:49:09 compute-0 podman[258951]: 2025-11-22 03:49:09.779922118 +0000 UTC m=+0.098530700 container remove a9b710e23a23b52cdceac9c0984324ed89b6b4402471d696988e952e6f554459 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:49:09 compute-0 systemd[1]: libpod-conmon-a9b710e23a23b52cdceac9c0984324ed89b6b4402471d696988e952e6f554459.scope: Deactivated successfully.
Nov 22 03:49:09 compute-0 sudo[258801]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:09 compute-0 sudo[258966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:49:09 compute-0 sudo[258966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:09 compute-0 sudo[258966]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:09 compute-0 sudo[258991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:49:09 compute-0 sudo[258991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:09 compute-0 sudo[258991]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:10 compute-0 sudo[259016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:49:10 compute-0 sudo[259016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:10 compute-0 sudo[259016]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:10 compute-0 sudo[259041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:49:10 compute-0 sudo[259041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:10 compute-0 podman[259106]: 2025-11-22 03:49:10.563807207 +0000 UTC m=+0.049505201 container create 24291eceab05cd15f9243a199fecaaee15b1aa9a07c1383fff20b68f4a8e05d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_banach, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:49:10 compute-0 systemd[1]: Started libpod-conmon-24291eceab05cd15f9243a199fecaaee15b1aa9a07c1383fff20b68f4a8e05d8.scope.
Nov 22 03:49:10 compute-0 podman[259106]: 2025-11-22 03:49:10.538988337 +0000 UTC m=+0.024686351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:49:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:49:10 compute-0 podman[259106]: 2025-11-22 03:49:10.657853821 +0000 UTC m=+0.143551805 container init 24291eceab05cd15f9243a199fecaaee15b1aa9a07c1383fff20b68f4a8e05d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:49:10 compute-0 podman[259106]: 2025-11-22 03:49:10.670307766 +0000 UTC m=+0.156005740 container start 24291eceab05cd15f9243a199fecaaee15b1aa9a07c1383fff20b68f4a8e05d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_banach, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:49:10 compute-0 podman[259106]: 2025-11-22 03:49:10.674503661 +0000 UTC m=+0.160201675 container attach 24291eceab05cd15f9243a199fecaaee15b1aa9a07c1383fff20b68f4a8e05d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_banach, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:49:10 compute-0 magical_banach[259122]: 167 167
Nov 22 03:49:10 compute-0 systemd[1]: libpod-24291eceab05cd15f9243a199fecaaee15b1aa9a07c1383fff20b68f4a8e05d8.scope: Deactivated successfully.
Nov 22 03:49:10 compute-0 podman[259106]: 2025-11-22 03:49:10.677139187 +0000 UTC m=+0.162837161 container died 24291eceab05cd15f9243a199fecaaee15b1aa9a07c1383fff20b68f4a8e05d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_banach, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:49:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5a2823fd80efdb43d68ce1783af11cea0e0377cd48d8641b533fbe1aef2e81e-merged.mount: Deactivated successfully.
Nov 22 03:49:10 compute-0 podman[259106]: 2025-11-22 03:49:10.724382519 +0000 UTC m=+0.210080483 container remove 24291eceab05cd15f9243a199fecaaee15b1aa9a07c1383fff20b68f4a8e05d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:49:10 compute-0 systemd[1]: libpod-conmon-24291eceab05cd15f9243a199fecaaee15b1aa9a07c1383fff20b68f4a8e05d8.scope: Deactivated successfully.
Nov 22 03:49:10 compute-0 podman[259146]: 2025-11-22 03:49:10.918959181 +0000 UTC m=+0.038021635 container create 5aee1fe08a0c78dcde53d91f5c49421d30eff34622b75fd4e388c395814b22ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lamarr, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:49:10 compute-0 systemd[1]: Started libpod-conmon-5aee1fe08a0c78dcde53d91f5c49421d30eff34622b75fd4e388c395814b22ce.scope.
Nov 22 03:49:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a50c4652b115aa3264be167d007c3f68964f3038fcf7c60f9c74f704eb6001/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a50c4652b115aa3264be167d007c3f68964f3038fcf7c60f9c74f704eb6001/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a50c4652b115aa3264be167d007c3f68964f3038fcf7c60f9c74f704eb6001/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3a50c4652b115aa3264be167d007c3f68964f3038fcf7c60f9c74f704eb6001/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:10 compute-0 podman[259146]: 2025-11-22 03:49:10.901733682 +0000 UTC m=+0.020796146 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:49:11 compute-0 podman[259146]: 2025-11-22 03:49:11.000731481 +0000 UTC m=+0.119793925 container init 5aee1fe08a0c78dcde53d91f5c49421d30eff34622b75fd4e388c395814b22ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:49:11 compute-0 podman[259146]: 2025-11-22 03:49:11.011692677 +0000 UTC m=+0.130755111 container start 5aee1fe08a0c78dcde53d91f5c49421d30eff34622b75fd4e388c395814b22ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:49:11 compute-0 podman[259146]: 2025-11-22 03:49:11.014992103 +0000 UTC m=+0.134054577 container attach 5aee1fe08a0c78dcde53d91f5c49421d30eff34622b75fd4e388c395814b22ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lamarr, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:49:11 compute-0 ceph-mon[75011]: pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]: {
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:     "0": [
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:         {
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "devices": [
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "/dev/loop3"
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             ],
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_name": "ceph_lv0",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_size": "21470642176",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "name": "ceph_lv0",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "tags": {
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.cluster_name": "ceph",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.crush_device_class": "",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.encrypted": "0",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.osd_id": "0",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.type": "block",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.vdo": "0"
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             },
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "type": "block",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "vg_name": "ceph_vg0"
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:         }
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:     ],
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:     "1": [
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:         {
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "devices": [
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "/dev/loop4"
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             ],
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_name": "ceph_lv1",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_size": "21470642176",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "name": "ceph_lv1",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "tags": {
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.cluster_name": "ceph",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.crush_device_class": "",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.encrypted": "0",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.osd_id": "1",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.type": "block",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.vdo": "0"
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             },
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "type": "block",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "vg_name": "ceph_vg1"
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:         }
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:     ],
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:     "2": [
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:         {
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "devices": [
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "/dev/loop5"
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             ],
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_name": "ceph_lv2",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_size": "21470642176",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "name": "ceph_lv2",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "tags": {
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.cluster_name": "ceph",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.crush_device_class": "",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.encrypted": "0",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.osd_id": "2",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.type": "block",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:                 "ceph.vdo": "0"
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             },
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "type": "block",
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:             "vg_name": "ceph_vg2"
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:         }
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]:     ]
Nov 22 03:49:11 compute-0 amazing_lamarr[259162]: }
Nov 22 03:49:11 compute-0 systemd[1]: libpod-5aee1fe08a0c78dcde53d91f5c49421d30eff34622b75fd4e388c395814b22ce.scope: Deactivated successfully.
Nov 22 03:49:11 compute-0 podman[259146]: 2025-11-22 03:49:11.790102182 +0000 UTC m=+0.909164686 container died 5aee1fe08a0c78dcde53d91f5c49421d30eff34622b75fd4e388c395814b22ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lamarr, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Nov 22 03:49:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3a50c4652b115aa3264be167d007c3f68964f3038fcf7c60f9c74f704eb6001-merged.mount: Deactivated successfully.
Nov 22 03:49:11 compute-0 podman[259146]: 2025-11-22 03:49:11.864732041 +0000 UTC m=+0.983794475 container remove 5aee1fe08a0c78dcde53d91f5c49421d30eff34622b75fd4e388c395814b22ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_lamarr, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:49:11 compute-0 systemd[1]: libpod-conmon-5aee1fe08a0c78dcde53d91f5c49421d30eff34622b75fd4e388c395814b22ce.scope: Deactivated successfully.
Nov 22 03:49:11 compute-0 sudo[259041]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:11 compute-0 sudo[259183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:49:11 compute-0 sudo[259183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:11 compute-0 sudo[259183]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:12 compute-0 sudo[259208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:49:12 compute-0 sudo[259208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:12 compute-0 sudo[259208]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:12 compute-0 sudo[259233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:49:12 compute-0 sudo[259233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:12 compute-0 sudo[259233]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:12 compute-0 sudo[259258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:49:12 compute-0 sudo[259258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:12 compute-0 podman[259322]: 2025-11-22 03:49:12.571856553 +0000 UTC m=+0.060302747 container create da6f6396560a3cb263f3f28c2f010b3bbba11a68aa2588b081d8402febd86ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_neumann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 03:49:12 compute-0 systemd[1]: Started libpod-conmon-da6f6396560a3cb263f3f28c2f010b3bbba11a68aa2588b081d8402febd86ee3.scope.
Nov 22 03:49:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:49:12 compute-0 podman[259322]: 2025-11-22 03:49:12.549632571 +0000 UTC m=+0.038078775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:49:12 compute-0 podman[259322]: 2025-11-22 03:49:12.655256761 +0000 UTC m=+0.143702975 container init da6f6396560a3cb263f3f28c2f010b3bbba11a68aa2588b081d8402febd86ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:49:12 compute-0 podman[259322]: 2025-11-22 03:49:12.666907107 +0000 UTC m=+0.155353261 container start da6f6396560a3cb263f3f28c2f010b3bbba11a68aa2588b081d8402febd86ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_neumann, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:49:12 compute-0 podman[259322]: 2025-11-22 03:49:12.670169583 +0000 UTC m=+0.158615747 container attach da6f6396560a3cb263f3f28c2f010b3bbba11a68aa2588b081d8402febd86ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:49:12 compute-0 competent_neumann[259338]: 167 167
Nov 22 03:49:12 compute-0 systemd[1]: libpod-da6f6396560a3cb263f3f28c2f010b3bbba11a68aa2588b081d8402febd86ee3.scope: Deactivated successfully.
Nov 22 03:49:12 compute-0 conmon[259338]: conmon da6f6396560a3cb263f3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da6f6396560a3cb263f3f28c2f010b3bbba11a68aa2588b081d8402febd86ee3.scope/container/memory.events
Nov 22 03:49:12 compute-0 podman[259322]: 2025-11-22 03:49:12.675119496 +0000 UTC m=+0.163565680 container died da6f6396560a3cb263f3f28c2f010b3bbba11a68aa2588b081d8402febd86ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_neumann, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:49:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd12c33cdc3d27aeb906426bb89cd47cb75c074e25d7580235a94869b1ba52c5-merged.mount: Deactivated successfully.
Nov 22 03:49:12 compute-0 podman[259322]: 2025-11-22 03:49:12.717124324 +0000 UTC m=+0.205570478 container remove da6f6396560a3cb263f3f28c2f010b3bbba11a68aa2588b081d8402febd86ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_neumann, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:49:12 compute-0 systemd[1]: libpod-conmon-da6f6396560a3cb263f3f28c2f010b3bbba11a68aa2588b081d8402febd86ee3.scope: Deactivated successfully.
Nov 22 03:49:12 compute-0 podman[259362]: 2025-11-22 03:49:12.882125682 +0000 UTC m=+0.045846334 container create f894f5ecce29e3647df38ecf1716e8a994e29f722ba91933ba0eb8a005684a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 03:49:12 compute-0 systemd[1]: Started libpod-conmon-f894f5ecce29e3647df38ecf1716e8a994e29f722ba91933ba0eb8a005684a12.scope.
Nov 22 03:49:12 compute-0 podman[259362]: 2025-11-22 03:49:12.864975844 +0000 UTC m=+0.028696526 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:49:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954c716e2c0fa20736a8c729eb73532736cba1ea7da841d09bab3f42217f54b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954c716e2c0fa20736a8c729eb73532736cba1ea7da841d09bab3f42217f54b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954c716e2c0fa20736a8c729eb73532736cba1ea7da841d09bab3f42217f54b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/954c716e2c0fa20736a8c729eb73532736cba1ea7da841d09bab3f42217f54b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:49:12 compute-0 podman[259362]: 2025-11-22 03:49:12.985611795 +0000 UTC m=+0.149332547 container init f894f5ecce29e3647df38ecf1716e8a994e29f722ba91933ba0eb8a005684a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:49:13 compute-0 podman[259362]: 2025-11-22 03:49:12.999505928 +0000 UTC m=+0.163226600 container start f894f5ecce29e3647df38ecf1716e8a994e29f722ba91933ba0eb8a005684a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ptolemy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:49:13 compute-0 podman[259362]: 2025-11-22 03:49:13.003706743 +0000 UTC m=+0.167427505 container attach f894f5ecce29e3647df38ecf1716e8a994e29f722ba91933ba0eb8a005684a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ptolemy, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:49:13 compute-0 ceph-mon[75011]: pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]: {
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "osd_id": 1,
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "type": "bluestore"
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:     },
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "osd_id": 0,
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "type": "bluestore"
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:     },
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "osd_id": 2,
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:         "type": "bluestore"
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]:     }
Nov 22 03:49:14 compute-0 hungry_ptolemy[259379]: }
Nov 22 03:49:14 compute-0 systemd[1]: libpod-f894f5ecce29e3647df38ecf1716e8a994e29f722ba91933ba0eb8a005684a12.scope: Deactivated successfully.
Nov 22 03:49:14 compute-0 podman[259362]: 2025-11-22 03:49:14.11533807 +0000 UTC m=+1.279058792 container died f894f5ecce29e3647df38ecf1716e8a994e29f722ba91933ba0eb8a005684a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ptolemy, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:49:14 compute-0 systemd[1]: libpod-f894f5ecce29e3647df38ecf1716e8a994e29f722ba91933ba0eb8a005684a12.scope: Consumed 1.126s CPU time.
Nov 22 03:49:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-954c716e2c0fa20736a8c729eb73532736cba1ea7da841d09bab3f42217f54b9-merged.mount: Deactivated successfully.
Nov 22 03:49:14 compute-0 podman[259362]: 2025-11-22 03:49:14.191789006 +0000 UTC m=+1.355509708 container remove f894f5ecce29e3647df38ecf1716e8a994e29f722ba91933ba0eb8a005684a12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:49:14 compute-0 systemd[1]: libpod-conmon-f894f5ecce29e3647df38ecf1716e8a994e29f722ba91933ba0eb8a005684a12.scope: Deactivated successfully.
Nov 22 03:49:14 compute-0 sudo[259258]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:49:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:49:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:49:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:49:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 39d9272b-986a-46b7-b813-588a95ad68cd does not exist
Nov 22 03:49:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 983ccbcb-c908-41a0-91dd-038a318a762c does not exist
Nov 22 03:49:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:49:14 compute-0 sudo[259426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:49:14 compute-0 sudo[259426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:14 compute-0 sudo[259426]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:14 compute-0 sudo[259451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:49:14 compute-0 sudo[259451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:49:14 compute-0 sudo[259451]: pam_unix(sudo:session): session closed for user root
Nov 22 03:49:15 compute-0 ceph-mon[75011]: pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:15 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:49:15 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:49:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:16 compute-0 podman[259476]: 2025-11-22 03:49:16.435410943 +0000 UTC m=+0.106541440 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:49:17 compute-0 ceph-mon[75011]: pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:19 compute-0 ceph-mon[75011]: pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:49:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:21 compute-0 ceph-mon[75011]: pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:22 compute-0 nova_compute[253461]: 2025-11-22 03:49:22.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:49:22 compute-0 nova_compute[253461]: 2025-11-22 03:49:22.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 03:49:22 compute-0 nova_compute[253461]: 2025-11-22 03:49:22.451 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 03:49:22 compute-0 nova_compute[253461]: 2025-11-22 03:49:22.452 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:49:22 compute-0 nova_compute[253461]: 2025-11-22 03:49:22.452 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 03:49:22 compute-0 nova_compute[253461]: 2025-11-22 03:49:22.466 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:49:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:49:23.002 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:49:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:49:23.002 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:49:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:49:23.003 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:49:23 compute-0 ceph-mon[75011]: pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:49:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Nov 22 03:49:25 compute-0 ceph-mon[75011]: pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Nov 22 03:49:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Nov 22 03:49:25 compute-0 nova_compute[253461]: 2025-11-22 03:49:25.472 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:49:25 compute-0 nova_compute[253461]: 2025-11-22 03:49:25.473 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:49:25 compute-0 nova_compute[253461]: 2025-11-22 03:49:25.473 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:49:25 compute-0 nova_compute[253461]: 2025-11-22 03:49:25.473 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:49:25 compute-0 nova_compute[253461]: 2025-11-22 03:49:25.485 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 03:49:25 compute-0 nova_compute[253461]: 2025-11-22 03:49:25.486 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:49:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Nov 22 03:49:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Nov 22 03:49:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Nov 22 03:49:26 compute-0 ceph-mon[75011]: osdmap e131: 3 total, 3 up, 3 in
Nov 22 03:49:26 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Nov 22 03:49:26 compute-0 nova_compute[253461]: 2025-11-22 03:49:26.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:49:26 compute-0 nova_compute[253461]: 2025-11-22 03:49:26.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:49:26 compute-0 nova_compute[253461]: 2025-11-22 03:49:26.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:49:26 compute-0 nova_compute[253461]: 2025-11-22 03:49:26.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 03:49:26 compute-0 nova_compute[253461]: 2025-11-22 03:49:26.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:49:26 compute-0 nova_compute[253461]: 2025-11-22 03:49:26.472 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:49:26 compute-0 nova_compute[253461]: 2025-11-22 03:49:26.473 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:49:26 compute-0 nova_compute[253461]: 2025-11-22 03:49:26.473 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:49:26 compute-0 nova_compute[253461]: 2025-11-22 03:49:26.473 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:49:26 compute-0 nova_compute[253461]: 2025-11-22 03:49:26.473 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:49:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:49:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1754331954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:49:26 compute-0 nova_compute[253461]: 2025-11-22 03:49:26.930 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.122 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.123 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5160MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.123 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.124 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:49:27 compute-0 rsyslogd[1007]: imjournal: 1220 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.305 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.305 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:49:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Nov 22 03:49:27 compute-0 ceph-mon[75011]: pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 102 B/s wr, 0 op/s
Nov 22 03:49:27 compute-0 ceph-mon[75011]: osdmap e132: 3 total, 3 up, 3 in
Nov 22 03:49:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1754331954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:49:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Nov 22 03:49:27 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.357 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing inventories for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.424 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating ProviderTree inventory for provider 62e18608-eaef-4f09-8e92-06d41e51d580 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.425 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating inventory in ProviderTree for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.454 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing aggregate associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.478 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing trait associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.494 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:49:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:49:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3915164810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.938 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.943 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:49:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.3 KiB/s wr, 28 op/s
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.960 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.961 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:49:27 compute-0 nova_compute[253461]: 2025-11-22 03:49:27.961 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:49:28 compute-0 ceph-mon[75011]: osdmap e133: 3 total, 3 up, 3 in
Nov 22 03:49:28 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3915164810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:49:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Nov 22 03:49:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Nov 22 03:49:29 compute-0 ceph-mon[75011]: pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.3 KiB/s wr, 28 op/s
Nov 22 03:49:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Nov 22 03:49:29 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Nov 22 03:49:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.3 KiB/s wr, 37 op/s
Nov 22 03:49:29 compute-0 nova_compute[253461]: 2025-11-22 03:49:29.961 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:49:29 compute-0 nova_compute[253461]: 2025-11-22 03:49:29.962 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:49:30 compute-0 ceph-mon[75011]: osdmap e134: 3 total, 3 up, 3 in
Nov 22 03:49:31 compute-0 ceph-mon[75011]: pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.3 KiB/s wr, 37 op/s
Nov 22 03:49:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.8 MiB/s wr, 53 op/s
Nov 22 03:49:33 compute-0 podman[259543]: 2025-11-22 03:49:33.364779734 +0000 UTC m=+0.048285402 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 22 03:49:33 compute-0 ceph-mon[75011]: pgmap v872: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.8 MiB/s wr, 53 op/s
Nov 22 03:49:33 compute-0 podman[259544]: 2025-11-22 03:49:33.468238788 +0000 UTC m=+0.148546450 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:49:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.4 MiB/s wr, 50 op/s
Nov 22 03:49:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Nov 22 03:49:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Nov 22 03:49:34 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Nov 22 03:49:35 compute-0 ceph-mon[75011]: pgmap v873: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.4 MiB/s wr, 50 op/s
Nov 22 03:49:35 compute-0 ceph-mon[75011]: osdmap e135: 3 total, 3 up, 3 in
Nov 22 03:49:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 5.1 MiB/s wr, 25 op/s
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:49:36
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'images', '.mgr', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'vms']
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:49:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:49:37 compute-0 ceph-mon[75011]: pgmap v875: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 5.1 MiB/s wr, 25 op/s
Nov 22 03:49:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 4.8 MiB/s wr, 24 op/s
Nov 22 03:49:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:39 compute-0 ceph-mon[75011]: pgmap v876: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 4.8 MiB/s wr, 24 op/s
Nov 22 03:49:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 4.1 MiB/s wr, 20 op/s
Nov 22 03:49:41 compute-0 ceph-mon[75011]: pgmap v877: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 4.1 MiB/s wr, 20 op/s
Nov 22 03:49:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Nov 22 03:49:43 compute-0 ceph-mon[75011]: pgmap v878: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Nov 22 03:49:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:45 compute-0 ceph-mon[75011]: pgmap v879: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:49:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:49:46 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:49:46.461 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:49:46 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:49:46.463 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 03:49:47 compute-0 ceph-mon[75011]: pgmap v880: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:47 compute-0 podman[259588]: 2025-11-22 03:49:47.382680959 +0000 UTC m=+0.064207852 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 22 03:49:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:49 compute-0 ceph-mon[75011]: pgmap v881: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Nov 22 03:49:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Nov 22 03:49:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Nov 22 03:49:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:49:50.465 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:49:51 compute-0 ceph-mon[75011]: pgmap v882: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Nov 22 03:49:51 compute-0 ceph-mon[75011]: osdmap e136: 3 total, 3 up, 3 in
Nov 22 03:49:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Nov 22 03:49:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Nov 22 03:49:51 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Nov 22 03:49:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 127 B/s rd, 383 B/s wr, 0 op/s
Nov 22 03:49:52 compute-0 ceph-mon[75011]: osdmap e137: 3 total, 3 up, 3 in
Nov 22 03:49:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Nov 22 03:49:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Nov 22 03:49:53 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Nov 22 03:49:53 compute-0 ceph-mon[75011]: pgmap v885: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 127 B/s rd, 383 B/s wr, 0 op/s
Nov 22 03:49:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 682 B/s wr, 1 op/s
Nov 22 03:49:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:54 compute-0 ceph-mon[75011]: osdmap e138: 3 total, 3 up, 3 in
Nov 22 03:49:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Nov 22 03:49:55 compute-0 ceph-mon[75011]: pgmap v887: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 682 B/s wr, 1 op/s
Nov 22 03:49:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Nov 22 03:49:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Nov 22 03:49:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Nov 22 03:49:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:49:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3815733865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:49:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:49:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3815733865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:49:56 compute-0 ceph-mon[75011]: osdmap e139: 3 total, 3 up, 3 in
Nov 22 03:49:56 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3815733865' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:49:56 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3815733865' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:49:57 compute-0 ceph-mon[75011]: pgmap v889: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Nov 22 03:49:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.7 KiB/s wr, 53 op/s
Nov 22 03:49:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:49:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Nov 22 03:49:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Nov 22 03:49:59 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Nov 22 03:49:59 compute-0 ceph-mon[75011]: pgmap v890: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.7 KiB/s wr, 53 op/s
Nov 22 03:49:59 compute-0 ceph-mon[75011]: osdmap e140: 3 total, 3 up, 3 in
Nov 22 03:49:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Nov 22 03:50:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:50:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3612395807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:50:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3612395807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3612395807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3612395807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:01 compute-0 ceph-mon[75011]: pgmap v892: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Nov 22 03:50:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.9 KiB/s wr, 44 op/s
Nov 22 03:50:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:50:03 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/837733408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:50:03 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/837733408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:03 compute-0 ceph-mon[75011]: pgmap v893: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.9 KiB/s wr, 44 op/s
Nov 22 03:50:03 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/837733408' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:03 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/837733408' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 KiB/s wr, 41 op/s
Nov 22 03:50:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:04 compute-0 podman[259609]: 2025-11-22 03:50:04.409954187 +0000 UTC m=+0.079373332 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 03:50:04 compute-0 podman[259610]: 2025-11-22 03:50:04.46160587 +0000 UTC m=+0.123740328 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2)
Nov 22 03:50:05 compute-0 ceph-mon[75011]: pgmap v894: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.8 KiB/s wr, 41 op/s
Nov 22 03:50:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Nov 22 03:50:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:50:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:50:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:50:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:50:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:50:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:50:07 compute-0 ceph-mon[75011]: pgmap v895: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Nov 22 03:50:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Nov 22 03:50:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:09 compute-0 ceph-mon[75011]: pgmap v896: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Nov 22 03:50:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 960 B/s rd, 0 B/s wr, 1 op/s
Nov 22 03:50:11 compute-0 ceph-mon[75011]: pgmap v897: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 960 B/s rd, 0 B/s wr, 1 op/s
Nov 22 03:50:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 22 KiB/s wr, 3 op/s
Nov 22 03:50:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Nov 22 03:50:13 compute-0 ceph-mon[75011]: pgmap v898: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 22 KiB/s wr, 3 op/s
Nov 22 03:50:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Nov 22 03:50:13 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Nov 22 03:50:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 26 KiB/s wr, 6 op/s
Nov 22 03:50:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:14 compute-0 sudo[259653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:50:14 compute-0 sudo[259653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:14 compute-0 sudo[259653]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:14 compute-0 ceph-mon[75011]: osdmap e141: 3 total, 3 up, 3 in
Nov 22 03:50:14 compute-0 sudo[259678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:50:14 compute-0 sudo[259678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:14 compute-0 sudo[259678]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:14 compute-0 sudo[259703]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:50:14 compute-0 sudo[259703]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:14 compute-0 sudo[259703]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:14 compute-0 sudo[259728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:50:14 compute-0 sudo[259728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:15 compute-0 sudo[259728]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:50:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:50:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:50:15 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:50:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:50:15 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:50:15 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev f60b53bd-16a1-4f72-a819-7c3090fb6021 does not exist
Nov 22 03:50:15 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 2de36930-5adf-49cc-9f72-7d9856e6e11e does not exist
Nov 22 03:50:15 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev e53c451b-b963-402f-a05c-5171da8d4e21 does not exist
Nov 22 03:50:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:50:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:50:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:50:15 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:50:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:50:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:50:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:50:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4191385558' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:50:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4191385558' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:15 compute-0 sudo[259784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:50:15 compute-0 sudo[259784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:15 compute-0 sudo[259784]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:15 compute-0 sudo[259809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:50:15 compute-0 sudo[259809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:15 compute-0 sudo[259809]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:15 compute-0 ceph-mon[75011]: pgmap v900: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 26 KiB/s wr, 6 op/s
Nov 22 03:50:15 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:50:15 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:50:15 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:50:15 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:50:15 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:50:15 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:50:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4191385558' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4191385558' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:15 compute-0 sudo[259834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:50:15 compute-0 sudo[259834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:15 compute-0 sudo[259834]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:15 compute-0 sudo[259859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:50:15 compute-0 sudo[259859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 27 KiB/s wr, 11 op/s
Nov 22 03:50:16 compute-0 podman[259924]: 2025-11-22 03:50:16.086716682 +0000 UTC m=+0.067971348 container create 7cd304c4020fc1d749239a109ce4925754c5c3f3dfa056facec3a093237af075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shirley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:50:16 compute-0 systemd[1]: Started libpod-conmon-7cd304c4020fc1d749239a109ce4925754c5c3f3dfa056facec3a093237af075.scope.
Nov 22 03:50:16 compute-0 podman[259924]: 2025-11-22 03:50:16.057393017 +0000 UTC m=+0.038647743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:50:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:50:16 compute-0 podman[259924]: 2025-11-22 03:50:16.177395446 +0000 UTC m=+0.158650092 container init 7cd304c4020fc1d749239a109ce4925754c5c3f3dfa056facec3a093237af075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 03:50:16 compute-0 podman[259924]: 2025-11-22 03:50:16.186476547 +0000 UTC m=+0.167731213 container start 7cd304c4020fc1d749239a109ce4925754c5c3f3dfa056facec3a093237af075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shirley, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:50:16 compute-0 podman[259924]: 2025-11-22 03:50:16.19139326 +0000 UTC m=+0.172647966 container attach 7cd304c4020fc1d749239a109ce4925754c5c3f3dfa056facec3a093237af075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 03:50:16 compute-0 silly_shirley[259941]: 167 167
Nov 22 03:50:16 compute-0 systemd[1]: libpod-7cd304c4020fc1d749239a109ce4925754c5c3f3dfa056facec3a093237af075.scope: Deactivated successfully.
Nov 22 03:50:16 compute-0 podman[259924]: 2025-11-22 03:50:16.193881089 +0000 UTC m=+0.175135764 container died 7cd304c4020fc1d749239a109ce4925754c5c3f3dfa056facec3a093237af075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:50:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8bb92d1450a4da2e10286b9a9cb0a18c550cbdcd356ff49b119846b16e1f0d0-merged.mount: Deactivated successfully.
Nov 22 03:50:16 compute-0 podman[259924]: 2025-11-22 03:50:16.240769625 +0000 UTC m=+0.222024261 container remove 7cd304c4020fc1d749239a109ce4925754c5c3f3dfa056facec3a093237af075 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:50:16 compute-0 systemd[1]: libpod-conmon-7cd304c4020fc1d749239a109ce4925754c5c3f3dfa056facec3a093237af075.scope: Deactivated successfully.
Nov 22 03:50:16 compute-0 podman[259965]: 2025-11-22 03:50:16.418306923 +0000 UTC m=+0.056193193 container create 802fc0ad59f75a6e1aadae6316fdf702fa195942bc2dbffa5744e019317048b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bhabha, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:50:16 compute-0 systemd[1]: Started libpod-conmon-802fc0ad59f75a6e1aadae6316fdf702fa195942bc2dbffa5744e019317048b5.scope.
Nov 22 03:50:16 compute-0 podman[259965]: 2025-11-22 03:50:16.391017098 +0000 UTC m=+0.028903438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:50:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2169374751df8826b9e7d25e99098365c7a5d587066adbbfb66146f7d8c857/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2169374751df8826b9e7d25e99098365c7a5d587066adbbfb66146f7d8c857/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2169374751df8826b9e7d25e99098365c7a5d587066adbbfb66146f7d8c857/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2169374751df8826b9e7d25e99098365c7a5d587066adbbfb66146f7d8c857/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e2169374751df8826b9e7d25e99098365c7a5d587066adbbfb66146f7d8c857/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:16 compute-0 podman[259965]: 2025-11-22 03:50:16.523459518 +0000 UTC m=+0.161345789 container init 802fc0ad59f75a6e1aadae6316fdf702fa195942bc2dbffa5744e019317048b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:50:16 compute-0 podman[259965]: 2025-11-22 03:50:16.532538901 +0000 UTC m=+0.170425152 container start 802fc0ad59f75a6e1aadae6316fdf702fa195942bc2dbffa5744e019317048b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 03:50:16 compute-0 podman[259965]: 2025-11-22 03:50:16.536609156 +0000 UTC m=+0.174495397 container attach 802fc0ad59f75a6e1aadae6316fdf702fa195942bc2dbffa5744e019317048b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bhabha, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:50:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:50:16 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674143203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:50:16 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674143203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:17 compute-0 ceph-mon[75011]: pgmap v901: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 6.9 KiB/s rd, 27 KiB/s wr, 11 op/s
Nov 22 03:50:17 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3674143203' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:17 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3674143203' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:17 compute-0 naughty_bhabha[259981]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:50:17 compute-0 naughty_bhabha[259981]: --> relative data size: 1.0
Nov 22 03:50:17 compute-0 naughty_bhabha[259981]: --> All data devices are unavailable
Nov 22 03:50:17 compute-0 systemd[1]: libpod-802fc0ad59f75a6e1aadae6316fdf702fa195942bc2dbffa5744e019317048b5.scope: Deactivated successfully.
Nov 22 03:50:17 compute-0 podman[259965]: 2025-11-22 03:50:17.647352083 +0000 UTC m=+1.285238384 container died 802fc0ad59f75a6e1aadae6316fdf702fa195942bc2dbffa5744e019317048b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bhabha, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:50:17 compute-0 systemd[1]: libpod-802fc0ad59f75a6e1aadae6316fdf702fa195942bc2dbffa5744e019317048b5.scope: Consumed 1.066s CPU time.
Nov 22 03:50:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e2169374751df8826b9e7d25e99098365c7a5d587066adbbfb66146f7d8c857-merged.mount: Deactivated successfully.
Nov 22 03:50:17 compute-0 podman[259965]: 2025-11-22 03:50:17.715900801 +0000 UTC m=+1.353787042 container remove 802fc0ad59f75a6e1aadae6316fdf702fa195942bc2dbffa5744e019317048b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:50:17 compute-0 systemd[1]: libpod-conmon-802fc0ad59f75a6e1aadae6316fdf702fa195942bc2dbffa5744e019317048b5.scope: Deactivated successfully.
Nov 22 03:50:17 compute-0 podman[260011]: 2025-11-22 03:50:17.748379044 +0000 UTC m=+0.072638145 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 03:50:17 compute-0 sudo[259859]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:17 compute-0 sudo[260043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:50:17 compute-0 sudo[260043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:17 compute-0 sudo[260043]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:17 compute-0 sudo[260068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:50:17 compute-0 sudo[260068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:17 compute-0 sudo[260068]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:17 compute-0 sudo[260093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:50:17 compute-0 sudo[260093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:17 compute-0 sudo[260093]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 29 KiB/s wr, 57 op/s
Nov 22 03:50:18 compute-0 sudo[260118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:50:18 compute-0 sudo[260118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:50:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1487287091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:50:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1487287091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:18 compute-0 podman[260186]: 2025-11-22 03:50:18.388036257 +0000 UTC m=+0.057962957 container create 631647c54bf15f73d13d7d73a659b026196ce89c4d8d7a673608ff8fe98adca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ardinghelli, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 03:50:18 compute-0 systemd[1]: Started libpod-conmon-631647c54bf15f73d13d7d73a659b026196ce89c4d8d7a673608ff8fe98adca6.scope.
Nov 22 03:50:18 compute-0 podman[260186]: 2025-11-22 03:50:18.36930037 +0000 UTC m=+0.039227070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:50:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:50:18 compute-0 podman[260186]: 2025-11-22 03:50:18.489730895 +0000 UTC m=+0.159657655 container init 631647c54bf15f73d13d7d73a659b026196ce89c4d8d7a673608ff8fe98adca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ardinghelli, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:50:18 compute-0 podman[260186]: 2025-11-22 03:50:18.498259551 +0000 UTC m=+0.168186260 container start 631647c54bf15f73d13d7d73a659b026196ce89c4d8d7a673608ff8fe98adca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:50:18 compute-0 podman[260186]: 2025-11-22 03:50:18.503163842 +0000 UTC m=+0.173090582 container attach 631647c54bf15f73d13d7d73a659b026196ce89c4d8d7a673608ff8fe98adca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:50:18 compute-0 hardcore_ardinghelli[260202]: 167 167
Nov 22 03:50:18 compute-0 podman[260186]: 2025-11-22 03:50:18.50845454 +0000 UTC m=+0.178381250 container died 631647c54bf15f73d13d7d73a659b026196ce89c4d8d7a673608ff8fe98adca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ardinghelli, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 03:50:18 compute-0 systemd[1]: libpod-631647c54bf15f73d13d7d73a659b026196ce89c4d8d7a673608ff8fe98adca6.scope: Deactivated successfully.
Nov 22 03:50:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c16e14f3a72485cf143d3f1e1f139e8cb3403df031587975c08756ceb73c40a-merged.mount: Deactivated successfully.
Nov 22 03:50:18 compute-0 podman[260186]: 2025-11-22 03:50:18.560247597 +0000 UTC m=+0.230174277 container remove 631647c54bf15f73d13d7d73a659b026196ce89c4d8d7a673608ff8fe98adca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_ardinghelli, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:50:18 compute-0 systemd[1]: libpod-conmon-631647c54bf15f73d13d7d73a659b026196ce89c4d8d7a673608ff8fe98adca6.scope: Deactivated successfully.
Nov 22 03:50:18 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1487287091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:18 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1487287091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:18 compute-0 podman[260227]: 2025-11-22 03:50:18.741403748 +0000 UTC m=+0.035956901 container create fa7c932b7407616041673c061024eb206511f057052bbf4e6b2bf2bfe9bb779e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mahavira, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:50:18 compute-0 systemd[1]: Started libpod-conmon-fa7c932b7407616041673c061024eb206511f057052bbf4e6b2bf2bfe9bb779e.scope.
Nov 22 03:50:18 compute-0 podman[260227]: 2025-11-22 03:50:18.725403259 +0000 UTC m=+0.019956432 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:50:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19184b802e80f9635f622ffa8c4dd916cb2e60d903850caee121b39dd019e842/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19184b802e80f9635f622ffa8c4dd916cb2e60d903850caee121b39dd019e842/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19184b802e80f9635f622ffa8c4dd916cb2e60d903850caee121b39dd019e842/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19184b802e80f9635f622ffa8c4dd916cb2e60d903850caee121b39dd019e842/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:18 compute-0 podman[260227]: 2025-11-22 03:50:18.846459513 +0000 UTC m=+0.141012686 container init fa7c932b7407616041673c061024eb206511f057052bbf4e6b2bf2bfe9bb779e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mahavira, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:50:18 compute-0 podman[260227]: 2025-11-22 03:50:18.859781492 +0000 UTC m=+0.154334665 container start fa7c932b7407616041673c061024eb206511f057052bbf4e6b2bf2bfe9bb779e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:50:18 compute-0 podman[260227]: 2025-11-22 03:50:18.864221828 +0000 UTC m=+0.158774991 container attach fa7c932b7407616041673c061024eb206511f057052bbf4e6b2bf2bfe9bb779e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 03:50:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]: {
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:     "0": [
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:         {
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "devices": [
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "/dev/loop3"
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             ],
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_name": "ceph_lv0",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_size": "21470642176",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "name": "ceph_lv0",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "tags": {
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.cluster_name": "ceph",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.crush_device_class": "",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.encrypted": "0",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.osd_id": "0",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.type": "block",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.vdo": "0"
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             },
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "type": "block",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "vg_name": "ceph_vg0"
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:         }
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:     ],
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:     "1": [
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:         {
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "devices": [
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "/dev/loop4"
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             ],
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_name": "ceph_lv1",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_size": "21470642176",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "name": "ceph_lv1",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "tags": {
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.cluster_name": "ceph",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.crush_device_class": "",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.encrypted": "0",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.osd_id": "1",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.type": "block",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.vdo": "0"
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             },
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "type": "block",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "vg_name": "ceph_vg1"
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:         }
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:     ],
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:     "2": [
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:         {
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "devices": [
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "/dev/loop5"
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             ],
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_name": "ceph_lv2",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_size": "21470642176",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "name": "ceph_lv2",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "tags": {
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.cluster_name": "ceph",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.crush_device_class": "",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.encrypted": "0",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.osd_id": "2",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.type": "block",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:                 "ceph.vdo": "0"
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             },
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "type": "block",
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:             "vg_name": "ceph_vg2"
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:         }
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]:     ]
Nov 22 03:50:19 compute-0 intelligent_mahavira[260243]: }
Nov 22 03:50:19 compute-0 ceph-mon[75011]: pgmap v902: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 29 KiB/s wr, 57 op/s
Nov 22 03:50:19 compute-0 systemd[1]: libpod-fa7c932b7407616041673c061024eb206511f057052bbf4e6b2bf2bfe9bb779e.scope: Deactivated successfully.
Nov 22 03:50:19 compute-0 podman[260227]: 2025-11-22 03:50:19.614953645 +0000 UTC m=+0.909506858 container died fa7c932b7407616041673c061024eb206511f057052bbf4e6b2bf2bfe9bb779e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mahavira, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:50:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-19184b802e80f9635f622ffa8c4dd916cb2e60d903850caee121b39dd019e842-merged.mount: Deactivated successfully.
Nov 22 03:50:19 compute-0 podman[260227]: 2025-11-22 03:50:19.666347564 +0000 UTC m=+0.960900727 container remove fa7c932b7407616041673c061024eb206511f057052bbf4e6b2bf2bfe9bb779e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:50:19 compute-0 systemd[1]: libpod-conmon-fa7c932b7407616041673c061024eb206511f057052bbf4e6b2bf2bfe9bb779e.scope: Deactivated successfully.
Nov 22 03:50:19 compute-0 sudo[260118]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:19 compute-0 sudo[260265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:50:19 compute-0 sudo[260265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:19 compute-0 sudo[260265]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:19 compute-0 sudo[260290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:50:19 compute-0 sudo[260290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:19 compute-0 sudo[260290]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:19 compute-0 sudo[260315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:50:19 compute-0 sudo[260315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:19 compute-0 sudo[260315]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:19 compute-0 sudo[260340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:50:19 compute-0 sudo[260340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 29 KiB/s wr, 57 op/s
Nov 22 03:50:20 compute-0 nova_compute[253461]: 2025-11-22 03:50:20.050 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquiring lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:50:20 compute-0 nova_compute[253461]: 2025-11-22 03:50:20.052 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:50:20 compute-0 nova_compute[253461]: 2025-11-22 03:50:20.104 253465 DEBUG nova.compute.manager [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 03:50:20 compute-0 nova_compute[253461]: 2025-11-22 03:50:20.289 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:50:20 compute-0 nova_compute[253461]: 2025-11-22 03:50:20.290 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:50:20 compute-0 nova_compute[253461]: 2025-11-22 03:50:20.299 253465 DEBUG nova.virt.hardware [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 03:50:20 compute-0 nova_compute[253461]: 2025-11-22 03:50:20.300 253465 INFO nova.compute.claims [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Claim successful on node compute-0.ctlplane.example.com
Nov 22 03:50:20 compute-0 podman[260407]: 2025-11-22 03:50:20.398555725 +0000 UTC m=+0.067843435 container create 4940b214521fe2278ea9367791bf673520ee4b5c4163d47f83e0b4962d60e23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_knuth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:50:20 compute-0 podman[260407]: 2025-11-22 03:50:20.360636971 +0000 UTC m=+0.029924741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:50:20 compute-0 nova_compute[253461]: 2025-11-22 03:50:20.455 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:50:20 compute-0 systemd[1]: Started libpod-conmon-4940b214521fe2278ea9367791bf673520ee4b5c4163d47f83e0b4962d60e23e.scope.
Nov 22 03:50:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:50:20 compute-0 podman[260407]: 2025-11-22 03:50:20.544175923 +0000 UTC m=+0.213463682 container init 4940b214521fe2278ea9367791bf673520ee4b5c4163d47f83e0b4962d60e23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_knuth, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:50:20 compute-0 podman[260407]: 2025-11-22 03:50:20.558614596 +0000 UTC m=+0.227902296 container start 4940b214521fe2278ea9367791bf673520ee4b5c4163d47f83e0b4962d60e23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 03:50:20 compute-0 nice_knuth[260424]: 167 167
Nov 22 03:50:20 compute-0 systemd[1]: libpod-4940b214521fe2278ea9367791bf673520ee4b5c4163d47f83e0b4962d60e23e.scope: Deactivated successfully.
Nov 22 03:50:20 compute-0 conmon[260424]: conmon 4940b214521fe2278ea9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4940b214521fe2278ea9367791bf673520ee4b5c4163d47f83e0b4962d60e23e.scope/container/memory.events
Nov 22 03:50:20 compute-0 podman[260407]: 2025-11-22 03:50:20.570601267 +0000 UTC m=+0.239888977 container attach 4940b214521fe2278ea9367791bf673520ee4b5c4163d47f83e0b4962d60e23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:50:20 compute-0 podman[260407]: 2025-11-22 03:50:20.571033099 +0000 UTC m=+0.240320839 container died 4940b214521fe2278ea9367791bf673520ee4b5c4163d47f83e0b4962d60e23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:50:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d3abcdc90644adfd29470852fd5c37ca354226cce4c102d9e098af705c272bb-merged.mount: Deactivated successfully.
Nov 22 03:50:20 compute-0 podman[260407]: 2025-11-22 03:50:20.629993972 +0000 UTC m=+0.299281682 container remove 4940b214521fe2278ea9367791bf673520ee4b5c4163d47f83e0b4962d60e23e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:50:20 compute-0 systemd[1]: libpod-conmon-4940b214521fe2278ea9367791bf673520ee4b5c4163d47f83e0b4962d60e23e.scope: Deactivated successfully.
Nov 22 03:50:20 compute-0 podman[260466]: 2025-11-22 03:50:20.854062896 +0000 UTC m=+0.056975275 container create a7d3b8a9cc616aacc11e090ee3339bf9ec8be1eb7120e88f503c9d580e365587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 03:50:20 compute-0 systemd[1]: Started libpod-conmon-a7d3b8a9cc616aacc11e090ee3339bf9ec8be1eb7120e88f503c9d580e365587.scope.
Nov 22 03:50:20 compute-0 podman[260466]: 2025-11-22 03:50:20.827846118 +0000 UTC m=+0.030758487 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:50:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:50:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2267592384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:50:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c5d8a3a3db896d07ca6b2dfaef42455855e4d34bf9ed314d3e127d36f620dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c5d8a3a3db896d07ca6b2dfaef42455855e4d34bf9ed314d3e127d36f620dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c5d8a3a3db896d07ca6b2dfaef42455855e4d34bf9ed314d3e127d36f620dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7c5d8a3a3db896d07ca6b2dfaef42455855e4d34bf9ed314d3e127d36f620dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:20 compute-0 nova_compute[253461]: 2025-11-22 03:50:20.950 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
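The ceph df --format=json subprocess above is driven through oslo.concurrency's processutils, which emits the "Running cmd" / "returned: 0 in 0.495s" pair. A minimal sketch of the same call, assuming oslo.concurrency is installed and the client.openstack keyring and cluster are reachable:

    # Run the command nova_compute logged above; execute() returns
    # (stdout, stderr) and raises ProcessExecutionError on non-zero exit.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    )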
Nov 22 03:50:20 compute-0 podman[260466]: 2025-11-22 03:50:20.956668179 +0000 UTC m=+0.159580538 container init a7d3b8a9cc616aacc11e090ee3339bf9ec8be1eb7120e88f503c9d580e365587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rhodes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:50:20 compute-0 nova_compute[253461]: 2025-11-22 03:50:20.961 253465 DEBUG nova.compute.provider_tree [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:50:20 compute-0 podman[260466]: 2025-11-22 03:50:20.969220937 +0000 UTC m=+0.172133286 container start a7d3b8a9cc616aacc11e090ee3339bf9ec8be1eb7120e88f503c9d580e365587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rhodes, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:50:20 compute-0 podman[260466]: 2025-11-22 03:50:20.973194592 +0000 UTC m=+0.176106961 container attach a7d3b8a9cc616aacc11e090ee3339bf9ec8be1eb7120e88f503c9d580e365587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:50:20 compute-0 nova_compute[253461]: 2025-11-22 03:50:20.980 253465 DEBUG nova.scheduler.client.report [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.005 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
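The Acquiring/acquired/released triplet around "compute_resources" above is the standard oslo.concurrency locking pattern; with debug logging enabled, the wrapper emits exactly these messages together with the wait and hold times. A sketch of the decorator form, with an illustrative function body:

    # In-process named lock, as used by the resource tracker above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def instance_claim():
        # claim bookkeeping runs while the lock is held; the wrapper
        # logs acquire/release (with timings) at DEBUG level
        pass

    instance_claim()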
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.006 253465 DEBUG nova.compute.manager [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.062 253465 DEBUG nova.compute.manager [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.063 253465 DEBUG nova.network.neutron [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.089 253465 INFO nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.106 253465 DEBUG nova.compute.manager [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.217 253465 DEBUG nova.compute.manager [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.219 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.220 253465 INFO nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Creating image(s)
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.259 253465 DEBUG nova.storage.rbd_utils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] rbd image 7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.294 253465 DEBUG nova.storage.rbd_utils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] rbd image 7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.327 253465 DEBUG nova.storage.rbd_utils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] rbd image 7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.331 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.332 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:50:21 compute-0 ceph-mon[75011]: pgmap v903: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 29 KiB/s wr, 57 op/s
Nov 22 03:50:21 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2267592384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.781 253465 WARNING oslo_policy.policy [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.781 253465 WARNING oslo_policy.policy [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.785 253465 DEBUG nova.policy [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f358860e840943098fe9f91af8f7b08f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b513e0b5b0547e2835dc35495d5637f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 03:50:21 compute-0 nova_compute[253461]: 2025-11-22 03:50:21.889 253465 DEBUG nova.virt.libvirt.imagebackend [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Image locations are: [{'url': 'rbd://7adcc38b-6484-5de6-b879-33a0309153df/images/feac2ecd-89f4-4e45-b9fb-68cb0cf353c3/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://7adcc38b-6484-5de6-b879-33a0309153df/images/feac2ecd-89f4-4e45-b9fb-68cb0cf353c3/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 22 03:50:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.4 KiB/s wr, 59 op/s
Nov 22 03:50:22 compute-0 objective_rhodes[260483]: {
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "osd_id": 1,
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "type": "bluestore"
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:     },
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "osd_id": 0,
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "type": "bluestore"
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:     },
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "osd_id": 2,
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:         "type": "bluestore"
Nov 22 03:50:22 compute-0 objective_rhodes[260483]:     }
Nov 22 03:50:22 compute-0 objective_rhodes[260483]: }
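objective_rhodes printed the full ceph-volume raw list --format json report: a map of OSD fsid to device metadata for the three bluestore OSDs. A small sketch that reduces it to an osd_id-to-device mapping, assuming the JSON has been saved to raw_list.json (an illustrative file name):

    import json

    # Load the raw-list report captured above and index it by OSD id.
    with open("raw_list.json") as fh:
        report = json.load(fh)

    devices = {osd["osd_id"]: osd["device"] for osd in report.values()}
    # -> {1: "/dev/mapper/ceph_vg1-ceph_lv1",
    #     0: "/dev/mapper/ceph_vg0-ceph_lv0",
    #     2: "/dev/mapper/ceph_vg2-ceph_lv2"}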
Nov 22 03:50:22 compute-0 systemd[1]: libpod-a7d3b8a9cc616aacc11e090ee3339bf9ec8be1eb7120e88f503c9d580e365587.scope: Deactivated successfully.
Nov 22 03:50:22 compute-0 podman[260466]: 2025-11-22 03:50:22.116222944 +0000 UTC m=+1.319135312 container died a7d3b8a9cc616aacc11e090ee3339bf9ec8be1eb7120e88f503c9d580e365587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rhodes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:50:22 compute-0 systemd[1]: libpod-a7d3b8a9cc616aacc11e090ee3339bf9ec8be1eb7120e88f503c9d580e365587.scope: Consumed 1.152s CPU time.
Nov 22 03:50:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7c5d8a3a3db896d07ca6b2dfaef42455855e4d34bf9ed314d3e127d36f620dd-merged.mount: Deactivated successfully.
Nov 22 03:50:22 compute-0 podman[260466]: 2025-11-22 03:50:22.189529554 +0000 UTC m=+1.392441893 container remove a7d3b8a9cc616aacc11e090ee3339bf9ec8be1eb7120e88f503c9d580e365587 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:50:22 compute-0 systemd[1]: libpod-conmon-a7d3b8a9cc616aacc11e090ee3339bf9ec8be1eb7120e88f503c9d580e365587.scope: Deactivated successfully.
Nov 22 03:50:22 compute-0 sudo[260340]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:50:22 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:50:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.236138) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783422236217, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2111, "num_deletes": 252, "total_data_size": 3406233, "memory_usage": 3471584, "flush_reason": "Manual Compaction"}
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783422262775, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3338331, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16452, "largest_seqno": 18562, "table_properties": {"data_size": 3328728, "index_size": 6097, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19368, "raw_average_key_size": 20, "raw_value_size": 3309443, "raw_average_value_size": 3436, "num_data_blocks": 274, "num_entries": 963, "num_filter_entries": 963, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763783208, "oldest_key_time": 1763783208, "file_creation_time": 1763783422, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 26697 microseconds, and 13711 cpu microseconds.
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:50:22 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.262839) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3338331 bytes OK
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.262866) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.264973) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.265003) EVENT_LOG_v1 {"time_micros": 1763783422264992, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.265027) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3397360, prev total WAL file size 3436788, number of live WAL files 2.
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:50:22 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 1e1d8bcb-c38d-4fbc-a5e2-c48552541bae does not exist
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.266371) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3260KB)], [38(7708KB)]
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783422266448, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11231980, "oldest_snapshot_seqno": -1}
Nov 22 03:50:22 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 0635b92b-a3e6-4eb5-931c-0cc946650564 does not exist
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4513 keys, 9491820 bytes, temperature: kUnknown
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783422322561, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9491820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9457582, "index_size": 21840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 109260, "raw_average_key_size": 24, "raw_value_size": 9371982, "raw_average_value_size": 2076, "num_data_blocks": 924, "num_entries": 4513, "num_filter_entries": 4513, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763783422, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.322763) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9491820 bytes
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.324572) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.0 rd, 169.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 5032, records dropped: 519 output_compression: NoCompression
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.324589) EVENT_LOG_v1 {"time_micros": 1763783422324581, "job": 18, "event": "compaction_finished", "compaction_time_micros": 56169, "compaction_time_cpu_micros": 26616, "output_level": 6, "num_output_files": 1, "total_output_size": 9491820, "num_input_records": 5032, "num_output_records": 4513, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783422325535, "job": 18, "event": "table_file_deletion", "file_number": 40}
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783422326953, "job": 18, "event": "table_file_deletion", "file_number": 38}
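The JOB 18 summary above reports write-amplify(2.8) and read-write-amplify(6.2); both follow directly from the logged byte counts: 3.2 MB flushed from level 0, 7.5 MB read from level 6, 9.1 MB written back. A quick arithmetic check:

    # Amplification factors from the compaction summary above.
    l0_in, l6_in, out = 3.2, 7.5, 9.1                    # MB, as logged
    write_amplify = out / l0_in                          # 9.1 / 3.2  ~= 2.8
    read_write_amplify = (l0_in + l6_in + out) / l0_in   # 19.8 / 3.2 ~= 6.2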
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.266267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.327002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.327007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.327009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.327010) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:50:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:22.327012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:50:22 compute-0 sudo[260587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:50:22 compute-0 sudo[260587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:22 compute-0 sudo[260587]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:22 compute-0 sudo[260612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:50:22 compute-0 sudo[260612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:50:22 compute-0 sudo[260612]: pam_unix(sudo:session): session closed for user root
Nov 22 03:50:22 compute-0 nova_compute[253461]: 2025-11-22 03:50:22.838 253465 DEBUG nova.network.neutron [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Successfully created port: b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 03:50:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:23.003 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:50:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:23.003 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:50:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:23.003 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:50:23 compute-0 ceph-mon[75011]: pgmap v904: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.4 KiB/s wr, 59 op/s
Nov 22 03:50:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:50:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:50:23 compute-0 nova_compute[253461]: 2025-11-22 03:50:23.594 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:50:23 compute-0 nova_compute[253461]: 2025-11-22 03:50:23.675 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d.part --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:50:23 compute-0 nova_compute[253461]: 2025-11-22 03:50:23.677 253465 DEBUG nova.virt.images [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] feac2ecd-89f4-4e45-b9fb-68cb0cf353c3 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 22 03:50:23 compute-0 nova_compute[253461]: 2025-11-22 03:50:23.679 253465 DEBUG nova.privsep.utils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 22 03:50:23 compute-0 nova_compute[253461]: 2025-11-22 03:50:23.680 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d.part /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:50:23 compute-0 nova_compute[253461]: 2025-11-22 03:50:23.881 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d.part /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d.converted" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:50:23 compute-0 nova_compute[253461]: 2025-11-22 03:50:23.888 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:50:23 compute-0 nova_compute[253461]: 2025-11-22 03:50:23.972 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d.converted --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
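The nova_compute lines above are one pass through its fetch-to-raw path: probe the downloaded base image, find it is qcow2, convert it to raw with host caching disabled (-t none), then probe the result. A condensed sketch of that sequence with illustrative file names; nova additionally wraps the probes in oslo_concurrency.prlimit, which is omitted here:

    import json
    import subprocess

    def img_info(path: str) -> dict:
        # qemu-img info with the same flags nova uses above
        out = subprocess.check_output(
            ["qemu-img", "info", "--force-share", "--output=json", path])
        return json.loads(out)

    src, dst = "base.part", "base.converted"   # illustrative paths
    if img_info(src)["format"] == "qcow2":
        subprocess.check_call(
            ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
             src, dst])
        assert img_info(dst)["format"] == "raw"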
Nov 22 03:50:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.0 KiB/s wr, 66 op/s
Nov 22 03:50:23 compute-0 nova_compute[253461]: 2025-11-22 03:50:23.975 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:50:24 compute-0 nova_compute[253461]: 2025-11-22 03:50:24.000 253465 DEBUG nova.storage.rbd_utils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] rbd image 7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:50:24 compute-0 nova_compute[253461]: 2025-11-22 03:50:24.004 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d 7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:50:24 compute-0 nova_compute[253461]: 2025-11-22 03:50:24.062 253465 DEBUG nova.network.neutron [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Successfully updated port: b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 03:50:24 compute-0 nova_compute[253461]: 2025-11-22 03:50:24.080 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquiring lock "refresh_cache-7a2bb77b-45b0-41b6-a9ae-27d62354c775" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:50:24 compute-0 nova_compute[253461]: 2025-11-22 03:50:24.081 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquired lock "refresh_cache-7a2bb77b-45b0-41b6-a9ae-27d62354c775" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:50:24 compute-0 nova_compute[253461]: 2025-11-22 03:50:24.081 253465 DEBUG nova.network.neutron [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 03:50:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Nov 22 03:50:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Nov 22 03:50:24 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Nov 22 03:50:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:24 compute-0 nova_compute[253461]: 2025-11-22 03:50:24.476 253465 DEBUG nova.network.neutron [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 03:50:24 compute-0 nova_compute[253461]: 2025-11-22 03:50:24.584 253465 DEBUG nova.compute.manager [req-233449c4-1631-47eb-b93e-ea4ee6c15a85 req-a41960d1-1d95-49e7-bac0-44c9a25877fc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Received event network-changed-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:50:24 compute-0 nova_compute[253461]: 2025-11-22 03:50:24.585 253465 DEBUG nova.compute.manager [req-233449c4-1631-47eb-b93e-ea4ee6c15a85 req-a41960d1-1d95-49e7-bac0-44c9a25877fc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Refreshing instance network info cache due to event network-changed-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:50:24 compute-0 nova_compute[253461]: 2025-11-22 03:50:24.585 253465 DEBUG oslo_concurrency.lockutils [req-233449c4-1631-47eb-b93e-ea4ee6c15a85 req-a41960d1-1d95-49e7-bac0-44c9a25877fc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-7a2bb77b-45b0-41b6-a9ae-27d62354c775" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:50:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Nov 22 03:50:25 compute-0 ceph-mon[75011]: pgmap v905: 305 pgs: 305 active+clean; 42 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.0 KiB/s wr, 66 op/s
Nov 22 03:50:25 compute-0 ceph-mon[75011]: osdmap e142: 3 total, 3 up, 3 in
Nov 22 03:50:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Nov 22 03:50:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Nov 22 03:50:25 compute-0 nova_compute[253461]: 2025-11-22 03:50:25.354 253465 DEBUG nova.network.neutron [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Updating instance_info_cache with network_info: [{"id": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "address": "fa:16:3e:c3:a0:b2", "network": {"id": "aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-1622374140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b513e0b5b0547e2835dc35495d5637f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f7c4fc-78", "ovs_interfaceid": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:50:25 compute-0 nova_compute[253461]: 2025-11-22 03:50:25.376 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Releasing lock "refresh_cache-7a2bb77b-45b0-41b6-a9ae-27d62354c775" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:50:25 compute-0 nova_compute[253461]: 2025-11-22 03:50:25.377 253465 DEBUG nova.compute.manager [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Instance network_info: |[{"id": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "address": "fa:16:3e:c3:a0:b2", "network": {"id": "aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-1622374140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b513e0b5b0547e2835dc35495d5637f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f7c4fc-78", "ovs_interfaceid": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 03:50:25 compute-0 nova_compute[253461]: 2025-11-22 03:50:25.377 253465 DEBUG oslo_concurrency.lockutils [req-233449c4-1631-47eb-b93e-ea4ee6c15a85 req-a41960d1-1d95-49e7-bac0-44c9a25877fc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-7a2bb77b-45b0-41b6-a9ae-27d62354c775" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:50:25 compute-0 nova_compute[253461]: 2025-11-22 03:50:25.377 253465 DEBUG nova.network.neutron [req-233449c4-1631-47eb-b93e-ea4ee6c15a85 req-a41960d1-1d95-49e7-bac0-44c9a25877fc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Refreshing network info cache for port b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:50:25 compute-0 nova_compute[253461]: 2025-11-22 03:50:25.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:50:25 compute-0 nova_compute[253461]: 2025-11-22 03:50:25.431 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:50:25 compute-0 nova_compute[253461]: 2025-11-22 03:50:25.431 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:50:25 compute-0 nova_compute[253461]: 2025-11-22 03:50:25.470 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 03:50:25 compute-0 nova_compute[253461]: 2025-11-22 03:50:25.471 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 03:50:25 compute-0 nova_compute[253461]: 2025-11-22 03:50:25.542 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d 7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:50:25 compute-0 nova_compute[253461]: 2025-11-22 03:50:25.624 253465 DEBUG nova.storage.rbd_utils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] resizing rbd image 7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 03:50:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 64 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.6 MiB/s wr, 44 op/s
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.005 253465 DEBUG nova.objects.instance [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lazy-loading 'migration_context' on Instance uuid 7a2bb77b-45b0-41b6-a9ae-27d62354c775 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.038 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.039 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Ensure instance console log exists: /var/lib/nova/instances/7a2bb77b-45b0-41b6-a9ae-27d62354c775/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.040 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.041 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.042 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.047 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Start _get_guest_xml network_info=[{"id": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "address": "fa:16:3e:c3:a0:b2", "network": {"id": "aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-1622374140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b513e0b5b0547e2835dc35495d5637f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f7c4fc-78", "ovs_interfaceid": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.054 253465 WARNING nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.060 253465 DEBUG nova.virt.libvirt.host [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.061 253465 DEBUG nova.virt.libvirt.host [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.065 253465 DEBUG nova.virt.libvirt.host [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.066 253465 DEBUG nova.virt.libvirt.host [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.067 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.067 253465 DEBUG nova.virt.hardware [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.068 253465 DEBUG nova.virt.hardware [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.068 253465 DEBUG nova.virt.hardware [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.069 253465 DEBUG nova.virt.hardware [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.069 253465 DEBUG nova.virt.hardware [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.070 253465 DEBUG nova.virt.hardware [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.070 253465 DEBUG nova.virt.hardware [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.071 253465 DEBUG nova.virt.hardware [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.071 253465 DEBUG nova.virt.hardware [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.072 253465 DEBUG nova.virt.hardware [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.072 253465 DEBUG nova.virt.hardware [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.087 253465 DEBUG nova.privsep.utils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.087 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:50:26 compute-0 ceph-mon[75011]: osdmap e143: 3 total, 3 up, 3 in
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.457 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.458 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.458 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.459 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.459 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:50:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:50:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1865353784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.613 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.637 253465 DEBUG nova.storage.rbd_utils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] rbd image 7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.641 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.817 253465 DEBUG nova.network.neutron [req-233449c4-1631-47eb-b93e-ea4ee6c15a85 req-a41960d1-1d95-49e7-bac0-44c9a25877fc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Updated VIF entry in instance network info cache for port b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.818 253465 DEBUG nova.network.neutron [req-233449c4-1631-47eb-b93e-ea4ee6c15a85 req-a41960d1-1d95-49e7-bac0-44c9a25877fc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Updating instance_info_cache with network_info: [{"id": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "address": "fa:16:3e:c3:a0:b2", "network": {"id": "aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-1622374140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b513e0b5b0547e2835dc35495d5637f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f7c4fc-78", "ovs_interfaceid": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.835 253465 DEBUG oslo_concurrency.lockutils [req-233449c4-1631-47eb-b93e-ea4ee6c15a85 req-a41960d1-1d95-49e7-bac0-44c9a25877fc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-7a2bb77b-45b0-41b6-a9ae-27d62354c775" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:50:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:50:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1212415619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:50:26 compute-0 nova_compute[253461]: 2025-11-22 03:50:26.887 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.105 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.107 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5084MB free_disk=59.97578048706055GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.107 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:50:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:50:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186654739' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.108 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.124 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.126 253465 DEBUG nova.virt.libvirt.vif [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:50:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1841714985',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1841714985',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1841714985',id=1,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMMPA2XHMYQjmnnfQBZV/dUcJHc5mtwtVDD9Nwnsp0M8EgjKTYwJyBvWYgHXCBmQJu0QPhrGh9rGfR0Dz+ovhIT+c4pbVwyTQ4tFbQI9rTt2/Gdbg3ApkbrhqdPNEU786g==',key_name='tempest-keypair-2121433138',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b513e0b5b0547e2835dc35495d5637f',ramdisk_id='',reservation_id='r-po7tcthz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-361238539',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-361238539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:50:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f358860e840943098fe9f91af8f7b08f',uuid=7a2bb77b-45b0-41b6-a9ae-27d62354c775,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "address": "fa:16:3e:c3:a0:b2", "network": {"id": "aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-1622374140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b513e0b5b0547e2835dc35495d5637f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f7c4fc-78", "ovs_interfaceid": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.127 253465 DEBUG nova.network.os_vif_util [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Converting VIF {"id": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "address": "fa:16:3e:c3:a0:b2", "network": {"id": "aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-1622374140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b513e0b5b0547e2835dc35495d5637f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f7c4fc-78", "ovs_interfaceid": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.128 253465 DEBUG nova.network.os_vif_util [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a0:b2,bridge_name='br-int',has_traffic_filtering=True,id=b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6,network=Network(aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f7c4fc-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.131 253465 DEBUG nova.objects.instance [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7a2bb77b-45b0-41b6-a9ae-27d62354c775 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.151 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] End _get_guest_xml xml=<domain type="kvm">
Nov 22 03:50:27 compute-0 nova_compute[253461]:   <uuid>7a2bb77b-45b0-41b6-a9ae-27d62354c775</uuid>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   <name>instance-00000001</name>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   <metadata>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <nova:name>tempest-EncryptedVolumesExtendAttachedTest-instance-1841714985</nova:name>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 03:50:26</nova:creationTime>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 03:50:27 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 03:50:27 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 03:50:27 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 03:50:27 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 03:50:27 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 03:50:27 compute-0 nova_compute[253461]:         <nova:user uuid="f358860e840943098fe9f91af8f7b08f">tempest-EncryptedVolumesExtendAttachedTest-361238539-project-member</nova:user>
Nov 22 03:50:27 compute-0 nova_compute[253461]:         <nova:project uuid="4b513e0b5b0547e2835dc35495d5637f">tempest-EncryptedVolumesExtendAttachedTest-361238539</nova:project>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 03:50:27 compute-0 nova_compute[253461]:         <nova:port uuid="b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6">
Nov 22 03:50:27 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   </metadata>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <system>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <entry name="serial">7a2bb77b-45b0-41b6-a9ae-27d62354c775</entry>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <entry name="uuid">7a2bb77b-45b0-41b6-a9ae-27d62354c775</entry>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     </system>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   <os>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   </os>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   <features>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <apic/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   </features>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   </clock>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk">
Nov 22 03:50:27 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       </source>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:50:27 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk.config">
Nov 22 03:50:27 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       </source>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:50:27 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:c3:a0:b2"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <target dev="tapb7f7c4fc-78"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/7a2bb77b-45b0-41b6-a9ae-27d62354c775/console.log" append="off"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     </serial>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <video>
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     </video>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 03:50:27 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 03:50:27 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 03:50:27 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:50:27 compute-0 nova_compute[253461]: </domain>
Nov 22 03:50:27 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.153 253465 DEBUG nova.compute.manager [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Preparing to wait for external event network-vif-plugged-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.153 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquiring lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.153 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.154 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.155 253465 DEBUG nova.virt.libvirt.vif [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:50:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1841714985',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1841714985',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1841714985',id=1,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMMPA2XHMYQjmnnfQBZV/dUcJHc5mtwtVDD9Nwnsp0M8EgjKTYwJyBvWYgHXCBmQJu0QPhrGh9rGfR0Dz+ovhIT+c4pbVwyTQ4tFbQI9rTt2/Gdbg3ApkbrhqdPNEU786g==',key_name='tempest-keypair-2121433138',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b513e0b5b0547e2835dc35495d5637f',ramdisk_id='',reservation_id='r-po7tcthz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-361238539',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-361238539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:50:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f358860e840943098fe9f91af8f7b08f',uuid=7a2bb77b-45b0-41b6-a9ae-27d62354c775,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "address": "fa:16:3e:c3:a0:b2", "network": {"id": "aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-1622374140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b513e0b5b0547e2835dc35495d5637f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f7c4fc-78", "ovs_interfaceid": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.155 253465 DEBUG nova.network.os_vif_util [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Converting VIF {"id": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "address": "fa:16:3e:c3:a0:b2", "network": {"id": "aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-1622374140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b513e0b5b0547e2835dc35495d5637f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f7c4fc-78", "ovs_interfaceid": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.156 253465 DEBUG nova.network.os_vif_util [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a0:b2,bridge_name='br-int',has_traffic_filtering=True,id=b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6,network=Network(aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f7c4fc-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.157 253465 DEBUG os_vif [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a0:b2,bridge_name='br-int',has_traffic_filtering=True,id=b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6,network=Network(aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f7c4fc-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
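
The three records above show the plug path: nova's VIF dict is converted to an os-vif VIFOpenVSwitch object and handed to os_vif.plug(). A minimal, untested sketch of that library-level call, using values from the converted object logged above (the object construction below omits subnets and the port profile, and field handling may vary by os-vif release):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the registered plugins, including 'ovs'

    net = network.Network(id='aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6',
        address='fa:16:3e:c3:a0:b2',
        bridge_name='br-int',
        vif_name='tapb7f7c4fc-78',
        network=net)
    info = instance_info.InstanceInfo(
        uuid='7a2bb77b-45b0-41b6-a9ae-27d62354c775',
        name='instance-00000001')

    os_vif.plug(ovs_vif, info)  # the entry point logged at os_vif/__init__.py:76
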
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.212 253465 DEBUG ovsdbapp.backend.ovs_idl [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.212 253465 DEBUG ovsdbapp.backend.ovs_idl [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.213 253465 DEBUG ovsdbapp.backend.ovs_idl [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.213 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.215 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.216 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.217 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.219 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.222 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.237 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.237 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.238 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
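
The records from 03:50:27.212 onward show the os-vif ovs plugin opening an OVSDB IDL session to tcp:127.0.0.1:6640 and committing an AddBridgeCommand with may_exist=True, which is why an existing br-int yields "Transaction caused no change". A sketch of the equivalent ovsdbapp calls (connection string taken from the log; the timeout is an arbitrary choice):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # idempotent: re-adding an existing bridge commits no changes
    api.add_br('br-int', may_exist=True,
               datapath_type='system').execute(check_error=True)
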
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.239 253465 INFO oslo.privsep.daemon [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp0onfu_n2/privsep.sock']
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.257 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 7a2bb77b-45b0-41b6-a9ae-27d62354c775 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.258 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.258 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.294 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:50:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:50:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2126700416' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:50:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2126700416' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:27 compute-0 ceph-mon[75011]: pgmap v908: 305 pgs: 305 active+clean; 64 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.6 MiB/s wr, 44 op/s
Nov 22 03:50:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1865353784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:50:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1212415619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:50:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1186654739' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:50:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2126700416' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:50:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2689474659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.790 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
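
The resource tracker shells out to ceph df for cluster capacity, as the two records above bracket. A sketch of the same probe in Python (stdlib only; the JSON field names below are what recent Ceph releases emit and should be treated as an assumption):

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)
    free_gib = stats['stats']['total_avail_bytes'] / 1024 ** 3
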
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.799 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating inventory in ProviderTree for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.832 253465 ERROR nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [req-56784292-5129-4ce5-8622-24969f304b8a] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 62e18608-eaef-4f09-8e92-06d41e51d580.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-56784292-5129-4ce5-8622-24969f304b8a"}]}
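
This 409 is placement's optimistic concurrency control: every inventory PUT must carry the provider generation the client last saw, and placement.concurrent_update means another writer bumped it first. The records that follow show the standard recovery: re-read the provider, then retry. A sketch of that read-then-retry loop against the Placement REST API (endpoint and token are hypothetical; the payload shape follows the placement API reference):

    import requests

    PLACEMENT = 'http://placement.example.com'  # hypothetical endpoint
    RP = '62e18608-eaef-4f09-8e92-06d41e51d580'
    HEADERS = {'X-Auth-Token': 'TOKEN',          # hypothetical token
               'OpenStack-API-Version': 'placement 1.26'}

    def put_inventories(inventories):
        url = f'{PLACEMENT}/resource_providers/{RP}/inventories'
        while True:
            # refresh to pick up the current generation
            gen = requests.get(url, headers=HEADERS).json()[
                'resource_provider_generation']
            resp = requests.put(url, headers=HEADERS, json={
                'resource_provider_generation': gen,
                'inventories': inventories})
            if resp.status_code != 409:  # 409 == placement.concurrent_update
                return resp
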
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.846 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing inventories for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.869 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating ProviderTree inventory for provider 62e18608-eaef-4f09-8e92-06d41e51d580 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.869 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating inventory in ProviderTree for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.893 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing aggregate associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.919 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing trait associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.968 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:50:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 73 op/s
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.990 253465 INFO oslo.privsep.daemon [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Spawned new privsep daemon via rootwrap
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.847 260874 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.852 260874 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.854 260874 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Nov 22 03:50:27 compute-0 nova_compute[253461]: 2025-11-22 03:50:27.855 260874 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260874
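
The four records above (note the earlier timestamps: they come from the child process and are flushed through the parent's log) show the privsep daemon for the vif_plug_ovs.privsep.vif_plug context starting as root but holding only CAP_DAC_OVERRIDE and CAP_NET_ADMIN. A sketch of how such a context is declared with oslo.privsep (the function and module names here are illustrative, not the actual vif_plug_ovs layout):

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    vif_plug = priv_context.PrivContext(
        'vif_plug_ovs',
        cfg_section='vif_plug_ovs_privileged',
        pypath=__name__ + '.vif_plug',
        capabilities=[capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_DAC_OVERRIDE])

    @vif_plug.entrypoint
    def set_device_mtu(dev, mtu):
        # body runs inside the daemon: uid/gid 0/0, caps limited to the above
        pass
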
Nov 22 03:50:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:50:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3859154088' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:50:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3859154088' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:28 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2126700416' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:28 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2689474659' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:50:28 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3859154088' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:28 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3859154088' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.314 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.315 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7f7c4fc-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.316 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7f7c4fc-78, col_values=(('external_ids', {'iface-id': 'b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c3:a0:b2', 'vm-uuid': '7a2bb77b-45b0-41b6-a9ae-27d62354c775'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
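
The two-command transaction above creates the tap port on br-int and stamps Interface.external_ids with the keys (iface-id, attached-mac, vm-uuid) that let OVN match the interface to the Neutron port binding claimed a moment later. A sketch of the same transaction, assuming the api object from the earlier ovsdbapp sketch:

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapb7f7c4fc-78', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapb7f7c4fc-78',
            ('external_ids', {
                'iface-id': 'b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:c3:a0:b2',
                'vm-uuid': '7a2bb77b-45b0-41b6-a9ae-27d62354c775'})))
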
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.318 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:28 compute-0 NetworkManager[48916]: <info>  [1763783428.3197] manager: (tapb7f7c4fc-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.323 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.325 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.325 253465 INFO os_vif [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c3:a0:b2,bridge_name='br-int',has_traffic_filtering=True,id=b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6,network=Network(aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f7c4fc-78')
Nov 22 03:50:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:50:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1912232286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.383 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.384 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.384 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] No VIF found with MAC fa:16:3e:c3:a0:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.385 253465 INFO nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Using config drive
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.407 253465 DEBUG nova.storage.rbd_utils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] rbd image 7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.411 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.416 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating inventory in ProviderTree for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.472 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updated inventory for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.473 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating resource provider 62e18608-eaef-4f09-8e92-06d41e51d580 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.473 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating inventory in ProviderTree for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.491 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.492 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.384s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:50:28 compute-0 nova_compute[253461]: 2025-11-22 03:50:28.998 253465 INFO nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Creating config drive at /var/lib/nova/instances/7a2bb77b-45b0-41b6-a9ae-27d62354c775/disk.config
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.003 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7a2bb77b-45b0-41b6-a9ae-27d62354c775/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprywswyaj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.136 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7a2bb77b-45b0-41b6-a9ae-27d62354c775/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprywswyaj" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
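
The mkisofs command above is logged with its argv space-joined, which is why the multi-word -publisher value appears unquoted; passed as an argv list there is no quoting problem. A sketch of the same invocation (flags and paths copied from the log record; /tmp/tmprywswyaj is the transient metadata staging directory Nova built first):

    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', '/var/lib/nova/instances/7a2bb77b-45b0-41b6-a9ae-27d62354c775/disk.config',
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2',
         '/tmp/tmprywswyaj'],
        check=True)
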
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.167 253465 DEBUG nova.storage.rbd_utils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] rbd image 7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.172 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7a2bb77b-45b0-41b6-a9ae-27d62354c775/disk.config 7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:50:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e143 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Nov 22 03:50:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Nov 22 03:50:29 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Nov 22 03:50:29 compute-0 ceph-mon[75011]: pgmap v909: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 73 op/s
Nov 22 03:50:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1912232286' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.327411) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783429327462, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 372, "num_deletes": 250, "total_data_size": 223316, "memory_usage": 231184, "flush_reason": "Manual Compaction"}
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783429333222, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 221255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18563, "largest_seqno": 18934, "table_properties": {"data_size": 218877, "index_size": 478, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6234, "raw_average_key_size": 19, "raw_value_size": 214061, "raw_average_value_size": 683, "num_data_blocks": 22, "num_entries": 313, "num_filter_entries": 313, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763783422, "oldest_key_time": 1763783422, "file_creation_time": 1763783429, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 5811 microseconds, and 881 cpu microseconds.
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.333247) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 221255 bytes OK
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.333259) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.335639) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.335666) EVENT_LOG_v1 {"time_micros": 1763783429335657, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.335684) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 220834, prev total WAL file size 220834, number of live WAL files 2.
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.336730) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373533' seq:0, type:0; will stop at (end)
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(216KB)], [41(9269KB)]
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783429336784, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9713075, "oldest_snapshot_seqno": -1}
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.356 253465 DEBUG oslo_concurrency.processutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7a2bb77b-45b0-41b6-a9ae-27d62354c775/disk.config 7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.357 253465 INFO nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Deleting local config drive /var/lib/nova/instances/7a2bb77b-45b0-41b6-a9ae-27d62354c775/disk.config because it was imported into RBD.
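
With RBD-backed instance storage, the config drive is built locally, pushed into the vms pool as a format-2 RBD image, and the local copy deleted, exactly the sequence logged above. A sketch of the import step (argv copied from the log record):

    import subprocess

    subprocess.run(
        ['rbd', 'import', '--pool', 'vms',
         '/var/lib/nova/instances/7a2bb77b-45b0-41b6-a9ae-27d62354c775/disk.config',
         '7a2bb77b-45b0-41b6-a9ae-27d62354c775_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)
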
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4313 keys, 6413217 bytes, temperature: kUnknown
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783429377988, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6413217, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6384768, "index_size": 16563, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10821, "raw_key_size": 105627, "raw_average_key_size": 24, "raw_value_size": 6307001, "raw_average_value_size": 1462, "num_data_blocks": 694, "num_entries": 4313, "num_filter_entries": 4313, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763783429, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.378167) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6413217 bytes
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.379297) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.5 rd, 155.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.1 +0.0 blob) out(6.1 +0.0 blob), read-write-amplify(72.9) write-amplify(29.0) OK, records in: 4826, records dropped: 513 output_compression: NoCompression
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.379313) EVENT_LOG_v1 {"time_micros": 1763783429379306, "job": 20, "event": "compaction_finished", "compaction_time_micros": 41252, "compaction_time_cpu_micros": 25215, "output_level": 6, "num_output_files": 1, "total_output_size": 6413217, "num_input_records": 4826, "num_output_records": 4313, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783429379418, "job": 20, "event": "table_file_deletion", "file_number": 43}
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783429380741, "job": 20, "event": "table_file_deletion", "file_number": 41}
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.336628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.380764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.380767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.380768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.380770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:50:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:50:29.380771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
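
The ceph-mon rocksdb run above embeds machine-readable EVENT_LOG_v1 JSON payloads (flush_started, table_file_creation, compaction_finished, table_file_deletion). A small parser sketch for pulling those payloads out of a journal excerpt like this one (file name is a placeholder):

    import json
    import re

    EVENT = re.compile(r'EVENT_LOG_v1 (\{.*\})\s*$')

    def rocksdb_events(lines):
        """Yield the JSON payload of each EVENT_LOG_v1 record."""
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield json.loads(m.group(1))

    # e.g. total bytes written by finished compactions in this excerpt:
    # sum(e.get('total_output_size', 0)
    #     for e in rocksdb_events(open('mon.log'))
    #     if e.get('event') == 'compaction_finished')
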
Nov 22 03:50:29 compute-0 systemd[1]: Starting libvirt secret daemon...
Nov 22 03:50:29 compute-0 systemd[1]: Started libvirt secret daemon.
Nov 22 03:50:29 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.488 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.488 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:50:29 compute-0 NetworkManager[48916]: <info>  [1763783429.4921] manager: (tapb7f7c4fc-78): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Nov 22 03:50:29 compute-0 kernel: tapb7f7c4fc-78: entered promiscuous mode
Nov 22 03:50:29 compute-0 ovn_controller[152691]: 2025-11-22T03:50:29Z|00027|binding|INFO|Claiming lport b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 for this chassis.
Nov 22 03:50:29 compute-0 ovn_controller[152691]: 2025-11-22T03:50:29Z|00028|binding|INFO|b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6: Claiming fa:16:3e:c3:a0:b2 10.100.0.5
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.540 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.546 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.547 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.548 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.548 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.548 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 03:50:29 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:29.555 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:a0:b2 10.100.0.5'], port_security=['fa:16:3e:c3:a0:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7a2bb77b-45b0-41b6-a9ae-27d62354c775', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b513e0b5b0547e2835dc35495d5637f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '53875ba9-d4ce-4815-aff2-34f2c670521d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ebb1d3-8e5d-45d0-be1c-055784d08c57, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:50:29 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:29.557 162689 INFO neutron.agent.ovn.metadata.agent [-] Port b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 in datapath aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff bound to our chassis
Nov 22 03:50:29 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:29.560 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff
Nov 22 03:50:29 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:29.562 162689 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpm2aa2zr1/privsep.sock']
Nov 22 03:50:29 compute-0 systemd-udevd[260996]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:50:29 compute-0 NetworkManager[48916]: <info>  [1763783429.5888] device (tapb7f7c4fc-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:50:29 compute-0 NetworkManager[48916]: <info>  [1763783429.5895] device (tapb7f7c4fc-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 03:50:29 compute-0 systemd-machined[215728]: New machine qemu-1-instance-00000001.
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.623 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:29 compute-0 ovn_controller[152691]: 2025-11-22T03:50:29Z|00029|binding|INFO|Setting lport b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 ovn-installed in OVS
Nov 22 03:50:29 compute-0 ovn_controller[152691]: 2025-11-22T03:50:29Z|00030|binding|INFO|Setting lport b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 up in Southbound
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.629 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:29 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.925 253465 DEBUG nova.compute.manager [req-a735bc2b-f33d-4bb5-93cb-89205a299d6f req-f8528232-1b30-4dde-a9a5-cec9213e2cdb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Received event network-vif-plugged-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.928 253465 DEBUG oslo_concurrency.lockutils [req-a735bc2b-f33d-4bb5-93cb-89205a299d6f req-f8528232-1b30-4dde-a9a5-cec9213e2cdb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.929 253465 DEBUG oslo_concurrency.lockutils [req-a735bc2b-f33d-4bb5-93cb-89205a299d6f req-f8528232-1b30-4dde-a9a5-cec9213e2cdb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.929 253465 DEBUG oslo_concurrency.lockutils [req-a735bc2b-f33d-4bb5-93cb-89205a299d6f req-f8528232-1b30-4dde-a9a5-cec9213e2cdb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:50:29 compute-0 nova_compute[253461]: 2025-11-22 03:50:29.930 253465 DEBUG nova.compute.manager [req-a735bc2b-f33d-4bb5-93cb-89205a299d6f req-f8528232-1b30-4dde-a9a5-cec9213e2cdb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Processing event network-vif-plugged-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 03:50:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 67 op/s
Nov 22 03:50:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:30.256 162689 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 22 03:50:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:30.257 162689 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpm2aa2zr1/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.258 253465 DEBUG nova.compute.manager [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 03:50:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:30.125 261050 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 03:50:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:30.130 261050 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 03:50:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:30.134 261050 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Nov 22 03:50:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:30.135 261050 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261050
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.261 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783430.2606027, 7a2bb77b-45b0-41b6-a9ae-27d62354c775 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.261 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] VM Started (Lifecycle Event)
Nov 22 03:50:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:30.262 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c45b5da3-fa09-4aff-9282-dc7ea0ad9352]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.287 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.292 253465 INFO nova.virt.libvirt.driver [-] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Instance spawned successfully.
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.292 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.306 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.316 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.322 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.322 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.323 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.324 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.325 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.325 253465 DEBUG nova.virt.libvirt.driver [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
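[editor's note] The six "Found default for ..." lines above show the libvirt driver recording bus and device-model defaults into the instance record so they stay stable across upgrades. As a hedged sketch (not from this log), the same hw_* properties can be pinned explicitly on a Glance image with openstacksdk; the cloud profile and image name here are hypothetical:

```python
# Hedged sketch, not taken from this log: pin the same hw_* properties on a
# Glance image with openstacksdk so instances do not depend on per-release
# libvirt defaults. The cloud profile and image name are hypothetical.
import openstack

conn = openstack.connect(cloud="envvars")       # reads OS_* environment variables
image = conn.image.find_image("test-image")     # hypothetical image name
conn.image.update_image(
    image,
    hw_cdrom_bus="sata",          # the defaults found in the lines above
    hw_disk_bus="virtio",
    hw_input_bus="usb",
    hw_pointer_model="usbtablet",
    hw_video_model="virtio",
    hw_vif_model="virtio",
)
```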
Nov 22 03:50:30 compute-0 ceph-mon[75011]: osdmap e144: 3 total, 3 up, 3 in
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.336 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.337 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783430.2623317, 7a2bb77b-45b0-41b6-a9ae-27d62354c775 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.337 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] VM Paused (Lifecycle Event)
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.372 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.377 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783430.285812, 7a2bb77b-45b0-41b6-a9ae-27d62354c775 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.377 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] VM Resumed (Lifecycle Event)
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.403 253465 INFO nova.compute.manager [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Took 9.19 seconds to spawn the instance on the hypervisor.
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.404 253465 DEBUG nova.compute.manager [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.405 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.416 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.451 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] During sync_power_state the instance has a pending task (spawning). Skip.
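[editor's note] The "Synchronizing instance power state" / "Skip" pairs above reflect nova's rule that a lifecycle-event sync never overrides an instance with a pending task. A minimal sketch of that decision, assuming nova's power-state constants (NOSTATE=0, RUNNING=1); this mirrors the logged behavior, not nova's actual code:

```python
# Minimal sketch of the sync decision logged above; not nova's actual code.
# Nova's power-state constants: NOSTATE = 0, RUNNING = 1.
NOSTATE, RUNNING = 0, 1

def sync_power_state(db_power_state, vm_power_state, task_state):
    # "During sync_power_state the instance has a pending task (...). Skip."
    if task_state is not None:
        return "skip"
    # Otherwise reconcile the DB with what the hypervisor reports (0 -> 1 here).
    if db_power_state != vm_power_state:
        return "update-db"
    return "in-sync"

assert sync_power_state(NOSTATE, RUNNING, "spawning") == "skip"
```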
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.482 253465 INFO nova.compute.manager [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Took 10.24 seconds to build instance.
Nov 22 03:50:30 compute-0 nova_compute[253461]: 2025-11-22 03:50:30.499 253465 DEBUG oslo_concurrency.lockutils [None req-0f1ed6d8-fee4-4993-9599-56a3e0ff61fb f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.447s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:50:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:31.017 261050 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:50:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:31.017 261050 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:50:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:31.017 261050 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:50:31 compute-0 ceph-mon[75011]: pgmap v911: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.5 MiB/s wr, 67 op/s
Nov 22 03:50:31 compute-0 nova_compute[253461]: 2025-11-22 03:50:31.398 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:31.784 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[7087faa5-659d-4630-8917-2b784545c9f8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:31.785 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaec0e7f2-41 in ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 03:50:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:31.787 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaec0e7f2-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 03:50:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:31.787 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e78dd376-37b9-429f-87d5-73a503bc6906]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:31.791 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4d279218-d486-4c5f-9da5-d6ce30e360c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:31.815 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[41f72fcf-fc57-4889-81a7-f4c1fd0dd8de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:31.838 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[05ff2d4d-c645-48a6-8c20-43386ed069b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:31.840 162689 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpb8fxcmk8/privsep.sock']
Nov 22 03:50:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.8 MiB/s wr, 81 op/s
Nov 22 03:50:32 compute-0 nova_compute[253461]: 2025-11-22 03:50:32.043 253465 DEBUG nova.compute.manager [req-85ba7705-b557-40eb-8d1c-912d1b1c9d61 req-9c4fde0e-dcf6-4498-803e-8d5999f284b4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Received event network-vif-plugged-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:50:32 compute-0 nova_compute[253461]: 2025-11-22 03:50:32.043 253465 DEBUG oslo_concurrency.lockutils [req-85ba7705-b557-40eb-8d1c-912d1b1c9d61 req-9c4fde0e-dcf6-4498-803e-8d5999f284b4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:50:32 compute-0 nova_compute[253461]: 2025-11-22 03:50:32.044 253465 DEBUG oslo_concurrency.lockutils [req-85ba7705-b557-40eb-8d1c-912d1b1c9d61 req-9c4fde0e-dcf6-4498-803e-8d5999f284b4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:50:32 compute-0 nova_compute[253461]: 2025-11-22 03:50:32.044 253465 DEBUG oslo_concurrency.lockutils [req-85ba7705-b557-40eb-8d1c-912d1b1c9d61 req-9c4fde0e-dcf6-4498-803e-8d5999f284b4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
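[editor's note] The Acquiring/acquired/"released" triple around "...-events" above is oslo.concurrency's lockutils instrumentation. A minimal sketch of the same pattern, assuming the oslo.concurrency package:

```python
# Minimal sketch of the lock pattern behind the three lockutils lines above.
# The lock name mirrors the "<instance-uuid>-events" convention in the log.
from oslo_concurrency import lockutils

@lockutils.synchronized("7a2bb77b-45b0-41b6-a9ae-27d62354c775-events")
def pop_event(events, name):
    # Body runs with the named lock held; oslo emits the acquire/held/released
    # DEBUG lines seen above around this call.
    return events.pop(name, None)
```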
Nov 22 03:50:32 compute-0 nova_compute[253461]: 2025-11-22 03:50:32.045 253465 DEBUG nova.compute.manager [req-85ba7705-b557-40eb-8d1c-912d1b1c9d61 req-9c4fde0e-dcf6-4498-803e-8d5999f284b4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] No waiting events found dispatching network-vif-plugged-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:50:32 compute-0 nova_compute[253461]: 2025-11-22 03:50:32.045 253465 WARNING nova.compute.manager [req-85ba7705-b557-40eb-8d1c-912d1b1c9d61 req-9c4fde0e-dcf6-4498-803e-8d5999f284b4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Received unexpected event network-vif-plugged-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 for instance with vm_state active and task_state None.
Nov 22 03:50:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:32.680 162689 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 22 03:50:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:32.682 162689 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpb8fxcmk8/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 22 03:50:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:32.549 261069 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 03:50:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:32.557 261069 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 03:50:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:32.562 261069 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 22 03:50:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:32.563 261069 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261069
Nov 22 03:50:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:32.686 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ddf97e-4539-4595-92a8-5815a47e682d]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.189 261069 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.189 261069 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.189 261069 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:50:33 compute-0 nova_compute[253461]: 2025-11-22 03:50:33.359 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:33 compute-0 ceph-mon[75011]: pgmap v912: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.8 MiB/s wr, 81 op/s
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.724 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8e377a-87e1-46cf-9ff4-9c2942c39b9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:33 compute-0 NetworkManager[48916]: <info>  [1763783433.7517] manager: (tapaec0e7f2-40): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.749 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ff0b3a-da63-41a0-9e2e-118589c85c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:33 compute-0 NetworkManager[48916]: <info>  [1763783433.7718] manager: (patch-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Nov 22 03:50:33 compute-0 NetworkManager[48916]: <info>  [1763783433.7730] device (patch-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:50:33 compute-0 nova_compute[253461]: 2025-11-22 03:50:33.771 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:33 compute-0 NetworkManager[48916]: <info>  [1763783433.7758] manager: (patch-br-int-to-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Nov 22 03:50:33 compute-0 NetworkManager[48916]: <info>  [1763783433.7772] device (patch-br-int-to-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.779 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[06a1b40e-9011-44a9-bbbb-396dcd78054f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:33 compute-0 NetworkManager[48916]: <info>  [1763783433.7807] manager: (patch-br-int-to-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 22 03:50:33 compute-0 NetworkManager[48916]: <info>  [1763783433.7827] manager: (patch-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.782 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b3581d-e7cc-4ebe-a298-ff5108558eee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:33 compute-0 NetworkManager[48916]: <info>  [1763783433.7842] device (patch-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 22 03:50:33 compute-0 NetworkManager[48916]: <info>  [1763783433.7855] device (patch-br-int-to-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 22 03:50:33 compute-0 systemd-udevd[261082]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:50:33 compute-0 NetworkManager[48916]: <info>  [1763783433.8166] device (tapaec0e7f2-40): carrier: link connected
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.823 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[ed6ad1ff-febd-4edd-884f-25f5d1eed8d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.840 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[04aa9494-227d-4ddf-97f2-bcf22e48453b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaec0e7f2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:f9:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 368641, 'reachable_time': 32307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261099, 'error': None, 'target': 'ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
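[editor's note] The privsep reply above is a netlink RTM_NEWLINK record for tapaec0e7f2-41 inside the ovnmeta namespace. A hedged sketch of reading the same attributes with pyroute2 (the library neutron's privileged ip_lib wraps), assuming the namespace exists on the host:

```python
# Hedged sketch, assuming pyroute2 is installed and the namespace exists:
# read the RTM_NEWLINK attributes shown in the privsep reply above.
from pyroute2 import NetNS

with NetNS("ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff") as ns:
    for link in ns.get_links():
        print(link.get_attr("IFLA_IFNAME"),    # e.g. 'tapaec0e7f2-41'
              link.get_attr("IFLA_ADDRESS"),   # e.g. 'fa:16:3e:7b:f9:76'
              link.get_attr("IFLA_MTU"),       # e.g. 1500
              link["state"])                   # e.g. 'up'
```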
Nov 22 03:50:33 compute-0 nova_compute[253461]: 2025-11-22 03:50:33.852 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.854 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[434eb78f-4a95-4fca-915e-74c62c6d0d9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:f976'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 368641, 'tstamp': 368641}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261100, 'error': None, 'target': 'ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:33 compute-0 nova_compute[253461]: 2025-11-22 03:50:33.863 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.867 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1b61f878-712b-45f6-a006-e7bd968dde8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaec0e7f2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:f9:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 368641, 'reachable_time': 32307, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 261102, 'error': None, 'target': 'ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.889 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5f54d871-2e4e-4e05-a38d-6dc0b93db6ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.930 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[232310c0-87b1-45b6-ad04-280e70fda406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.931 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaec0e7f2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.931 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.932 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaec0e7f2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:50:33 compute-0 kernel: tapaec0e7f2-40: entered promiscuous mode
Nov 22 03:50:33 compute-0 NetworkManager[48916]: <info>  [1763783433.9343] manager: (tapaec0e7f2-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Nov 22 03:50:33 compute-0 nova_compute[253461]: 2025-11-22 03:50:33.933 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:33 compute-0 nova_compute[253461]: 2025-11-22 03:50:33.935 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.937 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaec0e7f2-40, col_values=(('external_ids', {'iface-id': 'd7da4432-802e-4fa3-aab1-dce8b507640c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
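[editor's note] The DelPortCommand/AddPortCommand/DbSetCommand transaction above moves the metadata tap onto br-int and tags it with the OVN port's iface-id. For illustration only, the equivalent ovs-vsctl sequence driven from Python; neutron itself performs these operations over the OVSDB protocol via ovsdbapp:

```python
# Illustration only: the ovs-vsctl equivalent of the three ovsdbapp commands
# logged above. Neutron speaks OVSDB directly instead of shelling out.
import subprocess

port = "tapaec0e7f2-40"
iface_id = "d7da4432-802e-4fa3-aab1-dce8b507640c"

subprocess.run(["ovs-vsctl", "--if-exists", "del-port", "br-ex", port], check=True)
subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", port], check=True)
subprocess.run(["ovs-vsctl", "set", "Interface", port,
                f"external_ids:iface-id={iface_id}"], check=True)
```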
Nov 22 03:50:33 compute-0 nova_compute[253461]: 2025-11-22 03:50:33.938 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:33 compute-0 ovn_controller[152691]: 2025-11-22T03:50:33Z|00031|binding|INFO|Releasing lport d7da4432-802e-4fa3-aab1-dce8b507640c from this chassis (sb_readonly=0)
Nov 22 03:50:33 compute-0 nova_compute[253461]: 2025-11-22 03:50:33.940 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.941 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.942 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d715820c-d8ae-401c-9464-a85af62bf84b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.943 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: global
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff.pid.haproxy
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 03:50:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:33.944 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff', 'env', 'PROCESS_TAG=haproxy-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
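[editor's note] The haproxy_cfg dump and the rootwrap command above show the agent rendering a per-network proxy config and then starting haproxy inside the ovnmeta namespace. A hedged sketch of the render step using string.Template rather than neutron's actual template code; the defaults section is omitted for brevity:

```python
# Hedged sketch of the config-render step; values mirror the config dumped
# above. Neutron's real template also carries the defaults section.
from string import Template

HAPROXY_TMPL = Template("""\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-$network_id
    user        root
    group       root
    maxconn     1024
    pidfile     $pidfile
    daemon

listen listener
    bind 169.254.169.254:80
    server metadata $socket
    http-request add-header X-OVN-Network-ID $network_id
""")

net = "aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff"
print(HAPROXY_TMPL.substitute(
    network_id=net,
    pidfile=f"/var/lib/neutron/external/pids/{net}.pid.haproxy",
    socket="/var/lib/neutron/metadata_proxy",
))
```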
Nov 22 03:50:33 compute-0 nova_compute[253461]: 2025-11-22 03:50:33.954 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 117 op/s
Nov 22 03:50:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:34 compute-0 podman[261134]: 2025-11-22 03:50:34.3526387 +0000 UTC m=+0.054434354 container create d684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:50:34 compute-0 nova_compute[253461]: 2025-11-22 03:50:34.358 253465 DEBUG nova.compute.manager [req-138af262-0ff2-4778-a6a1-a604c9817cf6 req-cfafa3ca-3206-409f-bb7c-eb6a2ef2c9c5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Received event network-changed-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:50:34 compute-0 nova_compute[253461]: 2025-11-22 03:50:34.359 253465 DEBUG nova.compute.manager [req-138af262-0ff2-4778-a6a1-a604c9817cf6 req-cfafa3ca-3206-409f-bb7c-eb6a2ef2c9c5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Refreshing instance network info cache due to event network-changed-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:50:34 compute-0 nova_compute[253461]: 2025-11-22 03:50:34.359 253465 DEBUG oslo_concurrency.lockutils [req-138af262-0ff2-4778-a6a1-a604c9817cf6 req-cfafa3ca-3206-409f-bb7c-eb6a2ef2c9c5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-7a2bb77b-45b0-41b6-a9ae-27d62354c775" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:50:34 compute-0 nova_compute[253461]: 2025-11-22 03:50:34.359 253465 DEBUG oslo_concurrency.lockutils [req-138af262-0ff2-4778-a6a1-a604c9817cf6 req-cfafa3ca-3206-409f-bb7c-eb6a2ef2c9c5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-7a2bb77b-45b0-41b6-a9ae-27d62354c775" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:50:34 compute-0 nova_compute[253461]: 2025-11-22 03:50:34.360 253465 DEBUG nova.network.neutron [req-138af262-0ff2-4778-a6a1-a604c9817cf6 req-cfafa3ca-3206-409f-bb7c-eb6a2ef2c9c5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Refreshing network info cache for port b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:50:34 compute-0 systemd[1]: Started libpod-conmon-d684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f.scope.
Nov 22 03:50:34 compute-0 podman[261134]: 2025-11-22 03:50:34.324091784 +0000 UTC m=+0.025887448 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:50:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:50:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/381af0d619581318afe49de70c92750fdbe84c27790a36e839952a35abedf347/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 03:50:34 compute-0 podman[261134]: 2025-11-22 03:50:34.46490507 +0000 UTC m=+0.166700764 container init d684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 03:50:34 compute-0 podman[261134]: 2025-11-22 03:50:34.471700198 +0000 UTC m=+0.173495852 container start d684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 03:50:34 compute-0 neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff[261150]: [NOTICE]   (261160) : New worker (261171) forked
Nov 22 03:50:34 compute-0 neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff[261150]: [NOTICE]   (261160) : Loading success.
Nov 22 03:50:34 compute-0 podman[261153]: 2025-11-22 03:50:34.53320393 +0000 UTC m=+0.068172096 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 03:50:34 compute-0 podman[261182]: 2025-11-22 03:50:34.649651264 +0000 UTC m=+0.091951851 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 22 03:50:35 compute-0 ceph-mon[75011]: pgmap v913: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 117 op/s
Nov 22 03:50:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 883 KiB/s wr, 129 op/s
Nov 22 03:50:36 compute-0 nova_compute[253461]: 2025-11-22 03:50:36.015 253465 DEBUG nova.network.neutron [req-138af262-0ff2-4778-a6a1-a604c9817cf6 req-cfafa3ca-3206-409f-bb7c-eb6a2ef2c9c5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Updated VIF entry in instance network info cache for port b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:50:36 compute-0 nova_compute[253461]: 2025-11-22 03:50:36.016 253465 DEBUG nova.network.neutron [req-138af262-0ff2-4778-a6a1-a604c9817cf6 req-cfafa3ca-3206-409f-bb7c-eb6a2ef2c9c5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Updating instance_info_cache with network_info: [{"id": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "address": "fa:16:3e:c3:a0:b2", "network": {"id": "aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-1622374140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b513e0b5b0547e2835dc35495d5637f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f7c4fc-78", "ovs_interfaceid": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:50:36 compute-0 nova_compute[253461]: 2025-11-22 03:50:36.036 253465 DEBUG oslo_concurrency.lockutils [req-138af262-0ff2-4778-a6a1-a604c9817cf6 req-cfafa3ca-3206-409f-bb7c-eb6a2ef2c9c5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-7a2bb77b-45b0-41b6-a9ae-27d62354c775" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:50:36
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'backups', '.rgw.root', 'volumes', 'images', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'vms']
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:50:36 compute-0 nova_compute[253461]: 2025-11-22 03:50:36.400 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:50:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:50:37 compute-0 ceph-mon[75011]: pgmap v914: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 883 KiB/s wr, 129 op/s
Nov 22 03:50:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 105 op/s
Nov 22 03:50:38 compute-0 nova_compute[253461]: 2025-11-22 03:50:38.361 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:39 compute-0 ceph-mon[75011]: pgmap v915: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 18 KiB/s wr, 105 op/s
Nov 22 03:50:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 17 KiB/s wr, 99 op/s
Nov 22 03:50:41 compute-0 nova_compute[253461]: 2025-11-22 03:50:41.403 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:41 compute-0 ceph-mon[75011]: pgmap v916: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 17 KiB/s wr, 99 op/s
Nov 22 03:50:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 88 op/s
Nov 22 03:50:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:50:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3715402486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:50:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3715402486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:42 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3715402486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:42 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3715402486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
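[editor's note] The audited "df" and "osd pool get-quota" commands above come from entity client.openstack, i.e. the Cinder RBD driver polling pool capacity. A hedged sketch issuing the same two mon commands with python-rados, assuming a readable ceph.conf and keyring for that client:

```python
# Hedged sketch, assuming python-rados plus a readable ceph.conf and keyring:
# issue the same two mon commands that the audit lines above record.
import json
import rados

with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack") as cluster:
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret, out[:80])
```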
Nov 22 03:50:43 compute-0 nova_compute[253461]: 2025-11-22 03:50:43.365 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:43 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 03:50:43 compute-0 ceph-mon[75011]: pgmap v917: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 88 op/s
Nov 22 03:50:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 426 B/s wr, 80 op/s
Nov 22 03:50:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:44 compute-0 sshd-session[261214]: error: kex_exchange_identification: read: Connection reset by peer
Nov 22 03:50:44 compute-0 sshd-session[261214]: Connection reset by 45.140.17.97 port 33043
Nov 22 03:50:44 compute-0 ovn_controller[152691]: 2025-11-22T03:50:44Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c3:a0:b2 10.100.0.5
Nov 22 03:50:44 compute-0 ovn_controller[152691]: 2025-11-22T03:50:44Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c3:a0:b2 10.100.0.5
Nov 22 03:50:45 compute-0 ceph-mon[75011]: pgmap v918: 305 pgs: 305 active+clean; 88 MiB data, 212 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 426 B/s wr, 80 op/s
Nov 22 03:50:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 100 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 943 KiB/s wr, 92 op/s
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005274213306635411 of space, bias 1.0, pg target 0.1582263991990623 quantized to 32 (current 32)
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 4.578530963078727e-06 of space, bias 1.0, pg target 0.001373559288923618 quantized to 32 (current 32)
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:50:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:50:46 compute-0 nova_compute[253461]: 2025-11-22 03:50:46.447 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:50:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/89284236' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:50:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/89284236' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 22 03:50:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:50:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/131682975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:50:48 compute-0 ceph-mon[75011]: pgmap v919: 305 pgs: 305 active+clean; 100 MiB data, 240 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 943 KiB/s wr, 92 op/s
Nov 22 03:50:48 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/89284236' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:48 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/89284236' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:48 compute-0 nova_compute[253461]: 2025-11-22 03:50:48.368 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:48 compute-0 podman[261215]: 2025-11-22 03:50:48.380415791 +0000 UTC m=+0.062418161 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 03:50:49 compute-0 ceph-mon[75011]: pgmap v920: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 347 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Nov 22 03:50:49 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/131682975' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:50:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Nov 22 03:50:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Nov 22 03:50:49 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Nov 22 03:50:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:49.441 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:50:49 compute-0 nova_compute[253461]: 2025-11-22 03:50:49.443 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:49.443 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 03:50:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 416 KiB/s rd, 2.5 MiB/s wr, 112 op/s
Nov 22 03:50:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Nov 22 03:50:50 compute-0 ceph-mon[75011]: osdmap e145: 3 total, 3 up, 3 in
Nov 22 03:50:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Nov 22 03:50:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Nov 22 03:50:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Nov 22 03:50:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Nov 22 03:50:51 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Nov 22 03:50:51 compute-0 ceph-mon[75011]: pgmap v922: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 416 KiB/s rd, 2.5 MiB/s wr, 112 op/s
Nov 22 03:50:51 compute-0 ceph-mon[75011]: osdmap e146: 3 total, 3 up, 3 in
Nov 22 03:50:51 compute-0 nova_compute[253461]: 2025-11-22 03:50:51.451 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 2.4 MiB/s wr, 83 op/s
Nov 22 03:50:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Nov 22 03:50:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Nov 22 03:50:52 compute-0 ceph-mon[75011]: osdmap e147: 3 total, 3 up, 3 in
Nov 22 03:50:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Nov 22 03:50:53 compute-0 ceph-mon[75011]: pgmap v925: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 2.4 MiB/s wr, 83 op/s
Nov 22 03:50:53 compute-0 ceph-mon[75011]: osdmap e148: 3 total, 3 up, 3 in
Nov 22 03:50:53 compute-0 nova_compute[253461]: 2025-11-22 03:50:53.403 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 33 KiB/s wr, 36 op/s
Nov 22 03:50:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Nov 22 03:50:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:50:54.447 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:50:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Nov 22 03:50:54 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Nov 22 03:50:55 compute-0 ceph-mon[75011]: pgmap v927: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 33 KiB/s wr, 36 op/s
Nov 22 03:50:55 compute-0 ceph-mon[75011]: osdmap e149: 3 total, 3 up, 3 in
Nov 22 03:50:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 28 KiB/s wr, 33 op/s
Nov 22 03:50:56 compute-0 nova_compute[253461]: 2025-11-22 03:50:56.453 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:50:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2172856887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:50:57 compute-0 ceph-mon[75011]: pgmap v929: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 28 KiB/s wr, 33 op/s
Nov 22 03:50:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2172856887' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:50:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Nov 22 03:50:58 compute-0 nova_compute[253461]: 2025-11-22 03:50:58.406 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:50:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Nov 22 03:50:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Nov 22 03:50:58 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Nov 22 03:50:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:50:58 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2968850863' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:50:58 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2968850863' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:50:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:50:59 compute-0 ceph-mon[75011]: pgmap v930: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.5 KiB/s wr, 30 op/s
Nov 22 03:50:59 compute-0 ceph-mon[75011]: osdmap e150: 3 total, 3 up, 3 in
Nov 22 03:50:59 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2968850863' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:50:59 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2968850863' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.9 KiB/s wr, 21 op/s
Nov 22 03:51:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3605690390' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3605690390' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Nov 22 03:51:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Nov 22 03:51:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3605690390' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3605690390' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:00 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Nov 22 03:51:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3069706756' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3069706756' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:01 compute-0 nova_compute[253461]: 2025-11-22 03:51:01.457 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:01 compute-0 ceph-mon[75011]: pgmap v932: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.9 KiB/s wr, 21 op/s
Nov 22 03:51:01 compute-0 ceph-mon[75011]: osdmap e151: 3 total, 3 up, 3 in
Nov 22 03:51:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3069706756' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3069706756' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.5 KiB/s wr, 77 op/s
Nov 22 03:51:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Nov 22 03:51:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Nov 22 03:51:02 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Nov 22 03:51:03 compute-0 nova_compute[253461]: 2025-11-22 03:51:03.409 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Nov 22 03:51:03 compute-0 ceph-mon[75011]: pgmap v934: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 3.5 KiB/s wr, 77 op/s
Nov 22 03:51:03 compute-0 ceph-mon[75011]: osdmap e152: 3 total, 3 up, 3 in
Nov 22 03:51:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Nov 22 03:51:03 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Nov 22 03:51:03 compute-0 ovn_controller[152691]: 2025-11-22T03:51:03Z|00032|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 22 03:51:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 2.4 KiB/s wr, 122 op/s
Nov 22 03:51:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1038408610' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1038408610' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:04 compute-0 ceph-mon[75011]: osdmap e153: 3 total, 3 up, 3 in
Nov 22 03:51:04 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1038408610' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:04 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1038408610' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:05 compute-0 podman[261238]: 2025-11-22 03:51:05.39604755 +0000 UTC m=+0.065577728 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 03:51:05 compute-0 podman[261239]: 2025-11-22 03:51:05.499362826 +0000 UTC m=+0.153539088 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:51:05 compute-0 ceph-mon[75011]: pgmap v937: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 2.4 KiB/s wr, 122 op/s
Nov 22 03:51:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 5.0 KiB/s wr, 117 op/s
Nov 22 03:51:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:51:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:51:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:51:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:51:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:51:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:51:06 compute-0 nova_compute[253461]: 2025-11-22 03:51:06.458 253465 DEBUG oslo_concurrency.lockutils [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquiring lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:51:06 compute-0 nova_compute[253461]: 2025-11-22 03:51:06.458 253465 DEBUG oslo_concurrency.lockutils [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:51:06 compute-0 nova_compute[253461]: 2025-11-22 03:51:06.486 253465 DEBUG nova.objects.instance [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lazy-loading 'flavor' on Instance uuid 7a2bb77b-45b0-41b6-a9ae-27d62354c775 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:51:06 compute-0 nova_compute[253461]: 2025-11-22 03:51:06.514 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:06 compute-0 nova_compute[253461]: 2025-11-22 03:51:06.546 253465 INFO nova.virt.libvirt.driver [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Ignoring supplied device name: /dev/vdb
Nov 22 03:51:06 compute-0 nova_compute[253461]: 2025-11-22 03:51:06.565 253465 DEBUG oslo_concurrency.lockutils [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:51:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Nov 22 03:51:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Nov 22 03:51:06 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Nov 22 03:51:06 compute-0 nova_compute[253461]: 2025-11-22 03:51:06.804 253465 DEBUG oslo_concurrency.lockutils [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquiring lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:51:06 compute-0 nova_compute[253461]: 2025-11-22 03:51:06.805 253465 DEBUG oslo_concurrency.lockutils [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:51:06 compute-0 nova_compute[253461]: 2025-11-22 03:51:06.805 253465 INFO nova.compute.manager [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Attaching volume a251c669-0b3b-47e4-b081-42e20d6cfbbe to /dev/vdb
Nov 22 03:51:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:51:06 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4249574883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.058 253465 DEBUG os_brick.utils [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.060 253465 INFO oslo.privsep.daemon [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpkwpuq13x/privsep.sock']
Nov 22 03:51:07 compute-0 ceph-mon[75011]: pgmap v938: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 5.0 KiB/s wr, 117 op/s
Nov 22 03:51:07 compute-0 ceph-mon[75011]: osdmap e154: 3 total, 3 up, 3 in
Nov 22 03:51:07 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4249574883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:51:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Nov 22 03:51:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Nov 22 03:51:07 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.823 253465 INFO oslo.privsep.daemon [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Spawned new privsep daemon via rootwrap
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.698 261287 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.703 261287 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.705 261287 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.706 261287 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261287
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.827 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[be3e43b9-8211-4c95-ba7d-e72bb81e660f]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.936 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.960 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.961 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[817e3931-7aad-4b55-aa5b-63c86f1da218]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.962 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.970 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.970 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[d9366f9b-c9b8-46bd-a955-b6b9a59486e0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.972 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.981 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.981 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[dc155b43-8370-400a-b16f-abdcdafec681]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.983 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[b752fc96-b272-4909-a11a-bcf04559560b]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:51:07 compute-0 nova_compute[253461]: 2025-11-22 03:51:07.983 253465 DEBUG oslo_concurrency.processutils [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.008 253465 DEBUG oslo_concurrency.processutils [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.011 253465 DEBUG os_brick.initiator.connectors.lightos [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.012 253465 DEBUG os_brick.initiator.connectors.lightos [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.012 253465 DEBUG os_brick.initiator.connectors.lightos [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.012 253465 DEBUG os_brick.utils [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] <== get_connector_properties: return (952ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.012 253465 DEBUG nova.virt.block_device [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Updating existing volume attachment record: 3d1162a3-678c-4ab9-963c-a27a06bc2f59 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 03:51:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 6.8 KiB/s wr, 87 op/s
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.415 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:51:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1591119800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:51:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Nov 22 03:51:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Nov 22 03:51:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Nov 22 03:51:08 compute-0 ceph-mon[75011]: osdmap e155: 3 total, 3 up, 3 in
Nov 22 03:51:08 compute-0 ceph-mon[75011]: pgmap v941: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 6.8 KiB/s wr, 87 op/s
Nov 22 03:51:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1591119800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.913 253465 DEBUG os_brick.encryptors [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Using volume encryption metadata '{'encryption_key_id': '43d20a2b-cc53-45e1-ac71-c4d402b9b24d', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a251c669-0b3b-47e4-b081-42e20d6cfbbe', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '7a2bb77b-45b0-41b6-a9ae-27d62354c775', 'attached_at': '', 'detached_at': '', 'volume_id': 'a251c669-0b3b-47e4-b081-42e20d6cfbbe', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.917 253465 DEBUG oslo_concurrency.lockutils [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.917 253465 DEBUG oslo_concurrency.lockutils [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.918 253465 DEBUG oslo_concurrency.lockutils [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.928 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.946 253465 DEBUG barbicanclient.v1.secrets [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.946 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.981 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:08 compute-0 nova_compute[253461]: 2025-11-22 03:51:08.981 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.003 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.004 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.049 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.049 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.077 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.078 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.109 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.110 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.128 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.129 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.157 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.158 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Nov 22 03:51:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Nov 22 03:51:09 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.417 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.418 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.452 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.453 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.490 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.490 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.522 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.523 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.548 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.548 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.575 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.576 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.597 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.598 253465 INFO barbicanclient.base [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Calculated Secrets uuid ref: secrets/43d20a2b-cc53-45e1-ac71-c4d402b9b24d
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.627 253465 DEBUG barbicanclient.client [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.628 253465 DEBUG nova.virt.libvirt.host [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 22 03:51:09 compute-0 nova_compute[253461]:   <usage type="volume">
Nov 22 03:51:09 compute-0 nova_compute[253461]:     <volume>a251c669-0b3b-47e4-b081-42e20d6cfbbe</volume>
Nov 22 03:51:09 compute-0 nova_compute[253461]:   </usage>
Nov 22 03:51:09 compute-0 nova_compute[253461]: </secret>
Nov 22 03:51:09 compute-0 nova_compute[253461]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.698 253465 DEBUG nova.objects.instance [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lazy-loading 'flavor' on Instance uuid 7a2bb77b-45b0-41b6-a9ae-27d62354c775 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.752 253465 DEBUG nova.virt.libvirt.driver [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Attempting to attach volume a251c669-0b3b-47e4-b081-42e20d6cfbbe with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 22 03:51:09 compute-0 nova_compute[253461]: 2025-11-22 03:51:09.755 253465 DEBUG nova.virt.libvirt.guest [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 03:51:09 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:51:09 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe">
Nov 22 03:51:09 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:51:09 compute-0 nova_compute[253461]:   </source>
Nov 22 03:51:09 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 03:51:09 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:51:09 compute-0 nova_compute[253461]:   </auth>
Nov 22 03:51:09 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:51:09 compute-0 nova_compute[253461]:   <serial>a251c669-0b3b-47e4-b081-42e20d6cfbbe</serial>
Nov 22 03:51:09 compute-0 nova_compute[253461]:   <encryption format="luks">
Nov 22 03:51:09 compute-0 nova_compute[253461]:     <secret type="passphrase" uuid="f5447ed0-9eb4-4f8d-9b24-1e547bc2db68"/>
Nov 22 03:51:09 compute-0 nova_compute[253461]:   </encryption>
Nov 22 03:51:09 compute-0 nova_compute[253461]: </disk>
Nov 22 03:51:09 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
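
The attach_device call above submits the logged <disk> element to libvirt for both the running guest and its persistent definition. A sketch of the underlying libvirt-python call, assuming the XML is held in a hypothetical disk_xml variable and the usual local connection URI:

    import libvirt

    disk_xml = open('disk.xml').read()   # the <disk type="network"> element logged above
    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('7a2bb77b-45b0-41b6-a9ae-27d62354c775')
    flags = (libvirt.VIR_DOMAIN_AFFECT_LIVE |    # hot-plug into the running guest
             libvirt.VIR_DOMAIN_AFFECT_CONFIG)   # and persist in the domain XML
    dom.attachDeviceFlags(disk_xml, flags)
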
Nov 22 03:51:09 compute-0 ceph-mon[75011]: osdmap e156: 3 total, 3 up, 3 in
Nov 22 03:51:09 compute-0 ceph-mon[75011]: osdmap e157: 3 total, 3 up, 3 in
Nov 22 03:51:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.7 KiB/s wr, 106 op/s
Nov 22 03:51:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Nov 22 03:51:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Nov 22 03:51:11 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Nov 22 03:51:11 compute-0 ceph-mon[75011]: pgmap v944: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.7 KiB/s wr, 106 op/s
Nov 22 03:51:11 compute-0 nova_compute[253461]: 2025-11-22 03:51:11.516 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3242537398' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3242537398' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
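
The two audited monitor commands above ("df" and "osd pool get-quota" on the volumes pool) are how the client.openstack user polls pool capacity and quota, and they recur every couple of seconds throughout this log. A hedged sketch of issuing the same commands with the python-rados binding (the conffile path is an assumption):

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
            ret, outbuf, errs = cluster.mon_command(json.dumps(cmd), b'')
            print(ret, json.loads(outbuf or b'{}'))   # 0 on success, JSON payload in outbuf
    finally:
        cluster.shutdown()
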
Nov 22 03:51:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 5.6 KiB/s wr, 124 op/s
Nov 22 03:51:12 compute-0 ceph-mon[75011]: osdmap e158: 3 total, 3 up, 3 in
Nov 22 03:51:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3242537398' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3242537398' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:12 compute-0 nova_compute[253461]: 2025-11-22 03:51:12.478 253465 DEBUG nova.virt.libvirt.driver [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:51:12 compute-0 nova_compute[253461]: 2025-11-22 03:51:12.478 253465 DEBUG nova.virt.libvirt.driver [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:51:12 compute-0 nova_compute[253461]: 2025-11-22 03:51:12.478 253465 DEBUG nova.virt.libvirt.driver [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:51:12 compute-0 nova_compute[253461]: 2025-11-22 03:51:12.479 253465 DEBUG nova.virt.libvirt.driver [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] No VIF found with MAC fa:16:3e:c3:a0:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:51:12 compute-0 nova_compute[253461]: 2025-11-22 03:51:12.785 253465 DEBUG oslo_concurrency.lockutils [None req-74a8f8c2-104e-4732-82c6-405146ebf066 f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:51:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Nov 22 03:51:13 compute-0 ceph-mon[75011]: pgmap v946: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 5.6 KiB/s wr, 124 op/s
Nov 22 03:51:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Nov 22 03:51:13 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Nov 22 03:51:13 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 03:51:13 compute-0 nova_compute[253461]: 2025-11-22 03:51:13.447 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1092883545' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1092883545' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 5.1 KiB/s wr, 110 op/s
Nov 22 03:51:14 compute-0 ceph-mon[75011]: osdmap e159: 3 total, 3 up, 3 in
Nov 22 03:51:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1092883545' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1092883545' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Nov 22 03:51:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Nov 22 03:51:14 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Nov 22 03:51:14 compute-0 nova_compute[253461]: 2025-11-22 03:51:14.974 253465 DEBUG nova.compute.manager [req-aa8a7662-ca72-434f-a746-ab80bd83c19e req-df00bfce-7083-433a-94a3-b340cb858c56 ba9f648385de436285ef2768a1383fee ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Received event volume-extended-a251c669-0b3b-47e4-b081-42e20d6cfbbe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:51:14 compute-0 nova_compute[253461]: 2025-11-22 03:51:14.991 253465 DEBUG nova.compute.manager [req-aa8a7662-ca72-434f-a746-ab80bd83c19e req-df00bfce-7083-433a-94a3-b340cb858c56 ba9f648385de436285ef2768a1383fee ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Handling volume-extended event for volume a251c669-0b3b-47e4-b081-42e20d6cfbbe extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896
Nov 22 03:51:15 compute-0 nova_compute[253461]: 2025-11-22 03:51:15.005 253465 INFO nova.compute.manager [req-aa8a7662-ca72-434f-a746-ab80bd83c19e req-df00bfce-7083-433a-94a3-b340cb858c56 ba9f648385de436285ef2768a1383fee ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Cinder extended volume a251c669-0b3b-47e4-b081-42e20d6cfbbe; extending it to detect new size
Nov 22 03:51:15 compute-0 nova_compute[253461]: 2025-11-22 03:51:15.308 253465 DEBUG os_brick.encryptors [req-aa8a7662-ca72-434f-a746-ab80bd83c19e req-df00bfce-7083-433a-94a3-b340cb858c56 ba9f648385de436285ef2768a1383fee ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Using volume encryption metadata '{'encryption_key_id': '43d20a2b-cc53-45e1-ac71-c4d402b9b24d', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a251c669-0b3b-47e4-b081-42e20d6cfbbe', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '7a2bb77b-45b0-41b6-a9ae-27d62354c775', 'attached_at': '', 'detached_at': '', 'volume_id': 'a251c669-0b3b-47e4-b081-42e20d6cfbbe', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 03:51:15 compute-0 nova_compute[253461]: 2025-11-22 03:51:15.309 253465 INFO oslo.privsep.daemon [req-aa8a7662-ca72-434f-a746-ab80bd83c19e req-df00bfce-7083-433a-94a3-b340cb858c56 ba9f648385de436285ef2768a1383fee ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpoiftsep0/privsep.sock']
Nov 22 03:51:15 compute-0 ceph-mon[75011]: pgmap v948: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 5.1 KiB/s wr, 110 op/s
Nov 22 03:51:15 compute-0 ceph-mon[75011]: osdmap e160: 3 total, 3 up, 3 in
Nov 22 03:51:16 compute-0 nova_compute[253461]: 2025-11-22 03:51:16.000 253465 INFO oslo.privsep.daemon [req-aa8a7662-ca72-434f-a746-ab80bd83c19e req-df00bfce-7083-433a-94a3-b340cb858c56 ba9f648385de436285ef2768a1383fee ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Spawned new privsep daemon via rootwrap
Nov 22 03:51:16 compute-0 nova_compute[253461]: 2025-11-22 03:51:15.884 261320 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 22 03:51:16 compute-0 nova_compute[253461]: 2025-11-22 03:51:15.888 261320 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 22 03:51:16 compute-0 nova_compute[253461]: 2025-11-22 03:51:15.890 261320 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 22 03:51:16 compute-0 nova_compute[253461]: 2025-11-22 03:51:15.890 261320 INFO oslo.privsep.daemon [-] privsep daemon running as pid 261320
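
The daemon that just started runs as uid/gid 0/0 but with the bounded capability set logged above; unprivileged nova-compute code reaches it over the privsep socket named in the helper command. A sketch of how such a context is declared with oslo.privsep; the names are modeled on nova's sys_admin context but are illustrative:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Capability list mirrors the eff/prm set in the log line above.
    sys_admin_pctxt = priv_context.PrivContext(
        'demo',
        cfg_section='demo_sys_admin',
        pctxt_args=dict(capabilities=[caps.CAP_CHOWN, caps.CAP_DAC_OVERRIDE,
                                      caps.CAP_DAC_READ_SEARCH, caps.CAP_FOWNER,
                                      caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN]))

    @sys_admin_pctxt.entrypoint
    def read_root_only_file(path):
        # Body executes inside the root privsep daemon, not in the caller.
        with open(path) as f:
            return f.read()
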
Nov 22 03:51:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 11 KiB/s wr, 154 op/s
Nov 22 03:51:16 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Nov 22 03:51:16 compute-0 systemd[1]: Started Process Core Dump (PID 261341/UID 0).
Nov 22 03:51:16 compute-0 nova_compute[253461]: 2025-11-22 03:51:16.517 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:17 compute-0 systemd-coredump[261342]: Process 261322 (qemu-img) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 261332:
                                                    #0  0x00007f0b8fdb703c __pthread_kill_implementation (libc.so.6 + 0x8d03c)
                                                    #1  0x00007f0b8fd69b86 raise (libc.so.6 + 0x3fb86)
                                                    #2  0x00007f0b8fd53873 abort (libc.so.6 + 0x29873)
                                                    #3  0x0000557a7bfaf5df ___interceptor_pthread_create (qemu-img + 0x4f5df)
                                                    #4  0x00007f0b8cf8dff4 _ZN6Thread10try_createEm (libceph-common.so.2 + 0x258ff4)
                                                    #5  0x00007f0b8cf906ae _ZN6Thread6createEPKcm (libceph-common.so.2 + 0x25b6ae)
                                                    #6  0x00007f0b8de9726b _ZNSt8_Rb_treeISt4pairINSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEESt10type_indexES0_IKS8_N4ceph12immobile_anyILm576EEEESt10_Select1stISD_ENSA_6common11CephContext19associated_objs_cmpESaISD_EE22_M_emplace_hint_uniqueIJRKSt21piecewise_construct_tSt5tupleIJRSt17basic_string_viewIcS4_ERS7_EESP_IJRKSt15in_place_type_tIN6librbd21TaskFinisherSingletonEERPSH_EEEEESt17_Rb_tree_iteratorISD_ESt23_Rb_tree_const_iteratorISD_EDpOT_.constprop.0 (librbd.so.1 + 0x51126b)
                                                    #7  0x00007f0b8dac47a6 _ZN6librbd8ImageCtx4initEv (librbd.so.1 + 0x13e7a6)
                                                    #8  0x00007f0b8db9e2d3 _ZN6librbd5image11OpenRequestINS_8ImageCtxEE12send_refreshEv (librbd.so.1 + 0x2182d3)
                                                    #9  0x00007f0b8db9ef46 _ZN6librbd5image11OpenRequestINS_8ImageCtxEE23handle_v2_get_data_poolEPi (librbd.so.1 + 0x218f46)
                                                    #10 0x00007f0b8db9f2a7 _ZN6librbd4util6detail20rados_state_callbackINS_5image11OpenRequestINS_8ImageCtxEEEXadL_ZNS6_23handle_v2_get_data_poolEPiEELb1EEEvPvS8_ (librbd.so.1 + 0x2192a7)
                                                    #11 0x00007f0b8d89d0ac _ZN5boost4asio6detail18completion_handlerINS1_7binder0IN8librados14CB_AioCompleteEEENS0_10io_context19basic_executor_typeISaIvELm0EEEE11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm (librados.so.2 + 0xad0ac)
                                                    #12 0x00007f0b8d89c585 _ZN5boost4asio6detail14strand_service11do_completeEPvPNS1_19scheduler_operationERKNS_6system10error_codeEm (librados.so.2 + 0xac585)
                                                    #13 0x00007f0b8d917498 _ZN5boost4asio6detail9scheduler3runERNS_6system10error_codeE.constprop.0.isra.0 (librados.so.2 + 0x127498)
                                                    #14 0x00007f0b8d8b64e4 _ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZ17make_named_threadIZN4ceph5async15io_context_pool5startEsEUlvE_JEES_St17basic_string_viewIcSt11char_traitsIcEEOT_DpOT0_EUlSD_SG_E_S7_EEEEE6_M_runEv (librados.so.2 + 0xc64e4)
                                                    #15 0x00007f0b8c624ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #16 0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #17 0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261326:
                                                    #0  0x00007f0b8fe39a3e epoll_wait (libc.so.6 + 0x10fa3e)
                                                    #1  0x00007f0b8d175618 _ZN11EpollDriver10event_waitERSt6vectorI14FiredFileEventSaIS1_EEP7timeval (libceph-common.so.2 + 0x440618)
                                                    #2  0x00007f0b8d173702 _ZN11EventCenter14process_eventsEjPNSt6chrono8durationImSt5ratioILl1ELl1000000000EEEE (libceph-common.so.2 + 0x43e702)
                                                    #3  0x00007f0b8d1742c6 _ZNSt17_Function_handlerIFvvEZN12NetworkStack10add_threadEP6WorkerEUlvE_E9_M_invokeERKSt9_Any_data (libceph-common.so.2 + 0x43f2c6)
                                                    #4  0x00007f0b8c624ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261322:
                                                    #0  0x00007f0b8fdb238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f0b8fdb48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f0b8c61e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f0b8dacbeb3 _ZN6librbd10ImageStateINS_8ImageCtxEE4openEm (librbd.so.1 + 0x145eb3)
                                                    #4  0x00007f0b8da9bfcb rbd_open (librbd.so.1 + 0x115fcb)
                                                    #5  0x00007f0b8e04689d qemu_rbd_open (block-rbd.so + 0x489d)
                                                    #6  0x0000557a7bfc023c bdrv_open_driver.llvm.11742085306969899539 (qemu-img + 0x6023c)
                                                    #7  0x0000557a7bfc5497 bdrv_open_inherit.llvm.11742085306969899539 (qemu-img + 0x65497)
                                                    #8  0x0000557a7bfd2e1e bdrv_open_child_bs.llvm.11742085306969899539 (qemu-img + 0x72e1e)
                                                    #9  0x0000557a7bfc4c16 bdrv_open_inherit.llvm.11742085306969899539 (qemu-img + 0x64c16)
                                                    #10 0x0000557a7bff4533 blk_new_open (qemu-img + 0x94533)
                                                    #11 0x0000557a7c0b4196 img_open_file (qemu-img + 0x154196)
                                                    #12 0x0000557a7c0b3d40 img_open (qemu-img + 0x153d40)
                                                    #13 0x0000557a7c0afceb img_info (qemu-img + 0x14fceb)
                                                    #14 0x0000557a7c0a92da main (qemu-img + 0x1492da)
                                                    #15 0x00007f0b8fd54610 __libc_start_call_main (libc.so.6 + 0x2a610)
                                                    #16 0x00007f0b8fd546c0 __libc_start_main@@GLIBC_2.34 (libc.so.6 + 0x2a6c0)
                                                    #17 0x0000557a7bfaf285 _start (qemu-img + 0x4f285)
                                                    
                                                    Stack trace of thread 261325:
                                                    #0  0x00007f0b8fe39a3e epoll_wait (libc.so.6 + 0x10fa3e)
                                                    #1  0x00007f0b8d175618 _ZN11EpollDriver10event_waitERSt6vectorI14FiredFileEventSaIS1_EEP7timeval (libceph-common.so.2 + 0x440618)
                                                    #2  0x00007f0b8d173702 _ZN11EventCenter14process_eventsEjPNSt6chrono8durationImSt5ratioILl1ELl1000000000EEEE (libceph-common.so.2 + 0x43e702)
                                                    #3  0x00007f0b8d1742c6 _ZNSt17_Function_handlerIFvvEZN12NetworkStack10add_threadEP6WorkerEUlvE_E9_M_invokeERKSt9_Any_data (libceph-common.so.2 + 0x43f2c6)
                                                    #4  0x00007f0b8c624ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261323:
                                                    #0  0x00007f0b8fe3282d syscall (libc.so.6 + 0x10882d)
                                                    #1  0x0000557a7c139de3 qemu_event_wait (qemu-img + 0x1d9de3)
                                                    #2  0x0000557a7c144ec7 call_rcu_thread (qemu-img + 0x1e4ec7)
                                                    #3  0x0000557a7c137efa qemu_thread_start.llvm.14789798431656624625 (qemu-img + 0x1d7efa)
                                                    #4  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #5  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261327:
                                                    #0  0x00007f0b8fe39a3e epoll_wait (libc.so.6 + 0x10fa3e)
                                                    #1  0x00007f0b8d175618 _ZN11EpollDriver10event_waitERSt6vectorI14FiredFileEventSaIS1_EEP7timeval (libceph-common.so.2 + 0x440618)
                                                    #2  0x00007f0b8d173702 _ZN11EventCenter14process_eventsEjPNSt6chrono8durationImSt5ratioILl1ELl1000000000EEEE (libceph-common.so.2 + 0x43e702)
                                                    #3  0x00007f0b8d1742c6 _ZNSt17_Function_handlerIFvvEZN12NetworkStack10add_threadEP6WorkerEUlvE_E9_M_invokeERKSt9_Any_data (libceph-common.so.2 + 0x43f2c6)
                                                    #4  0x00007f0b8c624ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261338:
                                                    #0  0x00007f0b8fdb238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f0b8fdb48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f0b8c61e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f0b8cf937f8 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25e7f8)
                                                    #4  0x00007f0b8cf93f81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)
                                                    #5  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261324:
                                                    #0  0x00007f0b8fdb238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f0b8fdb48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f0b8c61e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f0b8d1a00a2 _ZN4ceph7logging3Log5entryEv (libceph-common.so.2 + 0x46b0a2)
                                                    #4  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #5  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261339:
                                                    #0  0x00007f0b8fdb238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f0b8fdb48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f0b8c61e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f0b8cf937f8 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25e7f8)
                                                    #4  0x00007f0b8cf93f81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)
                                                    #5  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261333:
                                                    #0  0x00007f0b8fdb238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f0b8fdb48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f0b8d917266 _ZN5boost4asio6detail9scheduler3runERNS_6system10error_codeE.constprop.0.isra.0 (librados.so.2 + 0x127266)
                                                    #3  0x00007f0b8d8b64e4 _ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJZ17make_named_threadIZN4ceph5async15io_context_pool5startEsEUlvE_JEES_St17basic_string_viewIcSt11char_traitsIcEEOT_DpOT0_EUlSD_SG_E_S7_EEEEE6_M_runEv (librados.so.2 + 0xc64e4)
                                                    #4  0x00007f0b8c624ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #5  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261336:
                                                    #0  0x00007f0b8fdb238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f0b8fdb48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f0b8c61e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f0b8d09c0b9 _ZN13DispatchQueue18run_local_deliveryEv (libceph-common.so.2 + 0x3670b9)
                                                    #4  0x00007f0b8d12d431 _ZN13DispatchQueue19LocalDeliveryThread5entryEv (libceph-common.so.2 + 0x3f8431)
                                                    #5  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261337:
                                                    #0  0x00007f0b8fdb238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f0b8fdb4cc0 pthread_cond_clockwait@GLIBC_2.30 (libc.so.6 + 0x8acc0)
                                                    #2  0x00007f0b8cf93b23 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25eb23)
                                                    #3  0x00007f0b8cf93f81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)
                                                    #4  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #5  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261331:
                                                    #0  0x00007f0b8fdb238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f0b8fdb4cc0 pthread_cond_clockwait@GLIBC_2.30 (libc.so.6 + 0x8acc0)
                                                    #2  0x00007f0b8cfae150 _ZN4ceph6common24CephContextServiceThread5entryEv (libceph-common.so.2 + 0x279150)
                                                    #3  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #4  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261334:
                                                    #0  0x00007f0b8fdb238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f0b8fdb4cc0 pthread_cond_clockwait@GLIBC_2.30 (libc.so.6 + 0x8acc0)
                                                    #2  0x00007f0b8d8ef364 _ZN4ceph5timerINS_17coarse_mono_clockEE12timer_threadEv (librados.so.2 + 0xff364)
                                                    #3  0x00007f0b8c624ae4 execute_native_thread_routine (libstdc++.so.6 + 0xdbae4)
                                                    #4  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #5  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261335:
                                                    #0  0x00007f0b8fdb238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f0b8fdb48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f0b8c61e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f0b8d09c49f _ZN13DispatchQueue5entryEv (libceph-common.so.2 + 0x36749f)
                                                    #4  0x00007f0b8d12d411 _ZN13DispatchQueue14DispatchThread5entryEv (libceph-common.so.2 + 0x3f8411)
                                                    #5  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    
                                                    Stack trace of thread 261340:
                                                    #0  0x00007f0b8fdb238a __futex_abstimed_wait_common (libc.so.6 + 0x8838a)
                                                    #1  0x00007f0b8fdb48e2 pthread_cond_wait@@GLIBC_2.3.2 (libc.so.6 + 0x8a8e2)
                                                    #2  0x00007f0b8c61e6c0 _ZNSt18condition_variable4waitERSt11unique_lockISt5mutexE (libstdc++.so.6 + 0xd56c0)
                                                    #3  0x00007f0b8cf937f8 _ZN15CommonSafeTimerISt5mutexE12timer_threadEv (libceph-common.so.2 + 0x25e7f8)
                                                    #4  0x00007f0b8cf93f81 _ZN21CommonSafeTimerThreadISt5mutexE5entryEv (libceph-common.so.2 + 0x25ef81)
                                                    #5  0x00007f0b8fdb52fa start_thread (libc.so.6 + 0x8b2fa)
                                                    #6  0x00007f0b8fe3a400 __clone3 (libc.so.6 + 0x110400)
                                                    ELF object binary architecture: AMD x86-64
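
Reading the dump: thread 261322 is blocked in rbd_open via librbd::ImageState::open (its frames #3-#5), while thread 261332 aborts (frames #0-#2) inside ___interceptor_pthread_create, the compiler-rt SafeStack wrapper around pthread_create, invoked from Thread::try_create/Thread::create in libceph-common (frames #3-#5) as librbd spins up an internal worker. The mangled frames can be decoded with c++filt; a small, hypothetical Python helper for the symbols above:

    import subprocess

    def demangle(symbol: str) -> str:
        # Shell out to binutils' c++filt, which accepts symbols as arguments.
        out = subprocess.run(['c++filt', symbol], capture_output=True, text=True)
        return out.stdout.strip()

    print(demangle('_ZN6Thread10try_createEm'))    # Thread::try_create(unsigned long)
    print(demangle('_ZN6librbd8ImageCtx4initEv'))  # librbd::ImageCtx::init()
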
Nov 22 03:51:17 compute-0 systemd[1]: systemd-coredump@0-261341-0.service: Deactivated successfully.
Nov 22 03:51:17 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [req-aa8a7662-ca72-434f-a746-ab80bd83c19e req-df00bfce-7083-433a-94a3-b340cb858c56 ba9f648385de436285ef2768a1383fee ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Unknown error when attempting to find the payload_offset for LUKSv1 encrypted disk rbd:volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe:id=openstack.: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe:id=openstack : Unexpected error while running command.
Nov 22 03:51:17 compute-0 nova_compute[253461]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe:id=openstack --force-share --output=json
Nov 22 03:51:17 compute-0 nova_compute[253461]: Exit code: -6
Nov 22 03:51:17 compute-0 nova_compute[253461]: Stdout: ''
Nov 22 03:51:17 compute-0 nova_compute[253461]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Traceback (most recent call last):
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775]     info = images.privileged_qemu_img_info(path)
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775]   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775]     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775]   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775]     return self.channel.remote_call(name, args, kwargs,
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775]   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775]     raise exc_type(*result[2])
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe:id=openstack : Unexpected error while running command.
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe:id=openstack --force-share --output=json
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Exit code: -6
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Stdout: ''
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.349 253465 ERROR nova.virt.libvirt.driver [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] 
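
Exit code -6 is how Python's subprocess layer reports death by SIGABRT, and the captured stderr matches the SafeStack runtime check (MAP_FAILED != addr) seen in the core dump above: the mmap of a new thread's unsafe stack failed, plausibly because the 1 GiB address-space cap imposed by the oslo_concurrency.prlimit wrapper (--as=1073741824) leaves no headroom once librbd spawns its worker threads. The failing invocation, copied verbatim from the log, can be replayed to confirm (only meaningful on a compute host with access to this Ceph cluster):

    import subprocess

    # Command copied from the log lines above.
    cmd = ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C',
           'qemu-img', 'info',
           'rbd:volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe:id=openstack',
           '--force-share', '--output=json']
    proc = subprocess.run(cmd, capture_output=True, text=True)
    print(proc.returncode)   # -6 here: qemu-img received SIGABRT
    print(proc.stderr)       # 'safestack CHECK failed: ... MAP_FAILED != addr'
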
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.352 253465 WARNING nova.compute.manager [req-aa8a7662-ca72-434f-a746-ab80bd83c19e req-df00bfce-7083-433a-94a3-b340cb858c56 ba9f648385de436285ef2768a1383fee ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Extend volume failed, volume_id=a251c669-0b3b-47e4-b081-42e20d6cfbbe, reason: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe:id=openstack : Unexpected error while running command.
Nov 22 03:51:17 compute-0 nova_compute[253461]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe:id=openstack --force-share --output=json
Nov 22 03:51:17 compute-0 nova_compute[253461]: Exit code: -6
Nov 22 03:51:17 compute-0 nova_compute[253461]: Stdout: ''
Nov 22 03:51:17 compute-0 nova_compute[253461]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n': nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe:id=openstack : Unexpected error while running command.
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server [req-aa8a7662-ca72-434f-a746-ab80bd83c19e req-df00bfce-7083-433a-94a3-b340cb858c56 ba9f648385de436285ef2768a1383fee ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Exception during message handling: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe:id=openstack : Unexpected error while running command.
Nov 22 03:51:17 compute-0 nova_compute[253461]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe:id=openstack --force-share --output=json
Nov 22 03:51:17 compute-0 nova_compute[253461]: Exit code: -6
Nov 22 03:51:17 compute-0 nova_compute[253461]: Stdout: ''
Nov 22 03:51:17 compute-0 nova_compute[253461]: Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     raise self.value
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11073, in external_instance_event
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     self.extend_volume(context, instance, event.tag)
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1439, in decorated_function
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     raise self.value
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10930, in extend_volume
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     self.driver.extend_volume(context, connection_info, instance,
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2865, in extend_volume
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     self._resize_attached_encrypted_volume(
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2804, in _resize_attached_encrypted_volume
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     LOG.exception('Unknown error when attempting to find the '
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     self.force_reraise()
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     raise self.value
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     info = images.privileged_qemu_img_info(path)
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     return self.channel.remote_call(name, args, kwargs,
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server     raise exc_type(*result[2])
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe:id=openstack : Unexpected error while running command.
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe:id=openstack --force-share --output=json
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server Exit code: -6
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server Stdout: ''
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server Stderr: 'safestack CHECK failed: /builddir/build/BUILD/llvm-project-20.1.8.src/compiler-rt/lib/safestack/safestack.cpp:120 MAP_FAILED != addr\n'
Nov 22 03:51:17 compute-0 nova_compute[253461]: 2025-11-22 03:51:17.402 253465 ERROR oslo_messaging.rpc.server 
Nov 22 03:51:17 compute-0 ceph-mon[75011]: pgmap v950: 305 pgs: 305 active+clean; 121 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 11 KiB/s wr, 154 op/s
Nov 22 03:51:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 11 KiB/s wr, 115 op/s
Nov 22 03:51:18 compute-0 nova_compute[253461]: 2025-11-22 03:51:18.493 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:18 compute-0 nova_compute[253461]: 2025-11-22 03:51:18.948 253465 DEBUG oslo_concurrency.lockutils [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquiring lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:51:18 compute-0 nova_compute[253461]: 2025-11-22 03:51:18.949 253465 DEBUG oslo_concurrency.lockutils [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:51:18 compute-0 nova_compute[253461]: 2025-11-22 03:51:18.964 253465 INFO nova.compute.manager [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Detaching volume a251c669-0b3b-47e4-b081-42e20d6cfbbe
Nov 22 03:51:19 compute-0 nova_compute[253461]: 2025-11-22 03:51:19.106 253465 INFO nova.virt.block_device [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Attempting to driver detach volume a251c669-0b3b-47e4-b081-42e20d6cfbbe from mountpoint /dev/vdb
Nov 22 03:51:19 compute-0 nova_compute[253461]: 2025-11-22 03:51:19.234 253465 DEBUG os_brick.encryptors [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Using volume encryption metadata '{'encryption_key_id': '43d20a2b-cc53-45e1-ac71-c4d402b9b24d', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'a251c669-0b3b-47e4-b081-42e20d6cfbbe', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '7a2bb77b-45b0-41b6-a9ae-27d62354c775', 'attached_at': '', 'detached_at': '', 'volume_id': 'a251c669-0b3b-47e4-b081-42e20d6cfbbe', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 03:51:19 compute-0 nova_compute[253461]: 2025-11-22 03:51:19.245 253465 DEBUG nova.virt.libvirt.driver [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Attempting to detach device vdb from instance 7a2bb77b-45b0-41b6-a9ae-27d62354c775 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 03:51:19 compute-0 nova_compute[253461]: 2025-11-22 03:51:19.246 253465 DEBUG nova.virt.libvirt.guest [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:51:19 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe">
Nov 22 03:51:19 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   </source>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   <serial>a251c669-0b3b-47e4-b081-42e20d6cfbbe</serial>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   <encryption format="luks">
Nov 22 03:51:19 compute-0 nova_compute[253461]:     <secret type="passphrase" uuid="f5447ed0-9eb4-4f8d-9b24-1e547bc2db68"/>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   </encryption>
Nov 22 03:51:19 compute-0 nova_compute[253461]: </disk>
Nov 22 03:51:19 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:51:19 compute-0 nova_compute[253461]: 2025-11-22 03:51:19.257 253465 INFO nova.virt.libvirt.driver [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Successfully detached device vdb from instance 7a2bb77b-45b0-41b6-a9ae-27d62354c775 from the persistent domain config.
Nov 22 03:51:19 compute-0 nova_compute[253461]: 2025-11-22 03:51:19.258 253465 DEBUG nova.virt.libvirt.driver [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 7a2bb77b-45b0-41b6-a9ae-27d62354c775 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 03:51:19 compute-0 nova_compute[253461]: 2025-11-22 03:51:19.258 253465 DEBUG nova.virt.libvirt.guest [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:51:19 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-a251c669-0b3b-47e4-b081-42e20d6cfbbe">
Nov 22 03:51:19 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   </source>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   <serial>a251c669-0b3b-47e4-b081-42e20d6cfbbe</serial>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   <encryption format="luks">
Nov 22 03:51:19 compute-0 nova_compute[253461]:     <secret type="passphrase" uuid="f5447ed0-9eb4-4f8d-9b24-1e547bc2db68"/>
Nov 22 03:51:19 compute-0 nova_compute[253461]:   </encryption>
Nov 22 03:51:19 compute-0 nova_compute[253461]: </disk>
Nov 22 03:51:19 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:51:19 compute-0 nova_compute[253461]: 2025-11-22 03:51:19.327 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763783479.327303, 7a2bb77b-45b0-41b6-a9ae-27d62354c775 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 03:51:19 compute-0 nova_compute[253461]: 2025-11-22 03:51:19.329 253465 DEBUG nova.virt.libvirt.driver [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 7a2bb77b-45b0-41b6-a9ae-27d62354c775 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 03:51:19 compute-0 nova_compute[253461]: 2025-11-22 03:51:19.331 253465 INFO nova.virt.libvirt.driver [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Successfully detached device vdb from instance 7a2bb77b-45b0-41b6-a9ae-27d62354c775 from the live domain config.
Nov 22 03:51:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Nov 22 03:51:19 compute-0 podman[261348]: 2025-11-22 03:51:19.393479004 +0000 UTC m=+0.074252127 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:51:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Nov 22 03:51:19 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Nov 22 03:51:19 compute-0 ceph-mon[75011]: pgmap v951: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 11 KiB/s wr, 115 op/s
Nov 22 03:51:19 compute-0 ceph-mon[75011]: osdmap e161: 3 total, 3 up, 3 in
Nov 22 03:51:19 compute-0 nova_compute[253461]: 2025-11-22 03:51:19.561 253465 DEBUG nova.objects.instance [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lazy-loading 'flavor' on Instance uuid 7a2bb77b-45b0-41b6-a9ae-27d62354c775 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:51:19 compute-0 nova_compute[253461]: 2025-11-22 03:51:19.608 253465 DEBUG oslo_concurrency.lockutils [None req-fcdfbc3f-d454-482f-bb9f-c1fcc21f903d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:51:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 8.9 KiB/s wr, 70 op/s
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.307 253465 DEBUG oslo_concurrency.lockutils [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquiring lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.308 253465 DEBUG oslo_concurrency.lockutils [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.308 253465 DEBUG oslo_concurrency.lockutils [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquiring lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.308 253465 DEBUG oslo_concurrency.lockutils [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.308 253465 DEBUG oslo_concurrency.lockutils [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.310 253465 INFO nova.compute.manager [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Terminating instance
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.310 253465 DEBUG nova.compute.manager [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 03:51:20 compute-0 kernel: tapb7f7c4fc-78 (unregistering): left promiscuous mode
Nov 22 03:51:20 compute-0 NetworkManager[48916]: <info>  [1763783480.4089] device (tapb7f7c4fc-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 03:51:20 compute-0 ovn_controller[152691]: 2025-11-22T03:51:20Z|00033|binding|INFO|Releasing lport b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 from this chassis (sb_readonly=0)
Nov 22 03:51:20 compute-0 ovn_controller[152691]: 2025-11-22T03:51:20Z|00034|binding|INFO|Setting lport b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 down in Southbound
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.412 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:20 compute-0 ovn_controller[152691]: 2025-11-22T03:51:20Z|00035|binding|INFO|Removing iface tapb7f7c4fc-78 ovn-installed in OVS
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.415 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:20 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:20.421 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c3:a0:b2 10.100.0.5'], port_security=['fa:16:3e:c3:a0:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '7a2bb77b-45b0-41b6-a9ae-27d62354c775', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b513e0b5b0547e2835dc35495d5637f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '53875ba9-d4ce-4815-aff2-34f2c670521d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6ebb1d3-8e5d-45d0-be1c-055784d08c57, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:51:20 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:20.422 162689 INFO neutron.agent.ovn.metadata.agent [-] Port b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 in datapath aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff unbound from our chassis
Nov 22 03:51:20 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:20.423 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:51:20 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:20.425 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[a78d9055-f4d1-407c-b9bd-fa5e3815b2ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:51:20 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:20.425 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff namespace which is not needed anymore
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.433 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:20 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 22 03:51:20 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 19.388s CPU time.
Nov 22 03:51:20 compute-0 systemd-machined[215728]: Machine qemu-1-instance-00000001 terminated.
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.535 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.541 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.553 253465 INFO nova.virt.libvirt.driver [-] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Instance destroyed successfully.
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.553 253465 DEBUG nova.objects.instance [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lazy-loading 'resources' on Instance uuid 7a2bb77b-45b0-41b6-a9ae-27d62354c775 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.571 253465 DEBUG nova.virt.libvirt.vif [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T03:50:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1841714985',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1841714985',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1841714985',id=1,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMMPA2XHMYQjmnnfQBZV/dUcJHc5mtwtVDD9Nwnsp0M8EgjKTYwJyBvWYgHXCBmQJu0QPhrGh9rGfR0Dz+ovhIT+c4pbVwyTQ4tFbQI9rTt2/Gdbg3ApkbrhqdPNEU786g==',key_name='tempest-keypair-2121433138',keypairs=<?>,launch_index=0,launched_at=2025-11-22T03:50:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b513e0b5b0547e2835dc35495d5637f',ramdisk_id='',reservation_id='r-po7tcthz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-361238539',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-361238539-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T03:50:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='f358860e840943098fe9f91af8f7b08f',uuid=7a2bb77b-45b0-41b6-a9ae-27d62354c775,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "address": "fa:16:3e:c3:a0:b2", "network": {"id": "aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-1622374140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b513e0b5b0547e2835dc35495d5637f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f7c4fc-78", "ovs_interfaceid": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.572 253465 DEBUG nova.network.os_vif_util [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Converting VIF {"id": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "address": "fa:16:3e:c3:a0:b2", "network": {"id": "aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-1622374140-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b513e0b5b0547e2835dc35495d5637f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f7c4fc-78", "ovs_interfaceid": "b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.573 253465 DEBUG nova.network.os_vif_util [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c3:a0:b2,bridge_name='br-int',has_traffic_filtering=True,id=b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6,network=Network(aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f7c4fc-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.573 253465 DEBUG os_vif [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:a0:b2,bridge_name='br-int',has_traffic_filtering=True,id=b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6,network=Network(aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f7c4fc-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.577 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.577 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7f7c4fc-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.579 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.581 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.583 253465 INFO os_vif [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c3:a0:b2,bridge_name='br-int',has_traffic_filtering=True,id=b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6,network=Network(aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f7c4fc-78')
Nov 22 03:51:20 compute-0 neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff[261150]: [NOTICE]   (261160) : haproxy version is 2.8.14-c23fe91
Nov 22 03:51:20 compute-0 neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff[261150]: [NOTICE]   (261160) : path to executable is /usr/sbin/haproxy
Nov 22 03:51:20 compute-0 neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff[261150]: [WARNING]  (261160) : Exiting Master process...
Nov 22 03:51:20 compute-0 neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff[261150]: [WARNING]  (261160) : Exiting Master process...
Nov 22 03:51:20 compute-0 neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff[261150]: [ALERT]    (261160) : Current worker (261171) exited with code 143 (Terminated)
Nov 22 03:51:20 compute-0 neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff[261150]: [WARNING]  (261160) : All workers exited. Exiting... (0)
Nov 22 03:51:20 compute-0 systemd[1]: libpod-d684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f.scope: Deactivated successfully.
Nov 22 03:51:20 compute-0 podman[261395]: 2025-11-22 03:51:20.602939564 +0000 UTC m=+0.075962314 container died d684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.651 253465 DEBUG nova.compute.manager [req-bc6d4617-a6a9-476f-8ac1-bc6024afbe21 req-7ca8aa16-43d9-4f1f-8f1b-0eb4739aa613 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Received event network-vif-unplugged-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.651 253465 DEBUG oslo_concurrency.lockutils [req-bc6d4617-a6a9-476f-8ac1-bc6024afbe21 req-7ca8aa16-43d9-4f1f-8f1b-0eb4739aa613 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.652 253465 DEBUG oslo_concurrency.lockutils [req-bc6d4617-a6a9-476f-8ac1-bc6024afbe21 req-7ca8aa16-43d9-4f1f-8f1b-0eb4739aa613 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.652 253465 DEBUG oslo_concurrency.lockutils [req-bc6d4617-a6a9-476f-8ac1-bc6024afbe21 req-7ca8aa16-43d9-4f1f-8f1b-0eb4739aa613 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.653 253465 DEBUG nova.compute.manager [req-bc6d4617-a6a9-476f-8ac1-bc6024afbe21 req-7ca8aa16-43d9-4f1f-8f1b-0eb4739aa613 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] No waiting events found dispatching network-vif-unplugged-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:51:20 compute-0 nova_compute[253461]: 2025-11-22 03:51:20.653 253465 DEBUG nova.compute.manager [req-bc6d4617-a6a9-476f-8ac1-bc6024afbe21 req-7ca8aa16-43d9-4f1f-8f1b-0eb4739aa613 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Received event network-vif-unplugged-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 03:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f-userdata-shm.mount: Deactivated successfully.
Nov 22 03:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-381af0d619581318afe49de70c92750fdbe84c27790a36e839952a35abedf347-merged.mount: Deactivated successfully.
Nov 22 03:51:20 compute-0 podman[261395]: 2025-11-22 03:51:20.931964801 +0000 UTC m=+0.404987592 container cleanup d684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:51:20 compute-0 systemd[1]: libpod-conmon-d684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f.scope: Deactivated successfully.
Nov 22 03:51:21 compute-0 podman[261451]: 2025-11-22 03:51:21.053779015 +0000 UTC m=+0.083741694 container remove d684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:51:21 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:21.065 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[7beb1860-0835-4a39-932e-63c0a411645a]: (4, ('Sat Nov 22 03:51:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff (d684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f)\nd684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f\nSat Nov 22 03:51:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff (d684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f)\nd684e0f16596e64693c05db8ce23fc8fc5f4b029d16629723272b8d30adc106f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:51:21 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:21.069 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f0b3e2-3bfc-4d6e-8cc8-a4519d862488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:51:21 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:21.071 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaec0e7f2-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:51:21 compute-0 nova_compute[253461]: 2025-11-22 03:51:21.074 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:21 compute-0 kernel: tapaec0e7f2-40: left promiscuous mode
Nov 22 03:51:21 compute-0 nova_compute[253461]: 2025-11-22 03:51:21.088 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:21 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:21.091 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[21f982dc-66d0-4538-8f67-a68780aac7ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:51:21 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:21.103 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3fbef358-559b-4129-a397-1ec62474cd3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:51:21 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:21.104 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e0300623-9bf0-4b52-aa0f-83cd0903695e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:51:21 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:21.123 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[07f6c0e2-cc03-48c7-a494-8fcdefe192d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 368631, 'reachable_time': 16803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261467, 'error': None, 'target': 'ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:51:21 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:21.137 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aec0e7f2-4a2a-464c-9cd5-76f1d77f1eff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 03:51:21 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:21.138 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b93cb2-0e32-4c95-ae29-b36f186d21e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:51:21 compute-0 systemd[1]: run-netns-ovnmeta\x2daec0e7f2\x2d4a2a\x2d464c\x2d9cd5\x2d76f1d77f1eff.mount: Deactivated successfully.
Nov 22 03:51:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/626186160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/626186160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:21 compute-0 nova_compute[253461]: 2025-11-22 03:51:21.451 253465 INFO nova.virt.libvirt.driver [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Deleting instance files /var/lib/nova/instances/7a2bb77b-45b0-41b6-a9ae-27d62354c775_del
Nov 22 03:51:21 compute-0 nova_compute[253461]: 2025-11-22 03:51:21.452 253465 INFO nova.virt.libvirt.driver [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Deletion of /var/lib/nova/instances/7a2bb77b-45b0-41b6-a9ae-27d62354c775_del complete
Nov 22 03:51:21 compute-0 nova_compute[253461]: 2025-11-22 03:51:21.518 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:21 compute-0 nova_compute[253461]: 2025-11-22 03:51:21.538 253465 DEBUG nova.virt.libvirt.host [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Nov 22 03:51:21 compute-0 nova_compute[253461]: 2025-11-22 03:51:21.538 253465 INFO nova.virt.libvirt.host [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] UEFI support detected
Nov 22 03:51:21 compute-0 nova_compute[253461]: 2025-11-22 03:51:21.540 253465 INFO nova.compute.manager [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Took 1.23 seconds to destroy the instance on the hypervisor.
Nov 22 03:51:21 compute-0 nova_compute[253461]: 2025-11-22 03:51:21.541 253465 DEBUG oslo.service.loopingcall [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 03:51:21 compute-0 nova_compute[253461]: 2025-11-22 03:51:21.542 253465 DEBUG nova.compute.manager [-] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 03:51:21 compute-0 nova_compute[253461]: 2025-11-22 03:51:21.542 253465 DEBUG nova.network.neutron [-] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 03:51:21 compute-0 ceph-mon[75011]: pgmap v953: 305 pgs: 305 active+clean; 121 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 8.9 KiB/s wr, 70 op/s
Nov 22 03:51:21 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/626186160' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:21 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/626186160' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 114 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 9.0 KiB/s wr, 83 op/s
Nov 22 03:51:22 compute-0 sudo[261470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:51:22 compute-0 sudo[261470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:22 compute-0 sudo[261470]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:22 compute-0 sudo[261495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:51:22 compute-0 sudo[261495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:22 compute-0 sudo[261495]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:22 compute-0 sudo[261520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:51:22 compute-0 sudo[261520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:22 compute-0 sudo[261520]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:22 compute-0 sudo[261545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:51:22 compute-0 sudo[261545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:22 compute-0 nova_compute[253461]: 2025-11-22 03:51:22.836 253465 DEBUG nova.compute.manager [req-3cc00dd4-2786-46cb-b8fc-a3bcb95776e9 req-fd89d844-f85f-4fb5-a2f0-3084134dc992 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Received event network-vif-plugged-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:51:22 compute-0 nova_compute[253461]: 2025-11-22 03:51:22.837 253465 DEBUG oslo_concurrency.lockutils [req-3cc00dd4-2786-46cb-b8fc-a3bcb95776e9 req-fd89d844-f85f-4fb5-a2f0-3084134dc992 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:51:22 compute-0 nova_compute[253461]: 2025-11-22 03:51:22.837 253465 DEBUG oslo_concurrency.lockutils [req-3cc00dd4-2786-46cb-b8fc-a3bcb95776e9 req-fd89d844-f85f-4fb5-a2f0-3084134dc992 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:51:22 compute-0 nova_compute[253461]: 2025-11-22 03:51:22.837 253465 DEBUG oslo_concurrency.lockutils [req-3cc00dd4-2786-46cb-b8fc-a3bcb95776e9 req-fd89d844-f85f-4fb5-a2f0-3084134dc992 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:51:22 compute-0 nova_compute[253461]: 2025-11-22 03:51:22.837 253465 DEBUG nova.compute.manager [req-3cc00dd4-2786-46cb-b8fc-a3bcb95776e9 req-fd89d844-f85f-4fb5-a2f0-3084134dc992 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] No waiting events found dispatching network-vif-plugged-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:51:22 compute-0 nova_compute[253461]: 2025-11-22 03:51:22.837 253465 WARNING nova.compute.manager [req-3cc00dd4-2786-46cb-b8fc-a3bcb95776e9 req-fd89d844-f85f-4fb5-a2f0-3084134dc992 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Received unexpected event network-vif-plugged-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 for instance with vm_state active and task_state deleting.
Nov 22 03:51:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:23.003 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:51:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:23.004 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:51:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:23.004 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:51:23 compute-0 nova_compute[253461]: 2025-11-22 03:51:23.061 253465 DEBUG nova.network.neutron [-] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:51:23 compute-0 nova_compute[253461]: 2025-11-22 03:51:23.170 253465 INFO nova.compute.manager [-] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Took 1.63 seconds to deallocate network for instance.
Nov 22 03:51:23 compute-0 sudo[261545]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:51:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:51:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:51:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:51:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:51:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:51:23 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev d492d5d0-8200-4f9f-bc34-211cc49b2c47 does not exist
Nov 22 03:51:23 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 32ce93ce-ce15-408b-98a7-e591ad7753e3 does not exist
Nov 22 03:51:23 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev b413a420-f2fd-49a9-b39d-1f135c3bca9f does not exist
Nov 22 03:51:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:51:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:51:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:51:23 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:51:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:51:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:51:23 compute-0 nova_compute[253461]: 2025-11-22 03:51:23.355 253465 DEBUG oslo_concurrency.lockutils [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:51:23 compute-0 nova_compute[253461]: 2025-11-22 03:51:23.355 253465 DEBUG oslo_concurrency.lockutils [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:51:23 compute-0 sudo[261602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:51:23 compute-0 sudo[261602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:23 compute-0 sudo[261602]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:23 compute-0 nova_compute[253461]: 2025-11-22 03:51:23.410 253465 DEBUG oslo_concurrency.processutils [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:51:23 compute-0 sudo[261627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:51:23 compute-0 sudo[261627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:23 compute-0 sudo[261627]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:23 compute-0 sudo[261653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:51:23 compute-0 sudo[261653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:23 compute-0 sudo[261653]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:23 compute-0 nova_compute[253461]: 2025-11-22 03:51:23.547 253465 DEBUG nova.compute.manager [req-ae886767-9ff3-4d96-aadf-e6e014ad575f req-c0ec68f2-bb23-4db4-b22d-e615745f6731 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Received event network-vif-deleted-b7f7c4fc-78b2-493c-ab2d-99ee6202d9a6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:51:23 compute-0 sudo[261696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:51:23 compute-0 sudo[261696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:23 compute-0 ceph-mon[75011]: pgmap v954: 305 pgs: 305 active+clean; 114 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 9.0 KiB/s wr, 83 op/s
Nov 22 03:51:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:51:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:51:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:51:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:51:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:51:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:51:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:51:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4126081114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:51:23 compute-0 nova_compute[253461]: 2025-11-22 03:51:23.843 253465 DEBUG oslo_concurrency.processutils [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:51:23 compute-0 nova_compute[253461]: 2025-11-22 03:51:23.851 253465 DEBUG nova.compute.provider_tree [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:51:23 compute-0 nova_compute[253461]: 2025-11-22 03:51:23.887 253465 DEBUG nova.scheduler.client.report [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:51:24 compute-0 podman[261765]: 2025-11-22 03:51:23.94183626 +0000 UTC m=+0.027302837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:51:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 75 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 7.6 KiB/s wr, 90 op/s
Nov 22 03:51:24 compute-0 podman[261765]: 2025-11-22 03:51:24.103974202 +0000 UTC m=+0.189440748 container create 9660db4d63db48bc67cab105d9a3a0596ab2f2c79f111cfbbc6451284ea5fda8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:51:24 compute-0 nova_compute[253461]: 2025-11-22 03:51:24.109 253465 DEBUG oslo_concurrency.lockutils [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:51:24 compute-0 nova_compute[253461]: 2025-11-22 03:51:24.152 253465 INFO nova.scheduler.client.report [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Deleted allocations for instance 7a2bb77b-45b0-41b6-a9ae-27d62354c775
Nov 22 03:51:24 compute-0 systemd[1]: Started libpod-conmon-9660db4d63db48bc67cab105d9a3a0596ab2f2c79f111cfbbc6451284ea5fda8.scope.
Nov 22 03:51:24 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:51:24 compute-0 podman[261765]: 2025-11-22 03:51:24.207485171 +0000 UTC m=+0.292951778 container init 9660db4d63db48bc67cab105d9a3a0596ab2f2c79f111cfbbc6451284ea5fda8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lamarr, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 03:51:24 compute-0 podman[261765]: 2025-11-22 03:51:24.216080686 +0000 UTC m=+0.301547203 container start 9660db4d63db48bc67cab105d9a3a0596ab2f2c79f111cfbbc6451284ea5fda8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lamarr, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:51:24 compute-0 podman[261765]: 2025-11-22 03:51:24.219826285 +0000 UTC m=+0.305292952 container attach 9660db4d63db48bc67cab105d9a3a0596ab2f2c79f111cfbbc6451284ea5fda8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:51:24 compute-0 funny_lamarr[261781]: 167 167
Nov 22 03:51:24 compute-0 systemd[1]: libpod-9660db4d63db48bc67cab105d9a3a0596ab2f2c79f111cfbbc6451284ea5fda8.scope: Deactivated successfully.
Nov 22 03:51:24 compute-0 podman[261765]: 2025-11-22 03:51:24.222098197 +0000 UTC m=+0.307564714 container died 9660db4d63db48bc67cab105d9a3a0596ab2f2c79f111cfbbc6451284ea5fda8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:51:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f6a2604388b0c3daceb1c310317a4db45b9a799f6e288603e65b9fe7d13ee63-merged.mount: Deactivated successfully.
Nov 22 03:51:24 compute-0 podman[261765]: 2025-11-22 03:51:24.266449137 +0000 UTC m=+0.351915654 container remove 9660db4d63db48bc67cab105d9a3a0596ab2f2c79f111cfbbc6451284ea5fda8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lamarr, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:51:24 compute-0 nova_compute[253461]: 2025-11-22 03:51:24.276 253465 DEBUG oslo_concurrency.lockutils [None req-f4628c90-40ee-4a44-8cf2-96b70d9c261d f358860e840943098fe9f91af8f7b08f 4b513e0b5b0547e2835dc35495d5637f - - default default] Lock "7a2bb77b-45b0-41b6-a9ae-27d62354c775" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:51:24 compute-0 systemd[1]: libpod-conmon-9660db4d63db48bc67cab105d9a3a0596ab2f2c79f111cfbbc6451284ea5fda8.scope: Deactivated successfully.
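[annotation] The create/init/start/attach/died/remove sequence just completed is the journal footprint of a short-lived, auto-removed container. The lone "167 167" printed by funny_lamarr looks like cephadm's usual UID/GID probe of the image (167 is the ceph user/group) — an assumption based on the output, reproduced here as a plain podman call:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    # Run "stat" as the entrypoint to read the owner of /var/lib/ceph
    # inside the image; --rm explains the immediate died/remove events.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    print(out.strip())  # expected: "167 167"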
Nov 22 03:51:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:24 compute-0 podman[261803]: 2025-11-22 03:51:24.413205084 +0000 UTC m=+0.038524999 container create 74c25732d2889be2ef217d6877ed0b8cfcd1a327c40933001b7f5b4d3c8dff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:51:24 compute-0 systemd[1]: Started libpod-conmon-74c25732d2889be2ef217d6877ed0b8cfcd1a327c40933001b7f5b4d3c8dff4b.scope.
Nov 22 03:51:24 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:51:24 compute-0 podman[261803]: 2025-11-22 03:51:24.397055896 +0000 UTC m=+0.022375790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cac6f0d5f6783f54128f6cb1e1cd156f9ab7c93dc4bfe84e6ab5e46b9e52f5ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cac6f0d5f6783f54128f6cb1e1cd156f9ab7c93dc4bfe84e6ab5e46b9e52f5ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cac6f0d5f6783f54128f6cb1e1cd156f9ab7c93dc4bfe84e6ab5e46b9e52f5ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cac6f0d5f6783f54128f6cb1e1cd156f9ab7c93dc4bfe84e6ab5e46b9e52f5ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cac6f0d5f6783f54128f6cb1e1cd156f9ab7c93dc4bfe84e6ab5e46b9e52f5ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:24 compute-0 podman[261803]: 2025-11-22 03:51:24.526588348 +0000 UTC m=+0.151908283 container init 74c25732d2889be2ef217d6877ed0b8cfcd1a327c40933001b7f5b4d3c8dff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_elbakyan, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:51:24 compute-0 podman[261803]: 2025-11-22 03:51:24.543245685 +0000 UTC m=+0.168565560 container start 74c25732d2889be2ef217d6877ed0b8cfcd1a327c40933001b7f5b4d3c8dff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_elbakyan, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:51:24 compute-0 podman[261803]: 2025-11-22 03:51:24.546351992 +0000 UTC m=+0.171671907 container attach 74c25732d2889be2ef217d6877ed0b8cfcd1a327c40933001b7f5b4d3c8dff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 03:51:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4126081114' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:51:25 compute-0 nova_compute[253461]: 2025-11-22 03:51:25.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:51:25 compute-0 nova_compute[253461]: 2025-11-22 03:51:25.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:51:25 compute-0 nova_compute[253461]: 2025-11-22 03:51:25.431 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:51:25 compute-0 nova_compute[253461]: 2025-11-22 03:51:25.556 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 03:51:25 compute-0 wizardly_elbakyan[261820]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:51:25 compute-0 wizardly_elbakyan[261820]: --> relative data size: 1.0
Nov 22 03:51:25 compute-0 wizardly_elbakyan[261820]: --> All data devices are unavailable
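[annotation] "All data devices are unavailable" is lvm batch's idempotent path: the three LVs already carry ceph OSD tags from an earlier prepare, so this run creates nothing and exits cleanly. A hypothetical check (not ceph-volume's code) showing how a tagged LV reads as "taken":

    import json
    import subprocess

    # lvm2's JSON report: {"report": [{"lv": [...]}]}
    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,lv_tags"],
        capture_output=True, text=True, check=True).stdout
    for lv in json.loads(out)["report"][0]["lv"]:
        taken = "ceph.osd_id=" in lv["lv_tags"]
        print(lv["vg_name"], lv["lv_name"],
              "already a ceph OSD" if taken else "free")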
Nov 22 03:51:25 compute-0 systemd[1]: libpod-74c25732d2889be2ef217d6877ed0b8cfcd1a327c40933001b7f5b4d3c8dff4b.scope: Deactivated successfully.
Nov 22 03:51:25 compute-0 nova_compute[253461]: 2025-11-22 03:51:25.614 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:25 compute-0 podman[261803]: 2025-11-22 03:51:25.615320283 +0000 UTC m=+1.240640178 container died 74c25732d2889be2ef217d6877ed0b8cfcd1a327c40933001b7f5b4d3c8dff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_elbakyan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:51:25 compute-0 systemd[1]: libpod-74c25732d2889be2ef217d6877ed0b8cfcd1a327c40933001b7f5b4d3c8dff4b.scope: Consumed 1.003s CPU time.
Nov 22 03:51:25 compute-0 ceph-mon[75011]: pgmap v955: 305 pgs: 305 active+clean; 75 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 7.6 KiB/s wr, 90 op/s
Nov 22 03:51:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-cac6f0d5f6783f54128f6cb1e1cd156f9ab7c93dc4bfe84e6ab5e46b9e52f5ed-merged.mount: Deactivated successfully.
Nov 22 03:51:25 compute-0 podman[261803]: 2025-11-22 03:51:25.827044709 +0000 UTC m=+1.452364584 container remove 74c25732d2889be2ef217d6877ed0b8cfcd1a327c40933001b7f5b4d3c8dff4b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:51:25 compute-0 systemd[1]: libpod-conmon-74c25732d2889be2ef217d6877ed0b8cfcd1a327c40933001b7f5b4d3c8dff4b.scope: Deactivated successfully.
Nov 22 03:51:25 compute-0 sudo[261696]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:25 compute-0 sudo[261860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:51:25 compute-0 sudo[261860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:25 compute-0 sudo[261860]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:25 compute-0 sudo[261885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:51:26 compute-0 sudo[261885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:26 compute-0 sudo[261885]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 42 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.3 KiB/s wr, 68 op/s
Nov 22 03:51:26 compute-0 sudo[261910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:51:26 compute-0 sudo[261910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:26 compute-0 sudo[261910]: pam_unix(sudo:session): session closed for user root
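[annotation] The /bin/true, /bin/which python3, /bin/true run of sudo sessions is the per-command preamble before each cephadm payload (the lvm list call that follows): verify passwordless escalation, locate an interpreter, verify again. A local sketch of the same probes — an assumption about intent, not the orchestrator module's exact code:

    import subprocess

    def probe_host():
        # Can we escalate at all?
        subprocess.run(["sudo", "/bin/true"], check=True)
        # Which interpreter should run the cephadm payload?
        which = subprocess.run(["sudo", "/bin/which", "python3"],
                               capture_output=True, text=True, check=True)
        return which.stdout.strip()

    print(probe_host())  # e.g. /usr/bin/python3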
Nov 22 03:51:26 compute-0 sudo[261935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:51:26 compute-0 sudo[261935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:26 compute-0 nova_compute[253461]: 2025-11-22 03:51:26.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:51:26 compute-0 nova_compute[253461]: 2025-11-22 03:51:26.471 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:51:26 compute-0 nova_compute[253461]: 2025-11-22 03:51:26.472 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:51:26 compute-0 nova_compute[253461]: 2025-11-22 03:51:26.472 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:51:26 compute-0 nova_compute[253461]: 2025-11-22 03:51:26.472 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:51:26 compute-0 nova_compute[253461]: 2025-11-22 03:51:26.472 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
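[annotation] The resource audit shells out to the exact command shown above; in "ceph df --format=json" output the cluster-wide totals live under the "stats" key. A sketch of consuming it (illustrative, not Nova's exact code path):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print(f'{stats["total_avail_bytes"] / gib:.1f} GiB free of '
          f'{stats["total_bytes"] / gib:.1f} GiB')
    # Matches the "60 GiB / 60 GiB avail" figure in the pgmap lines.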
Nov 22 03:51:26 compute-0 podman[261998]: 2025-11-22 03:51:26.509615619 +0000 UTC m=+0.045150552 container create 337171ad6c557a71085260bd3af341b55da0fdeba81e33e98637a47452147d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:51:26 compute-0 nova_compute[253461]: 2025-11-22 03:51:26.521 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:26 compute-0 systemd[1]: Started libpod-conmon-337171ad6c557a71085260bd3af341b55da0fdeba81e33e98637a47452147d60.scope.
Nov 22 03:51:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:51:26 compute-0 podman[261998]: 2025-11-22 03:51:26.487175639 +0000 UTC m=+0.022710591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:51:26 compute-0 podman[261998]: 2025-11-22 03:51:26.593413819 +0000 UTC m=+0.128948752 container init 337171ad6c557a71085260bd3af341b55da0fdeba81e33e98637a47452147d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:51:26 compute-0 podman[261998]: 2025-11-22 03:51:26.601901345 +0000 UTC m=+0.137436268 container start 337171ad6c557a71085260bd3af341b55da0fdeba81e33e98637a47452147d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:51:26 compute-0 sweet_moore[262016]: 167 167
Nov 22 03:51:26 compute-0 systemd[1]: libpod-337171ad6c557a71085260bd3af341b55da0fdeba81e33e98637a47452147d60.scope: Deactivated successfully.
Nov 22 03:51:26 compute-0 podman[261998]: 2025-11-22 03:51:26.60699456 +0000 UTC m=+0.142529503 container attach 337171ad6c557a71085260bd3af341b55da0fdeba81e33e98637a47452147d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:51:26 compute-0 podman[261998]: 2025-11-22 03:51:26.607715387 +0000 UTC m=+0.143250310 container died 337171ad6c557a71085260bd3af341b55da0fdeba81e33e98637a47452147d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:51:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-193db8e3d082c5e92bd33dc2b365d902e70ee54c9844828edbd28abc15e96dcf-merged.mount: Deactivated successfully.
Nov 22 03:51:26 compute-0 podman[261998]: 2025-11-22 03:51:26.646239548 +0000 UTC m=+0.181774471 container remove 337171ad6c557a71085260bd3af341b55da0fdeba81e33e98637a47452147d60 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_moore, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 03:51:26 compute-0 systemd[1]: libpod-conmon-337171ad6c557a71085260bd3af341b55da0fdeba81e33e98637a47452147d60.scope: Deactivated successfully.
Nov 22 03:51:26 compute-0 ceph-mon[75011]: pgmap v956: 305 pgs: 305 active+clean; 42 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 5.3 KiB/s wr, 68 op/s
Nov 22 03:51:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3021834105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3021834105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:26 compute-0 podman[262060]: 2025-11-22 03:51:26.815295272 +0000 UTC m=+0.044667153 container create ec3bc5dd5749f7569c4be4af7c50d43722127ba23a086025e49b8cf74d3d6240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:51:26 compute-0 systemd[1]: Started libpod-conmon-ec3bc5dd5749f7569c4be4af7c50d43722127ba23a086025e49b8cf74d3d6240.scope.
Nov 22 03:51:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f29f1509d434fd0572f132582a5c739835ff82e8129d34fc02e1a32620690fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f29f1509d434fd0572f132582a5c739835ff82e8129d34fc02e1a32620690fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f29f1509d434fd0572f132582a5c739835ff82e8129d34fc02e1a32620690fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:26 compute-0 podman[262060]: 2025-11-22 03:51:26.794528549 +0000 UTC m=+0.023900411 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:51:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f29f1509d434fd0572f132582a5c739835ff82e8129d34fc02e1a32620690fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121625409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:51:26 compute-0 podman[262060]: 2025-11-22 03:51:26.905624371 +0000 UTC m=+0.134996302 container init ec3bc5dd5749f7569c4be4af7c50d43722127ba23a086025e49b8cf74d3d6240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:51:26 compute-0 nova_compute[253461]: 2025-11-22 03:51:26.910 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:51:26 compute-0 podman[262060]: 2025-11-22 03:51:26.913592427 +0000 UTC m=+0.142964279 container start ec3bc5dd5749f7569c4be4af7c50d43722127ba23a086025e49b8cf74d3d6240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dhawan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 03:51:26 compute-0 podman[262060]: 2025-11-22 03:51:26.916922777 +0000 UTC m=+0.146294639 container attach ec3bc5dd5749f7569c4be4af7c50d43722127ba23a086025e49b8cf74d3d6240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:51:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3371541181' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3371541181' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:27 compute-0 nova_compute[253461]: 2025-11-22 03:51:27.110 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:51:27 compute-0 nova_compute[253461]: 2025-11-22 03:51:27.112 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4726MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:51:27 compute-0 nova_compute[253461]: 2025-11-22 03:51:27.113 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:51:27 compute-0 nova_compute[253461]: 2025-11-22 03:51:27.113 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:51:27 compute-0 nova_compute[253461]: 2025-11-22 03:51:27.194 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:51:27 compute-0 nova_compute[253461]: 2025-11-22 03:51:27.195 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:51:27 compute-0 nova_compute[253461]: 2025-11-22 03:51:27.214 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:51:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1743110788' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1743110788' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:51:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1797592217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:51:27 compute-0 nova_compute[253461]: 2025-11-22 03:51:27.657 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:51:27 compute-0 sad_dhawan[262077]: {
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:     "0": [
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:         {
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "devices": [
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "/dev/loop3"
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             ],
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_name": "ceph_lv0",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_size": "21470642176",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "name": "ceph_lv0",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "tags": {
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.cluster_name": "ceph",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.crush_device_class": "",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.encrypted": "0",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.osd_id": "0",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.type": "block",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.vdo": "0"
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             },
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "type": "block",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "vg_name": "ceph_vg0"
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:         }
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:     ],
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:     "1": [
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:         {
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "devices": [
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "/dev/loop4"
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             ],
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_name": "ceph_lv1",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_size": "21470642176",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "name": "ceph_lv1",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "tags": {
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.cluster_name": "ceph",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.crush_device_class": "",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.encrypted": "0",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.osd_id": "1",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.type": "block",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.vdo": "0"
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             },
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "type": "block",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "vg_name": "ceph_vg1"
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:         }
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:     ],
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:     "2": [
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:         {
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "devices": [
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "/dev/loop5"
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             ],
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_name": "ceph_lv2",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_size": "21470642176",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "name": "ceph_lv2",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "tags": {
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.cluster_name": "ceph",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.crush_device_class": "",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.encrypted": "0",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.osd_id": "2",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.type": "block",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:                 "ceph.vdo": "0"
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             },
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "type": "block",
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:             "vg_name": "ceph_vg2"
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:         }
Nov 22 03:51:27 compute-0 sad_dhawan[262077]:     ]
Nov 22 03:51:27 compute-0 sad_dhawan[262077]: }
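[annotation] The JSON just emitted by "ceph-volume lvm list --format json" maps OSD ids to their backing LVs. A minimal sketch reducing it to an osd_id -> device table, with the payload abbreviated from the listing above (non-essential tags dropped):

    listing = {
        "0": [{"type": "block", "lv_path": "/dev/ceph_vg0/ceph_lv0",
               "tags": {"ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289"}}],
        "1": [{"type": "block", "lv_path": "/dev/ceph_vg1/ceph_lv1",
               "tags": {"ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f"}}],
        "2": [{"type": "block", "lv_path": "/dev/ceph_vg2/ceph_lv2",
               "tags": {"ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391"}}],
    }
    for osd_id in sorted(listing, key=int):
        for lv in listing[osd_id]:
            if lv["type"] == "block":
                print(f'osd.{osd_id} -> {lv["lv_path"]} '
                      f'(fsid {lv["tags"]["ceph.osd_fsid"]})')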
Nov 22 03:51:27 compute-0 nova_compute[253461]: 2025-11-22 03:51:27.663 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:51:27 compute-0 nova_compute[253461]: 2025-11-22 03:51:27.678 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:51:27 compute-0 systemd[1]: libpod-ec3bc5dd5749f7569c4be4af7c50d43722127ba23a086025e49b8cf74d3d6240.scope: Deactivated successfully.
Nov 22 03:51:27 compute-0 podman[262060]: 2025-11-22 03:51:27.685357302 +0000 UTC m=+0.914729154 container died ec3bc5dd5749f7569c4be4af7c50d43722127ba23a086025e49b8cf74d3d6240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:51:27 compute-0 nova_compute[253461]: 2025-11-22 03:51:27.700 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:51:27 compute-0 nova_compute[253461]: 2025-11-22 03:51:27.700 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:51:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f29f1509d434fd0572f132582a5c739835ff82e8129d34fc02e1a32620690fe-merged.mount: Deactivated successfully.
Nov 22 03:51:27 compute-0 podman[262060]: 2025-11-22 03:51:27.738465178 +0000 UTC m=+0.967837060 container remove ec3bc5dd5749f7569c4be4af7c50d43722127ba23a086025e49b8cf74d3d6240 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:51:27 compute-0 systemd[1]: libpod-conmon-ec3bc5dd5749f7569c4be4af7c50d43722127ba23a086025e49b8cf74d3d6240.scope: Deactivated successfully.
Nov 22 03:51:27 compute-0 sudo[261935]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3021834105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3021834105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2121625409' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:51:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3371541181' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3371541181' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1743110788' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1743110788' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1797592217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:51:27 compute-0 sudo[262124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:51:27 compute-0 sudo[262124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:27 compute-0 sudo[262124]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:27 compute-0 sudo[262149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:51:27 compute-0 sudo[262149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:27 compute-0 sudo[262149]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:27 compute-0 sudo[262174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:51:27 compute-0 sudo[262174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:27 compute-0 sudo[262174]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:28 compute-0 sudo[262199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:51:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 42 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 4.0 KiB/s wr, 73 op/s
Nov 22 03:51:28 compute-0 sudo[262199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:28 compute-0 podman[262262]: 2025-11-22 03:51:28.411549793 +0000 UTC m=+0.079357205 container create 6b11f1c2b112c79fddc55f5baec1069b7808674642406e4f545746af58fd47e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhabha, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:51:28 compute-0 podman[262262]: 2025-11-22 03:51:28.353048875 +0000 UTC m=+0.020856267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:51:28 compute-0 systemd[1]: Started libpod-conmon-6b11f1c2b112c79fddc55f5baec1069b7808674642406e4f545746af58fd47e3.scope.
Nov 22 03:51:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:51:28 compute-0 podman[262262]: 2025-11-22 03:51:28.558151425 +0000 UTC m=+0.225958867 container init 6b11f1c2b112c79fddc55f5baec1069b7808674642406e4f545746af58fd47e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhabha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:51:28 compute-0 podman[262262]: 2025-11-22 03:51:28.56558836 +0000 UTC m=+0.233395752 container start 6b11f1c2b112c79fddc55f5baec1069b7808674642406e4f545746af58fd47e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:51:28 compute-0 nifty_bhabha[262278]: 167 167
Nov 22 03:51:28 compute-0 systemd[1]: libpod-6b11f1c2b112c79fddc55f5baec1069b7808674642406e4f545746af58fd47e3.scope: Deactivated successfully.
Nov 22 03:51:28 compute-0 conmon[262278]: conmon 6b11f1c2b112c79fddc5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6b11f1c2b112c79fddc55f5baec1069b7808674642406e4f545746af58fd47e3.scope/container/memory.events
Nov 22 03:51:28 compute-0 podman[262262]: 2025-11-22 03:51:28.655284519 +0000 UTC m=+0.323091891 container attach 6b11f1c2b112c79fddc55f5baec1069b7808674642406e4f545746af58fd47e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhabha, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 03:51:28 compute-0 podman[262262]: 2025-11-22 03:51:28.65600914 +0000 UTC m=+0.323816552 container died 6b11f1c2b112c79fddc55f5baec1069b7808674642406e4f545746af58fd47e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Nov 22 03:51:28 compute-0 nova_compute[253461]: 2025-11-22 03:51:28.697 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:51:28 compute-0 nova_compute[253461]: 2025-11-22 03:51:28.700 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:51:28 compute-0 nova_compute[253461]: 2025-11-22 03:51:28.701 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:51:28 compute-0 nova_compute[253461]: 2025-11-22 03:51:28.701 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:51:28 compute-0 nova_compute[253461]: 2025-11-22 03:51:28.702 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:51:28 compute-0 nova_compute[253461]: 2025-11-22 03:51:28.702 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 03:51:28 compute-0 ceph-mon[75011]: pgmap v957: 305 pgs: 305 active+clean; 42 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 4.0 KiB/s wr, 73 op/s
Nov 22 03:51:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b510512587183591a7f76b95f3c3290af77479834c200905b083a10b822d7bc8-merged.mount: Deactivated successfully.
Nov 22 03:51:29 compute-0 podman[262262]: 2025-11-22 03:51:29.198172166 +0000 UTC m=+0.865979588 container remove 6b11f1c2b112c79fddc55f5baec1069b7808674642406e4f545746af58fd47e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_bhabha, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:51:29 compute-0 systemd[1]: libpod-conmon-6b11f1c2b112c79fddc55f5baec1069b7808674642406e4f545746af58fd47e3.scope: Deactivated successfully.
Nov 22 03:51:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4127224793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4127224793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:29 compute-0 podman[262302]: 2025-11-22 03:51:29.379111747 +0000 UTC m=+0.046025878 container create 529ddc57a53841f6aa02b8536f35a99aff27a0253a9ebbb24e494eb89d47bcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_antonelli, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:51:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:29 compute-0 systemd[1]: Started libpod-conmon-529ddc57a53841f6aa02b8536f35a99aff27a0253a9ebbb24e494eb89d47bcdd.scope.
Nov 22 03:51:29 compute-0 nova_compute[253461]: 2025-11-22 03:51:29.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:51:29 compute-0 podman[262302]: 2025-11-22 03:51:29.354663407 +0000 UTC m=+0.021577538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:51:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc4445fba2cb4724f0434f5774601f9e3f20e096772a726f789f06d4a052c53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc4445fba2cb4724f0434f5774601f9e3f20e096772a726f789f06d4a052c53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc4445fba2cb4724f0434f5774601f9e3f20e096772a726f789f06d4a052c53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc4445fba2cb4724f0434f5774601f9e3f20e096772a726f789f06d4a052c53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:51:29 compute-0 podman[262302]: 2025-11-22 03:51:29.490828021 +0000 UTC m=+0.157742202 container init 529ddc57a53841f6aa02b8536f35a99aff27a0253a9ebbb24e494eb89d47bcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_antonelli, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:51:29 compute-0 podman[262302]: 2025-11-22 03:51:29.498905005 +0000 UTC m=+0.165819106 container start 529ddc57a53841f6aa02b8536f35a99aff27a0253a9ebbb24e494eb89d47bcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_antonelli, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:51:29 compute-0 podman[262302]: 2025-11-22 03:51:29.507844566 +0000 UTC m=+0.174758747 container attach 529ddc57a53841f6aa02b8536f35a99aff27a0253a9ebbb24e494eb89d47bcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_antonelli, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:51:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4127224793' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4127224793' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 42 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.8 KiB/s wr, 69 op/s
Nov 22 03:51:30 compute-0 nova_compute[253461]: 2025-11-22 03:51:30.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:51:30 compute-0 bold_antonelli[262318]: {
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "osd_id": 1,
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "type": "bluestore"
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:     },
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "osd_id": 0,
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "type": "bluestore"
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:     },
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "osd_id": 2,
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:         "type": "bluestore"
Nov 22 03:51:30 compute-0 bold_antonelli[262318]:     }
Nov 22 03:51:30 compute-0 bold_antonelli[262318]: }
Nov 22 03:51:30 compute-0 systemd[1]: libpod-529ddc57a53841f6aa02b8536f35a99aff27a0253a9ebbb24e494eb89d47bcdd.scope: Deactivated successfully.
Nov 22 03:51:30 compute-0 podman[262302]: 2025-11-22 03:51:30.539456845 +0000 UTC m=+1.206370976 container died 529ddc57a53841f6aa02b8536f35a99aff27a0253a9ebbb24e494eb89d47bcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 03:51:30 compute-0 systemd[1]: libpod-529ddc57a53841f6aa02b8536f35a99aff27a0253a9ebbb24e494eb89d47bcdd.scope: Consumed 1.039s CPU time.
Nov 22 03:51:30 compute-0 nova_compute[253461]: 2025-11-22 03:51:30.618 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bc4445fba2cb4724f0434f5774601f9e3f20e096772a726f789f06d4a052c53-merged.mount: Deactivated successfully.
Nov 22 03:51:30 compute-0 podman[262302]: 2025-11-22 03:51:30.817589396 +0000 UTC m=+1.484503537 container remove 529ddc57a53841f6aa02b8536f35a99aff27a0253a9ebbb24e494eb89d47bcdd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_antonelli, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 03:51:30 compute-0 systemd[1]: libpod-conmon-529ddc57a53841f6aa02b8536f35a99aff27a0253a9ebbb24e494eb89d47bcdd.scope: Deactivated successfully.
Nov 22 03:51:30 compute-0 sudo[262199]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:51:30 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:51:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:51:31 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:51:31 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev bf1b7af3-9ebd-4713-b8cb-8d202e121869 does not exist
Nov 22 03:51:31 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ae327616-98f7-4427-a88a-d4321860ec09 does not exist
Nov 22 03:51:31 compute-0 ceph-mon[75011]: pgmap v958: 305 pgs: 305 active+clean; 42 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.8 KiB/s wr, 69 op/s
Nov 22 03:51:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:51:31 compute-0 sudo[262365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:51:31 compute-0 sudo[262365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:31 compute-0 sudo[262365]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:31 compute-0 sudo[262390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:51:31 compute-0 sudo[262390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:51:31 compute-0 sudo[262390]: pam_unix(sudo:session): session closed for user root
Nov 22 03:51:31 compute-0 nova_compute[253461]: 2025-11-22 03:51:31.556 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 4.4 KiB/s wr, 108 op/s
Nov 22 03:51:32 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:51:32 compute-0 nova_compute[253461]: 2025-11-22 03:51:32.536 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:32 compute-0 nova_compute[253461]: 2025-11-22 03:51:32.707 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/8334329' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/8334329' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:33 compute-0 ceph-mon[75011]: pgmap v959: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 4.4 KiB/s wr, 108 op/s
Nov 22 03:51:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/8334329' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/8334329' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.5 KiB/s wr, 94 op/s
Nov 22 03:51:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.403603) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783494403675, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1042, "num_deletes": 264, "total_data_size": 1273041, "memory_usage": 1292768, "flush_reason": "Manual Compaction"}
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783494432310, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1257678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18935, "largest_seqno": 19976, "table_properties": {"data_size": 1252455, "index_size": 2683, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11539, "raw_average_key_size": 19, "raw_value_size": 1241689, "raw_average_value_size": 2118, "num_data_blocks": 119, "num_entries": 586, "num_filter_entries": 586, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763783430, "oldest_key_time": 1763783430, "file_creation_time": 1763783494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 28795 microseconds, and 7564 cpu microseconds.
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.432389) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1257678 bytes OK
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.432475) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.436155) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.436193) EVENT_LOG_v1 {"time_micros": 1763783494436183, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.436221) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1267895, prev total WAL file size 1267895, number of live WAL files 2.
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.437098) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1228KB)], [44(6262KB)]
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783494437148, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7670895, "oldest_snapshot_seqno": -1}
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4361 keys, 7544441 bytes, temperature: kUnknown
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783494540391, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7544441, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7513700, "index_size": 18727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 108080, "raw_average_key_size": 24, "raw_value_size": 7433165, "raw_average_value_size": 1704, "num_data_blocks": 782, "num_entries": 4361, "num_filter_entries": 4361, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763783494, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.540600) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7544441 bytes
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.698985) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.2 rd, 73.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 6.1 +0.0 blob) out(7.2 +0.0 blob), read-write-amplify(12.1) write-amplify(6.0) OK, records in: 4899, records dropped: 538 output_compression: NoCompression
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.699042) EVENT_LOG_v1 {"time_micros": 1763783494699018, "job": 22, "event": "compaction_finished", "compaction_time_micros": 103313, "compaction_time_cpu_micros": 31146, "output_level": 6, "num_output_files": 1, "total_output_size": 7544441, "num_input_records": 4899, "num_output_records": 4361, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783494699800, "job": 22, "event": "table_file_deletion", "file_number": 46}
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783494702349, "job": 22, "event": "table_file_deletion", "file_number": 44}
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.437035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.702953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.702962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.702967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.702972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:51:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:51:34.702976) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:51:35 compute-0 nova_compute[253461]: 2025-11-22 03:51:35.550 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763783480.5485833, 7a2bb77b-45b0-41b6-a9ae-27d62354c775 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:51:35 compute-0 nova_compute[253461]: 2025-11-22 03:51:35.550 253465 INFO nova.compute.manager [-] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] VM Stopped (Lifecycle Event)
Nov 22 03:51:35 compute-0 nova_compute[253461]: 2025-11-22 03:51:35.594 253465 DEBUG nova.compute.manager [None req-718913ec-d35a-4963-abe0-2533028eeb5f - - - - - -] [instance: 7a2bb77b-45b0-41b6-a9ae-27d62354c775] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:51:35 compute-0 nova_compute[253461]: 2025-11-22 03:51:35.622 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:35 compute-0 ceph-mon[75011]: pgmap v960: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 3.5 KiB/s wr, 94 op/s
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.3 KiB/s wr, 77 op/s
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:51:36
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'vms', 'default.rgw.log', 'backups', 'volumes', '.mgr', 'cephfs.cephfs.data']
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:51:36 compute-0 podman[262416]: 2025-11-22 03:51:36.411176087 +0000 UTC m=+0.082852108 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:51:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:51:36 compute-0 podman[262417]: 2025-11-22 03:51:36.452597387 +0000 UTC m=+0.121816954 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:51:36 compute-0 nova_compute[253461]: 2025-11-22 03:51:36.557 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:37 compute-0 ceph-mon[75011]: pgmap v961: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.3 KiB/s wr, 77 op/s
Nov 22 03:51:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 1.8 KiB/s wr, 69 op/s
Nov 22 03:51:38 compute-0 ceph-mon[75011]: pgmap v962: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 1.8 KiB/s wr, 69 op/s
Nov 22 03:51:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.3 KiB/s wr, 52 op/s
Nov 22 03:51:40 compute-0 nova_compute[253461]: 2025-11-22 03:51:40.626 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:41 compute-0 ceph-mon[75011]: pgmap v963: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.3 KiB/s wr, 52 op/s
Nov 22 03:51:41 compute-0 nova_compute[253461]: 2025-11-22 03:51:41.598 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.3 KiB/s wr, 52 op/s
Nov 22 03:51:43 compute-0 ceph-mon[75011]: pgmap v964: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.3 KiB/s wr, 52 op/s
Nov 22 03:51:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 255 B/s wr, 6 op/s
Nov 22 03:51:44 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Nov 22 03:51:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:45 compute-0 ceph-mon[75011]: pgmap v965: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 255 B/s wr, 6 op/s
Nov 22 03:51:45 compute-0 nova_compute[253461]: 2025-11-22 03:51:45.629 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 687 KiB/s rd, 341 B/s wr, 6 op/s
Nov 22 03:51:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Nov 22 03:51:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Nov 22 03:51:46 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:51:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:51:46 compute-0 nova_compute[253461]: 2025-11-22 03:51:46.601 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:47 compute-0 ceph-mon[75011]: pgmap v966: 305 pgs: 305 active+clean; 41 MiB data, 230 MiB used, 60 GiB / 60 GiB avail; 687 KiB/s rd, 341 B/s wr, 6 op/s
Nov 22 03:51:47 compute-0 ceph-mon[75011]: osdmap e162: 3 total, 3 up, 3 in
Nov 22 03:51:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 KiB/s wr, 17 op/s
Nov 22 03:51:49 compute-0 ceph-mon[75011]: pgmap v968: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 KiB/s wr, 17 op/s
Nov 22 03:51:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2014471517' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2014471517' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 KiB/s wr, 17 op/s
Nov 22 03:51:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2014471517' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2014471517' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:50 compute-0 podman[262463]: 2025-11-22 03:51:50.420849447 +0000 UTC m=+0.088213314 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 03:51:50 compute-0 nova_compute[253461]: 2025-11-22 03:51:50.632 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2089538584' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2089538584' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:50.987 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:51:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:50.988 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
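
The two ovn_metadata_agent lines above show an ovsdbapp row event matching an SB_Global update (nb_cfg bumped from 4 to 5) and the agent deliberately waiting six seconds before acknowledging it. A minimal sketch, with an illustrative handler body, of how such an event class is declared against the same table:

    from ovsdbapp.backend.ovs_idl import event

    class SbGlobalUpdateEvent(event.RowEvent):
        def __init__(self):
            # Match only 'update' events on the single-row SB_Global table,
            # as in the "Matched UPDATE" line above.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # Illustrative body: the real agent schedules a delayed write of
            # row.nb_cfg back to its Chassis_Private record.
            print('SB_Global nb_cfg is now', row.nb_cfg)
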
Nov 22 03:51:50 compute-0 nova_compute[253461]: 2025-11-22 03:51:50.989 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:51 compute-0 ceph-mon[75011]: pgmap v969: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.2 KiB/s wr, 17 op/s
Nov 22 03:51:51 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2089538584' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:51 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2089538584' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:51 compute-0 nova_compute[253461]: 2025-11-22 03:51:51.602 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 46 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 60 op/s
Nov 22 03:51:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Nov 22 03:51:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Nov 22 03:51:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Nov 22 03:51:53 compute-0 ceph-mon[75011]: pgmap v970: 305 pgs: 305 active+clean; 46 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 60 op/s
Nov 22 03:51:53 compute-0 ceph-mon[75011]: osdmap e163: 3 total, 3 up, 3 in
Nov 22 03:51:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 49 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 361 KiB/s wr, 128 op/s
Nov 22 03:51:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
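
For readability, the mon cache autotuner figures in the _set_new_cache_sizes line above convert as follows (values taken verbatim from the log):

    # cache_size ~973 MiB; inc/full_alloc exactly 332 MiB; kv_alloc exactly 304 MiB.
    for name, nbytes in [('cache_size', 1020054731),
                         ('inc/full_alloc', 348127232),
                         ('kv_alloc', 318767104)]:
        print(name, round(nbytes / 2**20), 'MiB')
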
Nov 22 03:51:55 compute-0 ceph-mon[75011]: pgmap v972: 305 pgs: 305 active+clean; 49 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 361 KiB/s wr, 128 op/s
Nov 22 03:51:55 compute-0 nova_compute[253461]: 2025-11-22 03:51:55.636 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 49 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 980 KiB/s wr, 128 op/s
Nov 22 03:51:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/231848661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/231848661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:51:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3797505301' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:51:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3797505301' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:56 compute-0 nova_compute[253461]: 2025-11-22 03:51:56.605 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:51:56 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/231848661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:56 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/231848661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:51:56.991 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
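
This transaction is the delayed acknowledgment promised six seconds earlier: the agent writes the new nb_cfg into its Chassis_Private row. A hedged sketch of the same db_set through ovsdbapp (idl_api stands in for the agent's connected API object; the UUID and values are the ones logged, and the logged command additionally sets if_exists=True to tolerate a missing row):

    def ack_sb_cfg(idl_api):
        # Equivalent of the DbSetCommand in the transaction above.
        with idl_api.transaction(check_error=True) as txn:
            txn.add(idl_api.db_set(
                'Chassis_Private', '7d76f7df-fc3b-449d-b505-65b8b0ef9c3a',
                ('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'})))
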
Nov 22 03:51:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 2.1 MiB/s wr, 141 op/s
Nov 22 03:51:58 compute-0 ceph-mon[75011]: pgmap v973: 305 pgs: 305 active+clean; 49 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 980 KiB/s wr, 128 op/s
Nov 22 03:51:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3797505301' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:51:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3797505301' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:51:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:51:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Nov 22 03:51:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Nov 22 03:51:59 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Nov 22 03:52:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 2.6 MiB/s wr, 123 op/s
Nov 22 03:52:00 compute-0 ceph-mon[75011]: pgmap v974: 305 pgs: 305 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 2.1 MiB/s wr, 141 op/s
Nov 22 03:52:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:52:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4127083307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:52:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:52:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4127083307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:52:00 compute-0 nova_compute[253461]: 2025-11-22 03:52:00.638 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:01 compute-0 ceph-mon[75011]: osdmap e164: 3 total, 3 up, 3 in
Nov 22 03:52:01 compute-0 ceph-mon[75011]: pgmap v976: 305 pgs: 305 active+clean; 41 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 2.6 MiB/s wr, 123 op/s
Nov 22 03:52:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4127083307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:52:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4127083307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:52:01 compute-0 nova_compute[253461]: 2025-11-22 03:52:01.607 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 22 03:52:03 compute-0 ceph-mon[75011]: pgmap v977: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 2.1 MiB/s wr, 101 op/s
Nov 22 03:52:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 22 03:52:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:05 compute-0 ceph-mon[75011]: pgmap v978: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 22 03:52:05 compute-0 nova_compute[253461]: 2025-11-22 03:52:05.642 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.2 MiB/s wr, 48 op/s
Nov 22 03:52:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:52:06 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2815253391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:52:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:52:06 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2815253391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:52:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:52:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:52:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:52:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:52:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:52:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:52:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2815253391' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:52:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2815253391' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:52:06 compute-0 nova_compute[253461]: 2025-11-22 03:52:06.608 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:07 compute-0 podman[262483]: 2025-11-22 03:52:07.372616653 +0000 UTC m=+0.051992345 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:52:07 compute-0 podman[262484]: 2025-11-22 03:52:07.41925156 +0000 UTC m=+0.091473189 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:52:07 compute-0 ceph-mon[75011]: pgmap v979: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.2 MiB/s wr, 48 op/s
Nov 22 03:52:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.9 KiB/s wr, 20 op/s
Nov 22 03:52:08 compute-0 ovn_controller[152691]: 2025-11-22T03:52:08Z|00036|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 22 03:52:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Nov 22 03:52:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Nov 22 03:52:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Nov 22 03:52:08 compute-0 ceph-mon[75011]: pgmap v980: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.9 KiB/s wr, 20 op/s
Nov 22 03:52:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:09 compute-0 ceph-mon[75011]: osdmap e165: 3 total, 3 up, 3 in
Nov 22 03:52:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.9 KiB/s wr, 20 op/s
Nov 22 03:52:10 compute-0 nova_compute[253461]: 2025-11-22 03:52:10.645 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:10 compute-0 ceph-mon[75011]: pgmap v982: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.9 KiB/s wr, 20 op/s
Nov 22 03:52:11 compute-0 nova_compute[253461]: 2025-11-22 03:52:11.660 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Nov 22 03:52:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.3 KiB/s wr, 28 op/s
Nov 22 03:52:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Nov 22 03:52:12 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Nov 22 03:52:13 compute-0 ceph-mon[75011]: pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.3 KiB/s wr, 28 op/s
Nov 22 03:52:13 compute-0 ceph-mon[75011]: osdmap e166: 3 total, 3 up, 3 in
Nov 22 03:52:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:52:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2811486325' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:52:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:52:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2811486325' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:52:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.5 KiB/s wr, 20 op/s
Nov 22 03:52:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Nov 22 03:52:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Nov 22 03:52:14 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Nov 22 03:52:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2811486325' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:52:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2811486325' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:52:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Nov 22 03:52:15 compute-0 ceph-mon[75011]: pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.5 KiB/s wr, 20 op/s
Nov 22 03:52:15 compute-0 ceph-mon[75011]: osdmap e167: 3 total, 3 up, 3 in
Nov 22 03:52:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Nov 22 03:52:15 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Nov 22 03:52:15 compute-0 nova_compute[253461]: 2025-11-22 03:52:15.649 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.3 KiB/s wr, 93 op/s
Nov 22 03:52:16 compute-0 nova_compute[253461]: 2025-11-22 03:52:16.662 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:17 compute-0 ceph-mon[75011]: osdmap e168: 3 total, 3 up, 3 in
Nov 22 03:52:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 3.7 KiB/s wr, 91 op/s
Nov 22 03:52:18 compute-0 ceph-mon[75011]: pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.3 KiB/s wr, 93 op/s
Nov 22 03:52:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.8 KiB/s wr, 70 op/s
Nov 22 03:52:20 compute-0 ceph-mon[75011]: pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 3.7 KiB/s wr, 91 op/s
Nov 22 03:52:20 compute-0 nova_compute[253461]: 2025-11-22 03:52:20.653 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:21 compute-0 podman[262528]: 2025-11-22 03:52:21.417027957 +0000 UTC m=+0.094900311 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 03:52:21 compute-0 nova_compute[253461]: 2025-11-22 03:52:21.666 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:21 compute-0 ceph-mon[75011]: pgmap v990: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.8 KiB/s wr, 70 op/s
Nov 22 03:52:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Nov 22 03:52:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.9 KiB/s wr, 72 op/s
Nov 22 03:52:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Nov 22 03:52:22 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Nov 22 03:52:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:52:23.004 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:52:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:52:23.004 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:52:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:52:23.005 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
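
The acquire/release trio above is oslo.concurrency's decorator-based locking; the critical section held the lock for under a millisecond. A minimal, runnable sketch of the pattern (lock name from the log, body elided):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # The decorator's inner() wrapper emits the "Acquiring" / "acquired" /
        # "released" DEBUG lines seen above.
        pass

    _check_child_processes()
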
Nov 22 03:52:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Nov 22 03:52:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Nov 22 03:52:23 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Nov 22 03:52:23 compute-0 ceph-mon[75011]: pgmap v991: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.9 KiB/s wr, 72 op/s
Nov 22 03:52:23 compute-0 ceph-mon[75011]: osdmap e169: 3 total, 3 up, 3 in
Nov 22 03:52:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.0 KiB/s wr, 31 op/s
Nov 22 03:52:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Nov 22 03:52:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Nov 22 03:52:24 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Nov 22 03:52:25 compute-0 ceph-mon[75011]: osdmap e170: 3 total, 3 up, 3 in
Nov 22 03:52:25 compute-0 ceph-mon[75011]: pgmap v994: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.0 KiB/s wr, 31 op/s
Nov 22 03:52:25 compute-0 ceph-mon[75011]: osdmap e171: 3 total, 3 up, 3 in
Nov 22 03:52:25 compute-0 nova_compute[253461]: 2025-11-22 03:52:25.656 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.0 KiB/s wr, 56 op/s
Nov 22 03:52:26 compute-0 nova_compute[253461]: 2025-11-22 03:52:26.668 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:26 compute-0 ceph-mon[75011]: pgmap v996: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.0 KiB/s wr, 56 op/s
Nov 22 03:52:27 compute-0 nova_compute[253461]: 2025-11-22 03:52:27.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:52:27 compute-0 nova_compute[253461]: 2025-11-22 03:52:27.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:52:27 compute-0 nova_compute[253461]: 2025-11-22 03:52:27.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:52:27 compute-0 nova_compute[253461]: 2025-11-22 03:52:27.514 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 03:52:27 compute-0 nova_compute[253461]: 2025-11-22 03:52:27.515 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:52:27 compute-0 nova_compute[253461]: 2025-11-22 03:52:27.661 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:52:27 compute-0 nova_compute[253461]: 2025-11-22 03:52:27.661 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:52:27 compute-0 nova_compute[253461]: 2025-11-22 03:52:27.661 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:52:27 compute-0 nova_compute[253461]: 2025-11-22 03:52:27.662 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:52:27 compute-0 nova_compute[253461]: 2025-11-22 03:52:27.662 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:52:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Nov 22 03:52:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Nov 22 03:52:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 KiB/s wr, 46 op/s
Nov 22 03:52:28 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Nov 22 03:52:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:52:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3206104430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:52:28 compute-0 nova_compute[253461]: 2025-11-22 03:52:28.200 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
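
The resource audit shells out to the ceph CLI rather than using librados; the audit dispatch from 192.168.122.100 just above is that same command arriving at the mon. A short sketch of the call as oslo.concurrency makes it (arguments exactly as logged; the 'stats' key is part of ceph df's JSON schema):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # 'stats' carries cluster-wide totals (e.g. total_bytes, total_avail_bytes),
    # which feed the free_disk figure in the resource view logged below.
    print(stats['stats']['total_avail_bytes'])
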
Nov 22 03:52:28 compute-0 nova_compute[253461]: 2025-11-22 03:52:28.403 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:52:28 compute-0 nova_compute[253461]: 2025-11-22 03:52:28.404 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4813MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:52:28 compute-0 nova_compute[253461]: 2025-11-22 03:52:28.404 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:52:28 compute-0 nova_compute[253461]: 2025-11-22 03:52:28.404 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:52:28 compute-0 nova_compute[253461]: 2025-11-22 03:52:28.803 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:52:28 compute-0 nova_compute[253461]: 2025-11-22 03:52:28.804 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:52:28 compute-0 nova_compute[253461]: 2025-11-22 03:52:28.825 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:52:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:52:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1395223404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:52:29 compute-0 nova_compute[253461]: 2025-11-22 03:52:29.290 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:52:29 compute-0 nova_compute[253461]: 2025-11-22 03:52:29.299 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:52:29 compute-0 ceph-mon[75011]: pgmap v997: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 KiB/s wr, 46 op/s
Nov 22 03:52:29 compute-0 ceph-mon[75011]: osdmap e172: 3 total, 3 up, 3 in
Nov 22 03:52:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3206104430' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:52:29 compute-0 nova_compute[253461]: 2025-11-22 03:52:29.445 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
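
Placement derives allocatable capacity from this inventory as (total - reserved) * allocation_ratio. Worked directly from the dict logged above:

    inv = {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, int((v['total'] - v['reserved']) * v['allocation_ratio']))
    # MEMORY_MB 7167, VCPU 32, DISK_GB 52
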
Nov 22 03:52:29 compute-0 nova_compute[253461]: 2025-11-22 03:52:29.449 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:52:29 compute-0 nova_compute[253461]: 2025-11-22 03:52:29.449 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:52:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 927 B/s wr, 27 op/s
Nov 22 03:52:30 compute-0 nova_compute[253461]: 2025-11-22 03:52:30.364 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:52:30 compute-0 nova_compute[253461]: 2025-11-22 03:52:30.365 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:52:30 compute-0 nova_compute[253461]: 2025-11-22 03:52:30.473 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:52:30 compute-0 nova_compute[253461]: 2025-11-22 03:52:30.474 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:52:30 compute-0 nova_compute[253461]: 2025-11-22 03:52:30.475 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:52:30 compute-0 nova_compute[253461]: 2025-11-22 03:52:30.475 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:52:30 compute-0 nova_compute[253461]: 2025-11-22 03:52:30.476 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:52:30 compute-0 nova_compute[253461]: 2025-11-22 03:52:30.476 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
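
The burst of "Running periodic task" lines above is oslo.service iterating ComputeManager's decorated methods; _reclaim_queued_deletes immediately no-ops because reclaim_instance_interval <= 0. A minimal sketch (spacing value illustrative) of how such tasks are declared:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            # run_periodic_tasks() logs "Running periodic task ..." for each
            # decorated method whose interval has elapsed.
            pass
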
Nov 22 03:52:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1395223404' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:52:30 compute-0 nova_compute[253461]: 2025-11-22 03:52:30.660 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:52:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3201949768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:52:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:52:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3201949768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:52:31 compute-0 nova_compute[253461]: 2025-11-22 03:52:31.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:52:31 compute-0 sudo[262592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:52:31 compute-0 sudo[262592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:31 compute-0 sudo[262592]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:31 compute-0 sudo[262617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:52:31 compute-0 sudo[262617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:31 compute-0 sudo[262617]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:31 compute-0 sudo[262642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:52:31 compute-0 sudo[262642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:31 compute-0 sudo[262642]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:31 compute-0 nova_compute[253461]: 2025-11-22 03:52:31.670 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:31 compute-0 sudo[262667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:52:31 compute-0 sudo[262667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:31 compute-0 ceph-mon[75011]: pgmap v999: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 927 B/s wr, 27 op/s
Nov 22 03:52:31 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3201949768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:52:31 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3201949768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:52:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 37 op/s
Nov 22 03:52:32 compute-0 sudo[262667]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:52:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:52:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:52:32 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:52:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:52:32 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:52:32 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev bc93fac6-27ad-47d9-8497-a8d1dc042c5c does not exist
Nov 22 03:52:32 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 1f1f90bc-4545-4d9d-a8bf-56bb47961170 does not exist
Nov 22 03:52:32 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 5306e696-99d6-4a28-81e7-28c03e90971e does not exist
Nov 22 03:52:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:52:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:52:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:52:32 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:52:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:52:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:52:32 compute-0 sudo[262722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:52:32 compute-0 sudo[262722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:32 compute-0 sudo[262722]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:32 compute-0 sudo[262747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:52:32 compute-0 sudo[262747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:32 compute-0 sudo[262747]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:32 compute-0 sudo[262772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:52:32 compute-0 sudo[262772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:32 compute-0 sudo[262772]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:32 compute-0 sudo[262797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:52:32 compute-0 sudo[262797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:33 compute-0 ceph-mon[75011]: pgmap v1000: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 37 op/s
Nov 22 03:52:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:52:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:52:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:52:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:52:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:52:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:52:33 compute-0 podman[262863]: 2025-11-22 03:52:33.196154675 +0000 UTC m=+0.025529728 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:52:33 compute-0 podman[262863]: 2025-11-22 03:52:33.402756871 +0000 UTC m=+0.232131884 container create f246dee3e86384857f173e95a9b30c2824da0ed9306c1749e1e88be27e8ed3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:52:33 compute-0 systemd[1]: Started libpod-conmon-f246dee3e86384857f173e95a9b30c2824da0ed9306c1749e1e88be27e8ed3da.scope.
Nov 22 03:52:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:52:33 compute-0 podman[262863]: 2025-11-22 03:52:33.917748282 +0000 UTC m=+0.747123345 container init f246dee3e86384857f173e95a9b30c2824da0ed9306c1749e1e88be27e8ed3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:52:33 compute-0 podman[262863]: 2025-11-22 03:52:33.931820093 +0000 UTC m=+0.761195096 container start f246dee3e86384857f173e95a9b30c2824da0ed9306c1749e1e88be27e8ed3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:52:33 compute-0 jolly_rhodes[262879]: 167 167
Nov 22 03:52:33 compute-0 systemd[1]: libpod-f246dee3e86384857f173e95a9b30c2824da0ed9306c1749e1e88be27e8ed3da.scope: Deactivated successfully.
Nov 22 03:52:34 compute-0 podman[262863]: 2025-11-22 03:52:34.004997223 +0000 UTC m=+0.834372286 container attach f246dee3e86384857f173e95a9b30c2824da0ed9306c1749e1e88be27e8ed3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:52:34 compute-0 podman[262863]: 2025-11-22 03:52:34.008139187 +0000 UTC m=+0.837514150 container died f246dee3e86384857f173e95a9b30c2824da0ed9306c1749e1e88be27e8ed3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:52:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.5 KiB/s wr, 33 op/s
Nov 22 03:52:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Nov 22 03:52:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d3e1c3162cd63b8fd09bdc1e2210e197bb8cfc6ed988890d097cca03e921991-merged.mount: Deactivated successfully.
Nov 22 03:52:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Nov 22 03:52:34 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Nov 22 03:52:35 compute-0 podman[262863]: 2025-11-22 03:52:35.053551266 +0000 UTC m=+1.882926259 container remove f246dee3e86384857f173e95a9b30c2824da0ed9306c1749e1e88be27e8ed3da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:52:35 compute-0 systemd[1]: libpod-conmon-f246dee3e86384857f173e95a9b30c2824da0ed9306c1749e1e88be27e8ed3da.scope: Deactivated successfully.
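
The jolly_rhodes container above lives for well under two seconds and prints only "167 167", the uid/gid of the ceph user baked into the image; cephadm launches throwaway containers like this to probe an image before deploying daemons with it. A rough equivalent, sketched with podman; the stat invocation is an assumption about what the helper ran, since the log records only its output:

    #!/usr/bin/env python3
    # Hedged sketch: probe the Ceph image for the ceph uid/gid, mimicking
    # the short-lived helper container whose output above is "167 167".
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # --rm matches the create/start/die/remove lifecycle logged above.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True)
    uid, gid = out.stdout.split()
    print(uid, gid)  # expected: 167 167
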
Nov 22 03:52:35 compute-0 podman[262905]: 2025-11-22 03:52:35.236677513 +0000 UTC m=+0.035616623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:52:35 compute-0 podman[262905]: 2025-11-22 03:52:35.348981857 +0000 UTC m=+0.147920947 container create c4c049193a18a875af2bc9e7bdff79f74e6a14454063d024c14abb6d348879d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 03:52:35 compute-0 systemd[1]: Started libpod-conmon-c4c049193a18a875af2bc9e7bdff79f74e6a14454063d024c14abb6d348879d1.scope.
Nov 22 03:52:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:52:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc478b1e6ed95f0fde795971c026f934755c34bd8c6eb72a417d483dfaa408d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc478b1e6ed95f0fde795971c026f934755c34bd8c6eb72a417d483dfaa408d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc478b1e6ed95f0fde795971c026f934755c34bd8c6eb72a417d483dfaa408d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc478b1e6ed95f0fde795971c026f934755c34bd8c6eb72a417d483dfaa408d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc478b1e6ed95f0fde795971c026f934755c34bd8c6eb72a417d483dfaa408d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
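
The xfs warnings above fire as each bind mount is remounted into the container: these filesystems use the older on-disk timestamp format, valid only up to 0x7fffffff seconds, the signed 32-bit Unix epoch ceiling. Converting that limit to a date makes the cutoff concrete:

    #!/usr/bin/env python3
    # The 0x7fffffff limit in the xfs warnings above is the signed 32-bit
    # epoch ceiling; convert it to a calendar date.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF  # 2147483647 seconds
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
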
Nov 22 03:52:35 compute-0 podman[262905]: 2025-11-22 03:52:35.664670195 +0000 UTC m=+0.463609325 container init c4c049193a18a875af2bc9e7bdff79f74e6a14454063d024c14abb6d348879d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:52:35 compute-0 nova_compute[253461]: 2025-11-22 03:52:35.664 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:35 compute-0 podman[262905]: 2025-11-22 03:52:35.677209689 +0000 UTC m=+0.476148779 container start c4c049193a18a875af2bc9e7bdff79f74e6a14454063d024c14abb6d348879d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:52:35 compute-0 ceph-mon[75011]: pgmap v1001: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.5 KiB/s wr, 33 op/s
Nov 22 03:52:35 compute-0 ceph-mon[75011]: osdmap e173: 3 total, 3 up, 3 in
Nov 22 03:52:35 compute-0 podman[262905]: 2025-11-22 03:52:35.810057174 +0000 UTC m=+0.608996444 container attach c4c049193a18a875af2bc9e7bdff79f74e6a14454063d024c14abb6d348879d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.1 KiB/s wr, 30 op/s
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:52:36
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'vms', 'backups', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr']
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
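
The balancer pass above evaluated eleven pools in upmap mode under a 5% max-misplaced budget and prepared 0 of 10 candidate changes, meaning PG placement is already even. A quick way to confirm that state from the CLI, wrapped in Python; "ceph balancer status" is the standard command, though its exact JSON fields vary by release:

    #!/usr/bin/env python3
    # Hedged sketch: query the balancer state behind the
    # "prepared 0/10 changes" message above.
    import json
    import subprocess

    out = subprocess.run(["ceph", "balancer", "status", "--format", "json"],
                         capture_output=True, text=True, check=True)
    status = json.loads(out.stdout)
    print(status)  # expect mode "upmap" and an active balancer
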
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:52:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
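
Here the rbd_support module reloads its mirror-snapshot and trash-purge schedules for each RBD pool (vms, volumes, backups, images); the bare "start_after=" indicates no schedule entries exist yet. The matching CLI queries, sketched below; the subcommand spellings are the standard rbd schedule commands, but availability depends on the release:

    #!/usr/bin/env python3
    # Hedged sketch: list the schedules the rbd_support handlers above are
    # loading (expected empty, given the bare "start_after=" lines).
    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        for cmd in (["rbd", "mirror", "snapshot", "schedule", "ls", "--pool", pool],
                    ["rbd", "trash", "purge", "schedule", "ls", "--pool", pool]):
            out = subprocess.run(cmd, capture_output=True, text=True)
            print(" ".join(cmd), "->", out.stdout.strip() or "(none)")
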
Nov 22 03:52:36 compute-0 nova_compute[253461]: 2025-11-22 03:52:36.672 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:36 compute-0 jovial_goldberg[262922]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:52:36 compute-0 jovial_goldberg[262922]: --> relative data size: 1.0
Nov 22 03:52:36 compute-0 jovial_goldberg[262922]: --> All data devices are unavailable
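
The three jovial_goldberg lines are a ceph-volume batch report: handed 3 LVM logical volumes and no physical disks, it rejects them all as unavailable, which is expected here because each LV already carries a running OSD (see the lvm list JSON further down). A hedged way to re-run that report by hand; the fsid is copied from the log, but the report's exact output differs by release:

    #!/usr/bin/env python3
    # Hedged sketch: reproduce the ceph-volume batch report that printed
    # "All data devices are unavailable" for the three ceph_vg*/ceph_lv* LVs.
    import subprocess

    FSID = "7adcc38b-6484-5de6-b879-33a0309153df"
    subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "lvm", "batch", "--report",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2"],
        check=False)  # an empty plan / nonzero exit is expected for in-use LVs
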
Nov 22 03:52:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Nov 22 03:52:36 compute-0 systemd[1]: libpod-c4c049193a18a875af2bc9e7bdff79f74e6a14454063d024c14abb6d348879d1.scope: Deactivated successfully.
Nov 22 03:52:36 compute-0 systemd[1]: libpod-c4c049193a18a875af2bc9e7bdff79f74e6a14454063d024c14abb6d348879d1.scope: Consumed 1.024s CPU time.
Nov 22 03:52:36 compute-0 podman[262905]: 2025-11-22 03:52:36.764547051 +0000 UTC m=+1.563486201 container died c4c049193a18a875af2bc9e7bdff79f74e6a14454063d024c14abb6d348879d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 03:52:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Nov 22 03:52:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Nov 22 03:52:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cc478b1e6ed95f0fde795971c026f934755c34bd8c6eb72a417d483dfaa408d-merged.mount: Deactivated successfully.
Nov 22 03:52:37 compute-0 ceph-mon[75011]: pgmap v1003: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.1 KiB/s wr, 30 op/s
Nov 22 03:52:37 compute-0 ceph-mon[75011]: osdmap e174: 3 total, 3 up, 3 in
Nov 22 03:52:37 compute-0 podman[262905]: 2025-11-22 03:52:37.381888486 +0000 UTC m=+2.180827606 container remove c4c049193a18a875af2bc9e7bdff79f74e6a14454063d024c14abb6d348879d1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_goldberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 03:52:37 compute-0 sudo[262797]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:37 compute-0 systemd[1]: libpod-conmon-c4c049193a18a875af2bc9e7bdff79f74e6a14454063d024c14abb6d348879d1.scope: Deactivated successfully.
Nov 22 03:52:37 compute-0 sudo[262964]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:52:37 compute-0 sudo[262964]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:37 compute-0 sudo[262964]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:37 compute-0 podman[262971]: 2025-11-22 03:52:37.530782181 +0000 UTC m=+0.060631419 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:52:37 compute-0 sudo[263008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:52:37 compute-0 sudo[263008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:37 compute-0 sudo[263008]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:37 compute-0 podman[262988]: 2025-11-22 03:52:37.568058494 +0000 UTC m=+0.098270684 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
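
The two health_status events above are podman healthchecks for ovn_metadata_agent and ovn_controller, each dumping its full config_data JSON inline. A small filter that recovers just the container name and verdict from such journal lines; the regex is tuned to the exact format shown here:

    #!/usr/bin/env python3
    # Hedged sketch: extract (name, health_status) pairs from podman
    # "container health_status" journal lines like the two above.
    import re
    import sys

    PATTERN = re.compile(
        r"container health_status \S+ \(image=(?P<image>[^,]+), "
        r"name=(?P<name>[^,]+), health_status=(?P<status>[^,)]+)")

    for line in sys.stdin:
        m = PATTERN.search(line)
        if m:
            print(f"{m.group('name')}: {m.group('status')}")
    # e.g. ovn_metadata_agent: healthy / ovn_controller: healthy
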
Nov 22 03:52:37 compute-0 sudo[263058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:52:37 compute-0 sudo[263058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:37 compute-0 sudo[263058]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:37 compute-0 sudo[263083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:52:37 compute-0 sudo[263083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:38 compute-0 podman[263146]: 2025-11-22 03:52:37.982117989 +0000 UTC m=+0.026401733 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:52:38 compute-0 podman[263146]: 2025-11-22 03:52:38.037880926 +0000 UTC m=+0.082164650 container create 068a02798b256219d79b708adb4069f3abe05e84e39a4657e76af55748fa3a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:52:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Nov 22 03:52:38 compute-0 systemd[1]: Started libpod-conmon-068a02798b256219d79b708adb4069f3abe05e84e39a4657e76af55748fa3a46.scope.
Nov 22 03:52:38 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:52:38 compute-0 podman[263146]: 2025-11-22 03:52:38.418598757 +0000 UTC m=+0.462882511 container init 068a02798b256219d79b708adb4069f3abe05e84e39a4657e76af55748fa3a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_morse, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:52:38 compute-0 podman[263146]: 2025-11-22 03:52:38.427224795 +0000 UTC m=+0.471508539 container start 068a02798b256219d79b708adb4069f3abe05e84e39a4657e76af55748fa3a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_morse, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:52:38 compute-0 gallant_morse[263163]: 167 167
Nov 22 03:52:38 compute-0 systemd[1]: libpod-068a02798b256219d79b708adb4069f3abe05e84e39a4657e76af55748fa3a46.scope: Deactivated successfully.
Nov 22 03:52:38 compute-0 podman[263146]: 2025-11-22 03:52:38.557918388 +0000 UTC m=+0.602202322 container attach 068a02798b256219d79b708adb4069f3abe05e84e39a4657e76af55748fa3a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_morse, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 03:52:38 compute-0 podman[263146]: 2025-11-22 03:52:38.559073737 +0000 UTC m=+0.603357491 container died 068a02798b256219d79b708adb4069f3abe05e84e39a4657e76af55748fa3a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:52:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca426b6f8ceac09063e74193a96a894f6fc99d300dae29908b3ffe95b34f1081-merged.mount: Deactivated successfully.
Nov 22 03:52:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:39 compute-0 ceph-mon[75011]: pgmap v1005: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Nov 22 03:52:40 compute-0 podman[263146]: 2025-11-22 03:52:40.00264618 +0000 UTC m=+2.046929934 container remove 068a02798b256219d79b708adb4069f3abe05e84e39a4657e76af55748fa3a46 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_morse, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:52:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1023 B/s wr, 29 op/s
Nov 22 03:52:40 compute-0 systemd[1]: libpod-conmon-068a02798b256219d79b708adb4069f3abe05e84e39a4657e76af55748fa3a46.scope: Deactivated successfully.
Nov 22 03:52:40 compute-0 podman[263187]: 2025-11-22 03:52:40.253808721 +0000 UTC m=+0.048580132 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:52:40 compute-0 podman[263187]: 2025-11-22 03:52:40.500094825 +0000 UTC m=+0.294866176 container create 4dc652e4d9e12afb9d7a39db6e4019eae369d8b2322bffc96227e7acfa467da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kepler, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:52:40 compute-0 systemd[1]: Started libpod-conmon-4dc652e4d9e12afb9d7a39db6e4019eae369d8b2322bffc96227e7acfa467da2.scope.
Nov 22 03:52:40 compute-0 nova_compute[253461]: 2025-11-22 03:52:40.667 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:52:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64e3758e4513ff2dae788ca863d5c753460b8d3d06837e7933f32109ed115e24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64e3758e4513ff2dae788ca863d5c753460b8d3d06837e7933f32109ed115e24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64e3758e4513ff2dae788ca863d5c753460b8d3d06837e7933f32109ed115e24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64e3758e4513ff2dae788ca863d5c753460b8d3d06837e7933f32109ed115e24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:41 compute-0 podman[263187]: 2025-11-22 03:52:41.011698738 +0000 UTC m=+0.806470099 container init 4dc652e4d9e12afb9d7a39db6e4019eae369d8b2322bffc96227e7acfa467da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:52:41 compute-0 podman[263187]: 2025-11-22 03:52:41.027844741 +0000 UTC m=+0.822616092 container start 4dc652e4d9e12afb9d7a39db6e4019eae369d8b2322bffc96227e7acfa467da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kepler, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:52:41 compute-0 ceph-mon[75011]: pgmap v1006: 305 pgs: 305 active+clean; 41 MiB data, 248 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1023 B/s wr, 29 op/s
Nov 22 03:52:41 compute-0 podman[263187]: 2025-11-22 03:52:41.204014272 +0000 UTC m=+0.998785593 container attach 4dc652e4d9e12afb9d7a39db6e4019eae369d8b2322bffc96227e7acfa467da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:52:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:52:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3155390466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:52:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:52:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3155390466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:52:41 compute-0 nova_compute[253461]: 2025-11-22 03:52:41.676 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:41 compute-0 pensive_kepler[263202]: {
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:     "0": [
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:         {
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "devices": [
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "/dev/loop3"
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             ],
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_name": "ceph_lv0",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_size": "21470642176",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "name": "ceph_lv0",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "tags": {
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.cluster_name": "ceph",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.crush_device_class": "",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.encrypted": "0",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.osd_id": "0",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.type": "block",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.vdo": "0"
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             },
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "type": "block",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "vg_name": "ceph_vg0"
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:         }
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:     ],
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:     "1": [
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:         {
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "devices": [
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "/dev/loop4"
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             ],
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_name": "ceph_lv1",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_size": "21470642176",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "name": "ceph_lv1",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "tags": {
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.cluster_name": "ceph",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.crush_device_class": "",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.encrypted": "0",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.osd_id": "1",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.type": "block",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.vdo": "0"
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             },
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "type": "block",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "vg_name": "ceph_vg1"
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:         }
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:     ],
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:     "2": [
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:         {
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "devices": [
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "/dev/loop5"
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             ],
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_name": "ceph_lv2",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_size": "21470642176",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "name": "ceph_lv2",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "tags": {
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.cluster_name": "ceph",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.crush_device_class": "",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.encrypted": "0",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.osd_id": "2",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.type": "block",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:                 "ceph.vdo": "0"
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             },
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "type": "block",
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:             "vg_name": "ceph_vg2"
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:         }
Nov 22 03:52:41 compute-0 pensive_kepler[263202]:     ]
Nov 22 03:52:41 compute-0 pensive_kepler[263202]: }
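
The JSON block printed by pensive_kepler is the output of the "ceph-volume ... lvm list --format json" call issued at 03:52:37: a map from OSD id to the logical volume backing it, with cluster fsid, osd_fsid and backing device repeated in lv_tags. A short consumer that reduces it to one line per OSD; the field names are exactly those in the output above:

    #!/usr/bin/env python3
    # Hedged sketch: reduce the `ceph-volume lvm list --format json` output
    # above to one line per OSD: id, LV path, backing device, osd_fsid.
    import json
    import sys

    lvm = json.load(sys.stdin)  # feed it the JSON block from the log
    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"(osd_fsid {lv['tags']['ceph.osd_fsid']})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 on /dev/loop3 (osd_fsid 8bea6992-7a26-4e04-a61e-1d348ad79289)
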
Nov 22 03:52:41 compute-0 systemd[1]: libpod-4dc652e4d9e12afb9d7a39db6e4019eae369d8b2322bffc96227e7acfa467da2.scope: Deactivated successfully.
Nov 22 03:52:41 compute-0 podman[263187]: 2025-11-22 03:52:41.851405577 +0000 UTC m=+1.646176938 container died 4dc652e4d9e12afb9d7a39db6e4019eae369d8b2322bffc96227e7acfa467da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kepler, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:52:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.0 KiB/s wr, 55 op/s
Nov 22 03:52:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-64e3758e4513ff2dae788ca863d5c753460b8d3d06837e7933f32109ed115e24-merged.mount: Deactivated successfully.
Nov 22 03:52:43 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3155390466' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:52:43 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3155390466' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:52:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 KiB/s wr, 63 op/s
Nov 22 03:52:44 compute-0 podman[263187]: 2025-11-22 03:52:44.14367414 +0000 UTC m=+3.938445481 container remove 4dc652e4d9e12afb9d7a39db6e4019eae369d8b2322bffc96227e7acfa467da2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Nov 22 03:52:44 compute-0 systemd[1]: libpod-conmon-4dc652e4d9e12afb9d7a39db6e4019eae369d8b2322bffc96227e7acfa467da2.scope: Deactivated successfully.
Nov 22 03:52:44 compute-0 sudo[263083]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:44 compute-0 sudo[263227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:52:44 compute-0 sudo[263227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:44 compute-0 sudo[263227]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:44 compute-0 sudo[263252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:52:44 compute-0 sudo[263252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:44 compute-0 sudo[263252]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:44 compute-0 sudo[263277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:52:44 compute-0 sudo[263277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:44 compute-0 sudo[263277]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:44 compute-0 sudo[263302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:52:44 compute-0 sudo[263302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:44 compute-0 ceph-mon[75011]: pgmap v1007: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.0 KiB/s wr, 55 op/s
Nov 22 03:52:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
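
The _set_new_cache_sizes line shows the monitor re-splitting its cache budget between the incremental osdmap, full osdmap, and kv caches. As a sanity check, the three allocations printed should (and do) fit inside the advertised cache_size:

    #!/usr/bin/env python3
    # Arithmetic check on the _set_new_cache_sizes line above: the three
    # allocations must fit within the advertised cache_size.
    cache_size = 1020054731
    inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104

    total = inc_alloc + full_alloc + kv_alloc
    print(total, "<=", cache_size, "->", total <= cache_size)  # 1010827264 -> True
    print("headroom:", cache_size - total, "bytes")            # 9227467
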
Nov 22 03:52:45 compute-0 podman[263361]: 2025-11-22 03:52:44.95641637 +0000 UTC m=+0.023381717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:52:45 compute-0 podman[263361]: 2025-11-22 03:52:45.184700679 +0000 UTC m=+0.251666036 container create e2478d73aff7a5cd47a3f2258037553dc9682f066bfa36bc5e416a29050a4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_colden, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:52:45 compute-0 systemd[1]: Started libpod-conmon-e2478d73aff7a5cd47a3f2258037553dc9682f066bfa36bc5e416a29050a4a4c.scope.
Nov 22 03:52:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:52:45 compute-0 nova_compute[253461]: 2025-11-22 03:52:45.671 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:45 compute-0 podman[263361]: 2025-11-22 03:52:45.877180685 +0000 UTC m=+0.944146032 container init e2478d73aff7a5cd47a3f2258037553dc9682f066bfa36bc5e416a29050a4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_colden, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:52:45 compute-0 podman[263361]: 2025-11-22 03:52:45.883962409 +0000 UTC m=+0.950927726 container start e2478d73aff7a5cd47a3f2258037553dc9682f066bfa36bc5e416a29050a4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_colden, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:52:45 compute-0 angry_colden[263378]: 167 167
Nov 22 03:52:45 compute-0 systemd[1]: libpod-e2478d73aff7a5cd47a3f2258037553dc9682f066bfa36bc5e416a29050a4a4c.scope: Deactivated successfully.
Nov 22 03:52:45 compute-0 ceph-mon[75011]: pgmap v1008: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 KiB/s wr, 63 op/s
Nov 22 03:52:46 compute-0 podman[263361]: 2025-11-22 03:52:46.009960067 +0000 UTC m=+1.076925484 container attach e2478d73aff7a5cd47a3f2258037553dc9682f066bfa36bc5e416a29050a4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_colden, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:52:46 compute-0 podman[263361]: 2025-11-22 03:52:46.01047253 +0000 UTC m=+1.077437847 container died e2478d73aff7a5cd47a3f2258037553dc9682f066bfa36bc5e416a29050a4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_colden, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.3 KiB/s wr, 51 op/s
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 6.994977860259165e-07 of space, bias 1.0, pg target 0.00020984933580777494 quantized to 32 (current 32)
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:52:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
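[Editor's note] The pg_autoscaler pass above applies one rule per pool: target PGs = (pool's share of raw capacity) x bias x cluster PG budget, where the budget is mon_target_pg_per_osd (default 100) times the OSD count (3 here, per the osdmap lines), i.e. 300; the result is then rounded to a power of two and only applied when the change clears a threshold. A minimal sketch of that arithmetic under those assumptions (constant names are illustrative, not Ceph internals):

    import math

    TARGET_PG_PER_OSD = 100   # assumed default for mon_target_pg_per_osd
    NUM_OSDS = 3              # "3 total, 3 up, 3 in" per the osdmap lines

    def pg_target(usage_ratio: float, bias: float) -> float:
        # the raw "pg target" value printed in the log lines above
        return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    def quantize(raw: float) -> int:
        # round up to a power of two; the real autoscaler additionally
        # honors pg_num_min/max and skips changes below a threshold,
        # which is why the tiny pools above stay pinned at 32 (or 16)
        return max(1, 2 ** math.ceil(math.log2(raw))) if raw > 0 else 1

    # Pool '.mgr': 7.185749983720779e-06 of space, bias 1.0
    print(pg_target(7.185749983720779e-06, 1.0))   # 0.0021557249951162337
    # Pool 'cephfs.cephfs.meta': 5.087256625643029e-07 of space, bias 4.0
    print(pg_target(5.087256625643029e-07, 4.0))   # 0.0006104707950771635

Both values reproduce the "pg target" figures logged above exactly.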
Nov 22 03:52:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1ae11d9cf6206830505fc6c26d4db8e6d82d2350cf2800e12e545578f73c0a4-merged.mount: Deactivated successfully.
Nov 22 03:52:46 compute-0 nova_compute[253461]: 2025-11-22 03:52:46.679 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:47 compute-0 podman[263361]: 2025-11-22 03:52:47.646804198 +0000 UTC m=+2.713769515 container remove e2478d73aff7a5cd47a3f2258037553dc9682f066bfa36bc5e416a29050a4a4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_colden, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 03:52:47 compute-0 systemd[1]: libpod-conmon-e2478d73aff7a5cd47a3f2258037553dc9682f066bfa36bc5e416a29050a4a4c.scope: Deactivated successfully.
Nov 22 03:52:47 compute-0 ceph-mon[75011]: pgmap v1009: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.3 KiB/s wr, 51 op/s
Nov 22 03:52:47 compute-0 podman[263404]: 2025-11-22 03:52:47.775728038 +0000 UTC m=+0.022905575 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:52:47 compute-0 podman[263404]: 2025-11-22 03:52:47.879821453 +0000 UTC m=+0.126998980 container create 9f834bb43b51889041e623f9d6bcb4d00c0d7f5526d159ce82dfc9bf0bc2de99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 03:52:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 51 op/s
Nov 22 03:52:48 compute-0 systemd[1]: Started libpod-conmon-9f834bb43b51889041e623f9d6bcb4d00c0d7f5526d159ce82dfc9bf0bc2de99.scope.
Nov 22 03:52:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5360dbe8737ebe1ff7bd5b9eb5d9dc1ae0df366816fa7390a9f836c8d49fcaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5360dbe8737ebe1ff7bd5b9eb5d9dc1ae0df366816fa7390a9f836c8d49fcaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5360dbe8737ebe1ff7bd5b9eb5d9dc1ae0df366816fa7390a9f836c8d49fcaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:52:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5360dbe8737ebe1ff7bd5b9eb5d9dc1ae0df366816fa7390a9f836c8d49fcaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
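[Editor's note] The 0x7fffffff in the xfs remount warnings above is the 32-bit time_t ceiling; converted (a throwaway check, not from the log):

    import datetime
    print(datetime.datetime.fromtimestamp(0x7fffffff, tz=datetime.timezone.utc))
    # 2038-01-19 03:14:07+00:00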
Nov 22 03:52:48 compute-0 podman[263404]: 2025-11-22 03:52:48.314791779 +0000 UTC m=+0.561969366 container init 9f834bb43b51889041e623f9d6bcb4d00c0d7f5526d159ce82dfc9bf0bc2de99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:52:48 compute-0 podman[263404]: 2025-11-22 03:52:48.326840234 +0000 UTC m=+0.574017781 container start 9f834bb43b51889041e623f9d6bcb4d00c0d7f5526d159ce82dfc9bf0bc2de99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:52:48 compute-0 podman[263404]: 2025-11-22 03:52:48.529187077 +0000 UTC m=+0.776364654 container attach 9f834bb43b51889041e623f9d6bcb4d00c0d7f5526d159ce82dfc9bf0bc2de99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:52:49 compute-0 ceph-mon[75011]: pgmap v1010: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 51 op/s
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]: {
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "osd_id": 1,
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "type": "bluestore"
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:     },
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "osd_id": 0,
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "type": "bluestore"
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:     },
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "osd_id": 2,
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:         "type": "bluestore"
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]:     }
Nov 22 03:52:49 compute-0 condescending_bhabha[263420]: }
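[Editor's note] The JSON printed by the one-shot container above — keyed by osd_uuid, with ceph_fsid/device/osd_id/type per entry — matches the shape of `ceph-volume raw list --format json`, which cephadm runs periodically to refresh its device inventory (the config-key set for host.compute-0.devices.0 a few lines below stores the result). A parsing sketch under that assumption (the filename is hypothetical):

    import json

    def summarize(osds: dict) -> None:
        # osds maps osd_uuid -> {ceph_fsid, device, osd_id, osd_uuid, type}
        for info in sorted(osds.values(), key=lambda o: o['osd_id']):
            print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']},"
                  f" cluster {info['ceph_fsid']}")

    with open('raw_list.json') as fh:   # a capture of the output above
        summarize(json.load(fh))

    # osd.0 (bluestore) on /dev/mapper/ceph_vg0-ceph_lv0, cluster 7adcc38b-...
    # osd.1 (bluestore) on /dev/mapper/ceph_vg1-ceph_lv1, cluster 7adcc38b-...
    # osd.2 (bluestore) on /dev/mapper/ceph_vg2-ceph_lv2, cluster 7adcc38b-...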
Nov 22 03:52:49 compute-0 podman[263404]: 2025-11-22 03:52:49.43540339 +0000 UTC m=+1.682580907 container died 9f834bb43b51889041e623f9d6bcb4d00c0d7f5526d159ce82dfc9bf0bc2de99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:52:49 compute-0 systemd[1]: libpod-9f834bb43b51889041e623f9d6bcb4d00c0d7f5526d159ce82dfc9bf0bc2de99.scope: Deactivated successfully.
Nov 22 03:52:49 compute-0 systemd[1]: libpod-9f834bb43b51889041e623f9d6bcb4d00c0d7f5526d159ce82dfc9bf0bc2de99.scope: Consumed 1.116s CPU time.
Nov 22 03:52:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5360dbe8737ebe1ff7bd5b9eb5d9dc1ae0df366816fa7390a9f836c8d49fcaa-merged.mount: Deactivated successfully.
Nov 22 03:52:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:52:49 compute-0 podman[263404]: 2025-11-22 03:52:49.975964489 +0000 UTC m=+2.223142046 container remove 9f834bb43b51889041e623f9d6bcb4d00c0d7f5526d159ce82dfc9bf0bc2de99 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bhabha, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:52:49 compute-0 systemd[1]: libpod-conmon-9f834bb43b51889041e623f9d6bcb4d00c0d7f5526d159ce82dfc9bf0bc2de99.scope: Deactivated successfully.
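[Editor's note] The podman and systemd lines above trace one complete short-lived cephadm exec container: image pull -> create -> init -> start -> attach -> died -> remove, after which the libpod/libpod-conmon scopes and the overlay mount are torn down. The same sequence can be followed live with podman's event stream; a sketch (the container name comes from the log above; field names follow podman's JSON event output):

    import json
    import subprocess

    # Stream lifecycle events for one container as JSON, one object per line.
    proc = subprocess.Popen(
        ['podman', 'events', '--format', 'json',
         '--filter', 'container=condescending_bhabha'],
        stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get('Time'), ev.get('Type'), ev.get('Status'), ev.get('Name'))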
Nov 22 03:52:50 compute-0 sudo[263302]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:52:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 KiB/s wr, 42 op/s
Nov 22 03:52:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Nov 22 03:52:50 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:52:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:52:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Nov 22 03:52:50 compute-0 nova_compute[253461]: 2025-11-22 03:52:50.675 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Nov 22 03:52:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:52:51 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 6a65f1aa-ffa7-4c26-88d2-1b1a7045eb0e does not exist
Nov 22 03:52:51 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev bb6fedca-07fd-41ff-bb91-fcd877fe3867 does not exist
Nov 22 03:52:51 compute-0 sudo[263467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:52:51 compute-0 sudo[263467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:51 compute-0 sudo[263467]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:51 compute-0 sudo[263492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:52:51 compute-0 sudo[263492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:52:51 compute-0 sudo[263492]: pam_unix(sudo:session): session closed for user root
Nov 22 03:52:51 compute-0 nova_compute[253461]: 2025-11-22 03:52:51.725 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:51 compute-0 ceph-mon[75011]: pgmap v1011: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.1 KiB/s wr, 42 op/s
Nov 22 03:52:51 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:52:51 compute-0 ceph-mon[75011]: osdmap e175: 3 total, 3 up, 3 in
Nov 22 03:52:51 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:52:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.4 KiB/s wr, 57 op/s
Nov 22 03:52:52 compute-0 podman[263517]: 2025-11-22 03:52:52.420627242 +0000 UTC m=+0.087700699 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 03:52:53 compute-0 ceph-mon[75011]: pgmap v1013: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 2.4 KiB/s wr, 57 op/s
Nov 22 03:52:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 45 op/s
Nov 22 03:52:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
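[Editor's note] The recurring _set_new_cache_sizes lines show the mon's memory autotuner dividing its cache budget (cache_size, about 973 MiB here) among the incremental-osdmap, full-osdmap, and rocksdb (kv) caches; the three allocations add back up to roughly the budget. Plain arithmetic on the logged values:

    MiB = 2 ** 20
    cache_size, inc, full, kv = 1020054731, 343932928, 348127232, 318767104
    print(round(cache_size / MiB, 1))        # 972.8  total budget
    print((inc + full + kv) / MiB)           # 964.0  handed out
    print(inc / MiB, full / MiB, kv / MiB)   # 328.0 332.0 304.0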
Nov 22 03:52:55 compute-0 nova_compute[253461]: 2025-11-22 03:52:55.679 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Nov 22 03:52:55 compute-0 ceph-mon[75011]: pgmap v1014: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.3 KiB/s wr, 45 op/s
Nov 22 03:52:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Nov 22 03:52:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Nov 22 03:52:56 compute-0 nova_compute[253461]: 2025-11-22 03:52:56.752 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:52:56 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Nov 22 03:52:58 compute-0 ceph-mon[75011]: pgmap v1015: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.3 KiB/s wr, 52 op/s
Nov 22 03:52:58 compute-0 ceph-mon[75011]: osdmap e176: 3 total, 3 up, 3 in
Nov 22 03:52:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.1 KiB/s wr, 73 op/s
Nov 22 03:52:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Nov 22 03:52:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Nov 22 03:52:59 compute-0 ceph-mon[75011]: pgmap v1017: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.1 KiB/s wr, 73 op/s
Nov 22 03:53:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Nov 22 03:53:00 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Nov 22 03:53:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/514520555' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/514520555' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:00 compute-0 nova_compute[253461]: 2025-11-22 03:53:00.682 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:01 compute-0 ceph-mon[75011]: osdmap e177: 3 total, 3 up, 3 in
Nov 22 03:53:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/514520555' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/514520555' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
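[Editor's note] The recurring df / "osd pool get-quota" pairs from client.openstack (192.168.122.10) are the OpenStack RBD drivers polling pool capacity and quota. The identical mon commands can be issued from Python through librados; a sketch assuming a reachable ceph.conf and the client.openstack keyring:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    try:
        for cmd in ({'prefix': 'df', 'format': 'json'},
                    {'prefix': 'osd pool get-quota', 'pool': 'volumes',
                     'format': 'json'}):
            # mon_command takes the JSON command string plus an input buffer
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            print(cmd['prefix'], '->', json.loads(out) if ret == 0 else errs)
    finally:
        cluster.shutdown()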
Nov 22 03:53:01 compute-0 nova_compute[253461]: 2025-11-22 03:53:01.754 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 77 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 1.9 MiB/s wr, 69 op/s
Nov 22 03:53:02 compute-0 ceph-mon[75011]: pgmap v1019: 305 pgs: 305 active+clean; 41 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Nov 22 03:53:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.7 MiB/s wr, 59 op/s
Nov 22 03:53:04 compute-0 ceph-mon[75011]: pgmap v1020: 305 pgs: 305 active+clean; 77 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 1.9 MiB/s wr, 69 op/s
Nov 22 03:53:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Nov 22 03:53:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Nov 22 03:53:05 compute-0 nova_compute[253461]: 2025-11-22 03:53:05.685 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:05 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Nov 22 03:53:05 compute-0 ceph-mon[75011]: pgmap v1021: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.7 MiB/s wr, 59 op/s
Nov 22 03:53:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.7 MiB/s wr, 47 op/s
Nov 22 03:53:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:53:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:53:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:53:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:53:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:53:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:53:06 compute-0 nova_compute[253461]: 2025-11-22 03:53:06.756 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Nov 22 03:53:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.6 MiB/s wr, 62 op/s
Nov 22 03:53:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Nov 22 03:53:08 compute-0 ceph-mon[75011]: osdmap e178: 3 total, 3 up, 3 in
Nov 22 03:53:08 compute-0 ceph-mon[75011]: pgmap v1023: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.7 MiB/s wr, 47 op/s
Nov 22 03:53:08 compute-0 podman[263539]: 2025-11-22 03:53:08.410800992 +0000 UTC m=+0.083827593 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 22 03:53:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Nov 22 03:53:08 compute-0 podman[263540]: 2025-11-22 03:53:08.476187699 +0000 UTC m=+0.142240076 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:53:09 compute-0 ceph-mon[75011]: pgmap v1024: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 2.6 MiB/s wr, 62 op/s
Nov 22 03:53:09 compute-0 ceph-mon[75011]: osdmap e179: 3 total, 3 up, 3 in
Nov 22 03:53:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1013567904' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1013567904' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 755 KiB/s wr, 31 op/s
Nov 22 03:53:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1013567904' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1013567904' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:10 compute-0 nova_compute[253461]: 2025-11-22 03:53:10.687 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:11 compute-0 nova_compute[253461]: 2025-11-22 03:53:11.759 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:11 compute-0 ceph-mon[75011]: pgmap v1026: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 755 KiB/s wr, 31 op/s
Nov 22 03:53:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.0 KiB/s wr, 48 op/s
Nov 22 03:53:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2745114436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2745114436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:13 compute-0 ceph-mon[75011]: pgmap v1027: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.0 KiB/s wr, 48 op/s
Nov 22 03:53:13 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2745114436' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:13 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2745114436' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.7 KiB/s wr, 63 op/s
Nov 22 03:53:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1707094017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1707094017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:14.701 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:53:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:14.702 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 03:53:14 compute-0 nova_compute[253461]: 2025-11-22 03:53:14.738 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3902454422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3902454422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:15 compute-0 ceph-mon[75011]: pgmap v1028: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.7 KiB/s wr, 63 op/s
Nov 22 03:53:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1707094017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1707094017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3902454422' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3902454422' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:15 compute-0 nova_compute[253461]: 2025-11-22 03:53:15.690 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1846209387' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1846209387' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.4 KiB/s wr, 62 op/s
Nov 22 03:53:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1846209387' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1846209387' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:16 compute-0 nova_compute[253461]: 2025-11-22 03:53:16.799 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:17 compute-0 ceph-mon[75011]: pgmap v1029: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.4 KiB/s wr, 62 op/s
Nov 22 03:53:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.0 KiB/s wr, 75 op/s
Nov 22 03:53:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Nov 22 03:53:19 compute-0 ceph-mon[75011]: pgmap v1030: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.0 KiB/s wr, 75 op/s
Nov 22 03:53:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Nov 22 03:53:19 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Nov 22 03:53:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.0 KiB/s wr, 75 op/s
Nov 22 03:53:20 compute-0 ceph-mon[75011]: osdmap e180: 3 total, 3 up, 3 in
Nov 22 03:53:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:20 compute-0 nova_compute[253461]: 2025-11-22 03:53:20.721 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:21 compute-0 ceph-mon[75011]: pgmap v1032: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.0 KiB/s wr, 75 op/s
Nov 22 03:53:21 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:21.704 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2136856322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2136856322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:21 compute-0 nova_compute[253461]: 2025-11-22 03:53:21.800 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 3.8 KiB/s wr, 87 op/s
Nov 22 03:53:22 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2136856322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:22 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2136856322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:23.005 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:23.005 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:23.005 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/764825017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/764825017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:23 compute-0 podman[263584]: 2025-11-22 03:53:23.386362426 +0000 UTC m=+0.071166518 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Nov 22 03:53:23 compute-0 ceph-mon[75011]: pgmap v1033: 305 pgs: 305 active+clean; 88 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 3.8 KiB/s wr, 87 op/s
Nov 22 03:53:23 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/764825017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:23 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/764825017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.7 KiB/s wr, 92 op/s
Nov 22 03:53:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:24 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292801398' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:24 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292801398' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:25 compute-0 ceph-mon[75011]: pgmap v1034: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.7 KiB/s wr, 92 op/s
Nov 22 03:53:25 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3292801398' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:25 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3292801398' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:25 compute-0 nova_compute[253461]: 2025-11-22 03:53:25.725 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3819280910' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3819280910' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 4.1 KiB/s wr, 94 op/s
Nov 22 03:53:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3819280910' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3819280910' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.569106) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783606569159, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1417, "num_deletes": 259, "total_data_size": 2011077, "memory_usage": 2040912, "flush_reason": "Manual Compaction"}
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783606585791, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1987914, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19977, "largest_seqno": 21393, "table_properties": {"data_size": 1981025, "index_size": 3964, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14811, "raw_average_key_size": 20, "raw_value_size": 1967172, "raw_average_value_size": 2739, "num_data_blocks": 177, "num_entries": 718, "num_filter_entries": 718, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763783494, "oldest_key_time": 1763783494, "file_creation_time": 1763783606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 16750 microseconds, and 6117 cpu microseconds.
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.585856) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1987914 bytes OK
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.585886) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.589009) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.589035) EVENT_LOG_v1 {"time_micros": 1763783606589026, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.589059) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2004607, prev total WAL file size 2004607, number of live WAL files 2.
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.590474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1941KB)], [47(7367KB)]
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783606590539, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9532355, "oldest_snapshot_seqno": -1}
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4552 keys, 7789559 bytes, temperature: kUnknown
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783606656200, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7789559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7757068, "index_size": 20029, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 112933, "raw_average_key_size": 24, "raw_value_size": 7672625, "raw_average_value_size": 1685, "num_data_blocks": 832, "num_entries": 4552, "num_filter_entries": 4552, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763783606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.656547) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7789559 bytes
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.658087) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.0 rd, 118.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.2 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(8.7) write-amplify(3.9) OK, records in: 5079, records dropped: 527 output_compression: NoCompression
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.658114) EVENT_LOG_v1 {"time_micros": 1763783606658101, "job": 24, "event": "compaction_finished", "compaction_time_micros": 65751, "compaction_time_cpu_micros": 23611, "output_level": 6, "num_output_files": 1, "total_output_size": 7789559, "num_input_records": 5079, "num_output_records": 4552, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783606658891, "job": 24, "event": "table_file_deletion", "file_number": 49}
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783606661286, "job": 24, "event": "table_file_deletion", "file_number": 47}
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.590348) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.661404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.661410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.661412) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.661413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:53:26 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:53:26.661415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:53:26 compute-0 nova_compute[253461]: 2025-11-22 03:53:26.803 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:27 compute-0 ceph-mon[75011]: pgmap v1035: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 4.1 KiB/s wr, 94 op/s
Nov 22 03:53:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.6 KiB/s wr, 78 op/s
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.426 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.428 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.428 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.440 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.441 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.441 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.441 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.465 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.466 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.466 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.467 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.467 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:29 compute-0 ceph-mon[75011]: pgmap v1036: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 3.6 KiB/s wr, 78 op/s
Nov 22 03:53:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:53:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/517143283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:29 compute-0 nova_compute[253461]: 2025-11-22 03:53:29.941 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
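[Annotation] update_available_resource shells out to `ceph df` because the RBD image backend sizes its disk inventory from the cluster, not the local filesystem; the mon's handle_command/audit lines above are the server side of this same call. A minimal sketch of the equivalent call, assuming the client.openstack keyring referenced in /etc/ceph/ceph.conf is readable:

```python
import json
import subprocess

# Mirror the exact command nova ran above; "openstack" is the cephx
# client id taken from the log, not a fixed nova default.
out = subprocess.check_output([
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
])
df = json.loads(out)

# Key names here are an assumption about the ceph df JSON schema.
stats = df["stats"]
print(stats["total_bytes"], stats["total_avail_bytes"])
```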
Nov 22 03:53:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 3.4 KiB/s wr, 73 op/s
Nov 22 03:53:30 compute-0 nova_compute[253461]: 2025-11-22 03:53:30.126 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:53:30 compute-0 nova_compute[253461]: 2025-11-22 03:53:30.128 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4785MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:53:30 compute-0 nova_compute[253461]: 2025-11-22 03:53:30.128 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:30 compute-0 nova_compute[253461]: 2025-11-22 03:53:30.128 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:30 compute-0 nova_compute[253461]: 2025-11-22 03:53:30.189 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:53:30 compute-0 nova_compute[253461]: 2025-11-22 03:53:30.189 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:53:30 compute-0 nova_compute[253461]: 2025-11-22 03:53:30.206 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/517143283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:53:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2527433484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:30 compute-0 nova_compute[253461]: 2025-11-22 03:53:30.688 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:30 compute-0 nova_compute[253461]: 2025-11-22 03:53:30.694 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:53:30 compute-0 nova_compute[253461]: 2025-11-22 03:53:30.710 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
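[Annotation] Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the host overcommits CPU 4x while memory stays 1:1 and disk is deliberately undercommitted at 0.9. Working through the logged numbers:

```python
# Effective capacity implied by the inventory data logged above.
vcpu = (8 - 0) * 4.0             # 32 schedulable vCPUs
memory_mb = (7679 - 512) * 1.0   # 7167 MB available for guests
disk_gb = (59 - 1) * 0.9         # ~52.2 GB of allocatable disk
print(vcpu, memory_mb, disk_gb)
```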
Nov 22 03:53:30 compute-0 nova_compute[253461]: 2025-11-22 03:53:30.712 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:53:30 compute-0 nova_compute[253461]: 2025-11-22 03:53:30.712 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:30 compute-0 nova_compute[253461]: 2025-11-22 03:53:30.726 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:31 compute-0 ceph-mon[75011]: pgmap v1037: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 3.4 KiB/s wr, 73 op/s
Nov 22 03:53:31 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2527433484' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:31 compute-0 nova_compute[253461]: 2025-11-22 03:53:31.700 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:53:31 compute-0 nova_compute[253461]: 2025-11-22 03:53:31.701 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:53:31 compute-0 nova_compute[253461]: 2025-11-22 03:53:31.702 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:53:31 compute-0 nova_compute[253461]: 2025-11-22 03:53:31.702 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 03:53:31 compute-0 nova_compute[253461]: 2025-11-22 03:53:31.722 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "7534ee27-8821-44c9-b66c-83a8f2e43711" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:31 compute-0 nova_compute[253461]: 2025-11-22 03:53:31.722 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:31 compute-0 nova_compute[253461]: 2025-11-22 03:53:31.779 253465 DEBUG nova.compute.manager [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 03:53:31 compute-0 nova_compute[253461]: 2025-11-22 03:53:31.805 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:31 compute-0 nova_compute[253461]: 2025-11-22 03:53:31.853 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:31 compute-0 nova_compute[253461]: 2025-11-22 03:53:31.853 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:31 compute-0 nova_compute[253461]: 2025-11-22 03:53:31.860 253465 DEBUG nova.virt.hardware [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 03:53:31 compute-0 nova_compute[253461]: 2025-11-22 03:53:31.860 253465 INFO nova.compute.claims [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Claim successful on node compute-0.ctlplane.example.com
Nov 22 03:53:31 compute-0 nova_compute[253461]: 2025-11-22 03:53:31.980 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.8 KiB/s wr, 68 op/s
Nov 22 03:53:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:53:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3064003660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:53:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3064003660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:53:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3405974375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.431 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.440 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.449 253465 DEBUG nova.compute.provider_tree [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.465 253465 DEBUG nova.scheduler.client.report [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.487 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
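[Annotation] The whole instance claim ran under the "compute_resources" semaphore: acquired at 03:53:31.853, released here at 03:53:32.487, matching the reported 0.634 s hold. A minimal sketch of the oslo.concurrency pattern that produces these acquire/release debug lines, assuming default in-process semaphores (no external lock path):

```python
from oslo_concurrency import lockutils

# Decorated callables serialize on a named semaphore and emit the
# "Acquiring lock" / "Lock ... released" lines seen above.
@lockutils.synchronized('compute_resources')
def instance_claim():
    ...  # mutate the tracker's view of host resources

# The context-manager form is equivalent:
with lockutils.lock('compute_resources'):
    pass
```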
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.488 253465 DEBUG nova.compute.manager [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.537 253465 DEBUG nova.compute.manager [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.537 253465 DEBUG nova.network.neutron [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.557 253465 INFO nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.579 253465 DEBUG nova.compute.manager [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 03:53:32 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3064003660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:53:32 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3064003660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:53:32 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3405974375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.697 253465 DEBUG nova.compute.manager [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.699 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.699 253465 INFO nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Creating image(s)
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.722 253465 DEBUG nova.storage.rbd_utils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 7534ee27-8821-44c9-b66c-83a8f2e43711_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.744 253465 DEBUG nova.storage.rbd_utils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 7534ee27-8821-44c9-b66c-83a8f2e43711_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.768 253465 DEBUG nova.storage.rbd_utils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 7534ee27-8821-44c9-b66c-83a8f2e43711_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.773 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.823 253465 DEBUG nova.policy [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb6a4080968040f8a28c3b9e7c4296b8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7db9d09fb4a241818f75d0198445d55c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.833 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
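[Annotation] The qemu-img probe of the cached base image is wrapped in oslo's prlimit helper, capping address space at 1 GiB and CPU time at 30 s so a pathological image cannot wedge or bloat the agent. A sketch of the same invocation through processutils, assuming the base-image path from the log exists:

```python
from oslo_concurrency import processutils

limits = processutils.ProcessLimits(
    address_space=1024 * 1024 * 1024,  # --as=1073741824
    cpu_time=30,                       # --cpu=30
)
out, _err = processutils.execute(
    'qemu-img', 'info',
    '/var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d',
    '--force-share', '--output=json',
    prlimit=limits,
    env_variables={'LC_ALL': 'C', 'LANG': 'C'},
)
```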
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.834 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.835 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.836 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.863 253465 DEBUG nova.storage.rbd_utils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 7534ee27-8821-44c9-b66c-83a8f2e43711_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:32 compute-0 nova_compute[253461]: 2025-11-22 03:53:32.868 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d 7534ee27-8821-44c9-b66c-83a8f2e43711_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:33 compute-0 nova_compute[253461]: 2025-11-22 03:53:33.200 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d 7534ee27-8821-44c9-b66c-83a8f2e43711_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:33 compute-0 nova_compute[253461]: 2025-11-22 03:53:33.271 253465 DEBUG nova.storage.rbd_utils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] resizing rbd image 7534ee27-8821-44c9-b66c-83a8f2e43711_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
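[Annotation] With the rbd image backend, the cached base image is imported into the vms pool and then grown to the flavor's root disk: 1073741824 bytes is exactly 1 GiB, matching root_gb=1 of the m1.nano flavor logged further down. A sketch of the resize step through the python rbd bindings, assuming they are installed alongside the ceph client:

```python
import rados
import rbd

# Connect as the same cephx client the log shows, resize the freshly
# imported instance disk to 1 GiB.
with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
    ioctx = cluster.open_ioctx('vms')
    try:
        with rbd.Image(ioctx, '7534ee27-8821-44c9-b66c-83a8f2e43711_disk') as image:
            image.resize(1 * 1024 ** 3)  # 1073741824 bytes, as in the log
    finally:
        ioctx.close()
```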
Nov 22 03:53:33 compute-0 nova_compute[253461]: 2025-11-22 03:53:33.388 253465 DEBUG nova.objects.instance [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lazy-loading 'migration_context' on Instance uuid 7534ee27-8821-44c9-b66c-83a8f2e43711 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:53:33 compute-0 nova_compute[253461]: 2025-11-22 03:53:33.407 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 03:53:33 compute-0 nova_compute[253461]: 2025-11-22 03:53:33.408 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Ensure instance console log exists: /var/lib/nova/instances/7534ee27-8821-44c9-b66c-83a8f2e43711/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 03:53:33 compute-0 nova_compute[253461]: 2025-11-22 03:53:33.409 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:33 compute-0 nova_compute[253461]: 2025-11-22 03:53:33.409 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:33 compute-0 nova_compute[253461]: 2025-11-22 03:53:33.410 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:33 compute-0 ceph-mon[75011]: pgmap v1038: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.8 KiB/s wr, 68 op/s
Nov 22 03:53:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.2 KiB/s wr, 43 op/s
Nov 22 03:53:34 compute-0 nova_compute[253461]: 2025-11-22 03:53:34.330 253465 DEBUG nova.network.neutron [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Successfully created port: 6f060257-b046-4e5b-80e9-23d0778a934b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 03:53:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:35 compute-0 ceph-mon[75011]: pgmap v1039: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.2 KiB/s wr, 43 op/s
Nov 22 03:53:35 compute-0 nova_compute[253461]: 2025-11-22 03:53:35.729 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:35 compute-0 nova_compute[253461]: 2025-11-22 03:53:35.743 253465 DEBUG nova.network.neutron [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Successfully updated port: 6f060257-b046-4e5b-80e9-23d0778a934b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 03:53:35 compute-0 nova_compute[253461]: 2025-11-22 03:53:35.764 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "refresh_cache-7534ee27-8821-44c9-b66c-83a8f2e43711" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:53:35 compute-0 nova_compute[253461]: 2025-11-22 03:53:35.764 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquired lock "refresh_cache-7534ee27-8821-44c9-b66c-83a8f2e43711" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:53:35 compute-0 nova_compute[253461]: 2025-11-22 03:53:35.764 253465 DEBUG nova.network.neutron [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 03:53:35 compute-0 nova_compute[253461]: 2025-11-22 03:53:35.904 253465 DEBUG nova.compute.manager [req-d755b59c-a351-447d-aa7e-b36b89ed70f2 req-25b54b4e-279c-4d2c-b30e-70a03fd8059f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received event network-changed-6f060257-b046-4e5b-80e9-23d0778a934b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:35 compute-0 nova_compute[253461]: 2025-11-22 03:53:35.905 253465 DEBUG nova.compute.manager [req-d755b59c-a351-447d-aa7e-b36b89ed70f2 req-25b54b4e-279c-4d2c-b30e-70a03fd8059f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Refreshing instance network info cache due to event network-changed-6f060257-b046-4e5b-80e9-23d0778a934b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:53:35 compute-0 nova_compute[253461]: 2025-11-22 03:53:35.905 253465 DEBUG oslo_concurrency.lockutils [req-d755b59c-a351-447d-aa7e-b36b89ed70f2 req-25b54b4e-279c-4d2c-b30e-70a03fd8059f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-7534ee27-8821-44c9-b66c-83a8f2e43711" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 107 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 165 KiB/s wr, 52 op/s
Nov 22 03:53:36 compute-0 nova_compute[253461]: 2025-11-22 03:53:36.165 253465 DEBUG nova.network.neutron [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:53:36
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', 'images', 'default.rgw.log', '.mgr', 'volumes']
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:53:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:53:36 compute-0 nova_compute[253461]: 2025-11-22 03:53:36.806 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.190 253465 DEBUG nova.network.neutron [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Updating instance_info_cache with network_info: [{"id": "6f060257-b046-4e5b-80e9-23d0778a934b", "address": "fa:16:3e:0f:05:ec", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f060257-b0", "ovs_interfaceid": "6f060257-b046-4e5b-80e9-23d0778a934b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
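[Annotation] The network_info above shows an OVN-bound, tunneled port with mtu 1442. That value is consistent with a 1500-byte underlay minus the typical Geneve encapsulation budget; this is an inference, since the underlay MTU itself is not in this log:

```python
# Plausible derivation of the tunneled port MTU seen above (assumption:
# default 1500-byte underlay and the common Geneve overhead budget).
physical_mtu = 1500
geneve_overhead = 58
print(physical_mtu - geneve_overhead)  # 1442
```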
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.209 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Releasing lock "refresh_cache-7534ee27-8821-44c9-b66c-83a8f2e43711" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.210 253465 DEBUG nova.compute.manager [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Instance network_info: |[{"id": "6f060257-b046-4e5b-80e9-23d0778a934b", "address": "fa:16:3e:0f:05:ec", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f060257-b0", "ovs_interfaceid": "6f060257-b046-4e5b-80e9-23d0778a934b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.211 253465 DEBUG oslo_concurrency.lockutils [req-d755b59c-a351-447d-aa7e-b36b89ed70f2 req-25b54b4e-279c-4d2c-b30e-70a03fd8059f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-7534ee27-8821-44c9-b66c-83a8f2e43711" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.211 253465 DEBUG nova.network.neutron [req-d755b59c-a351-447d-aa7e-b36b89ed70f2 req-25b54b4e-279c-4d2c-b30e-70a03fd8059f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Refreshing network info cache for port 6f060257-b046-4e5b-80e9-23d0778a934b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.214 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Start _get_guest_xml network_info=[{"id": "6f060257-b046-4e5b-80e9-23d0778a934b", "address": "fa:16:3e:0f:05:ec", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f060257-b0", "ovs_interfaceid": "6f060257-b046-4e5b-80e9-23d0778a934b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.218 253465 WARNING nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.222 253465 DEBUG nova.virt.libvirt.host [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.222 253465 DEBUG nova.virt.libvirt.host [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.225 253465 DEBUG nova.virt.libvirt.host [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.225 253465 DEBUG nova.virt.libvirt.host [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
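[Annotation] Finding no cpu controller under cgroups v1 but one under v2 is expected on RHEL 9's unified hierarchy. The essence of the v2 probe is reading the root controllers file; nova's actual check is more involved (it also consults libvirt), so treat this as a simplified sketch:

```python
# On a unified (cgroup v2) hierarchy the enabled controllers are listed
# in a single file at the cgroup root.
with open('/sys/fs/cgroup/cgroup.controllers') as f:
    controllers = f.read().split()
print('cpu' in controllers)  # True on this host, per the log above
```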
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.225 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.226 253465 DEBUG nova.virt.hardware [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.226 253465 DEBUG nova.virt.hardware [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.226 253465 DEBUG nova.virt.hardware [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.227 253465 DEBUG nova.virt.hardware [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.227 253465 DEBUG nova.virt.hardware [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.227 253465 DEBUG nova.virt.hardware [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.227 253465 DEBUG nova.virt.hardware [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.227 253465 DEBUG nova.virt.hardware [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.227 253465 DEBUG nova.virt.hardware [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.228 253465 DEBUG nova.virt.hardware [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.228 253465 DEBUG nova.virt.hardware [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
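The hardware.py records above trace the whole CPU-topology pass for this guest: the flavor and image carry no limits or preferences (0:0:0 everywhere), the fallback cap of 65536 per dimension applies, and a single vCPU admits exactly one factorization, 1 socket x 1 core x 1 thread. A minimal sketch of that enumeration step, assuming the simplified rule that any (sockets, cores, threads) triple whose product equals the vCPU count is a candidate (the real logic in nova/virt/hardware.py additionally honors NUMA cells and thread policies):

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Only factorizations of the vCPU count are valid topologies, so no
        # dimension ever needs to range past the vCPU count itself.
        for s, c, t in product(range(1, min(max_sockets, vcpus) + 1),
                               range(1, min(max_cores, vcpus) + 1),
                               range(1, min(max_threads, vcpus) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    # m1.nano: 1 vCPU and no constraints, so the only candidate is
    # (1, 1, 1), matching "Got 1 possible topologies" above.
    print(list(possible_topologies(1)))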
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.230 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:53:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3724020728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.644 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
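The mon dump round trip above is how nova's RBD image backend learns the monitor addresses that later land in the <host name="192.168.122.100" port="6789"/> elements of the guest disk XML. A sketch that reruns the logged command and extracts the addresses; the JSON field names ("mons", "public_addr") follow the ceph CLI output and can shift between Ceph releases:

    import json
    import subprocess

    # Identical to the command logged above; --id/--conf select the
    # client.openstack cephx user and the cluster configuration file.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mon_map = json.loads(out)

    for mon in mon_map.get("mons", []):
        # Entries look like "192.168.122.100:6789/0" on this cluster.
        print(mon["name"], mon.get("public_addr") or mon.get("addr"))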
Nov 22 03:53:37 compute-0 ceph-mon[75011]: pgmap v1040: 305 pgs: 305 active+clean; 107 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 165 KiB/s wr, 52 op/s
Nov 22 03:53:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3724020728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.692 253465 DEBUG nova.storage.rbd_utils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 7534ee27-8821-44c9-b66c-83a8f2e43711_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
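rbd_utils is probing whether the instance's config-drive image is already in the vms pool; a miss is expected on first boot, since the image only gets imported further down. The same existence check with the python-rbd bindings, as a sketch (pool and image name copied from the log, assuming the client.openstack keyring is readable from this process):

    import rados
    import rbd

    # Connect as the same cephx user nova_compute uses.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        image = rbd.Image(ioctx, "7534ee27-8821-44c9-b66c-83a8f2e43711_disk.config")
        image.close()
        print("image exists")
    except rbd.ImageNotFound:
        print("image does not exist")   # what rbd_utils reports above
    finally:
        ioctx.close()
        cluster.shutdown()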
Nov 22 03:53:37 compute-0 nova_compute[253461]: 2025-11-22 03:53:37.695 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 22 03:53:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:53:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857181944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.135 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.136 253465 DEBUG nova.virt.libvirt.vif [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:53:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2125267483',display_name='tempest-VolumesActionsTest-instance-2125267483',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2125267483',id=2,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7db9d09fb4a241818f75d0198445d55c',ramdisk_id='',reservation_id='r-26vvyg76',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1537057398',owner_user_name='tempest-VolumesActionsTest-1537057398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:53:32Z,user_data=None,user_id='fb6a4080968040f8a28c3b9e7c4296b8',uuid=7534ee27-8821-44c9-b66c-83a8f2e43711,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f060257-b046-4e5b-80e9-23d0778a934b", "address": "fa:16:3e:0f:05:ec", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f060257-b0", "ovs_interfaceid": "6f060257-b046-4e5b-80e9-23d0778a934b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.137 253465 DEBUG nova.network.os_vif_util [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Converting VIF {"id": "6f060257-b046-4e5b-80e9-23d0778a934b", "address": "fa:16:3e:0f:05:ec", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f060257-b0", "ovs_interfaceid": "6f060257-b046-4e5b-80e9-23d0778a934b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.137 253465 DEBUG nova.network.os_vif_util [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:05:ec,bridge_name='br-int',has_traffic_filtering=True,id=6f060257-b046-4e5b-80e9-23d0778a934b,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f060257-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
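The Converting/Converted pair above is nova_to_osvif_vif() flattening nova's network_info dict into an os-vif object the ovs plugin knows how to plug. A sketch that builds the same VIFOpenVSwitch by hand from the logged values (constructor kwargs mirror the fields shown in the repr above; the mtu field on Network is assumed available in this os-vif release):

    from os_vif.objects import network, vif

    net = network.Network(id="d738119f-cffc-4235-aea9-bf290e9aca77",
                          bridge="br-int", mtu=1442)
    ovs_vif = vif.VIFOpenVSwitch(
        id="6f060257-b046-4e5b-80e9-23d0778a934b",
        address="fa:16:3e:0f:05:ec",
        network=net,
        plugin="ovs",
        vif_name="tap6f060257-b0",
        bridge_name="br-int",
        has_traffic_filtering=True,   # from details["port_filter"]
        preserve_on_delete=False,
        active=False,                 # port not yet bound by OVN
    )
    print(ovs_vif)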
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.139 253465 DEBUG nova.objects.instance [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lazy-loading 'pci_devices' on Instance uuid 7534ee27-8821-44c9-b66c-83a8f2e43711 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.160 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] End _get_guest_xml xml=<domain type="kvm">
Nov 22 03:53:38 compute-0 nova_compute[253461]:   <uuid>7534ee27-8821-44c9-b66c-83a8f2e43711</uuid>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   <name>instance-00000002</name>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   <metadata>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <nova:name>tempest-VolumesActionsTest-instance-2125267483</nova:name>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 03:53:37</nova:creationTime>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 03:53:38 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 03:53:38 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 03:53:38 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 03:53:38 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 03:53:38 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 03:53:38 compute-0 nova_compute[253461]:         <nova:user uuid="fb6a4080968040f8a28c3b9e7c4296b8">tempest-VolumesActionsTest-1537057398-project-member</nova:user>
Nov 22 03:53:38 compute-0 nova_compute[253461]:         <nova:project uuid="7db9d09fb4a241818f75d0198445d55c">tempest-VolumesActionsTest-1537057398</nova:project>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 03:53:38 compute-0 nova_compute[253461]:         <nova:port uuid="6f060257-b046-4e5b-80e9-23d0778a934b">
Nov 22 03:53:38 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   </metadata>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <system>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <entry name="serial">7534ee27-8821-44c9-b66c-83a8f2e43711</entry>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <entry name="uuid">7534ee27-8821-44c9-b66c-83a8f2e43711</entry>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     </system>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   <os>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   </os>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   <features>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <apic/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   </features>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   </clock>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/7534ee27-8821-44c9-b66c-83a8f2e43711_disk">
Nov 22 03:53:38 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       </source>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:53:38 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/7534ee27-8821-44c9-b66c-83a8f2e43711_disk.config">
Nov 22 03:53:38 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       </source>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:53:38 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:0f:05:ec"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <target dev="tap6f060257-b0"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/7534ee27-8821-44c9-b66c-83a8f2e43711/console.log" append="off"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     </serial>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <video>
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     </video>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 03:53:38 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 03:53:38 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 03:53:38 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:53:38 compute-0 nova_compute[253461]: </domain>
Nov 22 03:53:38 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
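The XML block _get_guest_xml just emitted is handed to libvirt essentially verbatim. A minimal sketch of that hand-off with the libvirt Python binding, assuming the XML above has been saved to a local domain.xml (a hypothetical file; nova passes the string in memory):

    import libvirt

    # Same system libvirtd the compute service talks to.
    conn = libvirt.open("qemu:///system")

    with open("domain.xml") as f:
        xml = f.read()

    dom = conn.defineXML(xml)   # persist the domain definition
    dom.create()                # power it on
    print(dom.name(), dom.UUIDString())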
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.161 253465 DEBUG nova.compute.manager [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Preparing to wait for external event network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.162 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.162 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.162 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.163 253465 DEBUG nova.virt.libvirt.vif [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:53:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2125267483',display_name='tempest-VolumesActionsTest-instance-2125267483',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2125267483',id=2,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7db9d09fb4a241818f75d0198445d55c',ramdisk_id='',reservation_id='r-26vvyg76',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1537057398',owner_user_name='tempest-VolumesActionsTest-1537057398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:53:32Z,user_data=None,user_id='fb6a4080968040f8a28c3b9e7c4296b8',uuid=7534ee27-8821-44c9-b66c-83a8f2e43711,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6f060257-b046-4e5b-80e9-23d0778a934b", "address": "fa:16:3e:0f:05:ec", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f060257-b0", "ovs_interfaceid": "6f060257-b046-4e5b-80e9-23d0778a934b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.164 253465 DEBUG nova.network.os_vif_util [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Converting VIF {"id": "6f060257-b046-4e5b-80e9-23d0778a934b", "address": "fa:16:3e:0f:05:ec", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f060257-b0", "ovs_interfaceid": "6f060257-b046-4e5b-80e9-23d0778a934b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.164 253465 DEBUG nova.network.os_vif_util [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:05:ec,bridge_name='br-int',has_traffic_filtering=True,id=6f060257-b046-4e5b-80e9-23d0778a934b,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f060257-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.164 253465 DEBUG os_vif [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:05:ec,bridge_name='br-int',has_traffic_filtering=True,id=6f060257-b046-4e5b-80e9-23d0778a934b,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f060257-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.165 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.165 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.166 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.169 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.170 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6f060257-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.170 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6f060257-b0, col_values=(('external_ids', {'iface-id': '6f060257-b046-4e5b-80e9-23d0778a934b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:05:ec', 'vm-uuid': '7534ee27-8821-44c9-b66c-83a8f2e43711'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.172 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:38 compute-0 NetworkManager[48916]: <info>  [1763783618.1727] manager: (tap6f060257-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.174 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.182 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.183 253465 INFO os_vif [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:05:ec,bridge_name='br-int',has_traffic_filtering=True,id=6f060257-b046-4e5b-80e9-23d0778a934b,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f060257-b0')
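The plug that just succeeded ran as two OVSDB transactions: AddBridgeCommand (a no-op here, br-int already exists) and AddPortCommand plus a DbSetCommand stamping the Interface row with the external_ids OVN matches on. os-vif drives this through the OVSDB IDL; the operator-level CLI equivalent, sketched with subprocess:

    import subprocess

    def plug_ovs_port(bridge, port, iface_id, mac, vm_uuid):
        # Mirrors the logged transaction: add the tap to br-int and set the
        # external_ids keys ovn-controller uses to claim the logical port.
        subprocess.check_call([
            "ovs-vsctl", "--may-exist", "add-port", bridge, port,
            "--", "set", "Interface", port,
            f"external_ids:iface-id={iface_id}",
            "external_ids:iface-status=active",
            f"external_ids:attached-mac={mac}",
            f"external_ids:vm-uuid={vm_uuid}",
        ])

    plug_ovs_port("br-int", "tap6f060257-b0",
                  "6f060257-b046-4e5b-80e9-23d0778a934b",
                  "fa:16:3e:0f:05:ec",
                  "7534ee27-8821-44c9-b66c-83a8f2e43711")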
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.231 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.231 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.231 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] No VIF found with MAC fa:16:3e:0f:05:ec, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.232 253465 INFO nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Using config drive
Nov 22 03:53:38 compute-0 nova_compute[253461]: 2025-11-22 03:53:38.254 253465 DEBUG nova.storage.rbd_utils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 7534ee27-8821-44c9-b66c-83a8f2e43711_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:38 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3857181944' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:53:39 compute-0 nova_compute[253461]: 2025-11-22 03:53:39.290 253465 INFO nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Creating config drive at /var/lib/nova/instances/7534ee27-8821-44c9-b66c-83a8f2e43711/disk.config
Nov 22 03:53:39 compute-0 nova_compute[253461]: 2025-11-22 03:53:39.299 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7534ee27-8821-44c9-b66c-83a8f2e43711/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmwmyzbze execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:39 compute-0 nova_compute[253461]: 2025-11-22 03:53:39.337 253465 DEBUG nova.network.neutron [req-d755b59c-a351-447d-aa7e-b36b89ed70f2 req-25b54b4e-279c-4d2c-b30e-70a03fd8059f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Updated VIF entry in instance network info cache for port 6f060257-b046-4e5b-80e9-23d0778a934b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:53:39 compute-0 nova_compute[253461]: 2025-11-22 03:53:39.337 253465 DEBUG nova.network.neutron [req-d755b59c-a351-447d-aa7e-b36b89ed70f2 req-25b54b4e-279c-4d2c-b30e-70a03fd8059f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Updating instance_info_cache with network_info: [{"id": "6f060257-b046-4e5b-80e9-23d0778a934b", "address": "fa:16:3e:0f:05:ec", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f060257-b0", "ovs_interfaceid": "6f060257-b046-4e5b-80e9-23d0778a934b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:53:39 compute-0 nova_compute[253461]: 2025-11-22 03:53:39.353 253465 DEBUG oslo_concurrency.lockutils [req-d755b59c-a351-447d-aa7e-b36b89ed70f2 req-25b54b4e-279c-4d2c-b30e-70a03fd8059f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-7534ee27-8821-44c9-b66c-83a8f2e43711" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:53:39 compute-0 podman[263919]: 2025-11-22 03:53:39.413228743 +0000 UTC m=+0.090968917 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:53:39 compute-0 nova_compute[253461]: 2025-11-22 03:53:39.441 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7534ee27-8821-44c9-b66c-83a8f2e43711/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmwmyzbze" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:39 compute-0 podman[263920]: 2025-11-22 03:53:39.44636096 +0000 UTC m=+0.123653817 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:53:39 compute-0 nova_compute[253461]: 2025-11-22 03:53:39.467 253465 DEBUG nova.storage.rbd_utils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 7534ee27-8821-44c9-b66c-83a8f2e43711_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:39 compute-0 nova_compute[253461]: 2025-11-22 03:53:39.470 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7534ee27-8821-44c9-b66c-83a8f2e43711/disk.config 7534ee27-8821-44c9-b66c-83a8f2e43711_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:39 compute-0 ceph-mon[75011]: pgmap v1041: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Nov 22 03:53:39 compute-0 nova_compute[253461]: 2025-11-22 03:53:39.707 253465 DEBUG oslo_concurrency.processutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7534ee27-8821-44c9-b66c-83a8f2e43711/disk.config 7534ee27-8821-44c9-b66c-83a8f2e43711_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:39 compute-0 nova_compute[253461]: 2025-11-22 03:53:39.708 253465 INFO nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Deleting local config drive /var/lib/nova/instances/7534ee27-8821-44c9-b66c-83a8f2e43711/disk.config because it was imported into RBD.
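The records above complete the config-drive lifecycle on an RBD-backed node: build the ISO with mkisofs, import it into the vms pool as <uuid>_disk.config (the image the sata cdrom <disk> element in the XML points at), then drop the local copy. The same flow condensed into a sketch, with the commands essentially as logged (the -publisher string is omitted here):

    import os
    import subprocess

    instance = "7534ee27-8821-44c9-b66c-83a8f2e43711"
    local_iso = f"/var/lib/nova/instances/{instance}/disk.config"

    # 1. Build the ISO9660 config drive; the config-2 volume label is what
    #    cloud-init looks for inside the guest.
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", local_iso, "-ldots", "-allow-lowercase",
        "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmpmwmyzbze",   # staging dir from the log (normally a fresh tempdir)
    ])

    # 2. Import it into Ceph as the image the guest's cdrom reads.
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", local_iso,
        f"{instance}_disk.config", "--image-format=2",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])

    # 3. Once it lives in RBD the local file is redundant.
    os.remove(local_iso)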
Nov 22 03:53:39 compute-0 kernel: tap6f060257-b0: entered promiscuous mode
Nov 22 03:53:39 compute-0 NetworkManager[48916]: <info>  [1763783619.7730] manager: (tap6f060257-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Nov 22 03:53:39 compute-0 ovn_controller[152691]: 2025-11-22T03:53:39Z|00037|binding|INFO|Claiming lport 6f060257-b046-4e5b-80e9-23d0778a934b for this chassis.
Nov 22 03:53:39 compute-0 ovn_controller[152691]: 2025-11-22T03:53:39Z|00038|binding|INFO|6f060257-b046-4e5b-80e9-23d0778a934b: Claiming fa:16:3e:0f:05:ec 10.100.0.4
Nov 22 03:53:39 compute-0 nova_compute[253461]: 2025-11-22 03:53:39.828 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:39 compute-0 systemd-udevd[264014]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.840 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:05:ec 10.100.0.4'], port_security=['fa:16:3e:0f:05:ec 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7534ee27-8821-44c9-b66c-83a8f2e43711', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d738119f-cffc-4235-aea9-bf290e9aca77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7db9d09fb4a241818f75d0198445d55c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6a9d1a2c-1ada-4410-8b2b-640ade242853', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=735d6d4b-7c7b-4c2a-a66c-4ccd96675388, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=6f060257-b046-4e5b-80e9-23d0778a934b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.841 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 6f060257-b046-4e5b-80e9-23d0778a934b in datapath d738119f-cffc-4235-aea9-bf290e9aca77 bound to our chassis
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.843 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d738119f-cffc-4235-aea9-bf290e9aca77
Nov 22 03:53:39 compute-0 NetworkManager[48916]: <info>  [1763783619.8445] device (tap6f060257-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:53:39 compute-0 NetworkManager[48916]: <info>  [1763783619.8456] device (tap6f060257-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.855 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1605833d-a39f-4f1e-8eb0-4b874164dc8d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.856 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd738119f-c1 in ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
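Provisioning metadata for the datapath means giving it an isolated ovnmeta-<network-uuid> namespace with a veth pair: the -c1 end lives inside the namespace (it carries the fa:16:3e:5d:09:46 address dumped below) and the -c0 end gets attached to br-int. The agent does this through neutron's privsep'd ip_lib, not the CLI; roughly the following iproute2 steps, sketched via subprocess with the names from the log:

    import subprocess

    ns = "ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77"
    outer, inner = "tapd738119f-c0", "tapd738119f-c1"

    for cmd in (
        ["ip", "netns", "add", ns],                  # the metadata namespace
        ["ip", "link", "add", outer,
         "type", "veth", "peer", "name", inner],     # veth pair
        ["ip", "link", "set", inner, "netns", ns],   # -c1 goes inside
        ["ip", "netns", "exec", ns,
         "ip", "link", "set", inner, "up"],
        ["ip", "link", "set", outer, "up"],
    ):
        subprocess.check_call(cmd)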
Nov 22 03:53:39 compute-0 systemd-machined[215728]: New machine qemu-2-instance-00000002.
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.858 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd738119f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.858 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[72616988-18d6-4dfc-86b0-540c57e189fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.859 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c9124f64-73c9-42b5-8b30-426f8eb382e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.869 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[23e03263-ccb7-437e-a065-10f5f347ccbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:39 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.900 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3640bc5e-da0c-4f17-b39e-ba61c6fb9b54]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:39 compute-0 nova_compute[253461]: 2025-11-22 03:53:39.925 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:39 compute-0 ovn_controller[152691]: 2025-11-22T03:53:39Z|00039|binding|INFO|Setting lport 6f060257-b046-4e5b-80e9-23d0778a934b ovn-installed in OVS
Nov 22 03:53:39 compute-0 ovn_controller[152691]: 2025-11-22T03:53:39Z|00040|binding|INFO|Setting lport 6f060257-b046-4e5b-80e9-23d0778a934b up in Southbound
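ovn-controller has now claimed the logical port for this chassis and flipped it up in the Southbound database, the state change that lets Neutron emit the network-vif-plugged event nova has been waiting on since 03:53:38. A read-only check of that binding from the compute node, assuming an ovn-sbctl wired to the Southbound DB (on this deployment it is available inside the ovn_controller container):

    import subprocess

    # The Port_Binding row for the logical port above; "chassis" should
    # reference this host's chassis record and "up" should be true.
    print(subprocess.check_output([
        "ovn-sbctl", "find", "Port_Binding",
        "logical_port=6f060257-b046-4e5b-80e9-23d0778a934b",
    ]).decode())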
Nov 22 03:53:39 compute-0 nova_compute[253461]: 2025-11-22 03:53:39.929 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.934 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[47889286-b27c-46cb-ac5f-d941b2cc7f7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:39 compute-0 systemd-udevd[264018]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:53:39 compute-0 NetworkManager[48916]: <info>  [1763783619.9438] manager: (tapd738119f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.943 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[09d3e520-f284-4727-bb8d-ffe30e573cd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.976 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e84231-3a55-41ea-b4cb-9da1cf8298d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:39.980 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[803d9bf7-61b4-42a9-b1a7-46de8b0b7bf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:40 compute-0 NetworkManager[48916]: <info>  [1763783620.0028] device (tapd738119f-c0): carrier: link connected
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.008 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b4813b-d08d-47d5-a48e-8a8876508a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.029 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8673a733-5222-47b7-83ac-1bc7c2901b5a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd738119f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:09:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387260, 'reachable_time': 27240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264050, 'error': None, 'target': 'ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.049 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2d57cc06-c655-4b3b-add8-8c00d6a96035]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:946'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 387260, 'tstamp': 387260}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264052, 'error': None, 'target': 'ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.069 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0d7f18-0d6e-4482-add2-a396f35dde59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd738119f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:09:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387260, 'reachable_time': 27240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264053, 'error': None, 'target': 'ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
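
The (4, ...) payloads in the privsep replies above are pyroute2 netlink messages: each RTM_NEWLINK/RTM_NEWADDR dict with its 'attrs' list is what pyroute2 returns from a link or address dump, run here inside the namespace named by the 'target' field in the message header. A minimal sketch of the same dump, with the namespace name taken from the log (the snippet itself is illustrative, not the agent's code):

    from pyroute2 import NetNS  # pyroute2 produces the 'attrs'/'header' dicts seen above

    ns = NetNS('ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77')
    try:
        for msg in ns.get_links():   # RTM_NEWLINK messages, like the replies logged above
            print(msg.get_attr('IFLA_IFNAME'),
                  msg.get_attr('IFLA_ADDRESS'),
                  msg.get_attr('IFLA_OPERSTATE'))
        for msg in ns.get_addr():    # RTM_NEWADDR messages, like the fe80:: reply above
            print(msg.get_attr('IFA_ADDRESS'))
    finally:
        ns.close()
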
Nov 22 03:53:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.115 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa4f2f5-7504-4491-8217-c2e5e7565de6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.193 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ccb6dd-e261-46d1-8abd-253a058b6623]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.195 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd738119f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.195 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.196 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd738119f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:40 compute-0 nova_compute[253461]: 2025-11-22 03:53:40.199 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:40 compute-0 kernel: tapd738119f-c0: entered promiscuous mode
Nov 22 03:53:40 compute-0 NetworkManager[48916]: <info>  [1763783620.2032] manager: (tapd738119f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Nov 22 03:53:40 compute-0 nova_compute[253461]: 2025-11-22 03:53:40.203 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.204 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd738119f-c0, col_values=(('external_ids', {'iface-id': 'cfd2c61f-02e2-42f5-ba0c-9da1b93469ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
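
The three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) move the metadata tap off br-ex, attach it to br-int, and tag its Interface row with the Neutron port ID so ovn-controller can bind it. A hedged sketch of issuing the same commands through ovsdbapp's Open_vSwitch schema API; the socket path is an assumption, and the three commands are grouped into one transaction for brevity where the agent ran them separately:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        # Same commands as the "Running txn" lines above
        txn.add(api.del_port('tapd738119f-c0', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapd738119f-c0', may_exist=True))
        txn.add(api.db_set('Interface', 'tapd738119f-c0',
                           ('external_ids',
                            {'iface-id': 'cfd2c61f-02e2-42f5-ba0c-9da1b93469ce'})))
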
Nov 22 03:53:40 compute-0 ovn_controller[152691]: 2025-11-22T03:53:40Z|00041|binding|INFO|Releasing lport cfd2c61f-02e2-42f5-ba0c-9da1b93469ce from this chassis (sb_readonly=0)
Nov 22 03:53:40 compute-0 nova_compute[253461]: 2025-11-22 03:53:40.206 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:40 compute-0 nova_compute[253461]: 2025-11-22 03:53:40.236 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.237 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d738119f-cffc-4235-aea9-bf290e9aca77.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d738119f-cffc-4235-aea9-bf290e9aca77.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
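
The ENOENT message above is expected noise: before spawning a proxy the agent probes for an existing haproxy pidfile, and a missing file simply means no proxy is running yet for this network, so the failure is logged at DEBUG rather than raised. Roughly, as a hypothetical helper (neutron's actual reader is get_value_from_file):

    def read_pidfile(path):
        # A missing pidfile means "no proxy yet"; ENOENT is a normal case here,
        # which is why the log line above is only DEBUG.
        try:
            with open(path) as f:
                return int(f.read().strip())
        except FileNotFoundError:
            return None

    read_pidfile('/var/lib/neutron/external/pids/'
                 'd738119f-cffc-4235-aea9-bf290e9aca77.pid.haproxy')
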
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.238 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[38e3be73-138c-4fc8-a4c1-bf7a72516b98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.239 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: global
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-d738119f-cffc-4235-aea9-bf290e9aca77
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/d738119f-cffc-4235-aea9-bf290e9aca77.pid.haproxy
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID d738119f-cffc-4235-aea9-bf290e9aca77
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 03:53:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:40.240 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77', 'env', 'PROCESS_TAG=haproxy-d738119f-cffc-4235-aea9-bf290e9aca77', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d738119f-cffc-4235-aea9-bf290e9aca77.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
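
The command above launches haproxy inside the ovnmeta- namespace via rootwrap, with PROCESS_TAG set in the environment so the process can be identified later for teardown. Reduced to a plain subprocess sketch, with the argument list copied verbatim from the log:

    import subprocess

    subprocess.Popen([
        'sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
        'ip', 'netns', 'exec', 'ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77',
        'env', 'PROCESS_TAG=haproxy-d738119f-cffc-4235-aea9-bf290e9aca77',
        'haproxy', '-f',
        '/var/lib/neutron/ovn-metadata-proxy/'
        'd738119f-cffc-4235-aea9-bf290e9aca77.conf',
    ])  # haproxy backgrounds itself via the "daemon" directive in the config above
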
Nov 22 03:53:40 compute-0 nova_compute[253461]: 2025-11-22 03:53:40.442 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783620.4416802, 7534ee27-8821-44c9-b66c-83a8f2e43711 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:53:40 compute-0 nova_compute[253461]: 2025-11-22 03:53:40.443 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] VM Started (Lifecycle Event)
Nov 22 03:53:40 compute-0 nova_compute[253461]: 2025-11-22 03:53:40.469 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:53:40 compute-0 nova_compute[253461]: 2025-11-22 03:53:40.474 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783620.4418852, 7534ee27-8821-44c9-b66c-83a8f2e43711 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:53:40 compute-0 nova_compute[253461]: 2025-11-22 03:53:40.475 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] VM Paused (Lifecycle Event)
Nov 22 03:53:40 compute-0 nova_compute[253461]: 2025-11-22 03:53:40.493 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:53:40 compute-0 nova_compute[253461]: 2025-11-22 03:53:40.498 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:53:40 compute-0 nova_compute[253461]: 2025-11-22 03:53:40.523 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] During sync_power_state the instance has a pending task (spawning). Skip.
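
For decoding the sync message above: the numeric states are nova.compute.power_state constants, so the DB still shows 0 (no state recorded) while libvirt reports 3 (paused, which libvirt uses briefly before resuming a freshly spawned guest):

    # nova.compute.power_state values referenced by the sync messages in this log
    NOSTATE = 0    # DB power_state while the instance is still building
    RUNNING = 1
    PAUSED = 3     # VM power_state reported here
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7
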
Nov 22 03:53:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:40 compute-0 podman[264127]: 2025-11-22 03:53:40.649574991 +0000 UTC m=+0.054672121 container create e79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:53:40 compute-0 systemd[1]: Started libpod-conmon-e79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542.scope.
Nov 22 03:53:40 compute-0 podman[264127]: 2025-11-22 03:53:40.619581802 +0000 UTC m=+0.024678952 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:53:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:53:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df938db531d7de2793a71bf77edd79412a52288a1ba546ab6ad489b226eb9f26/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:40 compute-0 podman[264127]: 2025-11-22 03:53:40.757388941 +0000 UTC m=+0.162486081 container init e79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:53:40 compute-0 podman[264127]: 2025-11-22 03:53:40.76860682 +0000 UTC m=+0.173703970 container start e79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 03:53:40 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[264142]: [NOTICE]   (264146) : New worker (264148) forked
Nov 22 03:53:40 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[264142]: [NOTICE]   (264146) : Loading success.
Nov 22 03:53:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Nov 22 03:53:41 compute-0 ceph-mon[75011]: pgmap v1042: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Nov 22 03:53:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Nov 22 03:53:41 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Nov 22 03:53:41 compute-0 nova_compute[253461]: 2025-11-22 03:53:41.808 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Nov 22 03:53:42 compute-0 ceph-mon[75011]: osdmap e181: 3 total, 3 up, 3 in
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.867 253465 DEBUG nova.compute.manager [req-f90653fb-88e6-41d0-9415-76822d9dc6c2 req-821d5e80-44f1-42f2-9f0a-e07e07d3fd7b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received event network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.867 253465 DEBUG oslo_concurrency.lockutils [req-f90653fb-88e6-41d0-9415-76822d9dc6c2 req-821d5e80-44f1-42f2-9f0a-e07e07d3fd7b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.868 253465 DEBUG oslo_concurrency.lockutils [req-f90653fb-88e6-41d0-9415-76822d9dc6c2 req-821d5e80-44f1-42f2-9f0a-e07e07d3fd7b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.868 253465 DEBUG oslo_concurrency.lockutils [req-f90653fb-88e6-41d0-9415-76822d9dc6c2 req-821d5e80-44f1-42f2-9f0a-e07e07d3fd7b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
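
The Acquiring/acquired/released trio above is oslo.concurrency's standard trace around a critical section; the lock name is the instance UUID with an "-events" suffix. A minimal sketch of the pattern (the function body is illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('7534ee27-8821-44c9-b66c-83a8f2e43711-events')
    def _pop_event():
        # Critical section: pop the pending network-vif-plugged event for the
        # instance. The decorator's wrapper emits the DEBUG lines seen above.
        pass
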
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.869 253465 DEBUG nova.compute.manager [req-f90653fb-88e6-41d0-9415-76822d9dc6c2 req-821d5e80-44f1-42f2-9f0a-e07e07d3fd7b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Processing event network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.870 253465 DEBUG nova.compute.manager [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.874 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783622.874516, 7534ee27-8821-44c9-b66c-83a8f2e43711 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.875 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] VM Resumed (Lifecycle Event)
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.878 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.883 253465 INFO nova.virt.libvirt.driver [-] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Instance spawned successfully.
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.885 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.901 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.911 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.918 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.919 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.920 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.921 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.922 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.923 253465 DEBUG nova.virt.libvirt.driver [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.932 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.974 253465 INFO nova.compute.manager [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Took 10.28 seconds to spawn the instance on the hypervisor.
Nov 22 03:53:42 compute-0 nova_compute[253461]: 2025-11-22 03:53:42.979 253465 DEBUG nova.compute.manager [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:53:43 compute-0 nova_compute[253461]: 2025-11-22 03:53:43.041 253465 INFO nova.compute.manager [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Took 11.20 seconds to build instance.
Nov 22 03:53:43 compute-0 nova_compute[253461]: 2025-11-22 03:53:43.066 253465 DEBUG oslo_concurrency.lockutils [None req-914ac625-1477-4497-9585-d4f04f039ddf fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:43 compute-0 nova_compute[253461]: 2025-11-22 03:53:43.174 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Nov 22 03:53:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Nov 22 03:53:43 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Nov 22 03:53:43 compute-0 ceph-mon[75011]: pgmap v1044: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Nov 22 03:53:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.4 MiB/s wr, 62 op/s
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.697 253465 DEBUG oslo_concurrency.lockutils [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "7534ee27-8821-44c9-b66c-83a8f2e43711" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.699 253465 DEBUG oslo_concurrency.lockutils [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.699 253465 DEBUG oslo_concurrency.lockutils [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.700 253465 DEBUG oslo_concurrency.lockutils [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.700 253465 DEBUG oslo_concurrency.lockutils [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.702 253465 INFO nova.compute.manager [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Terminating instance
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.704 253465 DEBUG nova.compute.manager [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 03:53:44 compute-0 ceph-mon[75011]: osdmap e182: 3 total, 3 up, 3 in
Nov 22 03:53:44 compute-0 kernel: tap6f060257-b0 (unregistering): left promiscuous mode
Nov 22 03:53:44 compute-0 NetworkManager[48916]: <info>  [1763783624.7601] device (tap6f060257-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 03:53:44 compute-0 ovn_controller[152691]: 2025-11-22T03:53:44Z|00042|binding|INFO|Releasing lport 6f060257-b046-4e5b-80e9-23d0778a934b from this chassis (sb_readonly=0)
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.769 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:44 compute-0 ovn_controller[152691]: 2025-11-22T03:53:44Z|00043|binding|INFO|Setting lport 6f060257-b046-4e5b-80e9-23d0778a934b down in Southbound
Nov 22 03:53:44 compute-0 ovn_controller[152691]: 2025-11-22T03:53:44Z|00044|binding|INFO|Removing iface tap6f060257-b0 ovn-installed in OVS
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.772 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:44 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:44.779 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:05:ec 10.100.0.4'], port_security=['fa:16:3e:0f:05:ec 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7534ee27-8821-44c9-b66c-83a8f2e43711', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d738119f-cffc-4235-aea9-bf290e9aca77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7db9d09fb4a241818f75d0198445d55c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6a9d1a2c-1ada-4410-8b2b-640ade242853', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=735d6d4b-7c7b-4c2a-a66c-4ccd96675388, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=6f060257-b046-4e5b-80e9-23d0778a934b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:53:44 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:44.782 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 6f060257-b046-4e5b-80e9-23d0778a934b in datapath d738119f-cffc-4235-aea9-bf290e9aca77 unbound from our chassis
Nov 22 03:53:44 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:44.785 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d738119f-cffc-4235-aea9-bf290e9aca77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:53:44 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:44.786 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c576923e-6dd1-4226-92b1-8ccc9231119d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:44 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:44.787 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77 namespace which is not needed anymore
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.787 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:44 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 22 03:53:44 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 2.629s CPU time.
Nov 22 03:53:44 compute-0 systemd-machined[215728]: Machine qemu-2-instance-00000002 terminated.
Nov 22 03:53:44 compute-0 kernel: tap6f060257-b0: entered promiscuous mode
Nov 22 03:53:44 compute-0 kernel: tap6f060257-b0 (unregistering): left promiscuous mode
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.930 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:44 compute-0 ovn_controller[152691]: 2025-11-22T03:53:44Z|00045|binding|INFO|Claiming lport 6f060257-b046-4e5b-80e9-23d0778a934b for this chassis.
Nov 22 03:53:44 compute-0 ovn_controller[152691]: 2025-11-22T03:53:44Z|00046|binding|INFO|6f060257-b046-4e5b-80e9-23d0778a934b: Claiming fa:16:3e:0f:05:ec 10.100.0.4
Nov 22 03:53:44 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:44.938 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:05:ec 10.100.0.4'], port_security=['fa:16:3e:0f:05:ec 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7534ee27-8821-44c9-b66c-83a8f2e43711', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d738119f-cffc-4235-aea9-bf290e9aca77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7db9d09fb4a241818f75d0198445d55c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6a9d1a2c-1ada-4410-8b2b-640ade242853', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=735d6d4b-7c7b-4c2a-a66c-4ccd96675388, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=6f060257-b046-4e5b-80e9-23d0778a934b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.946 253465 INFO nova.virt.libvirt.driver [-] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Instance destroyed successfully.
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.947 253465 DEBUG nova.objects.instance [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lazy-loading 'resources' on Instance uuid 7534ee27-8821-44c9-b66c-83a8f2e43711 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:53:44 compute-0 ovn_controller[152691]: 2025-11-22T03:53:44Z|00047|binding|INFO|Setting lport 6f060257-b046-4e5b-80e9-23d0778a934b ovn-installed in OVS
Nov 22 03:53:44 compute-0 ovn_controller[152691]: 2025-11-22T03:53:44Z|00048|binding|INFO|Setting lport 6f060257-b046-4e5b-80e9-23d0778a934b up in Southbound
Nov 22 03:53:44 compute-0 ovn_controller[152691]: 2025-11-22T03:53:44Z|00049|binding|INFO|Releasing lport 6f060257-b046-4e5b-80e9-23d0778a934b from this chassis (sb_readonly=1)
Nov 22 03:53:44 compute-0 ovn_controller[152691]: 2025-11-22T03:53:44Z|00050|if_status|INFO|Not setting lport 6f060257-b046-4e5b-80e9-23d0778a934b down as sb is readonly
Nov 22 03:53:44 compute-0 ovn_controller[152691]: 2025-11-22T03:53:44Z|00051|binding|INFO|Removing iface tap6f060257-b0 ovn-installed in OVS
Nov 22 03:53:44 compute-0 ovn_controller[152691]: 2025-11-22T03:53:44Z|00052|binding|INFO|Releasing lport 6f060257-b046-4e5b-80e9-23d0778a934b from this chassis (sb_readonly=0)
Nov 22 03:53:44 compute-0 ovn_controller[152691]: 2025-11-22T03:53:44Z|00053|binding|INFO|Setting lport 6f060257-b046-4e5b-80e9-23d0778a934b down in Southbound
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.958 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.963 253465 DEBUG nova.compute.manager [req-c3ec4800-00e9-4fa9-8cee-f4fc5599acf3 req-48084900-c373-4fc1-ba62-8fd29ff39b78 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received event network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.964 253465 DEBUG oslo_concurrency.lockutils [req-c3ec4800-00e9-4fa9-8cee-f4fc5599acf3 req-48084900-c373-4fc1-ba62-8fd29ff39b78 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.964 253465 DEBUG oslo_concurrency.lockutils [req-c3ec4800-00e9-4fa9-8cee-f4fc5599acf3 req-48084900-c373-4fc1-ba62-8fd29ff39b78 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:44 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:44.964 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:05:ec 10.100.0.4'], port_security=['fa:16:3e:0f:05:ec 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7534ee27-8821-44c9-b66c-83a8f2e43711', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d738119f-cffc-4235-aea9-bf290e9aca77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7db9d09fb4a241818f75d0198445d55c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6a9d1a2c-1ada-4410-8b2b-640ade242853', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=735d6d4b-7c7b-4c2a-a66c-4ccd96675388, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=6f060257-b046-4e5b-80e9-23d0778a934b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.964 253465 DEBUG oslo_concurrency.lockutils [req-c3ec4800-00e9-4fa9-8cee-f4fc5599acf3 req-48084900-c373-4fc1-ba62-8fd29ff39b78 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.965 253465 DEBUG nova.compute.manager [req-c3ec4800-00e9-4fa9-8cee-f4fc5599acf3 req-48084900-c373-4fc1-ba62-8fd29ff39b78 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] No waiting events found dispatching network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.965 253465 WARNING nova.compute.manager [req-c3ec4800-00e9-4fa9-8cee-f4fc5599acf3 req-48084900-c373-4fc1-ba62-8fd29ff39b78 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received unexpected event network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b for instance with vm_state active and task_state deleting.
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.968 253465 DEBUG nova.virt.libvirt.vif [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T03:53:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2125267483',display_name='tempest-VolumesActionsTest-instance-2125267483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2125267483',id=2,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T03:53:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7db9d09fb4a241818f75d0198445d55c',ramdisk_id='',reservation_id='r-26vvyg76',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1537057398',owner_user_name='tempest-VolumesActionsTest-1537057398-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T03:53:43Z,user_data=None,user_id='fb6a4080968040f8a28c3b9e7c4296b8',uuid=7534ee27-8821-44c9-b66c-83a8f2e43711,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6f060257-b046-4e5b-80e9-23d0778a934b", "address": "fa:16:3e:0f:05:ec", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f060257-b0", "ovs_interfaceid": "6f060257-b046-4e5b-80e9-23d0778a934b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.968 253465 DEBUG nova.network.os_vif_util [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Converting VIF {"id": "6f060257-b046-4e5b-80e9-23d0778a934b", "address": "fa:16:3e:0f:05:ec", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6f060257-b0", "ovs_interfaceid": "6f060257-b046-4e5b-80e9-23d0778a934b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.970 253465 DEBUG nova.network.os_vif_util [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:05:ec,bridge_name='br-int',has_traffic_filtering=True,id=6f060257-b046-4e5b-80e9-23d0778a934b,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f060257-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.971 253465 DEBUG os_vif [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:05:ec,bridge_name='br-int',has_traffic_filtering=True,id=6f060257-b046-4e5b-80e9-23d0778a934b,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f060257-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.973 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.973 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6f060257-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.975 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.976 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:53:44 compute-0 nova_compute[253461]: 2025-11-22 03:53:44.978 253465 INFO os_vif [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:05:ec,bridge_name='br-int',has_traffic_filtering=True,id=6f060257-b046-4e5b-80e9-23d0778a934b,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6f060257-b0')
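
os-vif's unplug is what drives the DelPortCommand just above; the VIFOpenVSwitch object it receives is the converted form logged a few lines earlier. A reduced sketch that rebuilds only the fields the log shows (values copied from it; a real call carries the full network object as well):

    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the 'ovs' plugin named in the converted VIF above
    my_vif = vif.VIFOpenVSwitch(
        id='6f060257-b046-4e5b-80e9-23d0778a934b',
        address='fa:16:3e:0f:05:ec',
        bridge_name='br-int',
        vif_name='tap6f060257-b0')
    info = instance_info.InstanceInfo(
        uuid='7534ee27-8821-44c9-b66c-83a8f2e43711',
        name='instance-00000002')
    os_vif.unplug(my_vif, info)  # "Unplugging vif ..." then the port removal above
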
Nov 22 03:53:44 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[264142]: [NOTICE]   (264146) : haproxy version is 2.8.14-c23fe91
Nov 22 03:53:44 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[264142]: [NOTICE]   (264146) : path to executable is /usr/sbin/haproxy
Nov 22 03:53:44 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[264142]: [WARNING]  (264146) : Exiting Master process...
Nov 22 03:53:44 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[264142]: [WARNING]  (264146) : Exiting Master process...
Nov 22 03:53:44 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[264142]: [ALERT]    (264146) : Current worker (264148) exited with code 143 (Terminated)
Nov 22 03:53:44 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[264142]: [WARNING]  (264146) : All workers exited. Exiting... (0)
Nov 22 03:53:44 compute-0 systemd[1]: libpod-e79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542.scope: Deactivated successfully.
Nov 22 03:53:44 compute-0 podman[264182]: 2025-11-22 03:53:44.995643261 +0000 UTC m=+0.076077262 container died e79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 03:53:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542-userdata-shm.mount: Deactivated successfully.
Nov 22 03:53:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-df938db531d7de2793a71bf77edd79412a52288a1ba546ab6ad489b226eb9f26-merged.mount: Deactivated successfully.
Nov 22 03:53:45 compute-0 podman[264182]: 2025-11-22 03:53:45.057774773 +0000 UTC m=+0.138208744 container cleanup e79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:53:45 compute-0 systemd[1]: libpod-conmon-e79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542.scope: Deactivated successfully.
Nov 22 03:53:45 compute-0 podman[264236]: 2025-11-22 03:53:45.13180692 +0000 UTC m=+0.051881411 container remove e79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.137 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9d17cb8f-b200-4057-9b30-ef5f7093e378]: (4, ('Sat Nov 22 03:53:44 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77 (e79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542)\ne79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542\nSat Nov 22 03:53:45 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77 (e79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542)\ne79950eee53f5672e1c8215b5fd72765910d6573557e10290c06b99036c2d542\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.139 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2726845c-3194-4c16-a904-5655eeff4ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.140 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd738119f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:45 compute-0 kernel: tapd738119f-c0: left promiscuous mode
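Both DelPortCommand transactions above (tap6f060257-b0 off br-int, then the metadata port tapd738119f-c0) go through ovsdbapp's OVSDB IDL API. A minimal sketch of the same operation against the local ovsdb-server (the socket path is the stock default and is an assumption here):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Build an IDL connection to the Open_vSwitch database.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same command the agents log; do_commit runs it in one transaction.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapd738119f-c0', if_exists=True))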
Nov 22 03:53:45 compute-0 nova_compute[253461]: 2025-11-22 03:53:45.142 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:45 compute-0 nova_compute[253461]: 2025-11-22 03:53:45.157 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.160 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[604066aa-7423-499e-9c3c-ea6cd4acaecd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.174 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[33aaae00-d7b6-48b3-8a9f-528bb02de388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.176 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[de05e7b2-3a0f-4897-8a03-5504b5f4b4f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.188 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c2fa55c2-4646-494a-a06a-4a825092fc85]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 387252, 'reachable_time': 32597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264250, 'error': None, 'target': 'ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
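The privsep reply above is a raw pyroute2 netlink dump (an RTM_NEWLINK message for 'lo' in the ovnmeta namespace). The same IFLA_* attributes can be read directly with pyroute2; a sketch:

    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        for link in ipr.get_links():
            # get_attr() extracts one IFLA_* value from the message.
            name = link.get_attr('IFLA_IFNAME')
            state = link.get_attr('IFLA_OPERSTATE')
            stats = link.get_attr('IFLA_STATS64') or {}
            print(name, state, stats.get('rx_packets'), stats.get('tx_packets'))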
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.191 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.191 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3e7971-946b-49df-8ad4-1b5d17728dbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.192 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 6f060257-b046-4e5b-80e9-23d0778a934b in datapath d738119f-cffc-4235-aea9-bf290e9aca77 unbound from our chassis
Nov 22 03:53:45 compute-0 systemd[1]: run-netns-ovnmeta\x2dd738119f\x2dcffc\x2d4235\x2daea9\x2dbf290e9aca77.mount: Deactivated successfully.
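The remove_netns call above deletes the per-network metadata namespace, which is what produces the run-netns mount deactivation on the previous line. Neutron's privileged ip_lib does this via pyroute2; a minimal sketch:

    from pyroute2 import netns

    ns = 'ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77'
    if ns in netns.listnetns():   # namespaces listed under /run/netns
        netns.remove(ns)          # unmount and unlink the namespace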
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.193 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d738119f-cffc-4235-aea9-bf290e9aca77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.194 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[37d72334-b21c-438f-9ee3-82a03f322d7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.194 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 6f060257-b046-4e5b-80e9-23d0778a934b in datapath d738119f-cffc-4235-aea9-bf290e9aca77 unbound from our chassis
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.196 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d738119f-cffc-4235-aea9-bf290e9aca77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:53:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:45.196 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7662b1-7208-41a2-9272-0d5e6b583617]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
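The reply[...] lines are oslo.privsep round-trips: the agent calls a decorated function, the privileged daemon executes it and ships a (status, result) tuple back over the channel, which is the (4, True/False/None/payload) seen above. A minimal sketch of such an entrypoint (the context and function here are illustrative, not neutron's actual definitions):

    import os
    from oslo_privsep import capabilities, priv_context

    ctx = priv_context.PrivContext(
        __name__,
        cfg_section='privsep',
        pypath=__name__ + '.ctx',
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    @ctx.entrypoint
    def device_exists(name):
        # Runs inside the privsep daemon; the return value is what the
        # unprivileged caller receives in the reply message.
        return os.path.exists('/sys/class/net/%s' % name)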
Nov 22 03:53:45 compute-0 nova_compute[253461]: 2025-11-22 03:53:45.406 253465 INFO nova.virt.libvirt.driver [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Deleting instance files /var/lib/nova/instances/7534ee27-8821-44c9-b66c-83a8f2e43711_del
Nov 22 03:53:45 compute-0 nova_compute[253461]: 2025-11-22 03:53:45.407 253465 INFO nova.virt.libvirt.driver [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Deletion of /var/lib/nova/instances/7534ee27-8821-44c9-b66c-83a8f2e43711_del complete
Nov 22 03:53:45 compute-0 nova_compute[253461]: 2025-11-22 03:53:45.453 253465 INFO nova.compute.manager [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Took 0.75 seconds to destroy the instance on the hypervisor.
Nov 22 03:53:45 compute-0 nova_compute[253461]: 2025-11-22 03:53:45.454 253465 DEBUG oslo.service.loopingcall [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 03:53:45 compute-0 nova_compute[253461]: 2025-11-22 03:53:45.454 253465 DEBUG nova.compute.manager [-] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 03:53:45 compute-0 nova_compute[253461]: 2025-11-22 03:53:45.454 253465 DEBUG nova.network.neutron [-] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 03:53:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Nov 22 03:53:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Nov 22 03:53:45 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Nov 22 03:53:45 compute-0 ceph-mon[75011]: pgmap v1046: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.4 MiB/s wr, 62 op/s
Nov 22 03:53:45 compute-0 ceph-mon[75011]: osdmap e183: 3 total, 3 up, 3 in
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 115 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 490 KiB/s rd, 27 KiB/s wr, 67 op/s
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000314837594419483 of space, bias 1.0, pg target 0.09445127832584489 quantized to 32 (current 32)
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034669653903757246 of space, bias 1.0, pg target 0.10400896171127173 quantized to 32 (current 32)
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:53:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
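The pg_autoscaler targets above are internally consistent with pg_target = usage_ratio * bias * (target PGs per OSD * OSD count), assuming the default mon_target_pg_per_osd of 100 and this cluster's 3 OSDs:

    # Pool 'vms' from the log:
    usage_ratio, bias = 0.000314837594419483, 1.0
    pg_target = usage_ratio * bias * 100 * 3
    print(pg_target)   # 0.09445127832584489, exactly as logged
    # The raw target is then quantized to a power of two and left at the
    # pool's current pg_num unless it diverges far enough, hence
    # "quantized to 32 (current 32)".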
Nov 22 03:53:46 compute-0 nova_compute[253461]: 2025-11-22 03:53:46.500 253465 DEBUG nova.network.neutron [-] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:53:46 compute-0 nova_compute[253461]: 2025-11-22 03:53:46.521 253465 INFO nova.compute.manager [-] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Took 1.07 seconds to deallocate network for instance.
Nov 22 03:53:46 compute-0 nova_compute[253461]: 2025-11-22 03:53:46.573 253465 DEBUG oslo_concurrency.lockutils [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:46 compute-0 nova_compute[253461]: 2025-11-22 03:53:46.574 253465 DEBUG oslo_concurrency.lockutils [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:46 compute-0 nova_compute[253461]: 2025-11-22 03:53:46.636 253465 DEBUG oslo_concurrency.processutils [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:46 compute-0 nova_compute[253461]: 2025-11-22 03:53:46.810 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:53:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2824677497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.056 253465 DEBUG oslo_concurrency.processutils [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
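The 'ceph df' probe above is how the RBD storage backend refreshes pool capacity before updating inventory. A sketch of the same call and of reading the cluster-wide totals (the JSON field names are as commonly emitted by ceph df; treat them as an assumption for this release):

    import json
    import subprocess

    out = subprocess.run(
        ['ceph', 'df', '--format=json', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)['stats']
    print('avail GiB:', stats['total_avail_bytes'] / 2**30)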
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.062 253465 DEBUG nova.compute.provider_tree [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.386 253465 DEBUG nova.scheduler.client.report [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
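For the DISK_GB record above, placement derives schedulable capacity as (total - reserved) * allocation_ratio:

    total_gb, reserved_gb, ratio = 59, 1, 0.9
    print((total_gb - reserved_gb) * ratio)   # 52.2 GB available to claims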
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.418 253465 DEBUG oslo_concurrency.lockutils [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.454 253465 DEBUG nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received event network-vif-unplugged-6f060257-b046-4e5b-80e9-23d0778a934b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.454 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.455 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.455 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.456 253465 DEBUG nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] No waiting events found dispatching network-vif-unplugged-6f060257-b046-4e5b-80e9-23d0778a934b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.456 253465 WARNING nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received unexpected event network-vif-unplugged-6f060257-b046-4e5b-80e9-23d0778a934b for instance with vm_state deleted and task_state None.
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.457 253465 DEBUG nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received event network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.457 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.458 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.458 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.458 253465 DEBUG nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] No waiting events found dispatching network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.459 253465 WARNING nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received unexpected event network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b for instance with vm_state deleted and task_state None.
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.459 253465 DEBUG nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received event network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.460 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.460 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.460 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.461 253465 DEBUG nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] No waiting events found dispatching network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.461 253465 WARNING nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received unexpected event network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b for instance with vm_state deleted and task_state None.
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.462 253465 DEBUG nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received event network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.462 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.463 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.463 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.463 253465 DEBUG nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] No waiting events found dispatching network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.463 253465 WARNING nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received unexpected event network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b for instance with vm_state deleted and task_state None.
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.464 253465 DEBUG nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received event network-vif-unplugged-6f060257-b046-4e5b-80e9-23d0778a934b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.464 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.464 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.464 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.465 253465 DEBUG nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] No waiting events found dispatching network-vif-unplugged-6f060257-b046-4e5b-80e9-23d0778a934b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.465 253465 WARNING nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received unexpected event network-vif-unplugged-6f060257-b046-4e5b-80e9-23d0778a934b for instance with vm_state deleted and task_state None.
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.465 253465 DEBUG nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received event network-vif-deleted-6f060257-b046-4e5b-80e9-23d0778a934b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.465 253465 DEBUG nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received event network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.466 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.466 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.466 253465 DEBUG oslo_concurrency.lockutils [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.466 253465 DEBUG nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] No waiting events found dispatching network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.467 253465 WARNING nova.compute.manager [req-de356064-c114-4dc4-944a-40a112b8f69d req-a21b8212-8e7e-45bb-86a2-138a1574a82a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Received unexpected event network-vif-plugged-6f060257-b046-4e5b-80e9-23d0778a934b for instance with vm_state deleted and task_state None.
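Each external event above is serialized through a named oslo.concurrency lock ("<instance-uuid>-events"), which is why every dispatch logs an acquire/release pair around _pop_event. The shape of that lock, as a sketch:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('7534ee27-8821-44c9-b66c-83a8f2e43711-events')
    def _pop_event():
        # Look up and remove a waiting event for this instance, if any;
        # with vm_state 'deleted' nothing is waiting, hence the repeated
        # "No waiting events found" / "Received unexpected event" pairs.
        return None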
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.468 253465 INFO nova.scheduler.client.report [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Deleted allocations for instance 7534ee27-8821-44c9-b66c-83a8f2e43711
Nov 22 03:53:47 compute-0 nova_compute[253461]: 2025-11-22 03:53:47.544 253465 DEBUG oslo_concurrency.lockutils [None req-6495b126-c19a-4885-9de2-5cba366dd269 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "7534ee27-8821-44c9-b66c-83a8f2e43711" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:47 compute-0 ceph-mon[75011]: pgmap v1048: 305 pgs: 305 active+clean; 115 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 490 KiB/s rd, 27 KiB/s wr, 67 op/s
Nov 22 03:53:47 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2824677497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.3 KiB/s wr, 222 op/s
Nov 22 03:53:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Nov 22 03:53:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Nov 22 03:53:48 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Nov 22 03:53:48 compute-0 ceph-mon[75011]: pgmap v1049: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.3 KiB/s wr, 222 op/s
Nov 22 03:53:48 compute-0 ceph-mon[75011]: osdmap e184: 3 total, 3 up, 3 in
Nov 22 03:53:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Nov 22 03:53:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Nov 22 03:53:49 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Nov 22 03:53:50 compute-0 nova_compute[253461]: 2025-11-22 03:53:50.008 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 KiB/s wr, 226 op/s
Nov 22 03:53:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:53:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4754 writes, 21K keys, 4754 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4754 writes, 4754 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1515 writes, 7095 keys, 1515 commit groups, 1.0 writes per commit group, ingest: 9.46 MB, 0.02 MB/s
                                           Interval WAL: 1515 writes, 1515 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     43.9      0.57              0.10        12    0.048       0      0       0.0       0.0
                                             L6      1/0    7.43 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    135.8    111.3      0.72              0.26        11    0.065     49K   5827       0.0       0.0
                                            Sum      1/0    7.43 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     75.7     81.4      1.29              0.35        23    0.056     49K   5827       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.3     69.2     69.5      0.79              0.20        12    0.066     29K   3634       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    135.8    111.3      0.72              0.26        11    0.065     49K   5827       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     43.9      0.57              0.10        11    0.052       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.10 GB read, 0.05 MB/s read, 1.3 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5574942991f0#2 capacity: 304.00 MB usage: 8.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000105 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(558,8.37 MB,2.75236%) FilterBlock(24,142.61 KB,0.0458115%) IndexBlock(24,275.55 KB,0.088516%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
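The DB Stats figures in the dump above cross-check: 0.03 GB ingested over the 1800 s uptime matches the logged 0.02 MB/s rate.

    ingest_gb, uptime_s = 0.03, 1800.0
    print(ingest_gb * 1024 / uptime_s)   # ~0.017 MB/s, logged as 0.02 MB/s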
Nov 22 03:53:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:50 compute-0 nova_compute[253461]: 2025-11-22 03:53:50.829 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:50 compute-0 nova_compute[253461]: 2025-11-22 03:53:50.829 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:50 compute-0 nova_compute[253461]: 2025-11-22 03:53:50.920 253465 DEBUG nova.compute.manager [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 03:53:50 compute-0 ceph-mon[75011]: osdmap e185: 3 total, 3 up, 3 in
Nov 22 03:53:50 compute-0 ceph-mon[75011]: pgmap v1052: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 KiB/s wr, 226 op/s
Nov 22 03:53:51 compute-0 nova_compute[253461]: 2025-11-22 03:53:51.160 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:51 compute-0 nova_compute[253461]: 2025-11-22 03:53:51.160 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:51 compute-0 nova_compute[253461]: 2025-11-22 03:53:51.169 253465 DEBUG nova.virt.hardware [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 03:53:51 compute-0 nova_compute[253461]: 2025-11-22 03:53:51.170 253465 INFO nova.compute.claims [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Claim successful on node compute-0.ctlplane.example.com
Nov 22 03:53:51 compute-0 nova_compute[253461]: 2025-11-22 03:53:51.227 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Acquiring lock "1ed5ef11-db1e-4030-bda2-67534d28d084" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:51 compute-0 nova_compute[253461]: 2025-11-22 03:53:51.228 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:51 compute-0 nova_compute[253461]: 2025-11-22 03:53:51.248 253465 DEBUG nova.compute.manager [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 03:53:51 compute-0 nova_compute[253461]: 2025-11-22 03:53:51.316 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:51 compute-0 nova_compute[253461]: 2025-11-22 03:53:51.347 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:51 compute-0 sudo[264275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:53:51 compute-0 sudo[264275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:51 compute-0 sudo[264275]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:51 compute-0 sudo[264301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:53:51 compute-0 sudo[264301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:51 compute-0 sudo[264301]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:51 compute-0 sudo[264326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:53:51 compute-0 sudo[264326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:51 compute-0 sudo[264326]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:51 compute-0 sudo[264370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:53:51 compute-0 sudo[264370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:53:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1710242296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:51 compute-0 nova_compute[253461]: 2025-11-22 03:53:51.811 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:51 compute-0 nova_compute[253461]: 2025-11-22 03:53:51.836 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:51 compute-0 nova_compute[253461]: 2025-11-22 03:53:51.844 253465 DEBUG nova.compute.provider_tree [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.067 253465 DEBUG nova.scheduler.client.report [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:53:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 6.6 KiB/s wr, 252 op/s
Nov 22 03:53:52 compute-0 sudo[264370]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.192 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.032s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.193 253465 DEBUG nova.compute.manager [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.195 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.201 253465 DEBUG nova.virt.hardware [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.202 253465 INFO nova.compute.claims [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Claim successful on node compute-0.ctlplane.example.com
Nov 22 03:53:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Nov 22 03:53:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:53:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:53:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:53:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:53:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:53:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1710242296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Nov 22 03:53:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:53:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Nov 22 03:53:52 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 1956ae94-da0f-4a12-a98d-6b85230b2828 does not exist
Nov 22 03:53:52 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 44dd9efa-0b53-4f83-b635-5553e7f04650 does not exist
Nov 22 03:53:52 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 5263bab1-3bbd-4b84-afc4-78c09d007a6f does not exist
Nov 22 03:53:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:53:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.336 253465 DEBUG nova.compute.manager [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 03:53:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:53:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.337 253465 DEBUG nova.network.neutron [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 03:53:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:53:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.380 253465 INFO nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 03:53:52 compute-0 sudo[264428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:53:52 compute-0 sudo[264428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:52 compute-0 sudo[264428]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.417 253465 DEBUG nova.compute.manager [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.423 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:52 compute-0 sudo[264453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:53:52 compute-0 sudo[264453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:52 compute-0 sudo[264453]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.522 253465 DEBUG nova.policy [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb6a4080968040f8a28c3b9e7c4296b8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7db9d09fb4a241818f75d0198445d55c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 03:53:52 compute-0 sudo[264479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:53:52 compute-0 sudo[264479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:52 compute-0 sudo[264479]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.549 253465 DEBUG nova.compute.manager [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.551 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.551 253465 INFO nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Creating image(s)
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.578 253465 DEBUG nova.storage.rbd_utils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.607 253465 DEBUG nova.storage.rbd_utils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:52 compute-0 sudo[264523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:53:52 compute-0 sudo[264523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.636 253465 DEBUG nova.storage.rbd_utils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.640 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.711 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.712 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.712 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.713 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.734 253465 DEBUG nova.storage.rbd_utils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.737 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d 1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:53:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/691132756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.861 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.870 253465 DEBUG nova.compute.provider_tree [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:53:52 compute-0 nova_compute[253461]: 2025-11-22 03:53:52.907 253465 DEBUG nova.scheduler.client.report [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.009 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.010 253465 DEBUG nova.compute.manager [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 03:53:53 compute-0 podman[264683]: 2025-11-22 03:53:53.018497487 +0000 UTC m=+0.075856321 container create 0c331def4d4523c42823dd2482f1348941fdb09d4bcab4b33e465076ab807fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:53:53 compute-0 podman[264683]: 2025-11-22 03:53:52.969493349 +0000 UTC m=+0.026852193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:53:53 compute-0 systemd[1]: Started libpod-conmon-0c331def4d4523c42823dd2482f1348941fdb09d4bcab4b33e465076ab807fc3.scope.
Nov 22 03:53:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:53:53 compute-0 podman[264683]: 2025-11-22 03:53:53.144092812 +0000 UTC m=+0.201451646 container init 0c331def4d4523c42823dd2482f1348941fdb09d4bcab4b33e465076ab807fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:53:53 compute-0 podman[264683]: 2025-11-22 03:53:53.155519262 +0000 UTC m=+0.212878086 container start 0c331def4d4523c42823dd2482f1348941fdb09d4bcab4b33e465076ab807fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 03:53:53 compute-0 podman[264683]: 2025-11-22 03:53:53.159368925 +0000 UTC m=+0.216727759 container attach 0c331def4d4523c42823dd2482f1348941fdb09d4bcab4b33e465076ab807fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.158 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d 1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.160 253465 DEBUG nova.compute.manager [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.160 253465 DEBUG nova.network.neutron [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 03:53:53 compute-0 friendly_pascal[264699]: 167 167
Nov 22 03:53:53 compute-0 systemd[1]: libpod-0c331def4d4523c42823dd2482f1348941fdb09d4bcab4b33e465076ab807fc3.scope: Deactivated successfully.
Nov 22 03:53:53 compute-0 podman[264683]: 2025-11-22 03:53:53.168682445 +0000 UTC m=+0.226041279 container died 0c331def4d4523c42823dd2482f1348941fdb09d4bcab4b33e465076ab807fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:53:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ed8cbc1f6c325f75a4b4ced6d764f37e597ffe35e6f0da255f0911c5e76ebca-merged.mount: Deactivated successfully.
Nov 22 03:53:53 compute-0 podman[264683]: 2025-11-22 03:53:53.22779646 +0000 UTC m=+0.285155294 container remove 0c331def4d4523c42823dd2482f1348941fdb09d4bcab4b33e465076ab807fc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.227 253465 INFO nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 03:53:53 compute-0 systemd[1]: libpod-conmon-0c331def4d4523c42823dd2482f1348941fdb09d4bcab4b33e465076ab807fc3.scope: Deactivated successfully.
Nov 22 03:53:53 compute-0 ceph-mon[75011]: pgmap v1053: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 6.6 KiB/s wr, 252 op/s
Nov 22 03:53:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:53:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:53:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:53:53 compute-0 ceph-mon[75011]: osdmap e186: 3 total, 3 up, 3 in
Nov 22 03:53:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:53:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:53:53 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:53:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/691132756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.282 253465 DEBUG nova.compute.manager [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.296 253465 DEBUG nova.storage.rbd_utils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] resizing rbd image 1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.419 253465 DEBUG nova.compute.manager [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.421 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.422 253465 INFO nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Creating image(s)
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.451 253465 DEBUG nova.storage.rbd_utils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] rbd image 1ed5ef11-db1e-4030-bda2-67534d28d084_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:53 compute-0 podman[264777]: 2025-11-22 03:53:53.464884226 +0000 UTC m=+0.060047770 container create 761d21734ef19e626856cb1a1700e3eac524f7728e181ff8195822632d885a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dijkstra, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.480 253465 DEBUG nova.storage.rbd_utils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] rbd image 1ed5ef11-db1e-4030-bda2-67534d28d084_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.512 253465 DEBUG nova.storage.rbd_utils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] rbd image 1ed5ef11-db1e-4030-bda2-67534d28d084_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.517 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:53 compute-0 systemd[1]: Started libpod-conmon-761d21734ef19e626856cb1a1700e3eac524f7728e181ff8195822632d885a30.scope.
Nov 22 03:53:53 compute-0 podman[264777]: 2025-11-22 03:53:53.442008835 +0000 UTC m=+0.037172449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:53:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8038091af71dcc789b8293eb75b3878efe757f8a6dc019d63fd165432657be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8038091af71dcc789b8293eb75b3878efe757f8a6dc019d63fd165432657be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8038091af71dcc789b8293eb75b3878efe757f8a6dc019d63fd165432657be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8038091af71dcc789b8293eb75b3878efe757f8a6dc019d63fd165432657be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca8038091af71dcc789b8293eb75b3878efe757f8a6dc019d63fd165432657be/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.566 253465 DEBUG nova.objects.instance [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lazy-loading 'migration_context' on Instance uuid 1fb96b71-bbd5-4ced-a830-30ae58784b0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.579 253465 DEBUG nova.policy [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '08c5b8ebe09040fbb8538108ea659e5c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '72120bdf58ce486690a1373cf734f4d9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 03:53:53 compute-0 podman[264777]: 2025-11-22 03:53:53.585062496 +0000 UTC m=+0.180226070 container init 761d21734ef19e626856cb1a1700e3eac524f7728e181ff8195822632d885a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.587 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.587 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Ensure instance console log exists: /var/lib/nova/instances/1fb96b71-bbd5-4ced-a830-30ae58784b0d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.588 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.588 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.589 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:53 compute-0 podman[264777]: 2025-11-22 03:53:53.592856796 +0000 UTC m=+0.188020340 container start 761d21734ef19e626856cb1a1700e3eac524f7728e181ff8195822632d885a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:53:53 compute-0 podman[264777]: 2025-11-22 03:53:53.597599167 +0000 UTC m=+0.192762701 container attach 761d21734ef19e626856cb1a1700e3eac524f7728e181ff8195822632d885a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dijkstra, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:53:53 compute-0 podman[264845]: 2025-11-22 03:53:53.603971837 +0000 UTC m=+0.096283334 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.611 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.612 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.613 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.614 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.631 253465 DEBUG nova.storage.rbd_utils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] rbd image 1ed5ef11-db1e-4030-bda2-67534d28d084_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.634 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d 1ed5ef11-db1e-4030-bda2-67534d28d084_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:53 compute-0 nova_compute[253461]: 2025-11-22 03:53:53.989 253465 DEBUG nova.network.neutron [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Successfully created port: 088a5157-3fe2-4543-a6b7-e25cc34ed035 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 03:53:54 compute-0 nova_compute[253461]: 2025-11-22 03:53:54.047 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d 1ed5ef11-db1e-4030-bda2-67534d28d084_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:54 compute-0 nova_compute[253461]: 2025-11-22 03:53:54.110 253465 DEBUG nova.storage.rbd_utils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] resizing rbd image 1ed5ef11-db1e-4030-bda2-67534d28d084_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 03:53:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 43 op/s
Nov 22 03:53:54 compute-0 nova_compute[253461]: 2025-11-22 03:53:54.212 253465 DEBUG nova.objects.instance [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lazy-loading 'migration_context' on Instance uuid 1ed5ef11-db1e-4030-bda2-67534d28d084 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:53:54 compute-0 nova_compute[253461]: 2025-11-22 03:53:54.231 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 03:53:54 compute-0 nova_compute[253461]: 2025-11-22 03:53:54.231 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Ensure instance console log exists: /var/lib/nova/instances/1ed5ef11-db1e-4030-bda2-67534d28d084/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 03:53:54 compute-0 nova_compute[253461]: 2025-11-22 03:53:54.232 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:54 compute-0 nova_compute[253461]: 2025-11-22 03:53:54.233 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:54 compute-0 nova_compute[253461]: 2025-11-22 03:53:54.233 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Nov 22 03:53:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Nov 22 03:53:54 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Nov 22 03:53:54 compute-0 nova_compute[253461]: 2025-11-22 03:53:54.660 253465 DEBUG nova.network.neutron [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Successfully created port: 0192a6ee-e052-42ec-ba5d-39345610c279 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 03:53:54 compute-0 pensive_dijkstra[264877]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:53:54 compute-0 pensive_dijkstra[264877]: --> relative data size: 1.0
Nov 22 03:53:54 compute-0 pensive_dijkstra[264877]: --> All data devices are unavailable
Nov 22 03:53:54 compute-0 systemd[1]: libpod-761d21734ef19e626856cb1a1700e3eac524f7728e181ff8195822632d885a30.scope: Deactivated successfully.
Nov 22 03:53:54 compute-0 systemd[1]: libpod-761d21734ef19e626856cb1a1700e3eac524f7728e181ff8195822632d885a30.scope: Consumed 1.164s CPU time.
Nov 22 03:53:54 compute-0 conmon[264877]: conmon 761d21734ef19e626856 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-761d21734ef19e626856cb1a1700e3eac524f7728e181ff8195822632d885a30.scope/container/memory.events
Nov 22 03:53:54 compute-0 podman[264777]: 2025-11-22 03:53:54.849275076 +0000 UTC m=+1.444438610 container died 761d21734ef19e626856cb1a1700e3eac524f7728e181ff8195822632d885a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 03:53:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca8038091af71dcc789b8293eb75b3878efe757f8a6dc019d63fd165432657be-merged.mount: Deactivated successfully.
Nov 22 03:53:54 compute-0 podman[264777]: 2025-11-22 03:53:54.905793042 +0000 UTC m=+1.500956576 container remove 761d21734ef19e626856cb1a1700e3eac524f7728e181ff8195822632d885a30 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dijkstra, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:53:54 compute-0 systemd[1]: libpod-conmon-761d21734ef19e626856cb1a1700e3eac524f7728e181ff8195822632d885a30.scope: Deactivated successfully.
Nov 22 03:53:54 compute-0 sudo[264523]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:55 compute-0 nova_compute[253461]: 2025-11-22 03:53:55.011 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:55 compute-0 sudo[265037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:53:55 compute-0 sudo[265037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:55 compute-0 sudo[265037]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:55 compute-0 sudo[265062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:53:55 compute-0 sudo[265062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:55 compute-0 sudo[265062]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:55 compute-0 sudo[265087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:53:55 compute-0 sudo[265087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:55 compute-0 sudo[265087]: pam_unix(sudo:session): session closed for user root
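[annotation] The repeating sudo triplets above (/bin/true, then /bin/which python3, then /bin/true again) are the host-readiness probes cephadm's SSH orchestration runs before issuing a real command. A stand-alone sketch of the same probe sequence, assuming the command strings from the log; the helper itself is hypothetical, not cephadm code, and the -n (non-interactive) flag is an addition not shown in the log:

# probe.py -- illustrative host probe mirroring the sudo commands logged above
import subprocess

def host_ready() -> bool:
    """Return True if passwordless sudo works and python3 is on root's PATH."""
    checks = (
        ["sudo", "-n", "/bin/true"],              # logged as: COMMAND=/bin/true
        ["sudo", "-n", "/bin/which", "python3"],  # logged as: COMMAND=/bin/which python3
    )
    return all(subprocess.run(c, capture_output=True).returncode == 0 for c in checks)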
Nov 22 03:53:55 compute-0 sudo[265112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:53:55 compute-0 sudo[265112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Nov 22 03:53:55 compute-0 ceph-mon[75011]: pgmap v1055: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 43 op/s
Nov 22 03:53:55 compute-0 ceph-mon[75011]: osdmap e187: 3 total, 3 up, 3 in
Nov 22 03:53:55 compute-0 nova_compute[253461]: 2025-11-22 03:53:55.324 253465 DEBUG nova.network.neutron [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Successfully updated port: 088a5157-3fe2-4543-a6b7-e25cc34ed035 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 03:53:55 compute-0 nova_compute[253461]: 2025-11-22 03:53:55.448 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "refresh_cache-1fb96b71-bbd5-4ced-a830-30ae58784b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:53:55 compute-0 nova_compute[253461]: 2025-11-22 03:53:55.449 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquired lock "refresh_cache-1fb96b71-bbd5-4ced-a830-30ae58784b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:53:55 compute-0 nova_compute[253461]: 2025-11-22 03:53:55.449 253465 DEBUG nova.network.neutron [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 03:53:55 compute-0 nova_compute[253461]: 2025-11-22 03:53:55.488 253465 DEBUG nova.compute.manager [req-755e967b-3876-4a6f-95a4-0a5267ad61e7 req-2678962f-1304-4858-aa6d-5df22f7b0bf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Received event network-changed-088a5157-3fe2-4543-a6b7-e25cc34ed035 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:55 compute-0 nova_compute[253461]: 2025-11-22 03:53:55.488 253465 DEBUG nova.compute.manager [req-755e967b-3876-4a6f-95a4-0a5267ad61e7 req-2678962f-1304-4858-aa6d-5df22f7b0bf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Refreshing instance network info cache due to event network-changed-088a5157-3fe2-4543-a6b7-e25cc34ed035. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:53:55 compute-0 nova_compute[253461]: 2025-11-22 03:53:55.489 253465 DEBUG oslo_concurrency.lockutils [req-755e967b-3876-4a6f-95a4-0a5267ad61e7 req-2678962f-1304-4858-aa6d-5df22f7b0bf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-1fb96b71-bbd5-4ced-a830-30ae58784b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:53:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Nov 22 03:53:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Nov 22 03:53:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:53:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Nov 22 03:53:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Nov 22 03:53:55 compute-0 nova_compute[253461]: 2025-11-22 03:53:55.647 253465 DEBUG nova.network.neutron [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 03:53:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Nov 22 03:53:55 compute-0 podman[265179]: 2025-11-22 03:53:55.661279139 +0000 UTC m=+0.130388105 container create b9101d2aab3aa606df467c3f2ab7190ddb7c6a2434607ffc2d1f8895948a9131 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 03:53:55 compute-0 podman[265179]: 2025-11-22 03:53:55.570052922 +0000 UTC m=+0.039161928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:53:55 compute-0 systemd[1]: Started libpod-conmon-b9101d2aab3aa606df467c3f2ab7190ddb7c6a2434607ffc2d1f8895948a9131.scope.
Nov 22 03:53:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:53:55 compute-0 podman[265179]: 2025-11-22 03:53:55.922134754 +0000 UTC m=+0.391243780 container init b9101d2aab3aa606df467c3f2ab7190ddb7c6a2434607ffc2d1f8895948a9131 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:53:55 compute-0 podman[265179]: 2025-11-22 03:53:55.934933873 +0000 UTC m=+0.404042839 container start b9101d2aab3aa606df467c3f2ab7190ddb7c6a2434607ffc2d1f8895948a9131 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:53:55 compute-0 admiring_keldysh[265195]: 167 167
Nov 22 03:53:55 compute-0 systemd[1]: libpod-b9101d2aab3aa606df467c3f2ab7190ddb7c6a2434607ffc2d1f8895948a9131.scope: Deactivated successfully.
Nov 22 03:53:56 compute-0 podman[265179]: 2025-11-22 03:53:56.010914563 +0000 UTC m=+0.480023579 container attach b9101d2aab3aa606df467c3f2ab7190ddb7c6a2434607ffc2d1f8895948a9131 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keldysh, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 03:53:56 compute-0 podman[265179]: 2025-11-22 03:53:56.012357275 +0000 UTC m=+0.481466231 container died b9101d2aab3aa606df467c3f2ab7190ddb7c6a2434607ffc2d1f8895948a9131 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:53:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 98 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 557 KiB/s wr, 45 op/s
Nov 22 03:53:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9faff2fc2ae22987e8e3c3695ee697f09729566ee0c45446a96c64449feaece-merged.mount: Deactivated successfully.
Nov 22 03:53:56 compute-0 podman[265179]: 2025-11-22 03:53:56.211281526 +0000 UTC m=+0.680390452 container remove b9101d2aab3aa606df467c3f2ab7190ddb7c6a2434607ffc2d1f8895948a9131 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:53:56 compute-0 systemd[1]: libpod-conmon-b9101d2aab3aa606df467c3f2ab7190ddb7c6a2434607ffc2d1f8895948a9131.scope: Deactivated successfully.
Nov 22 03:53:56 compute-0 nova_compute[253461]: 2025-11-22 03:53:56.326 253465 DEBUG nova.network.neutron [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Successfully updated port: 0192a6ee-e052-42ec-ba5d-39345610c279 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 03:53:56 compute-0 nova_compute[253461]: 2025-11-22 03:53:56.341 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Acquiring lock "refresh_cache-1ed5ef11-db1e-4030-bda2-67534d28d084" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:53:56 compute-0 nova_compute[253461]: 2025-11-22 03:53:56.341 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Acquired lock "refresh_cache-1ed5ef11-db1e-4030-bda2-67534d28d084" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:53:56 compute-0 nova_compute[253461]: 2025-11-22 03:53:56.341 253465 DEBUG nova.network.neutron [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 03:53:56 compute-0 podman[265220]: 2025-11-22 03:53:56.397286436 +0000 UTC m=+0.045640243 container create 8a339a0623a94add9e55d0d8650b340187bed329b5df388c3975bd071f2b35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:53:56 compute-0 systemd[1]: Started libpod-conmon-8a339a0623a94add9e55d0d8650b340187bed329b5df388c3975bd071f2b35a4.scope.
Nov 22 03:53:56 compute-0 podman[265220]: 2025-11-22 03:53:56.376341462 +0000 UTC m=+0.024695309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:53:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106cbf31280710e5500230627239e0bc12773d7dc65e9f431205d922f4bbc8e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106cbf31280710e5500230627239e0bc12773d7dc65e9f431205d922f4bbc8e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106cbf31280710e5500230627239e0bc12773d7dc65e9f431205d922f4bbc8e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/106cbf31280710e5500230627239e0bc12773d7dc65e9f431205d922f4bbc8e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
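[annotation] The four kernel lines above fire once per bind mount into the new container: on this XFS format, inode timestamps are 32-bit signed seconds, so they saturate at 0x7fffffff. That limit decodes to the familiar 2038 deadline, as this small check shows:

# y2038.py -- decode the 0x7fffffff limit printed by the kernel above
from datetime import datetime, timezone

print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00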
Nov 22 03:53:56 compute-0 podman[265220]: 2025-11-22 03:53:56.502179231 +0000 UTC m=+0.150533058 container init 8a339a0623a94add9e55d0d8650b340187bed329b5df388c3975bd071f2b35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jemison, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:53:56 compute-0 podman[265220]: 2025-11-22 03:53:56.513846125 +0000 UTC m=+0.162199932 container start 8a339a0623a94add9e55d0d8650b340187bed329b5df388c3975bd071f2b35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:53:56 compute-0 podman[265220]: 2025-11-22 03:53:56.51773219 +0000 UTC m=+0.166085997 container attach 8a339a0623a94add9e55d0d8650b340187bed329b5df388c3975bd071f2b35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jemison, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 03:53:56 compute-0 ceph-mon[75011]: osdmap e188: 3 total, 3 up, 3 in
Nov 22 03:53:56 compute-0 ceph-mon[75011]: osdmap e189: 3 total, 3 up, 3 in
Nov 22 03:53:56 compute-0 nova_compute[253461]: 2025-11-22 03:53:56.528 253465 DEBUG nova.network.neutron [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 03:53:56 compute-0 nova_compute[253461]: 2025-11-22 03:53:56.812 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]: {
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:     "0": [
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:         {
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "devices": [
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "/dev/loop3"
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             ],
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_name": "ceph_lv0",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_size": "21470642176",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "name": "ceph_lv0",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "tags": {
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.cluster_name": "ceph",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.crush_device_class": "",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.encrypted": "0",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.osd_id": "0",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.type": "block",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.vdo": "0"
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             },
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "type": "block",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "vg_name": "ceph_vg0"
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:         }
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:     ],
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:     "1": [
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:         {
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "devices": [
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "/dev/loop4"
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             ],
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_name": "ceph_lv1",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_size": "21470642176",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "name": "ceph_lv1",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "tags": {
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.cluster_name": "ceph",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.crush_device_class": "",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.encrypted": "0",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.osd_id": "1",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.type": "block",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.vdo": "0"
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             },
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "type": "block",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "vg_name": "ceph_vg1"
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:         }
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:     ],
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:     "2": [
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:         {
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "devices": [
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "/dev/loop5"
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             ],
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_name": "ceph_lv2",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_size": "21470642176",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "name": "ceph_lv2",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "tags": {
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.cluster_name": "ceph",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.crush_device_class": "",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.encrypted": "0",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.osd_id": "2",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.type": "block",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:                 "ceph.vdo": "0"
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             },
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "type": "block",
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:             "vg_name": "ceph_vg2"
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:         }
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]:     ]
Nov 22 03:53:57 compute-0 hopeful_jemison[265237]: }
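[annotation] The JSON emitted by the hopeful_jemison container above is the payload of the ceph-volume "lvm list --format json" call issued under sudo[265112]: a map of OSD id to the logical volumes backing it. A minimal sketch of consuming such output, assuming only the JSON shape captured in the log (the file name and helper are illustrative, not part of cephadm):

# lvm_list.py -- illustrative parser for `ceph-volume lvm list --format json`
import json

def osd_devices(raw: str) -> dict[int, list[str]]:
    """Map each OSD id to the physical devices backing its block LV."""
    listing = json.loads(raw)
    result: dict[int, list[str]] = {}
    for osd_id, lvs in listing.items():
        for lv in lvs:
            if lv.get("type") == "block":  # matches the ceph.type=block tag above
                result[int(osd_id)] = lv["devices"]
    return result

# With the payload logged above this yields:
# {0: ["/dev/loop3"], 1: ["/dev/loop4"], 2: ["/dev/loop5"]}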
Nov 22 03:53:57 compute-0 systemd[1]: libpod-8a339a0623a94add9e55d0d8650b340187bed329b5df388c3975bd071f2b35a4.scope: Deactivated successfully.
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.296 253465 DEBUG nova.network.neutron [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Updating instance_info_cache with network_info: [{"id": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "address": "fa:16:3e:6d:13:3a", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap088a5157-3f", "ovs_interfaceid": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
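[annotation] The instance_info_cache payload above carries the full Neutron view of port 088a5157: one IPv4 subnet and one fixed address. Extracting the fixed addresses from such a payload reduces to a nested comprehension; a sketch only, where network_info is the JSON list shown in the log line:

# fixed_ips.py -- extract fixed addresses from a network_info payload like the one above
def fixed_ips(network_info: list[dict]) -> list[str]:
    return [
        ip["address"]
        for vif in network_info
        for subnet in vif["network"]["subnets"]
        for ip in subnet["ips"]
        if ip["type"] == "fixed"
    ]

# For the payload logged above this returns ["10.100.0.9"].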
Nov 22 03:53:57 compute-0 podman[265246]: 2025-11-22 03:53:57.307390844 +0000 UTC m=+0.023061463 container died 8a339a0623a94add9e55d0d8650b340187bed329b5df388c3975bd071f2b35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jemison, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.325 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Releasing lock "refresh_cache-1fb96b71-bbd5-4ced-a830-30ae58784b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.325 253465 DEBUG nova.compute.manager [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Instance network_info: |[{"id": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "address": "fa:16:3e:6d:13:3a", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap088a5157-3f", "ovs_interfaceid": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.326 253465 DEBUG oslo_concurrency.lockutils [req-755e967b-3876-4a6f-95a4-0a5267ad61e7 req-2678962f-1304-4858-aa6d-5df22f7b0bf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-1fb96b71-bbd5-4ced-a830-30ae58784b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.326 253465 DEBUG nova.network.neutron [req-755e967b-3876-4a6f-95a4-0a5267ad61e7 req-2678962f-1304-4858-aa6d-5df22f7b0bf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Refreshing network info cache for port 088a5157-3fe2-4543-a6b7-e25cc34ed035 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:53:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-106cbf31280710e5500230627239e0bc12773d7dc65e9f431205d922f4bbc8e7-merged.mount: Deactivated successfully.
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.333 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Start _get_guest_xml network_info=[{"id": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "address": "fa:16:3e:6d:13:3a", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap088a5157-3f", "ovs_interfaceid": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.340 253465 WARNING nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.354 253465 DEBUG nova.virt.libvirt.host [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.355 253465 DEBUG nova.virt.libvirt.host [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.359 253465 DEBUG nova.virt.libvirt.host [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.359 253465 DEBUG nova.virt.libvirt.host [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
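[annotation] Nova's probe above tries cgroups V1, finds no CPU controller, then succeeds via the V2 path. On a cgroup v2 host the available controllers are advertised in a single file, so the check reduces to roughly the following; a sketch of the idea under that assumption, not Nova's actual implementation in nova/virt/libvirt/host.py:

# cgroupv2_cpu.py -- illustrative version of the cgroup v2 CPU-controller check
from pathlib import Path

def has_cgroupsv2_cpu_controller(root: str = "/sys/fs/cgroup") -> bool:
    """True if the unified hierarchy advertises the 'cpu' controller."""
    controllers = Path(root, "cgroup.controllers")
    try:
        return "cpu" in controllers.read_text().split()
    except FileNotFoundError:  # not a cgroup v2 host
        return False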
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.360 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.360 253465 DEBUG nova.virt.hardware [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.360 253465 DEBUG nova.virt.hardware [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.361 253465 DEBUG nova.virt.hardware [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.361 253465 DEBUG nova.virt.hardware [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.361 253465 DEBUG nova.virt.hardware [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.361 253465 DEBUG nova.virt.hardware [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.362 253465 DEBUG nova.virt.hardware [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.362 253465 DEBUG nova.virt.hardware [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.362 253465 DEBUG nova.virt.hardware [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.362 253465 DEBUG nova.virt.hardware [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.363 253465 DEBUG nova.virt.hardware [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
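[annotation] The nova.virt.hardware lines above record the whole topology decision for the m1.nano flavor: no limits or preferences from flavor or image (0:0:0), default maxima of 65536 each, and a single candidate 1:1:1 for one vCPU. A simplified enumeration that reproduces the logged values; an illustration of the constraint, not Nova's code:

# topologies.py -- simplified candidate-topology search matching the log above
import itertools

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    """Yield (sockets, cores, threads) triples that exactly cover `vcpus`."""
    for s, c, t in itertools.product(
        range(1, min(vcpus, max_sockets) + 1),
        range(1, min(vcpus, max_cores) + 1),
        range(1, min(vcpus, max_threads) + 1),
    ):
        if s * c * t == vcpus:
            yield (s, c, t)

print(list(possible_topologies(1)))  # -> [(1, 1, 1)], as logged above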
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.366 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:57 compute-0 podman[265246]: 2025-11-22 03:53:57.370261597 +0000 UTC m=+0.085932216 container remove 8a339a0623a94add9e55d0d8650b340187bed329b5df388c3975bd071f2b35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Nov 22 03:53:57 compute-0 systemd[1]: libpod-conmon-8a339a0623a94add9e55d0d8650b340187bed329b5df388c3975bd071f2b35a4.scope: Deactivated successfully.
Nov 22 03:53:57 compute-0 sudo[265112]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:57 compute-0 sudo[265262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:53:57 compute-0 sudo[265262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:57 compute-0 sudo[265262]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:57 compute-0 sudo[265297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:53:57 compute-0 sudo[265297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:57 compute-0 sudo[265297]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.571 253465 DEBUG nova.compute.manager [req-9a60e287-6c17-4b7e-9fa3-c72c520e04d5 req-894ba909-7a97-4579-a199-feb410ff2822 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Received event network-changed-0192a6ee-e052-42ec-ba5d-39345610c279 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.572 253465 DEBUG nova.compute.manager [req-9a60e287-6c17-4b7e-9fa3-c72c520e04d5 req-894ba909-7a97-4579-a199-feb410ff2822 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Refreshing instance network info cache due to event network-changed-0192a6ee-e052-42ec-ba5d-39345610c279. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.572 253465 DEBUG oslo_concurrency.lockutils [req-9a60e287-6c17-4b7e-9fa3-c72c520e04d5 req-894ba909-7a97-4579-a199-feb410ff2822 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-1ed5ef11-db1e-4030-bda2-67534d28d084" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:53:57 compute-0 sudo[265331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:53:57 compute-0 sudo[265331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:57 compute-0 sudo[265331]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Nov 22 03:53:57 compute-0 ceph-mon[75011]: pgmap v1059: 305 pgs: 305 active+clean; 98 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 557 KiB/s wr, 45 op/s
Nov 22 03:53:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Nov 22 03:53:57 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Nov 22 03:53:57 compute-0 sudo[265356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:53:57 compute-0 sudo[265356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.786 253465 DEBUG nova.network.neutron [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Updating instance_info_cache with network_info: [{"id": "0192a6ee-e052-42ec-ba5d-39345610c279", "address": "fa:16:3e:7d:2f:20", "network": {"id": "088b40f3-90e0-4306-ab07-be0dfd55e4f4", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1554168595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72120bdf58ce486690a1373cf734f4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0192a6ee-e0", "ovs_interfaceid": "0192a6ee-e052-42ec-ba5d-39345610c279", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:53:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:53:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/179846119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.803 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Releasing lock "refresh_cache-1ed5ef11-db1e-4030-bda2-67534d28d084" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.804 253465 DEBUG nova.compute.manager [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Instance network_info: |[{"id": "0192a6ee-e052-42ec-ba5d-39345610c279", "address": "fa:16:3e:7d:2f:20", "network": {"id": "088b40f3-90e0-4306-ab07-be0dfd55e4f4", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1554168595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72120bdf58ce486690a1373cf734f4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0192a6ee-e0", "ovs_interfaceid": "0192a6ee-e052-42ec-ba5d-39345610c279", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.804 253465 DEBUG oslo_concurrency.lockutils [req-9a60e287-6c17-4b7e-9fa3-c72c520e04d5 req-894ba909-7a97-4579-a199-feb410ff2822 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-1ed5ef11-db1e-4030-bda2-67534d28d084" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.805 253465 DEBUG nova.network.neutron [req-9a60e287-6c17-4b7e-9fa3-c72c520e04d5 req-894ba909-7a97-4579-a199-feb410ff2822 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Refreshing network info cache for port 0192a6ee-e052-42ec-ba5d-39345610c279 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.809 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Start _get_guest_xml network_info=[{"id": "0192a6ee-e052-42ec-ba5d-39345610c279", "address": "fa:16:3e:7d:2f:20", "network": {"id": "088b40f3-90e0-4306-ab07-be0dfd55e4f4", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1554168595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72120bdf58ce486690a1373cf734f4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0192a6ee-e0", "ovs_interfaceid": "0192a6ee-e052-42ec-ba5d-39345610c279", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.813 253465 WARNING nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.822 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
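[annotation] oslo_concurrency.processutils logs both the command and its wall-clock cost ('returned: 0 in 0.456s' above). The same instrumentation pattern in miniature; the command string is taken from the log, while the helper name and implementation are hypothetical:

# timed_exec.py -- minimal version of the execute/returncode/elapsed logging above
import subprocess
import time

def timed_execute(*cmd: str) -> int:
    """Run a command and log its return code and duration, processutils-style."""
    start = time.monotonic()
    proc = subprocess.run(cmd, capture_output=True)
    print(f'CMD "{" ".join(cmd)}" returned: {proc.returncode} '
          f"in {time.monotonic() - start:.3f}s")
    return proc.returncode

# timed_execute("ceph", "mon", "dump", "--format=json",
#               "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")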
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.848 253465 DEBUG nova.storage.rbd_utils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.853 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.872 253465 DEBUG nova.virt.libvirt.host [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.873 253465 DEBUG nova.virt.libvirt.host [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.878 253465 DEBUG nova.virt.libvirt.host [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.879 253465 DEBUG nova.virt.libvirt.host [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.879 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.880 253465 DEBUG nova.virt.hardware [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.880 253465 DEBUG nova.virt.hardware [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.881 253465 DEBUG nova.virt.hardware [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.881 253465 DEBUG nova.virt.hardware [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.881 253465 DEBUG nova.virt.hardware [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.881 253465 DEBUG nova.virt.hardware [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.882 253465 DEBUG nova.virt.hardware [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.882 253465 DEBUG nova.virt.hardware [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.882 253465 DEBUG nova.virt.hardware [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.882 253465 DEBUG nova.virt.hardware [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.882 253465 DEBUG nova.virt.hardware [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 03:53:57 compute-0 nova_compute[253461]: 2025-11-22 03:53:57.886 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:57 compute-0 podman[265441]: 2025-11-22 03:53:57.97003565 +0000 UTC m=+0.043958714 container create 55be0c303df4a5797b93705e1d8e5e7c7e35a979f8aed52ab5eb62fa133fabaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:53:58 compute-0 systemd[1]: Started libpod-conmon-55be0c303df4a5797b93705e1d8e5e7c7e35a979f8aed52ab5eb62fa133fabaa.scope.
Nov 22 03:53:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:53:58 compute-0 podman[265441]: 2025-11-22 03:53:57.952781179 +0000 UTC m=+0.026704283 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:53:58 compute-0 podman[265441]: 2025-11-22 03:53:58.049622761 +0000 UTC m=+0.123545845 container init 55be0c303df4a5797b93705e1d8e5e7c7e35a979f8aed52ab5eb62fa133fabaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:53:58 compute-0 podman[265441]: 2025-11-22 03:53:58.0578995 +0000 UTC m=+0.131822554 container start 55be0c303df4a5797b93705e1d8e5e7c7e35a979f8aed52ab5eb62fa133fabaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:53:58 compute-0 podman[265441]: 2025-11-22 03:53:58.062294722 +0000 UTC m=+0.136217816 container attach 55be0c303df4a5797b93705e1d8e5e7c7e35a979f8aed52ab5eb62fa133fabaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:53:58 compute-0 crazy_hoover[265495]: 167 167
Nov 22 03:53:58 compute-0 systemd[1]: libpod-55be0c303df4a5797b93705e1d8e5e7c7e35a979f8aed52ab5eb62fa133fabaa.scope: Deactivated successfully.
Nov 22 03:53:58 compute-0 conmon[265495]: conmon 55be0c303df4a5797b93 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-55be0c303df4a5797b93705e1d8e5e7c7e35a979f8aed52ab5eb62fa133fabaa.scope/container/memory.events
Nov 22 03:53:58 compute-0 podman[265441]: 2025-11-22 03:53:58.064727663 +0000 UTC m=+0.138650717 container died 55be0c303df4a5797b93705e1d8e5e7c7e35a979f8aed52ab5eb62fa133fabaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:53:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-47c86b3acdae7af07b2b6049520b5f34d4522d87b74c0e3a877a056fd7accf8d-merged.mount: Deactivated successfully.
Nov 22 03:53:58 compute-0 podman[265441]: 2025-11-22 03:53:58.100196439 +0000 UTC m=+0.174119493 container remove 55be0c303df4a5797b93705e1d8e5e7c7e35a979f8aed52ab5eb62fa133fabaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 03:53:58 compute-0 systemd[1]: libpod-conmon-55be0c303df4a5797b93705e1d8e5e7c7e35a979f8aed52ab5eb62fa133fabaa.scope: Deactivated successfully.
Nov 22 03:53:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 180 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 11 MiB/s wr, 231 op/s
Nov 22 03:53:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:53:58 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1502723514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:53:58 compute-0 podman[265519]: 2025-11-22 03:53:58.299237696 +0000 UTC m=+0.082352453 container create 6109376c6ca566b2e6a4ae039da44c35726db3cfbe17828d29845915c5a3c3a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gauss, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.311 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.313 253465 DEBUG nova.virt.libvirt.vif [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1210319555',display_name='tempest-VolumesActionsTest-instance-1210319555',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1210319555',id=3,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7db9d09fb4a241818f75d0198445d55c',ramdisk_id='',reservation_id='r-06fj55w0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1537057398',owner_user_name='tempest-VolumesActionsTest-1537057398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:53:52Z,user_data=None,user_id='fb6a4080968040f8a28c3b9e7c4296b8',uuid=1fb96b71-bbd5-4ced-a830-30ae58784b0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "address": "fa:16:3e:6d:13:3a", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap088a5157-3f", "ovs_interfaceid": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.313 253465 DEBUG nova.network.os_vif_util [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Converting VIF {"id": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "address": "fa:16:3e:6d:13:3a", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap088a5157-3f", "ovs_interfaceid": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.314 253465 DEBUG nova.network.os_vif_util [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:13:3a,bridge_name='br-int',has_traffic_filtering=True,id=088a5157-3fe2-4543-a6b7-e25cc34ed035,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap088a5157-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.315 253465 DEBUG nova.objects.instance [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lazy-loading 'pci_devices' on Instance uuid 1fb96b71-bbd5-4ced-a830-30ae58784b0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:53:58 compute-0 systemd[1]: Started libpod-conmon-6109376c6ca566b2e6a4ae039da44c35726db3cfbe17828d29845915c5a3c3a1.scope.
Nov 22 03:53:58 compute-0 podman[265519]: 2025-11-22 03:53:58.239297161 +0000 UTC m=+0.022411968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.338 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] End _get_guest_xml xml=<domain type="kvm">
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <uuid>1fb96b71-bbd5-4ced-a830-30ae58784b0d</uuid>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <name>instance-00000003</name>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <metadata>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:name>tempest-VolumesActionsTest-instance-1210319555</nova:name>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 03:53:57</nova:creationTime>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:user uuid="fb6a4080968040f8a28c3b9e7c4296b8">tempest-VolumesActionsTest-1537057398-project-member</nova:user>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:project uuid="7db9d09fb4a241818f75d0198445d55c">tempest-VolumesActionsTest-1537057398</nova:project>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:port uuid="088a5157-3fe2-4543-a6b7-e25cc34ed035">
Nov 22 03:53:58 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </metadata>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <system>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <entry name="serial">1fb96b71-bbd5-4ced-a830-30ae58784b0d</entry>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <entry name="uuid">1fb96b71-bbd5-4ced-a830-30ae58784b0d</entry>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </system>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <os>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </os>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <features>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <apic/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </features>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </clock>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk">
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </source>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk.config">
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </source>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:6d:13:3a"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <target dev="tap088a5157-3f"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/1fb96b71-bbd5-4ced-a830-30ae58784b0d/console.log" append="off"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </serial>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <video>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </video>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:53:58 compute-0 nova_compute[253461]: </domain>
Nov 22 03:53:58 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.338 253465 DEBUG nova.compute.manager [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Preparing to wait for external event network-vif-plugged-088a5157-3fe2-4543-a6b7-e25cc34ed035 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.338 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.339 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.339 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.340 253465 DEBUG nova.virt.libvirt.vif [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1210319555',display_name='tempest-VolumesActionsTest-instance-1210319555',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1210319555',id=3,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7db9d09fb4a241818f75d0198445d55c',ramdisk_id='',reservation_id='r-06fj55w0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1537057398',owner_user_name='tempest-VolumesActionsTest-1537057398-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:53:52Z,user_data=None,user_id='fb6a4080968040f8a28c3b9e7c4296b8',uuid=1fb96b71-bbd5-4ced-a830-30ae58784b0d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "address": "fa:16:3e:6d:13:3a", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap088a5157-3f", "ovs_interfaceid": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.340 253465 DEBUG nova.network.os_vif_util [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Converting VIF {"id": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "address": "fa:16:3e:6d:13:3a", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap088a5157-3f", "ovs_interfaceid": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.341 253465 DEBUG nova.network.os_vif_util [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:13:3a,bridge_name='br-int',has_traffic_filtering=True,id=088a5157-3fe2-4543-a6b7-e25cc34ed035,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap088a5157-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.341 253465 DEBUG os_vif [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:13:3a,bridge_name='br-int',has_traffic_filtering=True,id=088a5157-3fe2-4543-a6b7-e25cc34ed035,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap088a5157-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.342 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.342 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:53:58 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3281702155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.343 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.346 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.346 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088a5157-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.347 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap088a5157-3f, col_values=(('external_ids', {'iface-id': '088a5157-3fe2-4543-a6b7-e25cc34ed035', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:13:3a', 'vm-uuid': '1fb96b71-bbd5-4ced-a830-30ae58784b0d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:53:58 compute-0 NetworkManager[48916]: <info>  [1763783638.3503] manager: (tap088a5157-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.352 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df179acc10ec7e8b3b80197d83b9e4317e4010ac2a2b9621728ff6eace1e3ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.356 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df179acc10ec7e8b3b80197d83b9e4317e4010ac2a2b9621728ff6eace1e3ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df179acc10ec7e8b3b80197d83b9e4317e4010ac2a2b9621728ff6eace1e3ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.357 253465 INFO os_vif [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:13:3a,bridge_name='br-int',has_traffic_filtering=True,id=088a5157-3fe2-4543-a6b7-e25cc34ed035,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap088a5157-3f')
Nov 22 03:53:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3df179acc10ec7e8b3b80197d83b9e4317e4010ac2a2b9621728ff6eace1e3ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.362 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:58 compute-0 podman[265519]: 2025-11-22 03:53:58.376699557 +0000 UTC m=+0.159814334 container init 6109376c6ca566b2e6a4ae039da44c35726db3cfbe17828d29845915c5a3c3a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gauss, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.381 253465 DEBUG nova.storage.rbd_utils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] rbd image 1ed5ef11-db1e-4030-bda2-67534d28d084_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:58 compute-0 podman[265519]: 2025-11-22 03:53:58.385274155 +0000 UTC m=+0.168388912 container start 6109376c6ca566b2e6a4ae039da44c35726db3cfbe17828d29845915c5a3c3a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gauss, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.386 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:58 compute-0 podman[265519]: 2025-11-22 03:53:58.388406369 +0000 UTC m=+0.171521136 container attach 6109376c6ca566b2e6a4ae039da44c35726db3cfbe17828d29845915c5a3c3a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gauss, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.415 253465 DEBUG nova.network.neutron [req-755e967b-3876-4a6f-95a4-0a5267ad61e7 req-2678962f-1304-4858-aa6d-5df22f7b0bf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Updated VIF entry in instance network info cache for port 088a5157-3fe2-4543-a6b7-e25cc34ed035. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.416 253465 DEBUG nova.network.neutron [req-755e967b-3876-4a6f-95a4-0a5267ad61e7 req-2678962f-1304-4858-aa6d-5df22f7b0bf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Updating instance_info_cache with network_info: [{"id": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "address": "fa:16:3e:6d:13:3a", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap088a5157-3f", "ovs_interfaceid": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.447 253465 DEBUG oslo_concurrency.lockutils [req-755e967b-3876-4a6f-95a4-0a5267ad61e7 req-2678962f-1304-4858-aa6d-5df22f7b0bf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-1fb96b71-bbd5-4ced-a830-30ae58784b0d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.467 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.467 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.467 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] No VIF found with MAC fa:16:3e:6d:13:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.468 253465 INFO nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Using config drive
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.490 253465 DEBUG nova.storage.rbd_utils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:58 compute-0 ceph-mon[75011]: osdmap e190: 3 total, 3 up, 3 in
Nov 22 03:53:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/179846119' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:53:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1502723514' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:53:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3281702155' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:53:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:53:58 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3625990341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.821 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.823 253465 DEBUG nova.virt.libvirt.vif [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-16801461',display_name='tempest-VolumesExtendAttachedTest-instance-16801461',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-16801461',id=4,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPk6mCo0ew4H3dCxLB1NJE5trOKL3afdpkxvu5h6EBFin9bk/PJWM29cyXCphBdJi6MaYjq7H3PGH/nsiHbgmOUIOKfv/uY0hu+mxaA6Y8nTiXyLeETLbkRHxDqZN/YXgA==',key_name='tempest-keypair-1370176256',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72120bdf58ce486690a1373cf734f4d9',ramdisk_id='',reservation_id='r-nwdvz0a5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1122643524',owner_user_name='tempest-VolumesExtendAttachedTest-1122643524-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:53:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='08c5b8ebe09040fbb8538108ea659e5c',uuid=1ed5ef11-db1e-4030-bda2-67534d28d084,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0192a6ee-e052-42ec-ba5d-39345610c279", "address": "fa:16:3e:7d:2f:20", "network": {"id": "088b40f3-90e0-4306-ab07-be0dfd55e4f4", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1554168595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72120bdf58ce486690a1373cf734f4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0192a6ee-e0", "ovs_interfaceid": "0192a6ee-e052-42ec-ba5d-39345610c279", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.823 253465 DEBUG nova.network.os_vif_util [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Converting VIF {"id": "0192a6ee-e052-42ec-ba5d-39345610c279", "address": "fa:16:3e:7d:2f:20", "network": {"id": "088b40f3-90e0-4306-ab07-be0dfd55e4f4", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1554168595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72120bdf58ce486690a1373cf734f4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0192a6ee-e0", "ovs_interfaceid": "0192a6ee-e052-42ec-ba5d-39345610c279", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.824 253465 DEBUG nova.network.os_vif_util [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:2f:20,bridge_name='br-int',has_traffic_filtering=True,id=0192a6ee-e052-42ec-ba5d-39345610c279,network=Network(088b40f3-90e0-4306-ab07-be0dfd55e4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0192a6ee-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.826 253465 DEBUG nova.objects.instance [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1ed5ef11-db1e-4030-bda2-67534d28d084 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.845 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] End _get_guest_xml xml=<domain type="kvm">
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <uuid>1ed5ef11-db1e-4030-bda2-67534d28d084</uuid>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <name>instance-00000004</name>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <metadata>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:name>tempest-VolumesExtendAttachedTest-instance-16801461</nova:name>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 03:53:57</nova:creationTime>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:user uuid="08c5b8ebe09040fbb8538108ea659e5c">tempest-VolumesExtendAttachedTest-1122643524-project-member</nova:user>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:project uuid="72120bdf58ce486690a1373cf734f4d9">tempest-VolumesExtendAttachedTest-1122643524</nova:project>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <nova:port uuid="0192a6ee-e052-42ec-ba5d-39345610c279">
Nov 22 03:53:58 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </metadata>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <system>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <entry name="serial">1ed5ef11-db1e-4030-bda2-67534d28d084</entry>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <entry name="uuid">1ed5ef11-db1e-4030-bda2-67534d28d084</entry>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </system>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <os>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </os>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <features>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <apic/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </features>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </clock>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/1ed5ef11-db1e-4030-bda2-67534d28d084_disk">
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </source>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/1ed5ef11-db1e-4030-bda2-67534d28d084_disk.config">
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </source>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:53:58 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:7d:2f:20"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <target dev="tap0192a6ee-e0"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/1ed5ef11-db1e-4030-bda2-67534d28d084/console.log" append="off"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </serial>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <video>
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </video>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 03:53:58 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 03:53:58 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 03:53:58 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:53:58 compute-0 nova_compute[253461]: </domain>
Nov 22 03:53:58 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
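The domain XML dumped above carries everything needed to cross-check the rest of this boot; a minimal stdlib sketch, assuming the XML was saved to a hypothetical local file instance-00000004.xml:

    import xml.etree.ElementTree as ET

    root = ET.parse("instance-00000004.xml").getroot()  # hypothetical capture of the XML above
    # Both disks are RBD-backed network disks, as in the <devices> section.
    for disk in root.findall("./devices/disk"):
        source, target = disk.find("source"), disk.find("target")
        print(disk.get("device"), source.get("protocol"),
              source.get("name"), "->", target.get("dev"))
    # The tap name here matches the OVS port plugged a few lines below.
    for tgt in root.findall("./devices/interface/target"):
        print("vif:", tgt.get("dev"))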
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.846 253465 DEBUG nova.compute.manager [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Preparing to wait for external event network-vif-plugged-0192a6ee-e052-42ec-ba5d-39345610c279 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.847 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Acquiring lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.847 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.847 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
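The acquire/release pair above is oslo.concurrency's per-instance event lock; a minimal sketch of the same API, with the lock name copied from the log and a placeholder critical section:

    from oslo_concurrency import lockutils

    # lockutils.lock() is the context-manager form of the
    # @lockutils.synchronized(...) decorator seen throughout nova.
    with lockutils.lock("1ed5ef11-db1e-4030-bda2-67534d28d084-events"):
        pass  # create-or-get the pending network-vif-plugged event here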
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.848 253465 DEBUG nova.virt.libvirt.vif [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-16801461',display_name='tempest-VolumesExtendAttachedTest-instance-16801461',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-16801461',id=4,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPk6mCo0ew4H3dCxLB1NJE5trOKL3afdpkxvu5h6EBFin9bk/PJWM29cyXCphBdJi6MaYjq7H3PGH/nsiHbgmOUIOKfv/uY0hu+mxaA6Y8nTiXyLeETLbkRHxDqZN/YXgA==',key_name='tempest-keypair-1370176256',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='72120bdf58ce486690a1373cf734f4d9',ramdisk_id='',reservation_id='r-nwdvz0a5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-1122643524',owner_user_name='tempest-VolumesExtendAttachedTest-1122643524-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:53:53Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='08c5b8ebe09040fbb8538108ea659e5c',uuid=1ed5ef11-db1e-4030-bda2-67534d28d084,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0192a6ee-e052-42ec-ba5d-39345610c279", "address": "fa:16:3e:7d:2f:20", "network": {"id": "088b40f3-90e0-4306-ab07-be0dfd55e4f4", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1554168595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72120bdf58ce486690a1373cf734f4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0192a6ee-e0", "ovs_interfaceid": "0192a6ee-e052-42ec-ba5d-39345610c279", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.849 253465 DEBUG nova.network.os_vif_util [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Converting VIF {"id": "0192a6ee-e052-42ec-ba5d-39345610c279", "address": "fa:16:3e:7d:2f:20", "network": {"id": "088b40f3-90e0-4306-ab07-be0dfd55e4f4", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1554168595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72120bdf58ce486690a1373cf734f4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0192a6ee-e0", "ovs_interfaceid": "0192a6ee-e052-42ec-ba5d-39345610c279", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.850 253465 DEBUG nova.network.os_vif_util [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:2f:20,bridge_name='br-int',has_traffic_filtering=True,id=0192a6ee-e052-42ec-ba5d-39345610c279,network=Network(088b40f3-90e0-4306-ab07-be0dfd55e4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0192a6ee-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.850 253465 DEBUG os_vif [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:2f:20,bridge_name='br-int',has_traffic_filtering=True,id=0192a6ee-e052-42ec-ba5d-39345610c279,network=Network(088b40f3-90e0-4306-ab07-be0dfd55e4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0192a6ee-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.851 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.852 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.853 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.858 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.859 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0192a6ee-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.860 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0192a6ee-e0, col_values=(('external_ids', {'iface-id': '0192a6ee-e052-42ec-ba5d-39345610c279', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:2f:20', 'vm-uuid': '1ed5ef11-db1e-4030-bda2-67534d28d084'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
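The AddPortCommand/DbSetCommand transaction above is plain ovsdbapp; a minimal sketch against a local Open_vSwitch database (the tcp:127.0.0.1:6640 endpoint is an assumption; port name and external_ids are copied from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Endpoint is an assumption; deployments often expose a punix: socket instead.
    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One atomic transaction: add the tap to br-int, then stamp the
    # external_ids OVN uses to match the port to its logical switch port.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap0192a6ee-e0", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap0192a6ee-e0",
            ("external_ids", {
                "iface-id": "0192a6ee-e052-42ec-ba5d-39345610c279",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:7d:2f:20",
                "vm-uuid": "1ed5ef11-db1e-4030-bda2-67534d28d084"})))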
Nov 22 03:53:58 compute-0 NetworkManager[48916]: <info>  [1763783638.9058] manager: (tap0192a6ee-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.912 253465 INFO nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Creating config drive at /var/lib/nova/instances/1fb96b71-bbd5-4ced-a830-30ae58784b0d/disk.config
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.922 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1fb96b71-bbd5-4ced-a830-30ae58784b0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsvrah7su execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.946 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:58 compute-0 nova_compute[253461]: 2025-11-22 03:53:58.956 253465 INFO os_vif [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:2f:20,bridge_name='br-int',has_traffic_filtering=True,id=0192a6ee-e052-42ec-ba5d-39345610c279,network=Network(088b40f3-90e0-4306-ab07-be0dfd55e4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0192a6ee-e0')
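The successful plug above goes through os-vif's public entry points; a minimal self-contained sketch, assuming a reachable ovsdb (field values copied from the "Converted object" lines):

    import os_vif
    from os_vif.objects import instance_info, network, vif as vif_obj

    os_vif.initialize()  # loads the ovs plugin, among others
    vif = vif_obj.VIFOpenVSwitch(
        id="0192a6ee-e052-42ec-ba5d-39345610c279",
        address="fa:16:3e:7d:2f:20",
        vif_name="tap0192a6ee-e0",
        bridge_name="br-int",
        network=network.Network(id="088b40f3-90e0-4306-ab07-be0dfd55e4f4",
                                bridge="br-int"),
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id="0192a6ee-e052-42ec-ba5d-39345610c279"))
    info = instance_info.InstanceInfo(
        uuid="1ed5ef11-db1e-4030-bda2-67534d28d084",
        name="instance-00000004")
    os_vif.plug(vif, info)  # needs a running ovsdb-server to succeed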
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.013 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.013 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.014 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] No VIF found with MAC fa:16:3e:7d:2f:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.015 253465 INFO nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Using config drive
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.047 253465 DEBUG nova.storage.rbd_utils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] rbd image 1ed5ef11-db1e-4030-bda2-67534d28d084_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.054 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1fb96b71-bbd5-4ced-a830-30ae58784b0d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsvrah7su" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.076 253465 DEBUG nova.storage.rbd_utils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] rbd image 1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.079 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1fb96b71-bbd5-4ced-a830-30ae58784b0d/disk.config 1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.137 253465 DEBUG nova.network.neutron [req-9a60e287-6c17-4b7e-9fa3-c72c520e04d5 req-894ba909-7a97-4579-a199-feb410ff2822 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Updated VIF entry in instance network info cache for port 0192a6ee-e052-42ec-ba5d-39345610c279. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.138 253465 DEBUG nova.network.neutron [req-9a60e287-6c17-4b7e-9fa3-c72c520e04d5 req-894ba909-7a97-4579-a199-feb410ff2822 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Updating instance_info_cache with network_info: [{"id": "0192a6ee-e052-42ec-ba5d-39345610c279", "address": "fa:16:3e:7d:2f:20", "network": {"id": "088b40f3-90e0-4306-ab07-be0dfd55e4f4", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1554168595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72120bdf58ce486690a1373cf734f4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0192a6ee-e0", "ovs_interfaceid": "0192a6ee-e052-42ec-ba5d-39345610c279", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.162 253465 DEBUG oslo_concurrency.lockutils [req-9a60e287-6c17-4b7e-9fa3-c72c520e04d5 req-894ba909-7a97-4579-a199-feb410ff2822 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-1ed5ef11-db1e-4030-bda2-67534d28d084" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.231 253465 DEBUG oslo_concurrency.processutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1fb96b71-bbd5-4ced-a830-30ae58784b0d/disk.config 1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.231 253465 INFO nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Deleting local config drive /var/lib/nova/instances/1fb96b71-bbd5-4ced-a830-30ae58784b0d/disk.config because it was imported into RBD.
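The config-drive round trip for 1fb96b71 above (mkisofs to a local ISO, rbd import into the vms pool, delete the local copy) reduces to two subprocesses; a minimal sketch with the flags copied from the log (/tmp/tmpsvrah7su was nova's transient metadata directory):

    import os
    import subprocess

    iso = "/var/lib/nova/instances/1fb96b71-bbd5-4ced-a830-30ae58784b0d/disk.config"
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpsvrah7su"],
        check=True)
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso,
         "1fb96b71-bbd5-4ced-a830-30ae58784b0d_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)
    os.remove(iso)  # mirrors the "Deleting local config drive" step above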
Nov 22 03:53:59 compute-0 kernel: tap088a5157-3f: entered promiscuous mode
Nov 22 03:53:59 compute-0 NetworkManager[48916]: <info>  [1763783639.2814] manager: (tap088a5157-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Nov 22 03:53:59 compute-0 ovn_controller[152691]: 2025-11-22T03:53:59Z|00054|binding|INFO|Claiming lport 088a5157-3fe2-4543-a6b7-e25cc34ed035 for this chassis.
Nov 22 03:53:59 compute-0 ovn_controller[152691]: 2025-11-22T03:53:59Z|00055|binding|INFO|088a5157-3fe2-4543-a6b7-e25cc34ed035: Claiming fa:16:3e:6d:13:3a 10.100.0.9
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.287 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.294 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:13:3a 10.100.0.9'], port_security=['fa:16:3e:6d:13:3a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1fb96b71-bbd5-4ced-a830-30ae58784b0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d738119f-cffc-4235-aea9-bf290e9aca77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7db9d09fb4a241818f75d0198445d55c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6a9d1a2c-1ada-4410-8b2b-640ade242853', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=735d6d4b-7c7b-4c2a-a66c-4ccd96675388, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=088a5157-3fe2-4543-a6b7-e25cc34ed035) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.295 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 088a5157-3fe2-4543-a6b7-e25cc34ed035 in datapath d738119f-cffc-4235-aea9-bf290e9aca77 bound to our chassis
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.296 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d738119f-cffc-4235-aea9-bf290e9aca77
Nov 22 03:53:59 compute-0 ovn_controller[152691]: 2025-11-22T03:53:59Z|00056|binding|INFO|Setting lport 088a5157-3fe2-4543-a6b7-e25cc34ed035 ovn-installed in OVS
Nov 22 03:53:59 compute-0 ovn_controller[152691]: 2025-11-22T03:53:59Z|00057|binding|INFO|Setting lport 088a5157-3fe2-4543-a6b7-e25cc34ed035 up in Southbound
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.307 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:59 compute-0 systemd-machined[215728]: New machine qemu-3-instance-00000003.
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.307 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0802a6-b636-4624-8035-dc735491a88f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.308 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd738119f-c1 in ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
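The VETH provisioning above can be sketched with pyroute2, the same netlink library neutron's privsep daemon drives; a minimal sketch (requires root; names copied from the log):

    from pyroute2 import IPRoute, netns

    ns = "ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77"
    netns.create(ns)  # raises OSError if the namespace already exists
    with IPRoute() as ipr:
        # veth pair: -c0 stays in the root namespace, -c1 moves into ovnmeta-*
        ipr.link("add", ifname="tapd738119f-c0", kind="veth",
                 peer="tapd738119f-c1")
        peer = ipr.link_lookup(ifname="tapd738119f-c1")[0]
        ipr.link("set", index=peer, net_ns_fd=ns)
        ipr.link("set", index=ipr.link_lookup(ifname="tapd738119f-c0")[0],
                 state="up")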
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.311 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd738119f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.311 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3bfdbcaa-72c6-4e4b-a9ad-7bcb60ae859e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.313 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[993a0e98-8444-4338-ade8-dbe237f8c44b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.315 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:59 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.332 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[a97af5bf-54dc-4d30-8684-579d760407a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 systemd-udevd[265704]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:53:59 compute-0 NetworkManager[48916]: <info>  [1763783639.3632] device (tap088a5157-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:53:59 compute-0 NetworkManager[48916]: <info>  [1763783639.3643] device (tap088a5157-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.363 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[76b476cb-182f-464a-b84d-0ddaf5643c5d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.397 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[03fb7490-1a7d-4553-8f23-3677bffe1e3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 NetworkManager[48916]: <info>  [1763783639.4047] manager: (tapd738119f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.404 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3a2a67-621b-4bea-9e2f-1a21a4465496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.412 253465 INFO nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Creating config drive at /var/lib/nova/instances/1ed5ef11-db1e-4030-bda2-67534d28d084/disk.config
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]: {
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "osd_id": 1,
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "type": "bluestore"
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:     },
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "osd_id": 0,
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "type": "bluestore"
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:     },
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "osd_id": 2,
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:         "type": "bluestore"
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]:     }
Nov 22 03:53:59 compute-0 inspiring_gauss[265538]: }
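The JSON block above matches the shape of ceph-volume lvm list --format json output; a minimal sketch of consuming it, assuming it was captured to a hypothetical osds.json:

    import json

    # Shape as printed above: {osd_uuid: {ceph_fsid, device, osd_id, type}}
    with open("osds.json") as f:  # hypothetical capture of the JSON above
        osds = json.load(f)
    for osd in sorted(osds.values(), key=lambda o: o["osd_id"]):
        assert osd["type"] == "bluestore"
        print(f"osd.{osd['osd_id']}: {osd['device']} (fsid {osd['ceph_fsid']})")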
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.427 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1ed5ef11-db1e-4030-bda2-67534d28d084/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmporv8z9_2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.443 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6c7fa4-099a-459a-9206-030a348854f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.446 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec97f00-dce0-4dd2-89fc-50907533ae49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 NetworkManager[48916]: <info>  [1763783639.4666] device (tapd738119f-c0): carrier: link connected
Nov 22 03:53:59 compute-0 systemd[1]: libpod-6109376c6ca566b2e6a4ae039da44c35726db3cfbe17828d29845915c5a3c3a1.scope: Deactivated successfully.
Nov 22 03:53:59 compute-0 systemd[1]: libpod-6109376c6ca566b2e6a4ae039da44c35726db3cfbe17828d29845915c5a3c3a1.scope: Consumed 1.056s CPU time.
Nov 22 03:53:59 compute-0 podman[265519]: 2025-11-22 03:53:59.469767176 +0000 UTC m=+1.252881943 container died 6109376c6ca566b2e6a4ae039da44c35726db3cfbe17828d29845915c5a3c3a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.473 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[486b3976-5d64-455b-b465-6e8040e1dc2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.492 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[82c3ffc7-e10e-4372-a3da-f00f62e571c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd738119f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:09:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389206, 'reachable_time': 35467, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265749, 'error': None, 'target': 'ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3df179acc10ec7e8b3b80197d83b9e4317e4010ac2a2b9621728ff6eace1e3ad-merged.mount: Deactivated successfully.
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.512 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[463252b3-1087-48b4-9172-998953ffe8e0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:946'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389206, 'tstamp': 389206}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265760, 'error': None, 'target': 'ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.528 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5e7b3ac8-e1d3-4aec-9663-31ca013cce97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd738119f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:09:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389206, 'reachable_time': 35467, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265761, 'error': None, 'target': 'ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 podman[265519]: 2025-11-22 03:53:59.535744445 +0000 UTC m=+1.318859202 container remove 6109376c6ca566b2e6a4ae039da44c35726db3cfbe17828d29845915c5a3c3a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_gauss, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:53:59 compute-0 systemd[1]: libpod-conmon-6109376c6ca566b2e6a4ae039da44c35726db3cfbe17828d29845915c5a3c3a1.scope: Deactivated successfully.
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.559 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1ed5ef11-db1e-4030-bda2-67534d28d084/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmporv8z9_2" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:59 compute-0 sudo[265356]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.571 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[a079d926-4037-43b1-922f-5974df8624a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:53:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:53:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:53:59 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev fffe64da-7260-460d-b874-2ac3033c0db4 does not exist
Nov 22 03:53:59 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 3ef058cd-0c72-4b52-979b-6af4616b53f3 does not exist
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.597 253465 DEBUG nova.storage.rbd_utils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] rbd image 1ed5ef11-db1e-4030-bda2-67534d28d084_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.604 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1ed5ef11-db1e-4030-bda2-67534d28d084/disk.config 1ed5ef11-db1e-4030-bda2-67534d28d084_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.626 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8317e4-4dbc-4441-98aa-e7ea73c996c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.628 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd738119f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.628 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.629 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd738119f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:59 compute-0 NetworkManager[48916]: <info>  [1763783639.6317] manager: (tapd738119f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.631 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:59 compute-0 kernel: tapd738119f-c0: entered promiscuous mode
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.637 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.638 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd738119f-c0, col_values=(('external_ids', {'iface-id': 'cfd2c61f-02e2-42f5-ba0c-9da1b93469ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:53:59 compute-0 ovn_controller[152691]: 2025-11-22T03:53:59Z|00058|binding|INFO|Releasing lport cfd2c61f-02e2-42f5-ba0c-9da1b93469ce from this chassis (sb_readonly=0)
Nov 22 03:53:59 compute-0 sudo[265781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:53:59 compute-0 sudo[265781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:59 compute-0 sudo[265781]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.657 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:59 compute-0 ceph-mon[75011]: pgmap v1061: 305 pgs: 305 active+clean; 180 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 11 MiB/s wr, 231 op/s
Nov 22 03:53:59 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3625990341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:53:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:53:59 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.661 253465 DEBUG nova.compute.manager [req-f959e1c6-fad6-4981-8703-d0990d6c8a34 req-6bfd394e-f9c9-4e17-b4a5-32341f395237 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Received event network-vif-plugged-088a5157-3fe2-4543-a6b7-e25cc34ed035 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.662 253465 DEBUG oslo_concurrency.lockutils [req-f959e1c6-fad6-4981-8703-d0990d6c8a34 req-6bfd394e-f9c9-4e17-b4a5-32341f395237 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.662 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d738119f-cffc-4235-aea9-bf290e9aca77.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d738119f-cffc-4235-aea9-bf290e9aca77.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.662 253465 DEBUG oslo_concurrency.lockutils [req-f959e1c6-fad6-4981-8703-d0990d6c8a34 req-6bfd394e-f9c9-4e17-b4a5-32341f395237 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.663 253465 DEBUG oslo_concurrency.lockutils [req-f959e1c6-fad6-4981-8703-d0990d6c8a34 req-6bfd394e-f9c9-4e17-b4a5-32341f395237 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.663 253465 DEBUG nova.compute.manager [req-f959e1c6-fad6-4981-8703-d0990d6c8a34 req-6bfd394e-f9c9-4e17-b4a5-32341f395237 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Processing event network-vif-plugged-088a5157-3fe2-4543-a6b7-e25cc34ed035 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.663 253465 DEBUG nova.compute.manager [req-f959e1c6-fad6-4981-8703-d0990d6c8a34 req-6bfd394e-f9c9-4e17-b4a5-32341f395237 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Received event network-vif-plugged-088a5157-3fe2-4543-a6b7-e25cc34ed035 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.663 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3b01546b-9d43-4279-8e82-2c9781b63ac7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.664 253465 DEBUG oslo_concurrency.lockutils [req-f959e1c6-fad6-4981-8703-d0990d6c8a34 req-6bfd394e-f9c9-4e17-b4a5-32341f395237 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.664 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: global
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-d738119f-cffc-4235-aea9-bf290e9aca77
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/d738119f-cffc-4235-aea9-bf290e9aca77.pid.haproxy
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID d738119f-cffc-4235-aea9-bf290e9aca77
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.664 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77', 'env', 'PROCESS_TAG=haproxy-d738119f-cffc-4235-aea9-bf290e9aca77', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d738119f-cffc-4235-aea9-bf290e9aca77.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.664 253465 DEBUG oslo_concurrency.lockutils [req-f959e1c6-fad6-4981-8703-d0990d6c8a34 req-6bfd394e-f9c9-4e17-b4a5-32341f395237 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.665 253465 DEBUG oslo_concurrency.lockutils [req-f959e1c6-fad6-4981-8703-d0990d6c8a34 req-6bfd394e-f9c9-4e17-b4a5-32341f395237 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.666 253465 DEBUG nova.compute.manager [req-f959e1c6-fad6-4981-8703-d0990d6c8a34 req-6bfd394e-f9c9-4e17-b4a5-32341f395237 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] No waiting events found dispatching network-vif-plugged-088a5157-3fe2-4543-a6b7-e25cc34ed035 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.666 253465 WARNING nova.compute.manager [req-f959e1c6-fad6-4981-8703-d0990d6c8a34 req-6bfd394e-f9c9-4e17-b4a5-32341f395237 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Received unexpected event network-vif-plugged-088a5157-3fe2-4543-a6b7-e25cc34ed035 for instance with vm_state building and task_state spawning.
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.666 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Nov 22 03:53:59 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Nov 22 03:53:59 compute-0 sudo[265822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:53:59 compute-0 sudo[265822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:53:59 compute-0 sudo[265822]: pam_unix(sudo:session): session closed for user root
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.844 253465 DEBUG oslo_concurrency.processutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1ed5ef11-db1e-4030-bda2-67534d28d084/disk.config 1ed5ef11-db1e-4030-bda2-67534d28d084_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.844 253465 INFO nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Deleting local config drive /var/lib/nova/instances/1ed5ef11-db1e-4030-bda2-67534d28d084/disk.config because it was imported into RBD.
Nov 22 03:53:59 compute-0 kernel: tap0192a6ee-e0: entered promiscuous mode
Nov 22 03:53:59 compute-0 systemd-udevd[265739]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:53:59 compute-0 NetworkManager[48916]: <info>  [1763783639.9054] manager: (tap0192a6ee-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.907 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:53:59 compute-0 ovn_controller[152691]: 2025-11-22T03:53:59Z|00059|binding|INFO|Claiming lport 0192a6ee-e052-42ec-ba5d-39345610c279 for this chassis.
Nov 22 03:53:59 compute-0 ovn_controller[152691]: 2025-11-22T03:53:59Z|00060|binding|INFO|0192a6ee-e052-42ec-ba5d-39345610c279: Claiming fa:16:3e:7d:2f:20 10.100.0.10
Nov 22 03:53:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:53:59.918 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:2f:20 10.100.0.10'], port_security=['fa:16:3e:7d:2f:20 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1ed5ef11-db1e-4030-bda2-67534d28d084', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088b40f3-90e0-4306-ab07-be0dfd55e4f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72120bdf58ce486690a1373cf734f4d9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4d4a79a3-8cf0-4f3a-8d88-6b34e08377f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc03686c-7407-4c79-9f8a-d90d96b47d90, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=0192a6ee-e052-42ec-ba5d-39345610c279) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:53:59 compute-0 NetworkManager[48916]: <info>  [1763783639.9207] device (tap0192a6ee-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:53:59 compute-0 NetworkManager[48916]: <info>  [1763783639.9222] device (tap0192a6ee-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.925 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783639.92541, 1fb96b71-bbd5-4ced-a830-30ae58784b0d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.926 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] VM Started (Lifecycle Event)
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.928 253465 DEBUG nova.compute.manager [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 03:53:59 compute-0 systemd-machined[215728]: New machine qemu-4-instance-00000004.
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.936 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.943 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.943 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763783624.9430528, 7534ee27-8821-44c9-b66c-83a8f2e43711 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.944 253465 INFO nova.compute.manager [-] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] VM Stopped (Lifecycle Event)
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.948 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.950 253465 INFO nova.virt.libvirt.driver [-] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Instance spawned successfully.
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.951 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 03:53:59 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.980 253465 DEBUG nova.compute.manager [None req-020448e8-e2a9-413d-bb91-487b1b010285 - - - - - -] [instance: 7534ee27-8821-44c9-b66c-83a8f2e43711] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.984 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.985 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783639.9255135, 1fb96b71-bbd5-4ced-a830-30ae58784b0d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:53:59 compute-0 nova_compute[253461]: 2025-11-22 03:53:59.985 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] VM Paused (Lifecycle Event)
Nov 22 03:54:00 compute-0 ovn_controller[152691]: 2025-11-22T03:54:00Z|00061|binding|INFO|Setting lport 0192a6ee-e052-42ec-ba5d-39345610c279 ovn-installed in OVS
Nov 22 03:54:00 compute-0 ovn_controller[152691]: 2025-11-22T03:54:00Z|00062|binding|INFO|Setting lport 0192a6ee-e052-42ec-ba5d-39345610c279 up in Southbound
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.007 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.010 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.011 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.011 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.011 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.012 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.012 253465 DEBUG nova.virt.libvirt.driver [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.041 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.042 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783639.9345958, 1fb96b71-bbd5-4ced-a830-30ae58784b0d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.042 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] VM Resumed (Lifecycle Event)
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.062 253465 INFO nova.compute.manager [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Took 7.51 seconds to spawn the instance on the hypervisor.
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.062 253465 DEBUG nova.compute.manager [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.063 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.070 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:54:00 compute-0 podman[265940]: 2025-11-22 03:54:00.078307943 +0000 UTC m=+0.079384165 container create 982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.105 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:54:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 180 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 9.3 MiB/s wr, 201 op/s
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.123 253465 INFO nova.compute.manager [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Took 8.99 seconds to build instance.
Nov 22 03:54:00 compute-0 systemd[1]: Started libpod-conmon-982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef.scope.
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.139 253465 DEBUG oslo_concurrency.lockutils [None req-6d730eca-b536-4201-bff1-4f766869f510 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:00 compute-0 podman[265940]: 2025-11-22 03:54:00.055099882 +0000 UTC m=+0.056176114 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:54:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:54:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:54:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2624895307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:54:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:54:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2624895307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:54:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbe08d48a0dc13f12694612b83cb83eb9cf2e3d8b9a560f486fc61eeabc3a99c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:00 compute-0 podman[265940]: 2025-11-22 03:54:00.196682633 +0000 UTC m=+0.197758935 container init 982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:54:00 compute-0 podman[265940]: 2025-11-22 03:54:00.206710995 +0000 UTC m=+0.207787247 container start 982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 03:54:00 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[265961]: [NOTICE]   (265965) : New worker (265983) forked
Nov 22 03:54:00 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[265961]: [NOTICE]   (265965) : Loading success.
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.287 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 0192a6ee-e052-42ec-ba5d-39345610c279 in datapath 088b40f3-90e0-4306-ab07-be0dfd55e4f4 unbound from our chassis
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.290 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 088b40f3-90e0-4306-ab07-be0dfd55e4f4
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.301 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba9b4a3-8e20-4361-913d-65ec99c0e4d1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.303 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap088b40f3-91 in ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.305 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap088b40f3-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.305 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[caea19f1-9a24-47a2-a1c6-1935d4619e2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.306 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2356670f-a779-4e3a-9923-308a0c9319f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.317 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[00afe8b8-754e-4178-bb49-8e0045dc5ac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.339 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[98e656d9-eba1-4c37-bfa7-b4872b3c0a5e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.373 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[832bdf36-d68f-4fd4-9dca-13c1e0d01607]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.379 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1f7dcd62-0365-44e6-8f34-d37453a88e58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 NetworkManager[48916]: <info>  [1763783640.3807] manager: (tap088b40f3-90): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.416 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2a1d8b-4b83-4968-be63-9d27a8561a19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.419 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8da736-0151-447d-88fc-c7c1a8e07bf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.437 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783640.4375858, 1ed5ef11-db1e-4030-bda2-67534d28d084 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.438 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] VM Started (Lifecycle Event)
Nov 22 03:54:00 compute-0 NetworkManager[48916]: <info>  [1763783640.4501] device (tap088b40f3-90): carrier: link connected
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.456 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[dc62ccf0-34c5-49a6-a855-22749609cdaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.463 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.468 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783640.4377894, 1ed5ef11-db1e-4030-bda2-67534d28d084 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.468 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] VM Paused (Lifecycle Event)
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.482 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8c990e7b-39a6-4a4f-8bcb-df26f6605cf9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088b40f3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:af:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389304, 'reachable_time': 28922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266028, 'error': None, 'target': 'ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.493 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.496 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.501 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[339f1ebe-f814-4072-b309-2170c308c036]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe77:afbe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389304, 'tstamp': 389304}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266029, 'error': None, 'target': 'ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.517 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.524 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3b89e4ff-1b68-45d3-bccf-60a6e9cd6028]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap088b40f3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:af:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389304, 'reachable_time': 28922, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266030, 'error': None, 'target': 'ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.558 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c55190af-6075-4c8a-84f4-cb0c540189ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Nov 22 03:54:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Nov 22 03:54:00 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.626 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d537bc97-f71a-431c-936a-83e034ed8e28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.627 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088b40f3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.627 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.628 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap088b40f3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.629 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:00 compute-0 NetworkManager[48916]: <info>  [1763783640.6303] manager: (tap088b40f3-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 22 03:54:00 compute-0 kernel: tap088b40f3-90: entered promiscuous mode
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.631 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.631 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap088b40f3-90, col_values=(('external_ids', {'iface-id': '5d8a43bc-3755-462d-bbf0-5366a00c641c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:54:00 compute-0 ovn_controller[152691]: 2025-11-22T03:54:00Z|00063|binding|INFO|Releasing lport 5d8a43bc-3755-462d-bbf0-5366a00c641c from this chassis (sb_readonly=0)
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.647 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.648 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/088b40f3-90e0-4306-ab07-be0dfd55e4f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/088b40f3-90e0-4306-ab07-be0dfd55e4f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.649 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[dade018e-ad81-48c8-805d-82fd759b9889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.650 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: global
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-088b40f3-90e0-4306-ab07-be0dfd55e4f4
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/088b40f3-90e0-4306-ab07-be0dfd55e4f4.pid.haproxy
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 088b40f3-90e0-4306-ab07-be0dfd55e4f4
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 03:54:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:00.650 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4', 'env', 'PROCESS_TAG=haproxy-088b40f3-90e0-4306-ab07-be0dfd55e4f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/088b40f3-90e0-4306-ab07-be0dfd55e4f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
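The rendered haproxy_cfg above is handed to haproxy inside the port's OVN metadata namespace via rootwrap, per the command list just logged. A stripped-down sketch of that spawn (root required; the namespace and config file must already exist). The `-c` validation pass is an extra sanity check added here, not something the agent does at this point:

```python
import subprocess

NETNS = "ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4"
CFG = ("/var/lib/neutron/ovn-metadata-proxy/"
       "088b40f3-90e0-4306-ab07-be0dfd55e4f4.conf")

# Optional: validate the generated config before daemonizing.
subprocess.check_call(["haproxy", "-c", "-f", CFG])

# Equivalent of the logged rootwrap invocation: run haproxy inside the
# metadata namespace with the PROCESS_TAG marker the agent uses to
# find the process again later.
subprocess.check_call([
    "ip", "netns", "exec", NETNS,
    "env", "PROCESS_TAG=haproxy-088b40f3-90e0-4306-ab07-be0dfd55e4f4",
    "haproxy", "-f", CFG,
])
```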
Nov 22 03:54:00 compute-0 ceph-mon[75011]: osdmap e191: 3 total, 3 up, 3 in
Nov 22 03:54:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2624895307' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:54:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2624895307' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:54:00 compute-0 ceph-mon[75011]: osdmap e192: 3 total, 3 up, 3 in
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.677 253465 DEBUG nova.compute.manager [req-6caa874a-d510-4c26-9a2a-c13afc820669 req-3c58389a-3181-4fcf-b28e-8df7c8fbf865 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Received event network-vif-plugged-0192a6ee-e052-42ec-ba5d-39345610c279 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.678 253465 DEBUG oslo_concurrency.lockutils [req-6caa874a-d510-4c26-9a2a-c13afc820669 req-3c58389a-3181-4fcf-b28e-8df7c8fbf865 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.678 253465 DEBUG oslo_concurrency.lockutils [req-6caa874a-d510-4c26-9a2a-c13afc820669 req-3c58389a-3181-4fcf-b28e-8df7c8fbf865 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.678 253465 DEBUG oslo_concurrency.lockutils [req-6caa874a-d510-4c26-9a2a-c13afc820669 req-3c58389a-3181-4fcf-b28e-8df7c8fbf865 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.678 253465 DEBUG nova.compute.manager [req-6caa874a-d510-4c26-9a2a-c13afc820669 req-3c58389a-3181-4fcf-b28e-8df7c8fbf865 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Processing event network-vif-plugged-0192a6ee-e052-42ec-ba5d-39345610c279 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.679 253465 DEBUG nova.compute.manager [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
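The acquire/release churn above is nova's per-instance event bookkeeping: each pop of a waiting external event takes a short-lived oslo.concurrency lock named "<uuid>-events". A minimal sketch of that pattern, with a hypothetical events dict standing in for InstanceEvents' internal state:

```python
from oslo_concurrency import lockutils

INSTANCE = "1ed5ef11-db1e-4030-bda2-67534d28d084"
events = {}  # hypothetical stand-in for the real waiting-events map

def pop_event(name):
    # Same shape as _pop_event above: the lock is held only long
    # enough to mutate the shared dict, hence "held 0.000s" in the log.
    with lockutils.lock(f"{INSTANCE}-events"):
        return events.pop(name, None)

pop_event("network-vif-plugged-0192a6ee-e052-42ec-ba5d-39345610c279")
```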
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.682 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783640.6825397, 1ed5ef11-db1e-4030-bda2-67534d28d084 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.683 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] VM Resumed (Lifecycle Event)
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.684 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.688 253465 INFO nova.virt.libvirt.driver [-] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Instance spawned successfully.
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.689 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.714 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.715 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.715 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.716 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.716 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.717 253465 DEBUG nova.virt.libvirt.driver [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
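The six "Found default for ..." lines record which bus/model defaults the libvirt driver registered for this guest; they are persisted as image_hw_* keys in system_metadata (visible verbatim in the instance dump further down this log). Collected here for reference:

```python
# Defaults registered above; keys are image properties, values are what
# the libvirt driver picked for this virtio-based guest.
REGISTERED_DEFAULTS = {
    "hw_cdrom_bus": "sata",
    "hw_disk_bus": "virtio",
    "hw_input_bus": "usb",
    "hw_pointer_model": "usbtablet",
    "hw_video_model": "virtio",
    "hw_vif_model": "virtio",
}

for prop, value in sorted(REGISTERED_DEFAULTS.items()):
    print(f"image_{prop}={value}")
```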
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.721 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.728 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.771 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.785 253465 INFO nova.compute.manager [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Took 7.37 seconds to spawn the instance on the hypervisor.
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.785 253465 DEBUG nova.compute.manager [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.878 253465 INFO nova.compute.manager [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Took 9.59 seconds to build instance.
Nov 22 03:54:00 compute-0 nova_compute[253461]: 2025-11-22 03:54:00.894 253465 DEBUG oslo_concurrency.lockutils [None req-58707104-91bb-4658-a16e-72787054191c 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:01 compute-0 podman[266062]: 2025-11-22 03:54:01.141687956 +0000 UTC m=+0.078432986 container create b6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 03:54:01 compute-0 podman[266062]: 2025-11-22 03:54:01.097486779 +0000 UTC m=+0.034231879 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:54:01 compute-0 systemd[1]: Started libpod-conmon-b6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa.scope.
Nov 22 03:54:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:54:01 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2447908861' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:54:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:54:01 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2447908861' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:54:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:54:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6f1c47fdce606c3378cd8eba7ada7cd092e9f32ccaff324e6743c11a6708505/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 03:54:01 compute-0 podman[266062]: 2025-11-22 03:54:01.249740023 +0000 UTC m=+0.186485053 container init b6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 03:54:01 compute-0 podman[266062]: 2025-11-22 03:54:01.257645213 +0000 UTC m=+0.194390243 container start b6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:54:01 compute-0 neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4[266077]: [NOTICE]   (266081) : New worker (266083) forked
Nov 22 03:54:01 compute-0 neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4[266077]: [NOTICE]   (266081) : Loading success.
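podman's create/init/start trio plus the two haproxy NOTICE lines show the proxy coming up wrapped in a container named after the metadata namespace. A rough, illustrative CLI equivalent of that lifecycle; the real container gets its mounts, namespace and command line from the metadata agent, so the flags here are deliberately minimal assumptions:

```python
import subprocess

IMAGE = ("quay.io/podified-antelope-centos9/"
         "openstack-neutron-metadata-agent-ovn:current-podified")
NAME = "neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4"

# create + start in one step; --replace mirrors how a stale container
# with the same name would be swapped out (assumption, not from the log).
subprocess.check_call(["podman", "run", "--detach", "--replace",
                       "--name", NAME, IMAGE])
print(subprocess.check_output(
    ["podman", "ps", "--filter", f"name={NAME}"]).decode())
```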
Nov 22 03:54:01 compute-0 ceph-mon[75011]: pgmap v1063: 305 pgs: 305 active+clean; 180 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 134 KiB/s rd, 9.3 MiB/s wr, 201 op/s
Nov 22 03:54:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2447908861' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:54:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2447908861' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:54:01 compute-0 nova_compute[253461]: 2025-11-22 03:54:01.814 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 180 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.8 MiB/s wr, 324 op/s
Nov 22 03:54:02 compute-0 nova_compute[253461]: 2025-11-22 03:54:02.759 253465 DEBUG nova.compute.manager [req-6135920b-2362-4547-8d00-eb1755eeb013 req-29e4604e-8ab0-4bf7-bc19-f835ed04dcd5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Received event network-vif-plugged-0192a6ee-e052-42ec-ba5d-39345610c279 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:54:02 compute-0 nova_compute[253461]: 2025-11-22 03:54:02.759 253465 DEBUG oslo_concurrency.lockutils [req-6135920b-2362-4547-8d00-eb1755eeb013 req-29e4604e-8ab0-4bf7-bc19-f835ed04dcd5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:02 compute-0 nova_compute[253461]: 2025-11-22 03:54:02.759 253465 DEBUG oslo_concurrency.lockutils [req-6135920b-2362-4547-8d00-eb1755eeb013 req-29e4604e-8ab0-4bf7-bc19-f835ed04dcd5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:02 compute-0 nova_compute[253461]: 2025-11-22 03:54:02.760 253465 DEBUG oslo_concurrency.lockutils [req-6135920b-2362-4547-8d00-eb1755eeb013 req-29e4604e-8ab0-4bf7-bc19-f835ed04dcd5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:02 compute-0 nova_compute[253461]: 2025-11-22 03:54:02.760 253465 DEBUG nova.compute.manager [req-6135920b-2362-4547-8d00-eb1755eeb013 req-29e4604e-8ab0-4bf7-bc19-f835ed04dcd5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] No waiting events found dispatching network-vif-plugged-0192a6ee-e052-42ec-ba5d-39345610c279 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:54:02 compute-0 nova_compute[253461]: 2025-11-22 03:54:02.760 253465 WARNING nova.compute.manager [req-6135920b-2362-4547-8d00-eb1755eeb013 req-29e4604e-8ab0-4bf7-bc19-f835ed04dcd5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Received unexpected event network-vif-plugged-0192a6ee-e052-42ec-ba5d-39345610c279 for instance with vm_state active and task_state None.
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.153 253465 DEBUG oslo_concurrency.lockutils [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.154 253465 DEBUG oslo_concurrency.lockutils [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.154 253465 DEBUG oslo_concurrency.lockutils [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.154 253465 DEBUG oslo_concurrency.lockutils [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.155 253465 DEBUG oslo_concurrency.lockutils [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.157 253465 INFO nova.compute.manager [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Terminating instance
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.158 253465 DEBUG nova.compute.manager [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 03:54:03 compute-0 kernel: tap088a5157-3f (unregistering): left promiscuous mode
Nov 22 03:54:03 compute-0 NetworkManager[48916]: <info>  [1763783643.2092] device (tap088a5157-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.214 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:03 compute-0 ovn_controller[152691]: 2025-11-22T03:54:03Z|00064|binding|INFO|Releasing lport 088a5157-3fe2-4543-a6b7-e25cc34ed035 from this chassis (sb_readonly=0)
Nov 22 03:54:03 compute-0 ovn_controller[152691]: 2025-11-22T03:54:03Z|00065|binding|INFO|Setting lport 088a5157-3fe2-4543-a6b7-e25cc34ed035 down in Southbound
Nov 22 03:54:03 compute-0 ovn_controller[152691]: 2025-11-22T03:54:03Z|00066|binding|INFO|Removing iface tap088a5157-3f ovn-installed in OVS
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.216 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.222 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:13:3a 10.100.0.9'], port_security=['fa:16:3e:6d:13:3a 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '1fb96b71-bbd5-4ced-a830-30ae58784b0d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d738119f-cffc-4235-aea9-bf290e9aca77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7db9d09fb4a241818f75d0198445d55c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6a9d1a2c-1ada-4410-8b2b-640ade242853', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=735d6d4b-7c7b-4c2a-a66c-4ccd96675388, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=088a5157-3fe2-4543-a6b7-e25cc34ed035) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.223 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 088a5157-3fe2-4543-a6b7-e25cc34ed035 in datapath d738119f-cffc-4235-aea9-bf290e9aca77 unbound from our chassis
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.225 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d738119f-cffc-4235-aea9-bf290e9aca77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.226 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[81e2a584-d5f0-450f-8567-104cac4bde76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.226 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77 namespace which is not needed anymore
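The "Matched UPDATE" line above is ovsdbapp's RowEvent machinery firing: the agent watches Port_Binding for update events, sees the chassis column go from set to empty, and concludes the port left this chassis, which triggers the namespace teardown just logged. A skeletal event class in the same style, assuming only ovsdbapp (the real neutron class adds queueing and chassis checks):

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    """Fire on Port_Binding updates where the chassis column changed."""

    def __init__(self):
        # events=('update',), table='Port_Binding', conditions=None,
        # exactly as printed in the "Matched UPDATE" line above.
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)
        self.event_name = "PortBindingUpdatedEvent"

    def match_fn(self, event, row, old=None):
        # 'old' carries only the columns that changed; chassis being
        # present there means the binding moved or was released.
        return hasattr(old, "chassis")

    def run(self, event, row, old):
        print(f"port {row.logical_port} changed chassis binding")
```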
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.230 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:03 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 22 03:54:03 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 3.841s CPU time.
Nov 22 03:54:03 compute-0 systemd-machined[215728]: Machine qemu-3-instance-00000003 terminated.
Nov 22 03:54:03 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[265961]: [NOTICE]   (265965) : haproxy version is 2.8.14-c23fe91
Nov 22 03:54:03 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[265961]: [NOTICE]   (265965) : path to executable is /usr/sbin/haproxy
Nov 22 03:54:03 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[265961]: [ALERT]    (265965) : Current worker (265983) exited with code 143 (Terminated)
Nov 22 03:54:03 compute-0 neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77[265961]: [WARNING]  (265965) : All workers exited. Exiting... (0)
Nov 22 03:54:03 compute-0 systemd[1]: libpod-982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef.scope: Deactivated successfully.
Nov 22 03:54:03 compute-0 podman[266116]: 2025-11-22 03:54:03.367994445 +0000 UTC m=+0.047651855 container died 982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.392 253465 INFO nova.virt.libvirt.driver [-] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Instance destroyed successfully.
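Between the systemd machine-scope teardown above and this "Instance destroyed successfully" line sits a plain libvirt destroy. A hedged sketch of that call path, assuming a local qemu:///system connection and the instance UUID from the log:

```python
import libvirt

UUID = "1fb96b71-bbd5-4ced-a830-30ae58784b0d"

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByUUIDString(UUID)
    dom.destroy()  # hard stop; systemd then reports the scope deactivated
finally:
    conn.close()
```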
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.392 253465 DEBUG nova.objects.instance [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lazy-loading 'resources' on Instance uuid 1fb96b71-bbd5-4ced-a830-30ae58784b0d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:54:03 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef-userdata-shm.mount: Deactivated successfully.
Nov 22 03:54:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbe08d48a0dc13f12694612b83cb83eb9cf2e3d8b9a560f486fc61eeabc3a99c-merged.mount: Deactivated successfully.
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.407 253465 DEBUG nova.virt.libvirt.vif [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T03:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1210319555',display_name='tempest-VolumesActionsTest-instance-1210319555',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1210319555',id=3,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T03:54:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7db9d09fb4a241818f75d0198445d55c',ramdisk_id='',reservation_id='r-06fj55w0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1537057398',owner_user_name='tempest-VolumesActionsTest-1537057398-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T03:54:00Z,user_data=None,user_id='fb6a4080968040f8a28c3b9e7c4296b8',uuid=1fb96b71-bbd5-4ced-a830-30ae58784b0d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "address": "fa:16:3e:6d:13:3a", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap088a5157-3f", "ovs_interfaceid": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.408 253465 DEBUG nova.network.os_vif_util [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Converting VIF {"id": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "address": "fa:16:3e:6d:13:3a", "network": {"id": "d738119f-cffc-4235-aea9-bf290e9aca77", "bridge": "br-int", "label": "tempest-VolumesActionsTest-96135778-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7db9d09fb4a241818f75d0198445d55c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap088a5157-3f", "ovs_interfaceid": "088a5157-3fe2-4543-a6b7-e25cc34ed035", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.408 253465 DEBUG nova.network.os_vif_util [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:13:3a,bridge_name='br-int',has_traffic_filtering=True,id=088a5157-3fe2-4543-a6b7-e25cc34ed035,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap088a5157-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.409 253465 DEBUG os_vif [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:13:3a,bridge_name='br-int',has_traffic_filtering=True,id=088a5157-3fe2-4543-a6b7-e25cc34ed035,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap088a5157-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.412 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.412 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088a5157-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.414 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.415 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.417 253465 INFO os_vif [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:13:3a,bridge_name='br-int',has_traffic_filtering=True,id=088a5157-3fe2-4543-a6b7-e25cc34ed035,network=Network(d738119f-cffc-4235-aea9-bf290e9aca77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap088a5157-3f')
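The unplug path above converts nova's VIF dict into an os-vif VIFOpenVSwitch object and hands it to the ovs plugin, which issues the DelPortCommand seen a few lines earlier. A condensed sketch using the public os_vif entry points with the values from this log; the field set is trimmed to what unplug needs (assumption):

```python
import os_vif
from os_vif.objects import instance_info as osv_instance
from os_vif.objects import network as osv_network
from os_vif.objects import vif as osv_vif

os_vif.initialize()  # loads the 'ovs' plugin

vif = osv_vif.VIFOpenVSwitch(
    id="088a5157-3fe2-4543-a6b7-e25cc34ed035",
    address="fa:16:3e:6d:13:3a",
    bridge_name="br-int",
    vif_name="tap088a5157-3f",
    network=osv_network.Network(id="d738119f-cffc-4235-aea9-bf290e9aca77"))

instance = osv_instance.InstanceInfo(
    uuid="1fb96b71-bbd5-4ced-a830-30ae58784b0d",
    name="instance-00000003")

os_vif.unplug(vif, instance)  # drops tap088a5157-3f from br-int
```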
Nov 22 03:54:03 compute-0 podman[266116]: 2025-11-22 03:54:03.418269095 +0000 UTC m=+0.097926475 container cleanup 982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:54:03 compute-0 systemd[1]: libpod-conmon-982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef.scope: Deactivated successfully.
Nov 22 03:54:03 compute-0 NetworkManager[48916]: <info>  [1763783643.4335] manager: (patch-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 22 03:54:03 compute-0 NetworkManager[48916]: <info>  [1763783643.4342] manager: (patch-br-int-to-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.449 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.479 253465 DEBUG nova.compute.manager [req-ded57913-f5cc-4a8e-82ad-948326f8578b req-10055eae-a807-4eb2-8457-4bcb67e14ca4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Received event network-vif-unplugged-088a5157-3fe2-4543-a6b7-e25cc34ed035 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.479 253465 DEBUG oslo_concurrency.lockutils [req-ded57913-f5cc-4a8e-82ad-948326f8578b req-10055eae-a807-4eb2-8457-4bcb67e14ca4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.479 253465 DEBUG oslo_concurrency.lockutils [req-ded57913-f5cc-4a8e-82ad-948326f8578b req-10055eae-a807-4eb2-8457-4bcb67e14ca4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.479 253465 DEBUG oslo_concurrency.lockutils [req-ded57913-f5cc-4a8e-82ad-948326f8578b req-10055eae-a807-4eb2-8457-4bcb67e14ca4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.480 253465 DEBUG nova.compute.manager [req-ded57913-f5cc-4a8e-82ad-948326f8578b req-10055eae-a807-4eb2-8457-4bcb67e14ca4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] No waiting events found dispatching network-vif-unplugged-088a5157-3fe2-4543-a6b7-e25cc34ed035 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.480 253465 DEBUG nova.compute.manager [req-ded57913-f5cc-4a8e-82ad-948326f8578b req-10055eae-a807-4eb2-8457-4bcb67e14ca4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Received event network-vif-unplugged-088a5157-3fe2-4543-a6b7-e25cc34ed035 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 03:54:03 compute-0 podman[266167]: 2025-11-22 03:54:03.50459397 +0000 UTC m=+0.047989940 container remove 982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.513 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[056881b6-c788-4973-b469-cfbae4b37f9f]: (4, ('Sat Nov 22 03:54:03 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77 (982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef)\n982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef\nSat Nov 22 03:54:03 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77 (982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef)\n982efbfd6e7ae15697ff2c7d99f3a2c3f6b412af51e28679473191965a841cef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.515 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5c38f226-c3ef-4e02-9bd1-4be577306d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.516 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd738119f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.528 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:03 compute-0 kernel: tapd738119f-c0: left promiscuous mode
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.540 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:03 compute-0 ovn_controller[152691]: 2025-11-22T03:54:03Z|00067|binding|INFO|Releasing lport 5d8a43bc-3755-462d-bbf0-5366a00c641c from this chassis (sb_readonly=0)
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.544 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b1145c27-00eb-40eb-a952-ca459c4280fa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.558 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.560 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[68ccf8ca-23d1-4e7b-9011-f5f0ed633019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.561 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[854aeee1-b1bb-4d59-b124-6475a4361e4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.574 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.578 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f0882499-1b38-45f1-831c-6c1130e8868d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389198, 'reachable_time': 42509, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266185, 'error': None, 'target': 'ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:03 compute-0 systemd[1]: run-netns-ovnmeta\x2dd738119f\x2dcffc\x2d4235\x2daea9\x2dbf290e9aca77.mount: Deactivated successfully.
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.583 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 03:54:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:03.584 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[8b788651-3429-4164-8509-77ff07bd63ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
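remove_netns is neutron's privsep-side wrapper over pyroute2; the RTM_NEWLINK dump a few lines up is the last link inventory taken inside the namespace before deletion. The underlying operation, as a sketch (privileges required; neutron routes this through the privsep daemon):

```python
from pyroute2 import netns

NS = "ovnmeta-d738119f-cffc-4235-aea9-bf290e9aca77"

# Delete the named namespace if it still exists; systemd's
# run-netns-*.mount unit goes away with it, as logged above.
if NS in netns.listnetns():
    netns.remove(NS)
```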
Nov 22 03:54:03 compute-0 ceph-mon[75011]: pgmap v1065: 305 pgs: 305 active+clean; 180 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.8 MiB/s wr, 324 op/s
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.932 253465 INFO nova.virt.libvirt.driver [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Deleting instance files /var/lib/nova/instances/1fb96b71-bbd5-4ced-a830-30ae58784b0d_del
Nov 22 03:54:03 compute-0 nova_compute[253461]: 2025-11-22 03:54:03.933 253465 INFO nova.virt.libvirt.driver [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Deletion of /var/lib/nova/instances/1fb96b71-bbd5-4ced-a830-30ae58784b0d_del complete
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.006 253465 INFO nova.compute.manager [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Took 0.85 seconds to destroy the instance on the hypervisor.
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.007 253465 DEBUG oslo.service.loopingcall [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.007 253465 DEBUG nova.compute.manager [-] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.008 253465 DEBUG nova.network.neutron [-] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 03:54:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 181 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 52 KiB/s wr, 304 op/s
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.475 253465 DEBUG nova.network.neutron [-] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.491 253465 INFO nova.compute.manager [-] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Took 0.48 seconds to deallocate network for instance.
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.539 253465 DEBUG oslo_concurrency.lockutils [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.540 253465 DEBUG oslo_concurrency.lockutils [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.608 253465 DEBUG oslo_concurrency.processutils [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.895 253465 DEBUG nova.compute.manager [req-c320f648-bf1a-444f-b7cd-9cf41e3e6f6a req-5086ca47-d999-4904-af68-27b40979347d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Received event network-changed-0192a6ee-e052-42ec-ba5d-39345610c279 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.895 253465 DEBUG nova.compute.manager [req-c320f648-bf1a-444f-b7cd-9cf41e3e6f6a req-5086ca47-d999-4904-af68-27b40979347d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Refreshing instance network info cache due to event network-changed-0192a6ee-e052-42ec-ba5d-39345610c279. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.896 253465 DEBUG oslo_concurrency.lockutils [req-c320f648-bf1a-444f-b7cd-9cf41e3e6f6a req-5086ca47-d999-4904-af68-27b40979347d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-1ed5ef11-db1e-4030-bda2-67534d28d084" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.896 253465 DEBUG oslo_concurrency.lockutils [req-c320f648-bf1a-444f-b7cd-9cf41e3e6f6a req-5086ca47-d999-4904-af68-27b40979347d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-1ed5ef11-db1e-4030-bda2-67534d28d084" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:54:04 compute-0 nova_compute[253461]: 2025-11-22 03:54:04.897 253465 DEBUG nova.network.neutron [req-c320f648-bf1a-444f-b7cd-9cf41e3e6f6a req-5086ca47-d999-4904-af68-27b40979347d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Refreshing network info cache for port 0192a6ee-e052-42ec-ba5d-39345610c279 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:54:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:54:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/173490871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:54:05 compute-0 nova_compute[253461]: 2025-11-22 03:54:05.153 253465 DEBUG oslo_concurrency.processutils [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
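[annotation] The Running cmd / CMD returned pair above shows nova sizing its RBD-backed disk inventory by shelling out to ceph df (the corresponding mon-side dispatch appears in the ceph-mon audit lines). A sketch of invoking the same command and reading the cluster totals, assuming the standard ceph df JSON schema (stats.total_bytes / stats.total_avail_bytes):

```python
import json
import subprocess

# The exact command line recorded above; --id/--conf select the
# client.openstack credentials and cluster config.
cmd = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
df = json.loads(out)

# The 'stats' totals are part of ceph df's JSON output; nova derives
# its DISK_GB inventory from numbers like these.
total_gib = df["stats"]["total_bytes"] / (1 << 30)
avail_gib = df["stats"]["total_avail_bytes"] / (1 << 30)
print(f"{avail_gib:.1f} GiB free of {total_gib:.1f} GiB")
```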
Nov 22 03:54:05 compute-0 nova_compute[253461]: 2025-11-22 03:54:05.160 253465 DEBUG nova.compute.provider_tree [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:54:05 compute-0 nova_compute[253461]: 2025-11-22 03:54:05.179 253465 DEBUG nova.scheduler.client.report [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
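[annotation] Placement treats each resource class in the inventory logged above as (total - reserved) × allocation_ratio schedulable units. Plugging in the logged values as a quick check:

```python
# Schedulable capacity implied by the inventory above:
# capacity = (total - reserved) * allocation_ratio
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)
# -> MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2
```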
Nov 22 03:54:05 compute-0 nova_compute[253461]: 2025-11-22 03:54:05.203 253465 DEBUG oslo_concurrency.lockutils [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:05 compute-0 nova_compute[253461]: 2025-11-22 03:54:05.231 253465 INFO nova.scheduler.client.report [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Deleted allocations for instance 1fb96b71-bbd5-4ced-a830-30ae58784b0d
Nov 22 03:54:05 compute-0 nova_compute[253461]: 2025-11-22 03:54:05.292 253465 DEBUG oslo_concurrency.lockutils [None req-8d01e85f-8d4d-41dc-8f8d-5d7c438699f5 fb6a4080968040f8a28c3b9e7c4296b8 7db9d09fb4a241818f75d0198445d55c - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:05 compute-0 nova_compute[253461]: 2025-11-22 03:54:05.579 253465 DEBUG nova.compute.manager [req-faa7a3b0-e248-4859-a573-9eda8708eeb2 req-0c633b94-f5e6-4115-8df3-07ed454bde67 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Received event network-vif-plugged-088a5157-3fe2-4543-a6b7-e25cc34ed035 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:54:05 compute-0 nova_compute[253461]: 2025-11-22 03:54:05.580 253465 DEBUG oslo_concurrency.lockutils [req-faa7a3b0-e248-4859-a573-9eda8708eeb2 req-0c633b94-f5e6-4115-8df3-07ed454bde67 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:05 compute-0 nova_compute[253461]: 2025-11-22 03:54:05.581 253465 DEBUG oslo_concurrency.lockutils [req-faa7a3b0-e248-4859-a573-9eda8708eeb2 req-0c633b94-f5e6-4115-8df3-07ed454bde67 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:05 compute-0 nova_compute[253461]: 2025-11-22 03:54:05.582 253465 DEBUG oslo_concurrency.lockutils [req-faa7a3b0-e248-4859-a573-9eda8708eeb2 req-0c633b94-f5e6-4115-8df3-07ed454bde67 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1fb96b71-bbd5-4ced-a830-30ae58784b0d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:05 compute-0 nova_compute[253461]: 2025-11-22 03:54:05.582 253465 DEBUG nova.compute.manager [req-faa7a3b0-e248-4859-a573-9eda8708eeb2 req-0c633b94-f5e6-4115-8df3-07ed454bde67 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] No waiting events found dispatching network-vif-plugged-088a5157-3fe2-4543-a6b7-e25cc34ed035 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:54:05 compute-0 nova_compute[253461]: 2025-11-22 03:54:05.583 253465 WARNING nova.compute.manager [req-faa7a3b0-e248-4859-a573-9eda8708eeb2 req-0c633b94-f5e6-4115-8df3-07ed454bde67 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Received unexpected event network-vif-plugged-088a5157-3fe2-4543-a6b7-e25cc34ed035 for instance with vm_state deleted and task_state None.
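[annotation] The "No waiting events found" / "Received unexpected event" pair above illustrates nova's external-event dispatch: an incoming Neutron notification is matched against waiters registered by in-flight operations, and since this instance is already deleted there is no waiter to pop, so the event is warned about and dropped. A toy illustration of that lookup — the dict and function here are hypothetical, not nova's actual structures:

```python
import threading

# pending[instance_uuid][event_name] -> Event set by whichever
# operation is waiting on Neutron's confirmation. (Hypothetical
# stand-in for nova's InstanceEvents bookkeeping.)
pending: dict[str, dict[str, threading.Event]] = {}

def pop_instance_event(instance_uuid: str, event_name: str) -> None:
    waiter = pending.get(instance_uuid, {}).pop(event_name, None)
    if waiter is None:
        # Mirrors the WARNING above: the instance is already gone,
        # so nothing is waiting and the event is discarded.
        print(f"unexpected event {event_name} for {instance_uuid}")
        return
    waiter.set()  # wake the blocked operation

pop_instance_event("1fb96b71-bbd5-4ced-a830-30ae58784b0d",
                   "network-vif-plugged-088a5157-3fe2-4543-a6b7-e25cc34ed035")
```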
Nov 22 03:54:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Nov 22 03:54:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Nov 22 03:54:05 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Nov 22 03:54:05 compute-0 ceph-mon[75011]: pgmap v1066: 305 pgs: 305 active+clean; 181 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 52 KiB/s wr, 304 op/s
Nov 22 03:54:05 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/173490871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:54:05 compute-0 ceph-mon[75011]: osdmap e193: 3 total, 3 up, 3 in
Nov 22 03:54:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 173 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 6.5 MiB/s rd, 54 KiB/s wr, 321 op/s
Nov 22 03:54:06 compute-0 nova_compute[253461]: 2025-11-22 03:54:06.154 253465 DEBUG nova.network.neutron [req-c320f648-bf1a-444f-b7cd-9cf41e3e6f6a req-5086ca47-d999-4904-af68-27b40979347d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Updated VIF entry in instance network info cache for port 0192a6ee-e052-42ec-ba5d-39345610c279. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:54:06 compute-0 nova_compute[253461]: 2025-11-22 03:54:06.154 253465 DEBUG nova.network.neutron [req-c320f648-bf1a-444f-b7cd-9cf41e3e6f6a req-5086ca47-d999-4904-af68-27b40979347d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Updating instance_info_cache with network_info: [{"id": "0192a6ee-e052-42ec-ba5d-39345610c279", "address": "fa:16:3e:7d:2f:20", "network": {"id": "088b40f3-90e0-4306-ab07-be0dfd55e4f4", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1554168595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72120bdf58ce486690a1373cf734f4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0192a6ee-e0", "ovs_interfaceid": "0192a6ee-e052-42ec-ba5d-39345610c279", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
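[annotation] The instance_info_cache payload above is plain JSON, so the addressing details (MAC, fixed IP, floating IP, MTU) can be pulled out directly. A sketch against an abbreviated copy of the logged structure:

```python
import json

# Abbreviated copy of the network_info blob logged above (one VIF).
network_info = json.loads("""
[{"id": "0192a6ee-e052-42ec-ba5d-39345610c279",
  "address": "fa:16:3e:7d:2f:20",
  "network": {"meta": {"mtu": 1442},
    "subnets": [{"ips": [{"address": "10.100.0.10",
      "floating_ips": [{"address": "192.168.122.198"}]}]}]}}]
""")

for vif in network_info:
    ip = vif["network"]["subnets"][0]["ips"][0]
    print("mac:", vif["address"],
          "fixed:", ip["address"],
          "floating:", [f["address"] for f in ip["floating_ips"]],
          "mtu:", vif["network"]["meta"]["mtu"])
# -> mac: fa:16:3e:7d:2f:20 fixed: 10.100.0.10
#    floating: ['192.168.122.198'] mtu: 1442
```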
Nov 22 03:54:06 compute-0 nova_compute[253461]: 2025-11-22 03:54:06.177 253465 DEBUG oslo_concurrency.lockutils [req-c320f648-bf1a-444f-b7cd-9cf41e3e6f6a req-5086ca47-d999-4904-af68-27b40979347d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-1ed5ef11-db1e-4030-bda2-67534d28d084" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:54:06 compute-0 nova_compute[253461]: 2025-11-22 03:54:06.178 253465 DEBUG nova.compute.manager [req-c320f648-bf1a-444f-b7cd-9cf41e3e6f6a req-5086ca47-d999-4904-af68-27b40979347d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Received event network-vif-deleted-088a5157-3fe2-4543-a6b7-e25cc34ed035 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:54:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:54:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:54:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:54:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:54:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:54:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:54:06 compute-0 nova_compute[253461]: 2025-11-22 03:54:06.858 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:07 compute-0 ceph-mon[75011]: pgmap v1068: 305 pgs: 305 active+clean; 173 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 6.5 MiB/s rd, 54 KiB/s wr, 321 op/s
Nov 22 03:54:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 134 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 44 KiB/s wr, 309 op/s
Nov 22 03:54:08 compute-0 nova_compute[253461]: 2025-11-22 03:54:08.416 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:09 compute-0 ceph-mon[75011]: pgmap v1069: 305 pgs: 305 active+clean; 134 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 44 KiB/s wr, 309 op/s
Nov 22 03:54:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 134 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 37 KiB/s wr, 260 op/s
Nov 22 03:54:10 compute-0 podman[266209]: 2025-11-22 03:54:10.476657196 +0000 UTC m=+0.142890699 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:54:10 compute-0 podman[266210]: 2025-11-22 03:54:10.520333572 +0000 UTC m=+0.188511604 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 03:54:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:11 compute-0 ceph-mon[75011]: pgmap v1070: 305 pgs: 305 active+clean; 134 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 37 KiB/s wr, 260 op/s
Nov 22 03:54:11 compute-0 nova_compute[253461]: 2025-11-22 03:54:11.860 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 198 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.4 MiB/s wr, 133 op/s
Nov 22 03:54:13 compute-0 nova_compute[253461]: 2025-11-22 03:54:13.418 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:13 compute-0 ceph-mon[75011]: pgmap v1071: 305 pgs: 305 active+clean; 198 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.4 MiB/s wr, 133 op/s
Nov 22 03:54:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 234 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 661 KiB/s rd, 10 MiB/s wr, 102 op/s
Nov 22 03:54:14 compute-0 ovn_controller[152691]: 2025-11-22T03:54:14Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:2f:20 10.100.0.10
Nov 22 03:54:14 compute-0 ovn_controller[152691]: 2025-11-22T03:54:14Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:2f:20 10.100.0.10
Nov 22 03:54:14 compute-0 ceph-mon[75011]: pgmap v1072: 305 pgs: 305 active+clean; 234 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 661 KiB/s rd, 10 MiB/s wr, 102 op/s
Nov 22 03:54:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 349 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 748 KiB/s rd, 21 MiB/s wr, 109 op/s
Nov 22 03:54:16 compute-0 nova_compute[253461]: 2025-11-22 03:54:16.864 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:17 compute-0 ceph-mon[75011]: pgmap v1073: 305 pgs: 305 active+clean; 349 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 748 KiB/s rd, 21 MiB/s wr, 109 op/s
Nov 22 03:54:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 511 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 743 KiB/s rd, 31 MiB/s wr, 151 op/s
Nov 22 03:54:18 compute-0 nova_compute[253461]: 2025-11-22 03:54:18.395 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763783643.3938878, 1fb96b71-bbd5-4ced-a830-30ae58784b0d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:54:18 compute-0 nova_compute[253461]: 2025-11-22 03:54:18.395 253465 INFO nova.compute.manager [-] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] VM Stopped (Lifecycle Event)
Nov 22 03:54:18 compute-0 nova_compute[253461]: 2025-11-22 03:54:18.421 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:18 compute-0 nova_compute[253461]: 2025-11-22 03:54:18.425 253465 DEBUG nova.compute.manager [None req-2b2d6887-edb4-4af1-9dd8-de95bebd503a - - - - - -] [instance: 1fb96b71-bbd5-4ced-a830-30ae58784b0d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:54:19 compute-0 ceph-mon[75011]: pgmap v1074: 305 pgs: 305 active+clean; 511 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 743 KiB/s rd, 31 MiB/s wr, 151 op/s
Nov 22 03:54:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 511 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 31 MiB/s wr, 118 op/s
Nov 22 03:54:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:21 compute-0 ceph-mon[75011]: pgmap v1075: 305 pgs: 305 active+clean; 511 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 31 MiB/s wr, 118 op/s
Nov 22 03:54:21 compute-0 nova_compute[253461]: 2025-11-22 03:54:21.866 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 679 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 45 MiB/s wr, 120 op/s
Nov 22 03:54:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:22.498 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:54:22 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:22.499 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 03:54:22 compute-0 nova_compute[253461]: 2025-11-22 03:54:22.531 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:23.005 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:23.006 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:23.007 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:23 compute-0 ceph-mon[75011]: pgmap v1076: 305 pgs: 305 active+clean; 679 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 45 MiB/s wr, 120 op/s
Nov 22 03:54:23 compute-0 nova_compute[253461]: 2025-11-22 03:54:23.424 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 759 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 46 MiB/s wr, 139 op/s
Nov 22 03:54:24 compute-0 podman[266253]: 2025-11-22 03:54:24.424493591 +0000 UTC m=+0.089151615 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 22 03:54:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:24.501 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:54:25 compute-0 ceph-mon[75011]: pgmap v1077: 305 pgs: 305 active+clean; 759 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 371 KiB/s rd, 46 MiB/s wr, 139 op/s
Nov 22 03:54:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 855 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 51 MiB/s wr, 101 op/s
Nov 22 03:54:26 compute-0 nova_compute[253461]: 2025-11-22 03:54:26.868 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:27 compute-0 ceph-mon[75011]: pgmap v1078: 305 pgs: 305 active+clean; 855 MiB data, 973 MiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 51 MiB/s wr, 101 op/s
Nov 22 03:54:27 compute-0 nova_compute[253461]: 2025-11-22 03:54:27.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:54:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 1023 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 55 MiB/s wr, 114 op/s
Nov 22 03:54:28 compute-0 nova_compute[253461]: 2025-11-22 03:54:28.427 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:54:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2813688620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:54:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:54:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2813688620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:54:29 compute-0 ceph-mon[75011]: pgmap v1079: 305 pgs: 305 active+clean; 1023 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 55 MiB/s wr, 114 op/s
Nov 22 03:54:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2813688620' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:54:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2813688620' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:54:29 compute-0 nova_compute[253461]: 2025-11-22 03:54:29.475 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:54:29 compute-0 nova_compute[253461]: 2025-11-22 03:54:29.475 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:54:29 compute-0 nova_compute[253461]: 2025-11-22 03:54:29.476 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:54:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 1023 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 43 MiB/s wr, 50 op/s
Nov 22 03:54:30 compute-0 nova_compute[253461]: 2025-11-22 03:54:30.278 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "refresh_cache-1ed5ef11-db1e-4030-bda2-67534d28d084" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:54:30 compute-0 nova_compute[253461]: 2025-11-22 03:54:30.278 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquired lock "refresh_cache-1ed5ef11-db1e-4030-bda2-67534d28d084" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:54:30 compute-0 nova_compute[253461]: 2025-11-22 03:54:30.279 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 03:54:30 compute-0 nova_compute[253461]: 2025-11-22 03:54:30.279 253465 DEBUG nova.objects.instance [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1ed5ef11-db1e-4030-bda2-67534d28d084 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:54:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Nov 22 03:54:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Nov 22 03:54:30 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Nov 22 03:54:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e194 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:54:31 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1343327105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:54:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:54:31 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1343327105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:54:31 compute-0 ceph-mon[75011]: pgmap v1080: 305 pgs: 305 active+clean; 1023 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 43 MiB/s wr, 50 op/s
Nov 22 03:54:31 compute-0 ceph-mon[75011]: osdmap e194: 3 total, 3 up, 3 in
Nov 22 03:54:31 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1343327105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:54:31 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1343327105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:54:31 compute-0 nova_compute[253461]: 2025-11-22 03:54:31.603 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Updating instance_info_cache with network_info: [{"id": "0192a6ee-e052-42ec-ba5d-39345610c279", "address": "fa:16:3e:7d:2f:20", "network": {"id": "088b40f3-90e0-4306-ab07-be0dfd55e4f4", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1554168595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72120bdf58ce486690a1373cf734f4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0192a6ee-e0", "ovs_interfaceid": "0192a6ee-e052-42ec-ba5d-39345610c279", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:54:31 compute-0 nova_compute[253461]: 2025-11-22 03:54:31.616 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Releasing lock "refresh_cache-1ed5ef11-db1e-4030-bda2-67534d28d084" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:54:31 compute-0 nova_compute[253461]: 2025-11-22 03:54:31.617 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 03:54:31 compute-0 nova_compute[253461]: 2025-11-22 03:54:31.618 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:54:31 compute-0 nova_compute[253461]: 2025-11-22 03:54:31.618 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:54:31 compute-0 nova_compute[253461]: 2025-11-22 03:54:31.619 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:54:31 compute-0 nova_compute[253461]: 2025-11-22 03:54:31.619 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:54:31 compute-0 nova_compute[253461]: 2025-11-22 03:54:31.646 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:31 compute-0 nova_compute[253461]: 2025-11-22 03:54:31.647 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:31 compute-0 nova_compute[253461]: 2025-11-22 03:54:31.647 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:31 compute-0 nova_compute[253461]: 2025-11-22 03:54:31.647 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:54:31 compute-0 nova_compute[253461]: 2025-11-22 03:54:31.648 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:54:31 compute-0 nova_compute[253461]: 2025-11-22 03:54:31.900 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:54:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3162990440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:54:32 compute-0 nova_compute[253461]: 2025-11-22 03:54:32.115 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:54:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 47 MiB/s wr, 74 op/s
Nov 22 03:54:32 compute-0 nova_compute[253461]: 2025-11-22 03:54:32.198 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 03:54:32 compute-0 nova_compute[253461]: 2025-11-22 03:54:32.199 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 03:54:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Nov 22 03:54:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Nov 22 03:54:32 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Nov 22 03:54:32 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3162990440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:54:32 compute-0 nova_compute[253461]: 2025-11-22 03:54:32.465 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:54:32 compute-0 nova_compute[253461]: 2025-11-22 03:54:32.467 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4517MB free_disk=59.94271469116211GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:54:32 compute-0 nova_compute[253461]: 2025-11-22 03:54:32.467 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:32 compute-0 nova_compute[253461]: 2025-11-22 03:54:32.468 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:32 compute-0 nova_compute[253461]: 2025-11-22 03:54:32.793 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 1ed5ef11-db1e-4030-bda2-67534d28d084 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 03:54:32 compute-0 nova_compute[253461]: 2025-11-22 03:54:32.794 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:54:32 compute-0 nova_compute[253461]: 2025-11-22 03:54:32.794 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
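[annotation] That final resource view is consistent with the allocation logged a few lines up for instance 1ed5ef11: used_ram is the 512 MB host reservation plus the instance's 128 MB MEMORY_MB allocation, and used_vcpus / used_disk match its VCPU:1 and DISK_GB:1. A quick arithmetic check:

```python
# Cross-checking the "Final resource view" against the placement
# allocation logged above for instance 1ed5ef11-...:
reserved_mb = 512                                  # from the inventory
alloc = {"DISK_GB": 1, "MEMORY_MB": 128, "VCPU": 1}

assert reserved_mb + alloc["MEMORY_MB"] == 640     # used_ram=640MB
assert alloc["VCPU"] == 1                          # used_vcpus=1
assert alloc["DISK_GB"] == 1                       # used_disk=1GB
```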
Nov 22 03:54:32 compute-0 nova_compute[253461]: 2025-11-22 03:54:32.899 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:54:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:54:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2248501552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:54:33 compute-0 nova_compute[253461]: 2025-11-22 03:54:33.379 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:54:33 compute-0 nova_compute[253461]: 2025-11-22 03:54:33.385 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:54:33 compute-0 nova_compute[253461]: 2025-11-22 03:54:33.401 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:54:33 compute-0 nova_compute[253461]: 2025-11-22 03:54:33.426 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:54:33 compute-0 nova_compute[253461]: 2025-11-22 03:54:33.427 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:33 compute-0 nova_compute[253461]: 2025-11-22 03:54:33.427 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:54:33 compute-0 nova_compute[253461]: 2025-11-22 03:54:33.427 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 03:54:33 compute-0 nova_compute[253461]: 2025-11-22 03:54:33.429 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:33 compute-0 ceph-mon[75011]: pgmap v1082: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 47 MiB/s wr, 74 op/s
Nov 22 03:54:33 compute-0 ceph-mon[75011]: osdmap e195: 3 total, 3 up, 3 in
Nov 22 03:54:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2248501552' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:54:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:54:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2952058081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:54:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:54:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2952058081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:54:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 1.2 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 42 MiB/s wr, 103 op/s
Nov 22 03:54:34 compute-0 nova_compute[253461]: 2025-11-22 03:54:34.247 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:54:34 compute-0 nova_compute[253461]: 2025-11-22 03:54:34.247 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:54:34 compute-0 nova_compute[253461]: 2025-11-22 03:54:34.263 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:54:34 compute-0 nova_compute[253461]: 2025-11-22 03:54:34.264 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:54:34 compute-0 nova_compute[253461]: 2025-11-22 03:54:34.264 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 03:54:34 compute-0 nova_compute[253461]: 2025-11-22 03:54:34.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:54:34 compute-0 ovn_controller[152691]: 2025-11-22T03:54:34Z|00068|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 22 03:54:34 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2952058081' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:54:34 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2952058081' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:54:35 compute-0 nova_compute[253461]: 2025-11-22 03:54:35.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:54:35 compute-0 nova_compute[253461]: 2025-11-22 03:54:35.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 03:54:35 compute-0 nova_compute[253461]: 2025-11-22 03:54:35.452 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
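[annotation] All of the "Running periodic task ComputeManager._*" lines in this stretch are emitted by oslo.service's periodic-task machinery. A minimal sketch of how such tasks are declared and ticked, assuming the oslo_service.periodic_task API; the class, spacing, and task body here are illustrative, not nova's:

```python
from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    # Declares a task on the same machinery that drives the
    # "Running periodic task ComputeManager._*" lines above.
    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def _run_pending_deletes(self, context):
        print("Cleaning up deleted instances")

mgr = Manager()
mgr.run_periodic_tasks(context=None)  # one tick of the service loop
```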
Nov 22 03:54:35 compute-0 ceph-mon[75011]: pgmap v1084: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 1.2 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 42 MiB/s wr, 103 op/s
Nov 22 03:54:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:35 compute-0 nova_compute[253461]: 2025-11-22 03:54:35.674 253465 DEBUG oslo_concurrency.lockutils [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Acquiring lock "1ed5ef11-db1e-4030-bda2-67534d28d084" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:35 compute-0 nova_compute[253461]: 2025-11-22 03:54:35.674 253465 DEBUG oslo_concurrency.lockutils [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:35 compute-0 nova_compute[253461]: 2025-11-22 03:54:35.691 253465 DEBUG nova.objects.instance [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lazy-loading 'flavor' on Instance uuid 1ed5ef11-db1e-4030-bda2-67534d28d084 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:54:35 compute-0 nova_compute[253461]: 2025-11-22 03:54:35.715 253465 INFO nova.virt.libvirt.driver [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Ignoring supplied device name: /dev/vdb
Nov 22 03:54:35 compute-0 nova_compute[253461]: 2025-11-22 03:54:35.730 253465 DEBUG oslo_concurrency.lockutils [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.066 253465 DEBUG oslo_concurrency.lockutils [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Acquiring lock "1ed5ef11-db1e-4030-bda2-67534d28d084" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.067 253465 DEBUG oslo_concurrency.lockutils [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.067 253465 INFO nova.compute.manager [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Attaching volume 26d44156-0a54-4470-99a5-818c18e945e0 to /dev/vdb
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 775 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 21 MiB/s wr, 78 op/s
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:54:36
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['volumes', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.meta', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.207 253465 DEBUG os_brick.utils [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.209 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.229 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.229 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[fc8bbf97-e07a-4b50-a8a9-468e952013c4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.233 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.249 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.250 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc38ba0-39dc-4d36-a5f5-7a1d7752ced9]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.252 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.269 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.269 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[80b42b83-9a97-47c2-ae62-24b8f8fe9656]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.271 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[b76e4408-cd11-473d-a253-03429286358c]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.272 253465 DEBUG oslo_concurrency.processutils [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.296 253465 DEBUG oslo_concurrency.processutils [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.300 253465 DEBUG os_brick.initiator.connectors.lightos [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.301 253465 DEBUG os_brick.initiator.connectors.lightos [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.302 253465 DEBUG os_brick.initiator.connectors.lightos [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.302 253465 DEBUG os_brick.utils [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] <== get_connector_properties: return (93ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
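
The "==>"/"<==" pair above brackets a single os-brick call: before asking Cinder for an attachment, Nova collects the host's connector properties (iSCSI initiator, NVMe host NQN, multipath state), probing multipathd, /etc/iscsi/initiatorname.iscsi and the nvme CLI along the way. A minimal sketch of the same call, using only values visible in the trace line:

    from os_brick.initiator import connector

    # Arguments copied from the traced call; the print is illustrative.
    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com',
    )
    print(props['initiator'], props['nqn'])
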
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.303 253465 DEBUG nova.virt.block_device [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Updating existing volume attachment record: fa924754-f5cc-4194-8146-a67f1896885b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:54:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:54:36 compute-0 ovn_controller[152691]: 2025-11-22T03:54:36Z|00069|binding|INFO|Releasing lport 5d8a43bc-3755-462d-bbf0-5366a00c641c from this chassis (sb_readonly=0)
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.760 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:36 compute-0 nova_compute[253461]: 2025-11-22 03:54:36.902 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:54:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2435211444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:54:37 compute-0 ceph-mgr[75294]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3005905960
Nov 22 03:54:37 compute-0 nova_compute[253461]: 2025-11-22 03:54:37.145 253465 DEBUG nova.objects.instance [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lazy-loading 'flavor' on Instance uuid 1ed5ef11-db1e-4030-bda2-67534d28d084 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:54:37 compute-0 nova_compute[253461]: 2025-11-22 03:54:37.171 253465 DEBUG nova.virt.libvirt.driver [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Attempting to attach volume 26d44156-0a54-4470-99a5-818c18e945e0 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 22 03:54:37 compute-0 nova_compute[253461]: 2025-11-22 03:54:37.174 253465 DEBUG nova.virt.libvirt.guest [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 03:54:37 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:54:37 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-26d44156-0a54-4470-99a5-818c18e945e0">
Nov 22 03:54:37 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:54:37 compute-0 nova_compute[253461]:   </source>
Nov 22 03:54:37 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 03:54:37 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:54:37 compute-0 nova_compute[253461]:   </auth>
Nov 22 03:54:37 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:54:37 compute-0 nova_compute[253461]:   <serial>26d44156-0a54-4470-99a5-818c18e945e0</serial>
Nov 22 03:54:37 compute-0 nova_compute[253461]: </disk>
Nov 22 03:54:37 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
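
The <disk> element above goes to libvirt essentially verbatim. A sketch of the underlying libvirt-python call, assuming a local qemu:///system connection (the XML and instance UUID are the ones logged; the lookup is illustrative):

    import libvirt

    DISK_XML = """<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-26d44156-0a54-4470-99a5-818c18e945e0">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <auth username="openstack">
        <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
      </auth>
      <target dev="vdb" bus="virtio"/>
      <serial>26d44156-0a54-4470-99a5-818c18e945e0</serial>
    </disk>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('1ed5ef11-db1e-4030-bda2-67534d28d084')
    # Attach to both the running guest and its persistent definition.
    dom.attachDeviceFlags(
        DISK_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
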
Nov 22 03:54:37 compute-0 nova_compute[253461]: 2025-11-22 03:54:37.292 253465 DEBUG nova.virt.libvirt.driver [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:54:37 compute-0 nova_compute[253461]: 2025-11-22 03:54:37.293 253465 DEBUG nova.virt.libvirt.driver [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:54:37 compute-0 nova_compute[253461]: 2025-11-22 03:54:37.294 253465 DEBUG nova.virt.libvirt.driver [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:54:37 compute-0 nova_compute[253461]: 2025-11-22 03:54:37.294 253465 DEBUG nova.virt.libvirt.driver [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] No VIF found with MAC fa:16:3e:7d:2f:20, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:54:37 compute-0 ceph-mon[75011]: pgmap v1085: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 775 MiB data, 933 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 21 MiB/s wr, 78 op/s
Nov 22 03:54:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2435211444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:54:37 compute-0 nova_compute[253461]: 2025-11-22 03:54:37.950 253465 DEBUG oslo_concurrency.lockutils [None req-85e51fa4-fb16-403e-b68a-b1cbcd4b2a7d 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 167 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 21 MiB/s wr, 118 op/s
Nov 22 03:54:38 compute-0 nova_compute[253461]: 2025-11-22 03:54:38.433 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:39 compute-0 ceph-mon[75011]: pgmap v1086: 305 pgs: 305 active+clean; 167 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 21 MiB/s wr, 118 op/s
Nov 22 03:54:39 compute-0 nova_compute[253461]: 2025-11-22 03:54:39.576 253465 DEBUG nova.compute.manager [req-c88250c5-c08b-41c1-8ee7-e1f738c7f56c req-3b67ac6d-f987-47d1-aa20-0f808eb5c876 ba9f648385de436285ef2768a1383fee ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Received event volume-extended-26d44156-0a54-4470-99a5-818c18e945e0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:54:39 compute-0 nova_compute[253461]: 2025-11-22 03:54:39.605 253465 DEBUG nova.compute.manager [req-c88250c5-c08b-41c1-8ee7-e1f738c7f56c req-3b67ac6d-f987-47d1-aa20-0f808eb5c876 ba9f648385de436285ef2768a1383fee ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Handling volume-extended event for volume 26d44156-0a54-4470-99a5-818c18e945e0 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896
Nov 22 03:54:39 compute-0 nova_compute[253461]: 2025-11-22 03:54:39.617 253465 INFO nova.compute.manager [req-c88250c5-c08b-41c1-8ee7-e1f738c7f56c req-3b67ac6d-f987-47d1-aa20-0f808eb5c876 ba9f648385de436285ef2768a1383fee ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Cinder extended volume 26d44156-0a54-4470-99a5-818c18e945e0; extending it to detect new size
Nov 22 03:54:39 compute-0 nova_compute[253461]: 2025-11-22 03:54:39.803 253465 DEBUG nova.virt.libvirt.driver [req-c88250c5-c08b-41c1-8ee7-e1f738c7f56c req-3b67ac6d-f987-47d1-aa20-0f808eb5c876 ba9f648385de436285ef2768a1383fee ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Resizing target device vdb to 2147483648 _resize_attached_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2756
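
Nova reacts to the volume-extended event by asking libvirt to grow the attached block device in place; 2147483648 bytes is 2 GiB. Roughly what _resize_attached_volume amounts to, as a hedged sketch:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('1ed5ef11-db1e-4030-bda2-67534d28d084')
    # Grow vdb to the new size; the BYTES flag makes the unit explicit.
    dom.blockResize('vdb', 2147483648, libvirt.VIR_DOMAIN_BLOCK_RESIZE_BYTES)
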
Nov 22 03:54:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 167 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 17 MiB/s wr, 98 op/s
Nov 22 03:54:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Nov 22 03:54:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Nov 22 03:54:40 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Nov 22 03:54:41 compute-0 podman[266346]: 2025-11-22 03:54:41.378815827 +0000 UTC m=+0.059886018 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 03:54:41 compute-0 podman[266347]: 2025-11-22 03:54:41.419967777 +0000 UTC m=+0.101066629 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 03:54:41 compute-0 nova_compute[253461]: 2025-11-22 03:54:41.590 253465 DEBUG oslo_concurrency.lockutils [None req-af6fc172-a50c-4451-97a2-a066f0051b58 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Acquiring lock "1ed5ef11-db1e-4030-bda2-67534d28d084" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:41 compute-0 nova_compute[253461]: 2025-11-22 03:54:41.591 253465 DEBUG oslo_concurrency.lockutils [None req-af6fc172-a50c-4451-97a2-a066f0051b58 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:41 compute-0 nova_compute[253461]: 2025-11-22 03:54:41.609 253465 INFO nova.compute.manager [None req-af6fc172-a50c-4451-97a2-a066f0051b58 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Detaching volume 26d44156-0a54-4470-99a5-818c18e945e0
Nov 22 03:54:41 compute-0 ceph-mon[75011]: pgmap v1087: 305 pgs: 305 active+clean; 167 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 17 MiB/s wr, 98 op/s
Nov 22 03:54:41 compute-0 ceph-mon[75011]: osdmap e196: 3 total, 3 up, 3 in
Nov 22 03:54:41 compute-0 nova_compute[253461]: 2025-11-22 03:54:41.832 253465 INFO nova.virt.block_device [None req-af6fc172-a50c-4451-97a2-a066f0051b58 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Attempting to driver detach volume 26d44156-0a54-4470-99a5-818c18e945e0 from mountpoint /dev/vdb
Nov 22 03:54:41 compute-0 nova_compute[253461]: 2025-11-22 03:54:41.843 253465 DEBUG nova.virt.libvirt.driver [None req-af6fc172-a50c-4451-97a2-a066f0051b58 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Attempting to detach device vdb from instance 1ed5ef11-db1e-4030-bda2-67534d28d084 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 03:54:41 compute-0 nova_compute[253461]: 2025-11-22 03:54:41.844 253465 DEBUG nova.virt.libvirt.guest [None req-af6fc172-a50c-4451-97a2-a066f0051b58 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:54:41 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:54:41 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-26d44156-0a54-4470-99a5-818c18e945e0">
Nov 22 03:54:41 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:54:41 compute-0 nova_compute[253461]:   </source>
Nov 22 03:54:41 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:54:41 compute-0 nova_compute[253461]:   <serial>26d44156-0a54-4470-99a5-818c18e945e0</serial>
Nov 22 03:54:41 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:54:41 compute-0 nova_compute[253461]: </disk>
Nov 22 03:54:41 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:54:41 compute-0 nova_compute[253461]: 2025-11-22 03:54:41.853 253465 INFO nova.virt.libvirt.driver [None req-af6fc172-a50c-4451-97a2-a066f0051b58 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Successfully detached device vdb from instance 1ed5ef11-db1e-4030-bda2-67534d28d084 from the persistent domain config.
Nov 22 03:54:41 compute-0 nova_compute[253461]: 2025-11-22 03:54:41.853 253465 DEBUG nova.virt.libvirt.driver [None req-af6fc172-a50c-4451-97a2-a066f0051b58 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 1ed5ef11-db1e-4030-bda2-67534d28d084 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 03:54:41 compute-0 nova_compute[253461]: 2025-11-22 03:54:41.854 253465 DEBUG nova.virt.libvirt.guest [None req-af6fc172-a50c-4451-97a2-a066f0051b58 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:54:41 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:54:41 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-26d44156-0a54-4470-99a5-818c18e945e0">
Nov 22 03:54:41 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:54:41 compute-0 nova_compute[253461]:   </source>
Nov 22 03:54:41 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:54:41 compute-0 nova_compute[253461]:   <serial>26d44156-0a54-4470-99a5-818c18e945e0</serial>
Nov 22 03:54:41 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:54:41 compute-0 nova_compute[253461]: </disk>
Nov 22 03:54:41 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:54:41 compute-0 nova_compute[253461]: 2025-11-22 03:54:41.955 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:41 compute-0 nova_compute[253461]: 2025-11-22 03:54:41.985 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763783681.9851768, 1ed5ef11-db1e-4030-bda2-67534d28d084 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 03:54:41 compute-0 nova_compute[253461]: 2025-11-22 03:54:41.989 253465 DEBUG nova.virt.libvirt.driver [None req-af6fc172-a50c-4451-97a2-a066f0051b58 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 1ed5ef11-db1e-4030-bda2-67534d28d084 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 03:54:41 compute-0 nova_compute[253461]: 2025-11-22 03:54:41.992 253465 INFO nova.virt.libvirt.driver [None req-af6fc172-a50c-4451-97a2-a066f0051b58 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Successfully detached device vdb from instance 1ed5ef11-db1e-4030-bda2-67534d28d084 from the live domain config.
Nov 22 03:54:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 167 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.1 MiB/s wr, 91 op/s
Nov 22 03:54:42 compute-0 nova_compute[253461]: 2025-11-22 03:54:42.343 253465 DEBUG nova.objects.instance [None req-af6fc172-a50c-4451-97a2-a066f0051b58 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lazy-loading 'flavor' on Instance uuid 1ed5ef11-db1e-4030-bda2-67534d28d084 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:54:42 compute-0 nova_compute[253461]: 2025-11-22 03:54:42.418 253465 DEBUG oslo_concurrency.lockutils [None req-af6fc172-a50c-4451-97a2-a066f0051b58 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
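
Detach is the mirror image, but asynchronous: the device is dropped from the persistent config immediately, while the live detach only completes once libvirt emits a device-removed event for alias virtio-disk1 (here the event actually lands just before the driver starts waiting, so the wait returns at once). A sketch of the two-step detach, with the XML, including the PCI address, taken from the log:

    import libvirt

    DISK_XML = """<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-26d44156-0a54-4470-99a5-818c18e945e0">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <target dev="vdb" bus="virtio"/>
      <serial>26d44156-0a54-4470-99a5-818c18e945e0</serial>
      <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
    </disk>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('1ed5ef11-db1e-4030-bda2-67534d28d084')
    dom.detachDeviceFlags(
        DISK_XML,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    # Completion is signalled via VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED; the
    # "(1/8)" counter in the log shows Nova retries the live detach up to
    # eight times if that event never arrives.
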
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.169 253465 DEBUG oslo_concurrency.lockutils [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Acquiring lock "1ed5ef11-db1e-4030-bda2-67534d28d084" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.169 253465 DEBUG oslo_concurrency.lockutils [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.170 253465 DEBUG oslo_concurrency.lockutils [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Acquiring lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.170 253465 DEBUG oslo_concurrency.lockutils [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.170 253465 DEBUG oslo_concurrency.lockutils [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.171 253465 INFO nova.compute.manager [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Terminating instance
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.172 253465 DEBUG nova.compute.manager [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 03:54:43 compute-0 kernel: tap0192a6ee-e0 (unregistering): left promiscuous mode
Nov 22 03:54:43 compute-0 NetworkManager[48916]: <info>  [1763783683.2274] device (tap0192a6ee-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 03:54:43 compute-0 ovn_controller[152691]: 2025-11-22T03:54:43Z|00070|binding|INFO|Releasing lport 0192a6ee-e052-42ec-ba5d-39345610c279 from this chassis (sb_readonly=0)
Nov 22 03:54:43 compute-0 ovn_controller[152691]: 2025-11-22T03:54:43Z|00071|binding|INFO|Setting lport 0192a6ee-e052-42ec-ba5d-39345610c279 down in Southbound
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.282 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:43 compute-0 ovn_controller[152691]: 2025-11-22T03:54:43Z|00072|binding|INFO|Removing iface tap0192a6ee-e0 ovn-installed in OVS
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.284 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.302 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:43 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 22 03:54:43 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 15.211s CPU time.
Nov 22 03:54:43 compute-0 systemd-machined[215728]: Machine qemu-4-instance-00000004 terminated.
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.377 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:2f:20 10.100.0.10'], port_security=['fa:16:3e:7d:2f:20 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '1ed5ef11-db1e-4030-bda2-67534d28d084', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-088b40f3-90e0-4306-ab07-be0dfd55e4f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '72120bdf58ce486690a1373cf734f4d9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4d4a79a3-8cf0-4f3a-8d88-6b34e08377f2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bc03686c-7407-4c79-9f8a-d90d96b47d90, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=0192a6ee-e052-42ec-ba5d-39345610c279) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.379 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 0192a6ee-e052-42ec-ba5d-39345610c279 in datapath 088b40f3-90e0-4306-ab07-be0dfd55e4f4 unbound from our chassis
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.381 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 088b40f3-90e0-4306-ab07-be0dfd55e4f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.382 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9c27e7e8-f11a-49e2-8978-ffaf41d11b7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.382 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4 namespace which is not needed anymore
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.414 253465 INFO nova.virt.libvirt.driver [-] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Instance destroyed successfully.
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.414 253465 DEBUG nova.objects.instance [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lazy-loading 'resources' on Instance uuid 1ed5ef11-db1e-4030-bda2-67534d28d084 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.425 253465 DEBUG nova.virt.libvirt.vif [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T03:53:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-16801461',display_name='tempest-VolumesExtendAttachedTest-instance-16801461',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-16801461',id=4,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPk6mCo0ew4H3dCxLB1NJE5trOKL3afdpkxvu5h6EBFin9bk/PJWM29cyXCphBdJi6MaYjq7H3PGH/nsiHbgmOUIOKfv/uY0hu+mxaA6Y8nTiXyLeETLbkRHxDqZN/YXgA==',key_name='tempest-keypair-1370176256',keypairs=<?>,launch_index=0,launched_at=2025-11-22T03:54:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='72120bdf58ce486690a1373cf734f4d9',ramdisk_id='',reservation_id='r-nwdvz0a5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesExtendAttachedTest-1122643524',owner_user_name='tempest-VolumesExtendAttachedTest-1122643524-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T03:54:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='08c5b8ebe09040fbb8538108ea659e5c',uuid=1ed5ef11-db1e-4030-bda2-67534d28d084,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0192a6ee-e052-42ec-ba5d-39345610c279", "address": "fa:16:3e:7d:2f:20", "network": {"id": "088b40f3-90e0-4306-ab07-be0dfd55e4f4", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1554168595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72120bdf58ce486690a1373cf734f4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0192a6ee-e0", "ovs_interfaceid": "0192a6ee-e052-42ec-ba5d-39345610c279", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.425 253465 DEBUG nova.network.os_vif_util [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Converting VIF {"id": "0192a6ee-e052-42ec-ba5d-39345610c279", "address": "fa:16:3e:7d:2f:20", "network": {"id": "088b40f3-90e0-4306-ab07-be0dfd55e4f4", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-1554168595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "72120bdf58ce486690a1373cf734f4d9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0192a6ee-e0", "ovs_interfaceid": "0192a6ee-e052-42ec-ba5d-39345610c279", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.426 253465 DEBUG nova.network.os_vif_util [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:2f:20,bridge_name='br-int',has_traffic_filtering=True,id=0192a6ee-e052-42ec-ba5d-39345610c279,network=Network(088b40f3-90e0-4306-ab07-be0dfd55e4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0192a6ee-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.427 253465 DEBUG os_vif [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:2f:20,bridge_name='br-int',has_traffic_filtering=True,id=0192a6ee-e052-42ec-ba5d-39345610c279,network=Network(088b40f3-90e0-4306-ab07-be0dfd55e4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0192a6ee-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.429 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.429 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0192a6ee-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.430 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.431 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.434 253465 INFO os_vif [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:2f:20,bridge_name='br-int',has_traffic_filtering=True,id=0192a6ee-e052-42ec-ba5d-39345610c279,network=Network(088b40f3-90e0-4306-ab07-be0dfd55e4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0192a6ee-e0')
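
The DelPortCommand above is ovsdbapp's vsctl-style transaction against the local OVS database. Roughly the same operation as a standalone sketch, using ovsdbapp's documented connection helper (the socket path is an assumption; deployments vary):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    # Mirrors DelPortCommand(port=tap0192a6ee-e0, bridge=br-int, if_exists=True).
    api.del_port('tap0192a6ee-e0', bridge='br-int',
                 if_exists=True).execute(check_error=True)
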
Nov 22 03:54:43 compute-0 neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4[266077]: [NOTICE]   (266081) : haproxy version is 2.8.14-c23fe91
Nov 22 03:54:43 compute-0 neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4[266077]: [NOTICE]   (266081) : path to executable is /usr/sbin/haproxy
Nov 22 03:54:43 compute-0 neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4[266077]: [WARNING]  (266081) : Exiting Master process...
Nov 22 03:54:43 compute-0 neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4[266077]: [ALERT]    (266081) : Current worker (266083) exited with code 143 (Terminated)
Nov 22 03:54:43 compute-0 neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4[266077]: [WARNING]  (266081) : All workers exited. Exiting... (0)
Nov 22 03:54:43 compute-0 systemd[1]: libpod-b6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa.scope: Deactivated successfully.
Nov 22 03:54:43 compute-0 podman[266448]: 2025-11-22 03:54:43.555972485 +0000 UTC m=+0.052811621 container died b6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:54:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6f1c47fdce606c3378cd8eba7ada7cd092e9f32ccaff324e6743c11a6708505-merged.mount: Deactivated successfully.
Nov 22 03:54:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa-userdata-shm.mount: Deactivated successfully.
Nov 22 03:54:43 compute-0 podman[266448]: 2025-11-22 03:54:43.605289117 +0000 UTC m=+0.102128213 container cleanup b6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 03:54:43 compute-0 systemd[1]: libpod-conmon-b6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa.scope: Deactivated successfully.
Nov 22 03:54:43 compute-0 ceph-mon[75011]: pgmap v1089: 305 pgs: 305 active+clean; 167 MiB data, 357 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.1 MiB/s wr, 91 op/s
Nov 22 03:54:43 compute-0 podman[266479]: 2025-11-22 03:54:43.678539038 +0000 UTC m=+0.049265800 container remove b6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.689 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[86d21d12-ddfc-4a78-bc5b-c859586f2f47]: (4, ('Sat Nov 22 03:54:43 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4 (b6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa)\nb6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa\nSat Nov 22 03:54:43 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4 (b6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa)\nb6370ca0db652bfff589a078ccd48b8e1e1cb627b001893e7756b3a8563b31aa\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.691 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3df72e53-6bbf-4834-8365-0080b918a355]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.692 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap088b40f3-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:54:43 compute-0 kernel: tap088b40f3-90: left promiscuous mode
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.695 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.700 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e6032911-7cf6-416b-a0c7-58ebc538d2e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.712 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.719 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8d516226-c7ff-427e-b264-bc132421d98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.720 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb7c97c-2bd3-4e6d-a82f-1a6ddeb3bfb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.748 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[41d927ef-b808-4800-a502-d8fa45bdcc47]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389296, 'reachable_time': 21842, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266496, 'error': None, 'target': 'ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:43 compute-0 systemd[1]: run-netns-ovnmeta\x2d088b40f3\x2d90e0\x2d4306\x2dab07\x2dbe0dfd55e4f4.mount: Deactivated successfully.
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.752 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-088b40f3-90e0-4306-ab07-be0dfd55e4f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 03:54:43 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:54:43.752 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[a681778b-9338-4f29-9456-d119d5c74e66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.908 253465 INFO nova.virt.libvirt.driver [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Deleting instance files /var/lib/nova/instances/1ed5ef11-db1e-4030-bda2-67534d28d084_del
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.909 253465 INFO nova.virt.libvirt.driver [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Deletion of /var/lib/nova/instances/1ed5ef11-db1e-4030-bda2-67534d28d084_del complete
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.920 253465 DEBUG nova.compute.manager [req-73ed3895-faa9-4b2f-9f17-a2a808d1b4b8 req-ae82e002-085a-498a-a193-5dca33379c12 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Received event network-vif-unplugged-0192a6ee-e052-42ec-ba5d-39345610c279 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.920 253465 DEBUG oslo_concurrency.lockutils [req-73ed3895-faa9-4b2f-9f17-a2a808d1b4b8 req-ae82e002-085a-498a-a193-5dca33379c12 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.921 253465 DEBUG oslo_concurrency.lockutils [req-73ed3895-faa9-4b2f-9f17-a2a808d1b4b8 req-ae82e002-085a-498a-a193-5dca33379c12 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.921 253465 DEBUG oslo_concurrency.lockutils [req-73ed3895-faa9-4b2f-9f17-a2a808d1b4b8 req-ae82e002-085a-498a-a193-5dca33379c12 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.921 253465 DEBUG nova.compute.manager [req-73ed3895-faa9-4b2f-9f17-a2a808d1b4b8 req-ae82e002-085a-498a-a193-5dca33379c12 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] No waiting events found dispatching network-vif-unplugged-0192a6ee-e052-42ec-ba5d-39345610c279 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.922 253465 DEBUG nova.compute.manager [req-73ed3895-faa9-4b2f-9f17-a2a808d1b4b8 req-ae82e002-085a-498a-a193-5dca33379c12 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Received event network-vif-unplugged-0192a6ee-e052-42ec-ba5d-39345610c279 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.967 253465 INFO nova.compute.manager [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.967 253465 DEBUG oslo.service.loopingcall [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.967 253465 DEBUG nova.compute.manager [-] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 03:54:43 compute-0 nova_compute[253461]: 2025-11-22 03:54:43.968 253465 DEBUG nova.network.neutron [-] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 03:54:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 167 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.2 KiB/s wr, 52 op/s
Nov 22 03:54:45 compute-0 nova_compute[253461]: 2025-11-22 03:54:45.043 253465 DEBUG nova.network.neutron [-] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:54:45 compute-0 nova_compute[253461]: 2025-11-22 03:54:45.173 253465 INFO nova.compute.manager [-] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Took 1.21 seconds to deallocate network for instance.
Nov 22 03:54:45 compute-0 nova_compute[253461]: 2025-11-22 03:54:45.292 253465 DEBUG nova.compute.manager [req-ea71ab1c-1b17-4a00-90a5-837abf0f06f4 req-cdca379a-4a4f-4dcf-87a3-c500617a5d48 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Received event network-vif-deleted-0192a6ee-e052-42ec-ba5d-39345610c279 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:54:45 compute-0 nova_compute[253461]: 2025-11-22 03:54:45.298 253465 DEBUG oslo_concurrency.lockutils [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:45 compute-0 nova_compute[253461]: 2025-11-22 03:54:45.299 253465 DEBUG oslo_concurrency.lockutils [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:45 compute-0 nova_compute[253461]: 2025-11-22 03:54:45.356 253465 DEBUG oslo_concurrency.processutils [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:54:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:45 compute-0 ceph-mon[75011]: pgmap v1090: 305 pgs: 305 active+clean; 167 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.2 KiB/s wr, 52 op/s
Nov 22 03:54:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:54:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2678987897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:54:45 compute-0 nova_compute[253461]: 2025-11-22 03:54:45.867 253465 DEBUG oslo_concurrency.processutils [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:54:45 compute-0 nova_compute[253461]: 2025-11-22 03:54:45.873 253465 DEBUG nova.compute.provider_tree [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:54:45 compute-0 nova_compute[253461]: 2025-11-22 03:54:45.894 253465 DEBUG nova.scheduler.client.report [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:54:45 compute-0 nova_compute[253461]: 2025-11-22 03:54:45.922 253465 DEBUG oslo_concurrency.lockutils [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:45 compute-0 nova_compute[253461]: 2025-11-22 03:54:45.949 253465 INFO nova.scheduler.client.report [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Deleted allocations for instance 1ed5ef11-db1e-4030-bda2-67534d28d084
Nov 22 03:54:46 compute-0 nova_compute[253461]: 2025-11-22 03:54:46.006 253465 DEBUG nova.compute.manager [req-d694756e-1303-4fa6-bcff-ab8131f353f7 req-237414d4-4ae5-460a-b6ce-5d12f9e8efcc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Received event network-vif-plugged-0192a6ee-e052-42ec-ba5d-39345610c279 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:54:46 compute-0 nova_compute[253461]: 2025-11-22 03:54:46.006 253465 DEBUG oslo_concurrency.lockutils [req-d694756e-1303-4fa6-bcff-ab8131f353f7 req-237414d4-4ae5-460a-b6ce-5d12f9e8efcc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:46 compute-0 nova_compute[253461]: 2025-11-22 03:54:46.007 253465 DEBUG oslo_concurrency.lockutils [req-d694756e-1303-4fa6-bcff-ab8131f353f7 req-237414d4-4ae5-460a-b6ce-5d12f9e8efcc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:46 compute-0 nova_compute[253461]: 2025-11-22 03:54:46.008 253465 DEBUG oslo_concurrency.lockutils [req-d694756e-1303-4fa6-bcff-ab8131f353f7 req-237414d4-4ae5-460a-b6ce-5d12f9e8efcc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:46 compute-0 nova_compute[253461]: 2025-11-22 03:54:46.008 253465 DEBUG nova.compute.manager [req-d694756e-1303-4fa6-bcff-ab8131f353f7 req-237414d4-4ae5-460a-b6ce-5d12f9e8efcc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] No waiting events found dispatching network-vif-plugged-0192a6ee-e052-42ec-ba5d-39345610c279 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:54:46 compute-0 nova_compute[253461]: 2025-11-22 03:54:46.008 253465 WARNING nova.compute.manager [req-d694756e-1303-4fa6-bcff-ab8131f353f7 req-237414d4-4ae5-460a-b6ce-5d12f9e8efcc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Received unexpected event network-vif-plugged-0192a6ee-e052-42ec-ba5d-39345610c279 for instance with vm_state deleted and task_state None.
Nov 22 03:54:46 compute-0 nova_compute[253461]: 2025-11-22 03:54:46.023 253465 DEBUG oslo_concurrency.lockutils [None req-3afc98c6-44a8-4aff-842b-55ff8bbc1b5b 08c5b8ebe09040fbb8538108ea659e5c 72120bdf58ce486690a1373cf734f4d9 - - default default] Lock "1ed5ef11-db1e-4030-bda2-67534d28d084" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 152 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.2 KiB/s wr, 59 op/s
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005972439278504917 of space, bias 1.0, pg target 0.1791731783551475 quantized to 32 (current 32)
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003465693576219314 of space, bias 1.0, pg target 0.10397080728657941 quantized to 32 (current 32)
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:54:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:54:46 compute-0 nova_compute[253461]: 2025-11-22 03:54:46.296 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2678987897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:54:46 compute-0 nova_compute[253461]: 2025-11-22 03:54:46.957 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:47 compute-0 ceph-mon[75011]: pgmap v1091: 305 pgs: 305 active+clean; 152 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 4.2 KiB/s wr, 59 op/s
Nov 22 03:54:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 3.6 KiB/s wr, 44 op/s
Nov 22 03:54:48 compute-0 nova_compute[253461]: 2025-11-22 03:54:48.433 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:48 compute-0 ceph-mon[75011]: pgmap v1092: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 3.6 KiB/s wr, 44 op/s
Nov 22 03:54:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:54:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4172291281' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:54:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:54:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4172291281' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:54:49 compute-0 nova_compute[253461]: 2025-11-22 03:54:49.249 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:49 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4172291281' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:54:49 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4172291281' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:54:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 3.6 KiB/s wr, 44 op/s
Nov 22 03:54:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:50 compute-0 ceph-mon[75011]: pgmap v1093: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 3.6 KiB/s wr, 44 op/s
Nov 22 03:54:51 compute-0 nova_compute[253461]: 2025-11-22 03:54:51.960 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.6 KiB/s wr, 53 op/s
Nov 22 03:54:52 compute-0 nova_compute[253461]: 2025-11-22 03:54:52.615 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:52 compute-0 nova_compute[253461]: 2025-11-22 03:54:52.860 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:53 compute-0 ceph-mon[75011]: pgmap v1094: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.6 KiB/s wr, 53 op/s
Nov 22 03:54:53 compute-0 nova_compute[253461]: 2025-11-22 03:54:53.435 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.4 KiB/s wr, 42 op/s
Nov 22 03:54:54 compute-0 nova_compute[253461]: 2025-11-22 03:54:54.960 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:54 compute-0 nova_compute[253461]: 2025-11-22 03:54:54.960 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:54 compute-0 nova_compute[253461]: 2025-11-22 03:54:54.986 253465 DEBUG nova.compute.manager [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.087 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.088 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.096 253465 DEBUG nova.virt.hardware [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.097 253465 INFO nova.compute.claims [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Claim successful on node compute-0.ctlplane.example.com
Nov 22 03:54:55 compute-0 ceph-mon[75011]: pgmap v1095: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 3.4 KiB/s wr, 42 op/s
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.242 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:54:55 compute-0 podman[266522]: 2025-11-22 03:54:55.437024085 +0000 UTC m=+0.101778552 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 03:54:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:54:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:54:55 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712928992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.750 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.757 253465 DEBUG nova.compute.provider_tree [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.776 253465 DEBUG nova.scheduler.client.report [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.805 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.806 253465 DEBUG nova.compute.manager [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.864 253465 DEBUG nova.compute.manager [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.864 253465 DEBUG nova.network.neutron [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.885 253465 INFO nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 03:54:55 compute-0 nova_compute[253461]: 2025-11-22 03:54:55.902 253465 DEBUG nova.compute.manager [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.000 253465 DEBUG nova.compute.manager [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.001 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.002 253465 INFO nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Creating image(s)
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.025 253465 DEBUG nova.storage.rbd_utils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image aba99a86-7eb3-4b04-b0a1-af00510f151c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.050 253465 DEBUG nova.storage.rbd_utils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image aba99a86-7eb3-4b04-b0a1-af00510f151c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.078 253465 DEBUG nova.storage.rbd_utils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image aba99a86-7eb3-4b04-b0a1-af00510f151c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.083 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.110 253465 DEBUG nova.policy [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e45192a50149470daea6e26cfd2de3a9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8e17fcbd6721457f93b2fe5018fb3534', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 03:54:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.176 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.177 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.178 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.178 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.203 253465 DEBUG nova.storage.rbd_utils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image aba99a86-7eb3-4b04-b0a1-af00510f151c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.207 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d aba99a86-7eb3-4b04-b0a1-af00510f151c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:54:56 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/712928992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.566 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d aba99a86-7eb3-4b04-b0a1-af00510f151c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.359s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.645 253465 DEBUG nova.storage.rbd_utils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] resizing rbd image aba99a86-7eb3-4b04-b0a1-af00510f151c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.784 253465 DEBUG nova.objects.instance [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lazy-loading 'migration_context' on Instance uuid aba99a86-7eb3-4b04-b0a1-af00510f151c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.803 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.803 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Ensure instance console log exists: /var/lib/nova/instances/aba99a86-7eb3-4b04-b0a1-af00510f151c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.804 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.805 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.805 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.810 253465 DEBUG nova.network.neutron [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Successfully created port: ef4eaf41-39f7-49af-bd4c-c963bd03246b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 03:54:56 compute-0 nova_compute[253461]: 2025-11-22 03:54:56.962 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:57 compute-0 ceph-mon[75011]: pgmap v1096: 305 pgs: 305 active+clean; 88 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.7 KiB/s wr, 42 op/s
Nov 22 03:54:57 compute-0 nova_compute[253461]: 2025-11-22 03:54:57.570 253465 DEBUG nova.network.neutron [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Successfully updated port: ef4eaf41-39f7-49af-bd4c-c963bd03246b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 03:54:57 compute-0 nova_compute[253461]: 2025-11-22 03:54:57.590 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "refresh_cache-aba99a86-7eb3-4b04-b0a1-af00510f151c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:54:57 compute-0 nova_compute[253461]: 2025-11-22 03:54:57.591 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquired lock "refresh_cache-aba99a86-7eb3-4b04-b0a1-af00510f151c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:54:57 compute-0 nova_compute[253461]: 2025-11-22 03:54:57.591 253465 DEBUG nova.network.neutron [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 03:54:57 compute-0 nova_compute[253461]: 2025-11-22 03:54:57.748 253465 DEBUG nova.compute.manager [req-6447248b-f58a-4010-b8a3-382287e77e88 req-251368d2-3523-49d0-8aea-1c905b4d3d22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Received event network-changed-ef4eaf41-39f7-49af-bd4c-c963bd03246b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:54:57 compute-0 nova_compute[253461]: 2025-11-22 03:54:57.749 253465 DEBUG nova.compute.manager [req-6447248b-f58a-4010-b8a3-382287e77e88 req-251368d2-3523-49d0-8aea-1c905b4d3d22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Refreshing instance network info cache due to event network-changed-ef4eaf41-39f7-49af-bd4c-c963bd03246b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:54:57 compute-0 nova_compute[253461]: 2025-11-22 03:54:57.750 253465 DEBUG oslo_concurrency.lockutils [req-6447248b-f58a-4010-b8a3-382287e77e88 req-251368d2-3523-49d0-8aea-1c905b4d3d22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-aba99a86-7eb3-4b04-b0a1-af00510f151c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:54:57 compute-0 nova_compute[253461]: 2025-11-22 03:54:57.778 253465 DEBUG nova.network.neutron [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 03:54:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 110 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1015 KiB/s wr, 50 op/s
Nov 22 03:54:58 compute-0 nova_compute[253461]: 2025-11-22 03:54:58.412 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763783683.4115014, 1ed5ef11-db1e-4030-bda2-67534d28d084 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:54:58 compute-0 nova_compute[253461]: 2025-11-22 03:54:58.413 253465 INFO nova.compute.manager [-] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] VM Stopped (Lifecycle Event)
Nov 22 03:54:58 compute-0 nova_compute[253461]: 2025-11-22 03:54:58.431 253465 DEBUG nova.compute.manager [None req-3f8f293f-bf4c-4fde-a27e-c83b158816d8 - - - - - -] [instance: 1ed5ef11-db1e-4030-bda2-67534d28d084] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:54:58 compute-0 nova_compute[253461]: 2025-11-22 03:54:58.439 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.112 253465 DEBUG nova.network.neutron [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Updating instance_info_cache with network_info: [{"id": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "address": "fa:16:3e:23:f5:c4", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef4eaf41-39", "ovs_interfaceid": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.160 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Releasing lock "refresh_cache-aba99a86-7eb3-4b04-b0a1-af00510f151c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.160 253465 DEBUG nova.compute.manager [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Instance network_info: |[{"id": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "address": "fa:16:3e:23:f5:c4", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef4eaf41-39", "ovs_interfaceid": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.161 253465 DEBUG oslo_concurrency.lockutils [req-6447248b-f58a-4010-b8a3-382287e77e88 req-251368d2-3523-49d0-8aea-1c905b4d3d22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-aba99a86-7eb3-4b04-b0a1-af00510f151c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.161 253465 DEBUG nova.network.neutron [req-6447248b-f58a-4010-b8a3-382287e77e88 req-251368d2-3523-49d0-8aea-1c905b4d3d22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Refreshing network info cache for port ef4eaf41-39f7-49af-bd4c-c963bd03246b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.163 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Start _get_guest_xml network_info=[{"id": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "address": "fa:16:3e:23:f5:c4", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef4eaf41-39", "ovs_interfaceid": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.167 253465 WARNING nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.172 253465 DEBUG nova.virt.libvirt.host [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.172 253465 DEBUG nova.virt.libvirt.host [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.174 253465 DEBUG nova.virt.libvirt.host [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.175 253465 DEBUG nova.virt.libvirt.host [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
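The pair of probes above shows the host failing the cgroup-v1 check ("CPU controller missing") and passing the v2 one, as expected on a unified-hierarchy EL9 host. A stand-alone sketch of the v2 half, assuming the standard mount point (the path is the kernel's usual one, not taken from this log):

    # On a cgroup-v2 host the kernel lists the available controllers,
    # space-separated, in a single file, e.g. "cpuset cpu io memory pids".
    def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
        try:
            with open(path) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted; not a v2 host

    print(has_cgroupsv2_cpu_controller())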
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.175 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.175 253465 DEBUG nova.virt.hardware [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.176 253465 DEBUG nova.virt.hardware [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.176 253465 DEBUG nova.virt.hardware [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.176 253465 DEBUG nova.virt.hardware [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.176 253465 DEBUG nova.virt.hardware [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.176 253465 DEBUG nova.virt.hardware [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.177 253465 DEBUG nova.virt.hardware [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.177 253465 DEBUG nova.virt.hardware [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.177 253465 DEBUG nova.virt.hardware [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.177 253465 DEBUG nova.virt.hardware [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.177 253465 DEBUG nova.virt.hardware [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
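With all flavor and image preferences at 0:0:0, the topology search above degenerates to enumerating every sockets*cores*threads factorisation of the vCPU count under the 65536 per-dimension caps; for 1 vCPU the only candidate is 1:1:1. A rough illustration of that enumeration (the arithmetic the log reports, not nova's actual implementation):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every (sockets, cores, threads) whose product is exactly the
        # vCPU count and which fits under the per-dimension limits.
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log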
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.180 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:54:59 compute-0 ceph-mon[75011]: pgmap v1097: 305 pgs: 305 active+clean; 110 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1015 KiB/s wr, 50 op/s
Nov 22 03:54:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:54:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219036710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.678 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
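The half-second `ceph mon dump` above is how the libvirt RBD backend discovers monitor endpoints before writing <host> elements into the disk XML. A sketch running the same command and reading the addresses back, assuming the usual "mons" list in the JSON output (field names can vary across Ceph releases):

    import json
    import subprocess

    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True,
                         check=True).stdout

    # Assumed layout: {"mons": [{"name": ..., "addr": "ip:port/nonce"}, ...]}
    for mon in json.loads(out)["mons"]:
        print(mon["name"], mon.get("addr"))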
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.712 253465 DEBUG nova.storage.rbd_utils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image aba99a86-7eb3-4b04-b0a1-af00510f151c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
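rbd_utils reports the _disk.config image as absent before nova creates it; internally it uses the librbd Python bindings, but the same existence check can be reproduced from the CLI via the `rbd info` exit status (pool, image name and credentials taken from the surrounding log lines):

    import subprocess

    def rbd_image_exists(pool, image, client_id="openstack",
                         conf="/etc/ceph/ceph.conf"):
        # `rbd info` exits non-zero when the image does not exist.
        res = subprocess.run(["rbd", "info", f"{pool}/{image}",
                              "--id", client_id, "--conf", conf],
                             capture_output=True)
        return res.returncode == 0

    print(rbd_image_exists(
        "vms", "aba99a86-7eb3-4b04-b0a1-af00510f151c_disk.config"))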
Nov 22 03:54:59 compute-0 nova_compute[253461]: 2025-11-22 03:54:59.717 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:54:59 compute-0 sudo[266770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:54:59 compute-0 sudo[266770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:54:59 compute-0 sudo[266770]: pam_unix(sudo:session): session closed for user root
Nov 22 03:54:59 compute-0 sudo[266795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:54:59 compute-0 sudo[266795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:54:59 compute-0 sudo[266795]: pam_unix(sudo:session): session closed for user root
Nov 22 03:54:59 compute-0 sudo[266839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:55:00 compute-0 sudo[266839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:00 compute-0 sudo[266839]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:00 compute-0 sudo[266864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:55:00 compute-0 sudo[266864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 110 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1014 KiB/s wr, 35 op/s
Nov 22 03:55:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:55:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/679113420' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:55:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/679113420' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:55:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3601974817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.221 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.225 253465 DEBUG nova.virt.libvirt.vif [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1711662817',display_name='tempest-VolumesBackupsTest-instance-1711662817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1711662817',id=5,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHwXsHHy1xGYZXLg+LsUn3gWIkTsp4l4HY0nvRL+dD6i2yij/zKCBTxuexKjTFl9PGA59sNZ0i5+2v/21gKSsKAKbtEmi3JvcZN1AnAPr5oFBuv+gPNCQ9f9WOOcd/UJDg==',key_name='tempest-keypair-1336013874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e17fcbd6721457f93b2fe5018fb3534',ramdisk_id='',reservation_id='r-5q76p7u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-922932240',owner_user_name='tempest-VolumesBackupsTest-922932240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:54:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e45192a50149470daea6e26cfd2de3a9',uuid=aba99a86-7eb3-4b04-b0a1-af00510f151c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "address": "fa:16:3e:23:f5:c4", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef4eaf41-39", "ovs_interfaceid": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.226 253465 DEBUG nova.network.os_vif_util [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Converting VIF {"id": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "address": "fa:16:3e:23:f5:c4", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef4eaf41-39", "ovs_interfaceid": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.227 253465 DEBUG nova.network.os_vif_util [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:f5:c4,bridge_name='br-int',has_traffic_filtering=True,id=ef4eaf41-39f7-49af-bd4c-c963bd03246b,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef4eaf41-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
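nova_to_osvif_vif above maps the legacy VIF dict onto a typed os-vif object (VIFOpenVSwitch). A toy illustration of that mapping; the dataclass here is purely illustrative, not the real os_vif.objects API:

    from dataclasses import dataclass

    @dataclass
    class ToyVIFOpenVSwitch:  # illustrative stand-in, not os_vif's class
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool

    def nova_vif_to_toy(vif: dict) -> ToyVIFOpenVSwitch:
        return ToyVIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=vif["details"]["bridge_name"],
            vif_name=vif["devname"],
            active=vif["active"],
        )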
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.229 253465 DEBUG nova.objects.instance [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lazy-loading 'pci_devices' on Instance uuid aba99a86-7eb3-4b04-b0a1-af00510f151c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:55:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1219036710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/679113420' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/679113420' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3601974817' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.314 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 03:55:00 compute-0 nova_compute[253461]:   <uuid>aba99a86-7eb3-4b04-b0a1-af00510f151c</uuid>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   <name>instance-00000005</name>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   <metadata>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <nova:name>tempest-VolumesBackupsTest-instance-1711662817</nova:name>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 03:54:59</nova:creationTime>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 03:55:00 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 03:55:00 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 03:55:00 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 03:55:00 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 03:55:00 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 03:55:00 compute-0 nova_compute[253461]:         <nova:user uuid="e45192a50149470daea6e26cfd2de3a9">tempest-VolumesBackupsTest-922932240-project-member</nova:user>
Nov 22 03:55:00 compute-0 nova_compute[253461]:         <nova:project uuid="8e17fcbd6721457f93b2fe5018fb3534">tempest-VolumesBackupsTest-922932240</nova:project>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 03:55:00 compute-0 nova_compute[253461]:         <nova:port uuid="ef4eaf41-39f7-49af-bd4c-c963bd03246b">
Nov 22 03:55:00 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   </metadata>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <system>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <entry name="serial">aba99a86-7eb3-4b04-b0a1-af00510f151c</entry>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <entry name="uuid">aba99a86-7eb3-4b04-b0a1-af00510f151c</entry>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     </system>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   <os>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   </os>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   <features>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <apic/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   </features>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   </clock>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/aba99a86-7eb3-4b04-b0a1-af00510f151c_disk">
Nov 22 03:55:00 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       </source>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:55:00 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/aba99a86-7eb3-4b04-b0a1-af00510f151c_disk.config">
Nov 22 03:55:00 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       </source>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:55:00 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:23:f5:c4"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <target dev="tapef4eaf41-39"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/aba99a86-7eb3-4b04-b0a1-af00510f151c/console.log" append="off"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     </serial>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <video>
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     </video>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 03:55:00 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 03:55:00 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 03:55:00 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:55:00 compute-0 nova_compute[253461]: </domain>
Nov 22 03:55:00 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
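The domain XML dumped above is ordinary libvirt XML (plus nova's metadata namespace); both the root disk and the config drive are network disks backed by RBD. A short sketch for pulling the RBD sources and monitor endpoints back out of a saved copy, assuming the XML has been written to domain.xml:

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            hosts = [f"{h.get('name')}:{h.get('port')}"
                     for h in src.findall("host")]
            print(src.get("name"), hosts)
    # vms/aba99a86-7eb3-4b04-b0a1-af00510f151c_disk ['192.168.122.100:6789']
    # vms/aba99a86-7eb3-4b04-b0a1-af00510f151c_disk.config ['192.168.122.100:6789']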
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.314 253465 DEBUG nova.compute.manager [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Preparing to wait for external event network-vif-plugged-ef4eaf41-39f7-49af-bd4c-c963bd03246b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.314 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.315 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.315 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
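The acquire/release pair above serialises event registration on a per-instance named lock ("<uuid>-events"). oslo.concurrency implements named locks with fairness and optional file backing; conceptually it is close to a process-local dict of threading locks, as in this simplified sketch:

    import threading
    from collections import defaultdict

    _locks = defaultdict(threading.Lock)  # lock name -> lock, made on demand

    def synchronized(name: str) -> threading.Lock:
        return _locks[name]  # one shared lock object per name

    with synchronized("aba99a86-7eb3-4b04-b0a1-af00510f151c-events"):
        pass  # create-or-get the expected event entry under the lock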
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.315 253465 DEBUG nova.virt.libvirt.vif [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1711662817',display_name='tempest-VolumesBackupsTest-instance-1711662817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1711662817',id=5,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHwXsHHy1xGYZXLg+LsUn3gWIkTsp4l4HY0nvRL+dD6i2yij/zKCBTxuexKjTFl9PGA59sNZ0i5+2v/21gKSsKAKbtEmi3JvcZN1AnAPr5oFBuv+gPNCQ9f9WOOcd/UJDg==',key_name='tempest-keypair-1336013874',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e17fcbd6721457f93b2fe5018fb3534',ramdisk_id='',reservation_id='r-5q76p7u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-922932240',owner_user_name='tempest-VolumesBackupsTest-922932240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:54:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e45192a50149470daea6e26cfd2de3a9',uuid=aba99a86-7eb3-4b04-b0a1-af00510f151c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "address": "fa:16:3e:23:f5:c4", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef4eaf41-39", "ovs_interfaceid": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.316 253465 DEBUG nova.network.os_vif_util [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Converting VIF {"id": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "address": "fa:16:3e:23:f5:c4", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef4eaf41-39", "ovs_interfaceid": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.316 253465 DEBUG nova.network.os_vif_util [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:23:f5:c4,bridge_name='br-int',has_traffic_filtering=True,id=ef4eaf41-39f7-49af-bd4c-c963bd03246b,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef4eaf41-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.316 253465 DEBUG os_vif [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:f5:c4,bridge_name='br-int',has_traffic_filtering=True,id=ef4eaf41-39f7-49af-bd4c-c963bd03246b,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef4eaf41-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.317 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.317 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.317 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.319 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.319 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef4eaf41-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.320 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef4eaf41-39, col_values=(('external_ids', {'iface-id': 'ef4eaf41-39f7-49af-bd4c-c963bd03246b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:23:f5:c4', 'vm-uuid': 'aba99a86-7eb3-4b04-b0a1-af00510f151c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.321 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:00 compute-0 NetworkManager[48916]: <info>  [1763783700.3228] manager: (tapef4eaf41-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.323 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.328 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.329 253465 INFO os_vif [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:23:f5:c4,bridge_name='br-int',has_traffic_filtering=True,id=ef4eaf41-39f7-49af-bd4c-c963bd03246b,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef4eaf41-39')
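The three IDL transactions above (AddBridgeCommand, AddPortCommand, DbSetCommand) are the ovsdbapp equivalent of the classic ovs-vsctl plug sequence. A sketch shelling out to ovs-vsctl instead of speaking OVSDB directly, with the same bridge, port and external_ids as in the log:

    import subprocess

    def plug_ovs_port(bridge, port, iface_id, mac, vm_uuid):
        # AddBridgeCommand(may_exist=True, datapath_type=system)
        subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge, "--",
                        "set", "Bridge", bridge, "datapath_type=system"],
                       check=True)
        # AddPortCommand(may_exist=True) + DbSetCommand on the Interface row
        subprocess.run(["ovs-vsctl", "--may-exist", "add-port", bridge, port,
                        "--", "set", "Interface", port,
                        f"external_ids:iface-id={iface_id}",
                        "external_ids:iface-status=active",
                        f"external_ids:attached-mac={mac}",
                        f"external_ids:vm-uuid={vm_uuid}"],
                       check=True)

    plug_ovs_port("br-int", "tapef4eaf41-39",
                  "ef4eaf41-39f7-49af-bd4c-c963bd03246b",
                  "fa:16:3e:23:f5:c4",
                  "aba99a86-7eb3-4b04-b0a1-af00510f151c")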
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.399 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.399 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.400 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No VIF found with MAC fa:16:3e:23:f5:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.401 253465 INFO nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Using config drive
Nov 22 03:55:00 compute-0 nova_compute[253461]: 2025-11-22 03:55:00.431 253465 DEBUG nova.storage.rbd_utils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image aba99a86-7eb3-4b04-b0a1-af00510f151c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:00 compute-0 sudo[266864]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:00 compute-0 sudo[266943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:55:00 compute-0 sudo[266943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:00 compute-0 sudo[266943]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:00 compute-0 sudo[266968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:55:00 compute-0 sudo[266968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:00 compute-0 sudo[266968]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:00 compute-0 sudo[266993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:55:00 compute-0 sudo[266993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:00 compute-0 sudo[266993]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:00 compute-0 sudo[267018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 list-networks
Nov 22 03:55:00 compute-0 sudo[267018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.004 253465 INFO nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Creating config drive at /var/lib/nova/instances/aba99a86-7eb3-4b04-b0a1-af00510f151c/disk.config
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.011 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aba99a86-7eb3-4b04-b0a1-af00510f151c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6o0kn63r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.032 253465 DEBUG nova.network.neutron [req-6447248b-f58a-4010-b8a3-382287e77e88 req-251368d2-3523-49d0-8aea-1c905b4d3d22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Updated VIF entry in instance network info cache for port ef4eaf41-39f7-49af-bd4c-c963bd03246b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.032 253465 DEBUG nova.network.neutron [req-6447248b-f58a-4010-b8a3-382287e77e88 req-251368d2-3523-49d0-8aea-1c905b4d3d22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Updating instance_info_cache with network_info: [{"id": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "address": "fa:16:3e:23:f5:c4", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef4eaf41-39", "ovs_interfaceid": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.053 253465 DEBUG oslo_concurrency.lockutils [req-6447248b-f58a-4010-b8a3-382287e77e88 req-251368d2-3523-49d0-8aea-1c905b4d3d22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-aba99a86-7eb3-4b04-b0a1-af00510f151c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.144 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aba99a86-7eb3-4b04-b0a1-af00510f151c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6o0kn63r" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
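The config drive is a plain ISO 9660/Joliet image labelled config-2, built from a temporary directory of metadata files. Note that processutils logs argv joined with spaces, so the unquoted-looking publisher above is really a single argument; the same invocation as a proper argv list:

    import subprocess

    argv = [
        "/usr/bin/mkisofs",
        "-o", "/var/lib/nova/instances/"
              "aba99a86-7eb3-4b04-b0a1-af00510f151c/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmp6o0kn63r",  # metadata dir from the log; ephemeral per run
    ]
    subprocess.run(argv, check=True)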
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.172 253465 DEBUG nova.storage.rbd_utils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image aba99a86-7eb3-4b04-b0a1-af00510f151c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.179 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aba99a86-7eb3-4b04-b0a1-af00510f151c/disk.config aba99a86-7eb3-4b04-b0a1-af00510f151c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:01 compute-0 sudo[267018]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:55:01 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:55:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:55:01 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:55:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:55:01 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:55:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:55:01 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:55:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:55:01 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:55:01 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev c464e2dc-c1af-4ca8-a7a4-0b38b529d310 does not exist
Nov 22 03:55:01 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev a1d1d5d9-4be4-4cc1-a4c6-4fc7fdccea4d does not exist
Nov 22 03:55:01 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 83f5f2c5-f8c2-4366-8ee5-5d4dc8033936 does not exist
Nov 22 03:55:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:55:01 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:55:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:55:01 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:55:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:55:01 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:55:01 compute-0 ceph-mon[75011]: pgmap v1098: 305 pgs: 305 active+clean; 110 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1014 KiB/s wr, 35 op/s
Nov 22 03:55:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:55:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:55:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:55:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:55:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:55:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:55:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:55:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:55:01 compute-0 sudo[267099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:55:01 compute-0 sudo[267099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:01 compute-0 sudo[267099]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.361 253465 DEBUG oslo_concurrency.processutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aba99a86-7eb3-4b04-b0a1-af00510f151c/disk.config aba99a86-7eb3-4b04-b0a1-af00510f151c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.362 253465 INFO nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Deleting local config drive /var/lib/nova/instances/aba99a86-7eb3-4b04-b0a1-af00510f151c/disk.config because it was imported into RBD.
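On RBD-backed storage nova builds the config-drive ISO locally, imports it into the vms pool, and then deletes the local copy, exactly as the two lines above record. The same sequence by hand, with paths and IDs taken from the log:

    import os
    import subprocess

    local = ("/var/lib/nova/instances/"
             "aba99a86-7eb3-4b04-b0a1-af00510f151c/disk.config")
    subprocess.run(["rbd", "import", "--pool", "vms", local,
                    "aba99a86-7eb3-4b04-b0a1-af00510f151c_disk.config",
                    "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)
    os.unlink(local)  # drop the local ISO now that it lives in RBD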
Nov 22 03:55:01 compute-0 sudo[267127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:55:01 compute-0 sudo[267127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:01 compute-0 sudo[267127]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:01 compute-0 kernel: tapef4eaf41-39: entered promiscuous mode
Nov 22 03:55:01 compute-0 NetworkManager[48916]: <info>  [1763783701.4423] manager: (tapef4eaf41-39): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Nov 22 03:55:01 compute-0 ovn_controller[152691]: 2025-11-22T03:55:01Z|00073|binding|INFO|Claiming lport ef4eaf41-39f7-49af-bd4c-c963bd03246b for this chassis.
Nov 22 03:55:01 compute-0 ovn_controller[152691]: 2025-11-22T03:55:01Z|00074|binding|INFO|ef4eaf41-39f7-49af-bd4c-c963bd03246b: Claiming fa:16:3e:23:f5:c4 10.100.0.3
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.442 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.449 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.456 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.466 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:f5:c4 10.100.0.3'], port_security=['fa:16:3e:23:f5:c4 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'aba99a86-7eb3-4b04-b0a1-af00510f151c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e17fcbd6721457f93b2fe5018fb3534', 'neutron:revision_number': '2', 'neutron:security_group_ids': '92e09f3d-f050-4dc2-85f5-d034b683dde7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d20e9a4c-63a4-481f-abc2-5dcc033feed1, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=ef4eaf41-39f7-49af-bd4c-c963bd03246b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.468 162689 INFO neutron.agent.ovn.metadata.agent [-] Port ef4eaf41-39f7-49af-bd4c-c963bd03246b in datapath b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb bound to our chassis
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.470 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.489 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a6daf6-77a8-4860-b265-1d21bf8ee9df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.490 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb33063bb-91 in ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 03:55:01 compute-0 systemd-machined[215728]: New machine qemu-5-instance-00000005.
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.493 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb33063bb-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.494 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed1d98b-85ec-46cb-8ee9-6d81a7a53528]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.495 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[26bfc43f-b1b8-439c-94a7-bf2bfdddd8ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.517 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[527be19c-9ad0-4f91-b2ef-40aa9010cc16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 22 03:55:01 compute-0 sudo[267159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:55:01 compute-0 sudo[267159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:01 compute-0 sudo[267159]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:01 compute-0 systemd-udevd[267192]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:55:01 compute-0 NetworkManager[48916]: <info>  [1763783701.5523] device (tapef4eaf41-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.550 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[47a8bbb7-1763-4628-9860-57b3f43123ae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 NetworkManager[48916]: <info>  [1763783701.5537] device (tapef4eaf41-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.557 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:01 compute-0 ovn_controller[152691]: 2025-11-22T03:55:01Z|00075|binding|INFO|Setting lport ef4eaf41-39f7-49af-bd4c-c963bd03246b ovn-installed in OVS
Nov 22 03:55:01 compute-0 ovn_controller[152691]: 2025-11-22T03:55:01Z|00076|binding|INFO|Setting lport ef4eaf41-39f7-49af-bd4c-c963bd03246b up in Southbound
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.583 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[e8080201-f755-415d-a98c-7db31076af35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.615 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:01 compute-0 NetworkManager[48916]: <info>  [1763783701.6223] manager: (tapb33063bb-90): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.622 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e23f337d-90a8-4260-94e1-8b457f0b1dc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.667 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5ab881-413e-468a-9402-0b0e77a76fa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.672 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[4c534e27-0172-420d-9b68-8d5f413bcbcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 sudo[267195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:55:01 compute-0 sudo[267195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:01 compute-0 NetworkManager[48916]: <info>  [1763783701.6979] device (tapb33063bb-90): carrier: link connected
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.704 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[f086d7e1-3daa-4ad9-83be-9df731dd01dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.723 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a8fa77-31f4-465a-9fa7-ed501b13bfd2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb33063bb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:2c:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 395429, 'reachable_time': 19688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267250, 'error': None, 'target': 'ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.744 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c1672c3d-e649-4842-afd8-6ce206406c4a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:2c47'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 395429, 'tstamp': 395429}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267251, 'error': None, 'target': 'ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.765 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c21ec140-ea35-4826-a7fb-1880ef01a153]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb33063bb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:2c:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 395429, 'reachable_time': 19688, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267252, 'error': None, 'target': 'ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.803 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[cefd6428-7f4e-4623-9961-2f3f6de6b707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.861 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3a6399b7-13b2-475c-a374-0d5b9cbefbfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.862 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb33063bb-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.863 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.864 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb33063bb-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.866 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:01 compute-0 NetworkManager[48916]: <info>  [1763783701.8667] manager: (tapb33063bb-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 22 03:55:01 compute-0 kernel: tapb33063bb-90: entered promiscuous mode
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.868 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.869 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb33063bb-90, col_values=(('external_ids', {'iface-id': 'b719ddd2-762d-4b6d-9cf6-85878321092a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.871 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:01 compute-0 ovn_controller[152691]: 2025-11-22T03:55:01Z|00077|binding|INFO|Releasing lport b719ddd2-762d-4b6d-9cf6-85878321092a from this chassis (sb_readonly=0)
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.872 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.875 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.876 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c338dd5d-d615-402a-a162-e96bc17b1e04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.877 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: global
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb.pid.haproxy
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 03:55:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:01.880 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'env', 'PROCESS_TAG=haproxy-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.886 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:01 compute-0 nova_compute[253461]: 2025-11-22 03:55:01.963 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:02 compute-0 podman[267299]: 2025-11-22 03:55:02.058286505 +0000 UTC m=+0.047013225 container create 593721556da168a5b3dbbd110b57ded5fc9bc1ce287d5b79494b715885e850b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 03:55:02 compute-0 systemd[1]: Started libpod-conmon-593721556da168a5b3dbbd110b57ded5fc9bc1ce287d5b79494b715885e850b3.scope.
Nov 22 03:55:02 compute-0 podman[267299]: 2025-11-22 03:55:02.037343956 +0000 UTC m=+0.026070756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:55:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 22 03:55:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:55:02 compute-0 podman[267299]: 2025-11-22 03:55:02.178594515 +0000 UTC m=+0.167321315 container init 593721556da168a5b3dbbd110b57ded5fc9bc1ce287d5b79494b715885e850b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 03:55:02 compute-0 podman[267299]: 2025-11-22 03:55:02.190310616 +0000 UTC m=+0.179037376 container start 593721556da168a5b3dbbd110b57ded5fc9bc1ce287d5b79494b715885e850b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:55:02 compute-0 podman[267299]: 2025-11-22 03:55:02.1949832 +0000 UTC m=+0.183709950 container attach 593721556da168a5b3dbbd110b57ded5fc9bc1ce287d5b79494b715885e850b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:55:02 compute-0 friendly_rosalind[267316]: 167 167
Nov 22 03:55:02 compute-0 systemd[1]: libpod-593721556da168a5b3dbbd110b57ded5fc9bc1ce287d5b79494b715885e850b3.scope: Deactivated successfully.
Nov 22 03:55:02 compute-0 conmon[267316]: conmon 593721556da168a5b3db <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-593721556da168a5b3dbbd110b57ded5fc9bc1ce287d5b79494b715885e850b3.scope/container/memory.events
Nov 22 03:55:02 compute-0 podman[267335]: 2025-11-22 03:55:02.274179086 +0000 UTC m=+0.041221536 container died 593721556da168a5b3dbbd110b57ded5fc9bc1ce287d5b79494b715885e850b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 03:55:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5aad57666371425fa3ca5830cd979fb723a389b9d94e341118e8a122678c84de-merged.mount: Deactivated successfully.
Nov 22 03:55:02 compute-0 podman[267335]: 2025-11-22 03:55:02.320679467 +0000 UTC m=+0.087721837 container remove 593721556da168a5b3dbbd110b57ded5fc9bc1ce287d5b79494b715885e850b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_rosalind, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:55:02 compute-0 systemd[1]: libpod-conmon-593721556da168a5b3dbbd110b57ded5fc9bc1ce287d5b79494b715885e850b3.scope: Deactivated successfully.
Nov 22 03:55:02 compute-0 podman[267346]: 2025-11-22 03:55:02.356679407 +0000 UTC m=+0.084675873 container create 51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 03:55:02 compute-0 systemd[1]: Started libpod-conmon-51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15.scope.
Nov 22 03:55:02 compute-0 podman[267346]: 2025-11-22 03:55:02.32477071 +0000 UTC m=+0.052767156 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:55:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:55:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c3833a7386fd88d1ec1dee65f72faa765e4fb1f52b598a096070a61b8cbdc6a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:02 compute-0 podman[267346]: 2025-11-22 03:55:02.469335819 +0000 UTC m=+0.197332335 container init 51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 03:55:02 compute-0 podman[267346]: 2025-11-22 03:55:02.475565036 +0000 UTC m=+0.203561492 container start 51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 03:55:02 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[267383]: [NOTICE]   (267412) : New worker (267416) forked
Nov 22 03:55:02 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[267383]: [NOTICE]   (267412) : Loading success.
Nov 22 03:55:02 compute-0 podman[267430]: 2025-11-22 03:55:02.573307566 +0000 UTC m=+0.042539493 container create 42f5e810411eb32dbbb94a4db2401b4e06ba2e1f2a5316a2087a7531bebea3b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_moore, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:55:02 compute-0 systemd[1]: Started libpod-conmon-42f5e810411eb32dbbb94a4db2401b4e06ba2e1f2a5316a2087a7531bebea3b8.scope.
Nov 22 03:55:02 compute-0 nova_compute[253461]: 2025-11-22 03:55:02.636 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783702.6361341, aba99a86-7eb3-4b04-b0a1-af00510f151c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:55:02 compute-0 nova_compute[253461]: 2025-11-22 03:55:02.637 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] VM Started (Lifecycle Event)
Nov 22 03:55:02 compute-0 podman[267430]: 2025-11-22 03:55:02.556362813 +0000 UTC m=+0.025594770 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:55:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:55:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c37dacaf97262393b10daaa5c819dce209af34c075016888381e402eccf17d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c37dacaf97262393b10daaa5c819dce209af34c075016888381e402eccf17d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c37dacaf97262393b10daaa5c819dce209af34c075016888381e402eccf17d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c37dacaf97262393b10daaa5c819dce209af34c075016888381e402eccf17d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00c37dacaf97262393b10daaa5c819dce209af34c075016888381e402eccf17d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:02 compute-0 nova_compute[253461]: 2025-11-22 03:55:02.671 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:02 compute-0 nova_compute[253461]: 2025-11-22 03:55:02.678 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783702.6388407, aba99a86-7eb3-4b04-b0a1-af00510f151c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:55:02 compute-0 nova_compute[253461]: 2025-11-22 03:55:02.678 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] VM Paused (Lifecycle Event)
Nov 22 03:55:02 compute-0 podman[267430]: 2025-11-22 03:55:02.693039625 +0000 UTC m=+0.162271622 container init 42f5e810411eb32dbbb94a4db2401b4e06ba2e1f2a5316a2087a7531bebea3b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_moore, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:55:02 compute-0 nova_compute[253461]: 2025-11-22 03:55:02.705 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:02 compute-0 nova_compute[253461]: 2025-11-22 03:55:02.710 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:55:02 compute-0 podman[267430]: 2025-11-22 03:55:02.711223722 +0000 UTC m=+0.180455649 container start 42f5e810411eb32dbbb94a4db2401b4e06ba2e1f2a5316a2087a7531bebea3b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:55:02 compute-0 podman[267430]: 2025-11-22 03:55:02.715117862 +0000 UTC m=+0.184349859 container attach 42f5e810411eb32dbbb94a4db2401b4e06ba2e1f2a5316a2087a7531bebea3b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_moore, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:55:02 compute-0 nova_compute[253461]: 2025-11-22 03:55:02.739 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:55:03 compute-0 ceph-mon[75011]: pgmap v1099: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 MiB/s wr, 41 op/s
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.654 253465 DEBUG nova.compute.manager [req-f650d8c9-9ace-4578-8af5-cf70efebff49 req-d438d576-b57e-4e99-85fc-7229e984938b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Received event network-vif-plugged-ef4eaf41-39f7-49af-bd4c-c963bd03246b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.656 253465 DEBUG oslo_concurrency.lockutils [req-f650d8c9-9ace-4578-8af5-cf70efebff49 req-d438d576-b57e-4e99-85fc-7229e984938b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.656 253465 DEBUG oslo_concurrency.lockutils [req-f650d8c9-9ace-4578-8af5-cf70efebff49 req-d438d576-b57e-4e99-85fc-7229e984938b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.657 253465 DEBUG oslo_concurrency.lockutils [req-f650d8c9-9ace-4578-8af5-cf70efebff49 req-d438d576-b57e-4e99-85fc-7229e984938b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.657 253465 DEBUG nova.compute.manager [req-f650d8c9-9ace-4578-8af5-cf70efebff49 req-d438d576-b57e-4e99-85fc-7229e984938b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Processing event network-vif-plugged-ef4eaf41-39f7-49af-bd4c-c963bd03246b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.659 253465 DEBUG nova.compute.manager [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.664 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783703.6637666, aba99a86-7eb3-4b04-b0a1-af00510f151c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.664 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] VM Resumed (Lifecycle Event)
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.668 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.673 253465 INFO nova.virt.libvirt.driver [-] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Instance spawned successfully.
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.673 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.699 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.707 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.713 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.713 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.714 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.715 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.715 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.716 253465 DEBUG nova.virt.libvirt.driver [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.732 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.796 253465 INFO nova.compute.manager [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Took 7.80 seconds to spawn the instance on the hypervisor.
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.797 253465 DEBUG nova.compute.manager [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:03 compute-0 mystifying_moore[267451]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:55:03 compute-0 mystifying_moore[267451]: --> relative data size: 1.0
Nov 22 03:55:03 compute-0 mystifying_moore[267451]: --> All data devices are unavailable
Nov 22 03:55:03 compute-0 systemd[1]: libpod-42f5e810411eb32dbbb94a4db2401b4e06ba2e1f2a5316a2087a7531bebea3b8.scope: Deactivated successfully.
Nov 22 03:55:03 compute-0 systemd[1]: libpod-42f5e810411eb32dbbb94a4db2401b4e06ba2e1f2a5316a2087a7531bebea3b8.scope: Consumed 1.090s CPU time.
Nov 22 03:55:03 compute-0 podman[267430]: 2025-11-22 03:55:03.851907699 +0000 UTC m=+1.321139636 container died 42f5e810411eb32dbbb94a4db2401b4e06ba2e1f2a5316a2087a7531bebea3b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_moore, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.872 253465 INFO nova.compute.manager [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Took 8.82 seconds to build instance.
Nov 22 03:55:03 compute-0 nova_compute[253461]: 2025-11-22 03:55:03.891 253465 DEBUG oslo_concurrency.lockutils [None req-1bb516c9-a661-48a6-92dd-525a86a299dd e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-00c37dacaf97262393b10daaa5c819dce209af34c075016888381e402eccf17d-merged.mount: Deactivated successfully.
Nov 22 03:55:03 compute-0 podman[267430]: 2025-11-22 03:55:03.941711508 +0000 UTC m=+1.410943435 container remove 42f5e810411eb32dbbb94a4db2401b4e06ba2e1f2a5316a2087a7531bebea3b8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_moore, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:55:03 compute-0 systemd[1]: libpod-conmon-42f5e810411eb32dbbb94a4db2401b4e06ba2e1f2a5316a2087a7531bebea3b8.scope: Deactivated successfully.
Nov 22 03:55:03 compute-0 sudo[267195]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:04 compute-0 sudo[267495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:55:04 compute-0 sudo[267495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:04 compute-0 sudo[267495]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:04 compute-0 sudo[267520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:55:04 compute-0 sudo[267520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:04 compute-0 sudo[267520]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 22 03:55:04 compute-0 sudo[267545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:55:04 compute-0 sudo[267545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:04 compute-0 sudo[267545]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:04 compute-0 sudo[267570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:55:04 compute-0 sudo[267570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
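The sudo entry above shows cephadm re-executing its own copy under /var/lib/ceph/<fsid>/ to run ceph-volume inside a one-shot container. A minimal Python sketch of the same wrapped call, assuming the cephadm copy and fsid from the log are still present on the host and the caller has root; every path and argument below is copied from the log line, nothing is invented:

    import json
    import subprocess

    # Same invocation as the sudo COMMAND above, run directly.
    cmd = [
        "/bin/python3",
        "/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/"
        "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
        "--image", "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        "--timeout", "895",
        "ceph-volume", "--fsid", "7adcc38b-6484-5de6-b879-33a0309153df",
        "--", "lvm", "list", "--format", "json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    osds = json.loads(result.stdout)  # keyed by OSD id: "0", "1", "2" below
    print(sorted(osds))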
Nov 22 03:55:04 compute-0 podman[267631]: 2025-11-22 03:55:04.683033291 +0000 UTC m=+0.040643495 container create fda05f2f8d4510f5d5804efc6f58df9403e31b21f077e1d2d0c162600a4ea171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:55:04 compute-0 systemd[1]: Started libpod-conmon-fda05f2f8d4510f5d5804efc6f58df9403e31b21f077e1d2d0c162600a4ea171.scope.
Nov 22 03:55:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:55:04 compute-0 podman[267631]: 2025-11-22 03:55:04.753903602 +0000 UTC m=+0.111513816 container init fda05f2f8d4510f5d5804efc6f58df9403e31b21f077e1d2d0c162600a4ea171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swirles, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:55:04 compute-0 podman[267631]: 2025-11-22 03:55:04.662267843 +0000 UTC m=+0.019878097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:55:04 compute-0 podman[267631]: 2025-11-22 03:55:04.761946396 +0000 UTC m=+0.119556600 container start fda05f2f8d4510f5d5804efc6f58df9403e31b21f077e1d2d0c162600a4ea171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:55:04 compute-0 podman[267631]: 2025-11-22 03:55:04.765441614 +0000 UTC m=+0.123051838 container attach fda05f2f8d4510f5d5804efc6f58df9403e31b21f077e1d2d0c162600a4ea171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 03:55:04 compute-0 systemd[1]: libpod-fda05f2f8d4510f5d5804efc6f58df9403e31b21f077e1d2d0c162600a4ea171.scope: Deactivated successfully.
Nov 22 03:55:04 compute-0 awesome_swirles[267647]: 167 167
Nov 22 03:55:04 compute-0 conmon[267647]: conmon fda05f2f8d4510f5d580 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fda05f2f8d4510f5d5804efc6f58df9403e31b21f077e1d2d0c162600a4ea171.scope/container/memory.events
Nov 22 03:55:04 compute-0 podman[267631]: 2025-11-22 03:55:04.770414784 +0000 UTC m=+0.128025008 container died fda05f2f8d4510f5d5804efc6f58df9403e31b21f077e1d2d0c162600a4ea171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swirles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:55:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce482cb174e11553a0512d3c0746ed67551360246e0f851905849355728e90b9-merged.mount: Deactivated successfully.
Nov 22 03:55:04 compute-0 podman[267631]: 2025-11-22 03:55:04.8459802 +0000 UTC m=+0.203590404 container remove fda05f2f8d4510f5d5804efc6f58df9403e31b21f077e1d2d0c162600a4ea171 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:55:04 compute-0 systemd[1]: libpod-conmon-fda05f2f8d4510f5d5804efc6f58df9403e31b21f077e1d2d0c162600a4ea171.scope: Deactivated successfully.
Nov 22 03:55:05 compute-0 podman[267671]: 2025-11-22 03:55:05.007738226 +0000 UTC m=+0.043073049 container create 9dd87e3ee602c8a082521ed16cd8495ae520f6df5b5584a026d93cd3e648a58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:55:05 compute-0 systemd[1]: Started libpod-conmon-9dd87e3ee602c8a082521ed16cd8495ae520f6df5b5584a026d93cd3e648a58e.scope.
Nov 22 03:55:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013f963deb36b787577eaa51a44d130743ccca573bd19021b7b973612a52f5f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013f963deb36b787577eaa51a44d130743ccca573bd19021b7b973612a52f5f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013f963deb36b787577eaa51a44d130743ccca573bd19021b7b973612a52f5f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/013f963deb36b787577eaa51a44d130743ccca573bd19021b7b973612a52f5f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:05 compute-0 podman[267671]: 2025-11-22 03:55:04.986880389 +0000 UTC m=+0.022215172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:55:05 compute-0 podman[267671]: 2025-11-22 03:55:05.092113613 +0000 UTC m=+0.127448416 container init 9dd87e3ee602c8a082521ed16cd8495ae520f6df5b5584a026d93cd3e648a58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 03:55:05 compute-0 podman[267671]: 2025-11-22 03:55:05.101995045 +0000 UTC m=+0.137329828 container start 9dd87e3ee602c8a082521ed16cd8495ae520f6df5b5584a026d93cd3e648a58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:55:05 compute-0 podman[267671]: 2025-11-22 03:55:05.105312444 +0000 UTC m=+0.140647227 container attach 9dd87e3ee602c8a082521ed16cd8495ae520f6df5b5584a026d93cd3e648a58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:55:05 compute-0 nova_compute[253461]: 2025-11-22 03:55:05.322 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:05 compute-0 ceph-mon[75011]: pgmap v1100: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Nov 22 03:55:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:05 compute-0 nova_compute[253461]: 2025-11-22 03:55:05.744 253465 DEBUG nova.compute.manager [req-34da76b5-2400-4d22-82fe-c5d7f5d7a7d6 req-08e80751-3562-4aaf-acb4-4bb949c7678d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Received event network-vif-plugged-ef4eaf41-39f7-49af-bd4c-c963bd03246b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:55:05 compute-0 nova_compute[253461]: 2025-11-22 03:55:05.746 253465 DEBUG oslo_concurrency.lockutils [req-34da76b5-2400-4d22-82fe-c5d7f5d7a7d6 req-08e80751-3562-4aaf-acb4-4bb949c7678d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:05 compute-0 nova_compute[253461]: 2025-11-22 03:55:05.746 253465 DEBUG oslo_concurrency.lockutils [req-34da76b5-2400-4d22-82fe-c5d7f5d7a7d6 req-08e80751-3562-4aaf-acb4-4bb949c7678d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:05 compute-0 nova_compute[253461]: 2025-11-22 03:55:05.747 253465 DEBUG oslo_concurrency.lockutils [req-34da76b5-2400-4d22-82fe-c5d7f5d7a7d6 req-08e80751-3562-4aaf-acb4-4bb949c7678d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:05 compute-0 nova_compute[253461]: 2025-11-22 03:55:05.747 253465 DEBUG nova.compute.manager [req-34da76b5-2400-4d22-82fe-c5d7f5d7a7d6 req-08e80751-3562-4aaf-acb4-4bb949c7678d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] No waiting events found dispatching network-vif-plugged-ef4eaf41-39f7-49af-bd4c-c963bd03246b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:55:05 compute-0 nova_compute[253461]: 2025-11-22 03:55:05.748 253465 WARNING nova.compute.manager [req-34da76b5-2400-4d22-82fe-c5d7f5d7a7d6 req-08e80751-3562-4aaf-acb4-4bb949c7678d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Received unexpected event network-vif-plugged-ef4eaf41-39f7-49af-bd4c-c963bd03246b for instance with vm_state active and task_state None.
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]: {
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:     "0": [
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:         {
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "devices": [
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "/dev/loop3"
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             ],
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_name": "ceph_lv0",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_size": "21470642176",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "name": "ceph_lv0",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "tags": {
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.cluster_name": "ceph",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.crush_device_class": "",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.encrypted": "0",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.osd_id": "0",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.type": "block",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.vdo": "0"
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             },
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "type": "block",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "vg_name": "ceph_vg0"
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:         }
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:     ],
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:     "1": [
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:         {
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "devices": [
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "/dev/loop4"
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             ],
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_name": "ceph_lv1",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_size": "21470642176",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "name": "ceph_lv1",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "tags": {
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.cluster_name": "ceph",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.crush_device_class": "",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.encrypted": "0",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.osd_id": "1",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.type": "block",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.vdo": "0"
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             },
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "type": "block",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "vg_name": "ceph_vg1"
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:         }
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:     ],
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:     "2": [
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:         {
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "devices": [
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "/dev/loop5"
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             ],
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_name": "ceph_lv2",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_size": "21470642176",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "name": "ceph_lv2",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "tags": {
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.cluster_name": "ceph",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.crush_device_class": "",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.encrypted": "0",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.osd_id": "2",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.type": "block",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:                 "ceph.vdo": "0"
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             },
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "type": "block",
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:             "vg_name": "ceph_vg2"
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:         }
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]:     ]
Nov 22 03:55:05 compute-0 sweet_chandrasekhar[267687]: }
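The JSON block above is the full `ceph-volume lvm list --format json` payload: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags flattened into lv_tags and expanded under tags. A minimal parsing sketch, assuming the payload has been saved to a hypothetical lvm_list.json; all key names are taken from the output above:

    import json

    # Summarize which LV and loop device back each OSD.
    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, lvs in sorted(lvm.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']}"
                  f" devices={','.join(lv['devices'])}"
                  f" osd_fsid={tags['ceph.osd_fsid']}"
                  f" encrypted={tags['ceph.encrypted']}")

    # From the log: osd.0 -> /dev/ceph_vg0/ceph_lv0 on /dev/loop3,
    # osd.1 -> /dev/ceph_vg1/ceph_lv1 on /dev/loop4,
    # osd.2 -> /dev/ceph_vg2/ceph_lv2 on /dev/loop5.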
Nov 22 03:55:05 compute-0 systemd[1]: libpod-9dd87e3ee602c8a082521ed16cd8495ae520f6df5b5584a026d93cd3e648a58e.scope: Deactivated successfully.
Nov 22 03:55:05 compute-0 podman[267671]: 2025-11-22 03:55:05.86340756 +0000 UTC m=+0.898742383 container died 9dd87e3ee602c8a082521ed16cd8495ae520f6df5b5584a026d93cd3e648a58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:55:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-013f963deb36b787577eaa51a44d130743ccca573bd19021b7b973612a52f5f6-merged.mount: Deactivated successfully.
Nov 22 03:55:05 compute-0 podman[267671]: 2025-11-22 03:55:05.94128718 +0000 UTC m=+0.976621962 container remove 9dd87e3ee602c8a082521ed16cd8495ae520f6df5b5584a026d93cd3e648a58e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 22 03:55:05 compute-0 systemd[1]: libpod-conmon-9dd87e3ee602c8a082521ed16cd8495ae520f6df5b5584a026d93cd3e648a58e.scope: Deactivated successfully.
Nov 22 03:55:05 compute-0 sudo[267570]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:06 compute-0 sudo[267708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:55:06 compute-0 sudo[267708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:06 compute-0 sudo[267708]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:06 compute-0 sudo[267733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:55:06 compute-0 sudo[267733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:06 compute-0 sudo[267733]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 22 03:55:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:55:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:55:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:55:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:55:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:55:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:55:06 compute-0 sudo[267758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:55:06 compute-0 sudo[267758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:06 compute-0 sudo[267758]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:06 compute-0 sudo[267783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:55:06 compute-0 sudo[267783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:06 compute-0 podman[267846]: 2025-11-22 03:55:06.696074484 +0000 UTC m=+0.066896478 container create 357a694dadbf0cbddc24058e95fa5ecdeb4eb96f8fcb779c416455e0201a69fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_meitner, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:55:06 compute-0 systemd[1]: Started libpod-conmon-357a694dadbf0cbddc24058e95fa5ecdeb4eb96f8fcb779c416455e0201a69fd.scope.
Nov 22 03:55:06 compute-0 podman[267846]: 2025-11-22 03:55:06.66814023 +0000 UTC m=+0.038962284 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:55:06 compute-0 NetworkManager[48916]: <info>  [1763783706.7753] manager: (patch-br-int-to-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 22 03:55:06 compute-0 NetworkManager[48916]: <info>  [1763783706.7778] manager: (patch-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 22 03:55:06 compute-0 nova_compute[253461]: 2025-11-22 03:55:06.774 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:55:06 compute-0 podman[267846]: 2025-11-22 03:55:06.809807924 +0000 UTC m=+0.180629938 container init 357a694dadbf0cbddc24058e95fa5ecdeb4eb96f8fcb779c416455e0201a69fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_meitner, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:55:06 compute-0 podman[267846]: 2025-11-22 03:55:06.822447028 +0000 UTC m=+0.193268982 container start 357a694dadbf0cbddc24058e95fa5ecdeb4eb96f8fcb779c416455e0201a69fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_meitner, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:55:06 compute-0 podman[267846]: 2025-11-22 03:55:06.826179485 +0000 UTC m=+0.197001459 container attach 357a694dadbf0cbddc24058e95fa5ecdeb4eb96f8fcb779c416455e0201a69fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_meitner, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:55:06 compute-0 awesome_meitner[267863]: 167 167
Nov 22 03:55:06 compute-0 systemd[1]: libpod-357a694dadbf0cbddc24058e95fa5ecdeb4eb96f8fcb779c416455e0201a69fd.scope: Deactivated successfully.
Nov 22 03:55:06 compute-0 conmon[267863]: conmon 357a694dadbf0cbddc24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-357a694dadbf0cbddc24058e95fa5ecdeb4eb96f8fcb779c416455e0201a69fd.scope/container/memory.events
Nov 22 03:55:06 compute-0 podman[267868]: 2025-11-22 03:55:06.883297433 +0000 UTC m=+0.033878651 container died 357a694dadbf0cbddc24058e95fa5ecdeb4eb96f8fcb779c416455e0201a69fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_meitner, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:55:06 compute-0 nova_compute[253461]: 2025-11-22 03:55:06.897 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:06 compute-0 ovn_controller[152691]: 2025-11-22T03:55:06Z|00078|binding|INFO|Releasing lport b719ddd2-762d-4b6d-9cf6-85878321092a from this chassis (sb_readonly=0)
Nov 22 03:55:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-56de4c32f432d63814d74162002f30b4ee4cf1abf002e4347906a4815a452340-merged.mount: Deactivated successfully.
Nov 22 03:55:06 compute-0 nova_compute[253461]: 2025-11-22 03:55:06.913 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:06 compute-0 podman[267868]: 2025-11-22 03:55:06.93062121 +0000 UTC m=+0.081202408 container remove 357a694dadbf0cbddc24058e95fa5ecdeb4eb96f8fcb779c416455e0201a69fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_meitner, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 03:55:06 compute-0 systemd[1]: libpod-conmon-357a694dadbf0cbddc24058e95fa5ecdeb4eb96f8fcb779c416455e0201a69fd.scope: Deactivated successfully.
Nov 22 03:55:06 compute-0 nova_compute[253461]: 2025-11-22 03:55:06.964 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:07 compute-0 podman[267887]: 2025-11-22 03:55:07.181639574 +0000 UTC m=+0.052033519 container create 8c685d0daeea844e63b2ed4541b285631c57d28060603287613a1bbdff299266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:55:07 compute-0 systemd[1]: Started libpod-conmon-8c685d0daeea844e63b2ed4541b285631c57d28060603287613a1bbdff299266.scope.
Nov 22 03:55:07 compute-0 podman[267887]: 2025-11-22 03:55:07.158351902 +0000 UTC m=+0.028745937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:55:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a4e1f90ba50d075967c1f2b44db6af2c0c384bfde7f4277d60683011024d979/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a4e1f90ba50d075967c1f2b44db6af2c0c384bfde7f4277d60683011024d979/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a4e1f90ba50d075967c1f2b44db6af2c0c384bfde7f4277d60683011024d979/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a4e1f90ba50d075967c1f2b44db6af2c0c384bfde7f4277d60683011024d979/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:07 compute-0 podman[267887]: 2025-11-22 03:55:07.286404288 +0000 UTC m=+0.156798323 container init 8c685d0daeea844e63b2ed4541b285631c57d28060603287613a1bbdff299266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 03:55:07 compute-0 podman[267887]: 2025-11-22 03:55:07.294631829 +0000 UTC m=+0.165025804 container start 8c685d0daeea844e63b2ed4541b285631c57d28060603287613a1bbdff299266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True)
Nov 22 03:55:07 compute-0 podman[267887]: 2025-11-22 03:55:07.298513085 +0000 UTC m=+0.168907120 container attach 8c685d0daeea844e63b2ed4541b285631c57d28060603287613a1bbdff299266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:55:07 compute-0 ceph-mon[75011]: pgmap v1101: 305 pgs: 305 active+clean; 134 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Nov 22 03:55:07 compute-0 nova_compute[253461]: 2025-11-22 03:55:07.828 253465 DEBUG nova.compute.manager [req-3c9c3f1e-db10-484c-9a59-73eb7b7aa0cc req-3b9e62b7-6044-476f-8aa7-87f2d14f026d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Received event network-changed-ef4eaf41-39f7-49af-bd4c-c963bd03246b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:55:07 compute-0 nova_compute[253461]: 2025-11-22 03:55:07.829 253465 DEBUG nova.compute.manager [req-3c9c3f1e-db10-484c-9a59-73eb7b7aa0cc req-3b9e62b7-6044-476f-8aa7-87f2d14f026d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Refreshing instance network info cache due to event network-changed-ef4eaf41-39f7-49af-bd4c-c963bd03246b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:55:07 compute-0 nova_compute[253461]: 2025-11-22 03:55:07.829 253465 DEBUG oslo_concurrency.lockutils [req-3c9c3f1e-db10-484c-9a59-73eb7b7aa0cc req-3b9e62b7-6044-476f-8aa7-87f2d14f026d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-aba99a86-7eb3-4b04-b0a1-af00510f151c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:55:07 compute-0 nova_compute[253461]: 2025-11-22 03:55:07.830 253465 DEBUG oslo_concurrency.lockutils [req-3c9c3f1e-db10-484c-9a59-73eb7b7aa0cc req-3b9e62b7-6044-476f-8aa7-87f2d14f026d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-aba99a86-7eb3-4b04-b0a1-af00510f151c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:55:07 compute-0 nova_compute[253461]: 2025-11-22 03:55:07.830 253465 DEBUG nova.network.neutron [req-3c9c3f1e-db10-484c-9a59-73eb7b7aa0cc req-3b9e62b7-6044-476f-8aa7-87f2d14f026d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Refreshing network info cache for port ef4eaf41-39f7-49af-bd4c-c963bd03246b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:55:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 134 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 22 03:55:08 compute-0 brave_kilby[267903]: {
Nov 22 03:55:08 compute-0 brave_kilby[267903]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "osd_id": 1,
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "type": "bluestore"
Nov 22 03:55:08 compute-0 brave_kilby[267903]:     },
Nov 22 03:55:08 compute-0 brave_kilby[267903]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "osd_id": 0,
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "type": "bluestore"
Nov 22 03:55:08 compute-0 brave_kilby[267903]:     },
Nov 22 03:55:08 compute-0 brave_kilby[267903]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "osd_id": 2,
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:55:08 compute-0 brave_kilby[267903]:         "type": "bluestore"
Nov 22 03:55:08 compute-0 brave_kilby[267903]:     }
Nov 22 03:55:08 compute-0 brave_kilby[267903]: }
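The block above is `ceph-volume raw list --format json` for the same three OSDs, keyed by osd_uuid rather than osd_id and reporting the device-mapper path and bluestore type. A minimal cross-check sketch, assuming both payloads were saved to hypothetical lvm_list.json and raw_list.json files; it verifies that every osd_fsid from the LVM listing resolves to a raw entry with the same OSD id:

    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)
    with open("raw_list.json") as f:
        raw = json.load(f)

    for osd_id, lvs in sorted(lvm.items()):
        for lv in lvs:
            fsid = lv["tags"]["ceph.osd_fsid"]
            entry = raw[fsid]  # a KeyError here means the listings disagree
            assert str(entry["osd_id"]) == osd_id, fsid
            print(f"osd.{osd_id} ({entry['type']}) -> {entry['device']}")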
Nov 22 03:55:08 compute-0 systemd[1]: libpod-8c685d0daeea844e63b2ed4541b285631c57d28060603287613a1bbdff299266.scope: Deactivated successfully.
Nov 22 03:55:08 compute-0 systemd[1]: libpod-8c685d0daeea844e63b2ed4541b285631c57d28060603287613a1bbdff299266.scope: Consumed 1.096s CPU time.
Nov 22 03:55:08 compute-0 podman[267887]: 2025-11-22 03:55:08.387597195 +0000 UTC m=+1.257991140 container died 8c685d0daeea844e63b2ed4541b285631c57d28060603287613a1bbdff299266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 03:55:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a4e1f90ba50d075967c1f2b44db6af2c0c384bfde7f4277d60683011024d979-merged.mount: Deactivated successfully.
Nov 22 03:55:08 compute-0 podman[267887]: 2025-11-22 03:55:08.460862842 +0000 UTC m=+1.331256797 container remove 8c685d0daeea844e63b2ed4541b285631c57d28060603287613a1bbdff299266 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 03:55:08 compute-0 systemd[1]: libpod-conmon-8c685d0daeea844e63b2ed4541b285631c57d28060603287613a1bbdff299266.scope: Deactivated successfully.
Nov 22 03:55:08 compute-0 sudo[267783]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:55:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:55:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:55:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:55:08 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 7d30921f-67c8-447a-aa46-9954895db35c does not exist
Nov 22 03:55:08 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 1caca8af-87ed-4a0f-8f61-6e26e0f8ed80 does not exist
Nov 22 03:55:08 compute-0 sudo[267947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:55:08 compute-0 sudo[267947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:08 compute-0 sudo[267947]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:08 compute-0 sudo[267972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:55:08 compute-0 sudo[267972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:55:08 compute-0 sudo[267972]: pam_unix(sudo:session): session closed for user root
Nov 22 03:55:09 compute-0 ceph-mon[75011]: pgmap v1102: 305 pgs: 305 active+clean; 134 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Nov 22 03:55:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:55:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:55:09 compute-0 nova_compute[253461]: 2025-11-22 03:55:09.871 253465 DEBUG nova.network.neutron [req-3c9c3f1e-db10-484c-9a59-73eb7b7aa0cc req-3b9e62b7-6044-476f-8aa7-87f2d14f026d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Updated VIF entry in instance network info cache for port ef4eaf41-39f7-49af-bd4c-c963bd03246b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:55:09 compute-0 nova_compute[253461]: 2025-11-22 03:55:09.872 253465 DEBUG nova.network.neutron [req-3c9c3f1e-db10-484c-9a59-73eb7b7aa0cc req-3b9e62b7-6044-476f-8aa7-87f2d14f026d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Updating instance_info_cache with network_info: [{"id": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "address": "fa:16:3e:23:f5:c4", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef4eaf41-39", "ovs_interfaceid": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:55:09 compute-0 nova_compute[253461]: 2025-11-22 03:55:09.889 253465 DEBUG oslo_concurrency.lockutils [req-3c9c3f1e-db10-484c-9a59-73eb7b7aa0cc req-3b9e62b7-6044-476f-8aa7-87f2d14f026d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-aba99a86-7eb3-4b04-b0a1-af00510f151c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
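[annotation] The Acquiring/Acquired/Releasing triplets around "refresh_cache-<uuid>" above come from oslo.concurrency's lockutils module, which nova uses to serialize work on a single instance's network-info cache. A minimal sketch of the same pattern, with hypothetical lock names (nova derives the real ones from the instance UUID):

    # Requires oslo.concurrency; lock names here are illustrative only.
    from oslo_concurrency import lockutils

    # Context-manager form, as used around "refresh_cache-<uuid>" above;
    # with oslo logging configured this emits the same style of
    # Acquiring/Acquired/Releasing DEBUG lines.
    with lockutils.lock("refresh_cache-demo"):
        pass  # rebuild the network info cache while holding the lock

    # Decorator form, as used by _sync_power_states in the lines below;
    # it logs the "waited"/"held" timings seen in this log.
    @lockutils.synchronized("demo-instance-uuid")
    def query_driver_power_state_and_sync():
        pass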
Nov 22 03:55:09 compute-0 nova_compute[253461]: 2025-11-22 03:55:09.964 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 134 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 816 KiB/s wr, 87 op/s
Nov 22 03:55:10 compute-0 nova_compute[253461]: 2025-11-22 03:55:10.325 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e196 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:11 compute-0 ceph-mon[75011]: pgmap v1103: 305 pgs: 305 active+clean; 134 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 816 KiB/s wr, 87 op/s
Nov 22 03:55:11 compute-0 nova_compute[253461]: 2025-11-22 03:55:11.792 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:55:11 compute-0 nova_compute[253461]: 2025-11-22 03:55:11.810 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Triggering sync for uuid aba99a86-7eb3-4b04-b0a1-af00510f151c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 03:55:11 compute-0 nova_compute[253461]: 2025-11-22 03:55:11.810 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:11 compute-0 nova_compute[253461]: 2025-11-22 03:55:11.811 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:11 compute-0 nova_compute[253461]: 2025-11-22 03:55:11.847 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:11 compute-0 nova_compute[253461]: 2025-11-22 03:55:11.966 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 169 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.9 MiB/s wr, 112 op/s
Nov 22 03:55:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:55:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/955647256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:55:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/955647256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:12 compute-0 podman[267997]: 2025-11-22 03:55:12.405908931 +0000 UTC m=+0.072851580 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:55:12 compute-0 podman[267998]: 2025-11-22 03:55:12.516406885 +0000 UTC m=+0.177765664 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
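[annotation] The two health_status=healthy events above are podman running each container's configured healthcheck ('test': '/openstack/healthcheck' in the config_data). A sketch of triggering and reading the same state by hand, assuming local podman and the container name from the log; the inspect key for health state varies across podman versions, so both spellings are tried:

    import json
    import subprocess

    # Run the healthcheck defined in the container config; exit code 0
    # means healthy (matches health_status=healthy above).
    subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                   check=True)

    # Read back the recorded health state podman logs as health_status=...
    out = subprocess.run(
        ["podman", "inspect", "ovn_metadata_agent"],
        capture_output=True, text=True, check=True,
    ).stdout
    state = json.loads(out)[0]["State"]
    # Newer podman exposes "Health", older releases "Healthcheck".
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"))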
Nov 22 03:55:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/955647256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/955647256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:12 compute-0 nova_compute[253461]: 2025-11-22 03:55:12.922 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Acquiring lock "b70f7046-cee2-4015-8667-b06915b0d166" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:12 compute-0 nova_compute[253461]: 2025-11-22 03:55:12.922 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:12 compute-0 nova_compute[253461]: 2025-11-22 03:55:12.949 253465 DEBUG nova.compute.manager [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.040 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.041 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.054 253465 DEBUG nova.virt.hardware [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.054 253465 INFO nova.compute.claims [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Claim successful on node compute-0.ctlplane.example.com
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.165 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:13 compute-0 ceph-mon[75011]: pgmap v1104: 305 pgs: 305 active+clean; 169 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.9 MiB/s wr, 112 op/s
Nov 22 03:55:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:55:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/149617092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.630 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
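[annotation] The "ceph df --format=json" subprocess above is nova's RBD image backend asking the cluster for pool usage; the matching mon-side dispatch is the mon_command({"prefix": "df", ...}) audit lines. The same call can be reproduced directly, assuming a local ceph CLI and the client.openstack keyring (the "vms" pool name is taken from the rbd import further down):

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(raw)
    for pool in report["pools"]:
        if pool["name"] == "vms":
            print(pool["stats"])  # bytes stored / available for the nova pool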
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.640 253465 DEBUG nova.compute.provider_tree [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.696 253465 DEBUG nova.scheduler.client.report [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
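[annotation] A worked check of the inventory record logged above: placement treats the effective capacity of each resource class as roughly int((total - reserved) * allocation_ratio), subject to min_unit/max_unit/step_size. Applying that to the logged values (a sketch, not placement's exact code path):

    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, capacity)
    # MEMORY_MB 7167, VCPU 32, DISK_GB 52

So this host can hand out 32 vCPUs against 8 physical ones (ratio 4.0), while disk is deliberately undercommitted (ratio 0.9).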
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.728 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.729 253465 DEBUG nova.compute.manager [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.818 253465 DEBUG nova.compute.manager [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.819 253465 DEBUG nova.network.neutron [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.844 253465 INFO nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.863 253465 DEBUG nova.compute.manager [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.981 253465 DEBUG nova.compute.manager [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.983 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 03:55:13 compute-0 nova_compute[253461]: 2025-11-22 03:55:13.984 253465 INFO nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Creating image(s)
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.016 253465 DEBUG nova.storage.rbd_utils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] rbd image b70f7046-cee2-4015-8667-b06915b0d166_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.049 253465 DEBUG nova.storage.rbd_utils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] rbd image b70f7046-cee2-4015-8667-b06915b0d166_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.083 253465 DEBUG nova.storage.rbd_utils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] rbd image b70f7046-cee2-4015-8667-b06915b0d166_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.089 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.115 253465 DEBUG nova.policy [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '39163977261c4620a968df05b33f3c6b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4b20891fb5a5430aaeceb5bfc8665af0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 03:55:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 169 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.165 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
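[annotation] The prlimit wrapper in the qemu-img command above (--as=1073741824 --cpu=30) caps the inspector at 1 GiB of address space and 30 s of CPU time, a guard against crafted images that blow up qemu-img. oslo.concurrency exposes this directly; a sketch with a hypothetical image path:

    from oslo_concurrency import processutils

    # Re-execs the command under "python3 -m oslo_concurrency.prlimit",
    # producing exactly the wrapped command line seen in the log.
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/tmp/example.qcow2",  # hypothetical path
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(
            address_space=1 * 1024**3,  # --as=1073741824
            cpu_time=30,                # --cpu=30
        ),
    )
    print(out)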
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.166 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.167 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.168 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.197 253465 DEBUG nova.storage.rbd_utils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] rbd image b70f7046-cee2-4015-8667-b06915b0d166_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.201 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d b70f7046-cee2-4015-8667-b06915b0d166_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.525 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d b70f7046-cee2-4015-8667-b06915b0d166_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/149617092' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.629 253465 DEBUG nova.storage.rbd_utils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] resizing rbd image b70f7046-cee2-4015-8667-b06915b0d166_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
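[annotation] With the RBD image backend, nova imports the cached base image into the vms pool and then grows it to the flavor's root disk, here 1073741824 bytes (root_gb=1 on m1.nano). The same two steps via the rbd CLI, assuming the client.openstack keyring; paths and image name are copied from the log lines above:

    import subprocess

    base = "/var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d"
    disk = "b70f7046-cee2-4015-8667-b06915b0d166_disk"

    subprocess.run(
        ["rbd", "import", "--pool", "vms", base, disk,
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    # Grow to the flavor root disk: 1G == 1073741824 bytes, the size
    # logged by nova.storage.rbd_utils above.
    subprocess.run(
        ["rbd", "resize", "--pool", "vms", "--image", disk, "--size", "1G",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )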
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.712 253465 DEBUG nova.network.neutron [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Successfully created port: b9ec71b1-433a-40a2-ac23-41f05cdc37b7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.768 253465 DEBUG nova.objects.instance [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lazy-loading 'migration_context' on Instance uuid b70f7046-cee2-4015-8667-b06915b0d166 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.785 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.786 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Ensure instance console log exists: /var/lib/nova/instances/b70f7046-cee2-4015-8667-b06915b0d166/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.786 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.787 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:14 compute-0 nova_compute[253461]: 2025-11-22 03:55:14.787 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:15 compute-0 nova_compute[253461]: 2025-11-22 03:55:15.328 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Nov 22 03:55:15 compute-0 ceph-mon[75011]: pgmap v1105: 305 pgs: 305 active+clean; 169 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Nov 22 03:55:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Nov 22 03:55:15 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Nov 22 03:55:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:15 compute-0 nova_compute[253461]: 2025-11-22 03:55:15.812 253465 DEBUG nova.network.neutron [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Successfully updated port: b9ec71b1-433a-40a2-ac23-41f05cdc37b7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 03:55:15 compute-0 nova_compute[253461]: 2025-11-22 03:55:15.834 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Acquiring lock "refresh_cache-b70f7046-cee2-4015-8667-b06915b0d166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:55:15 compute-0 nova_compute[253461]: 2025-11-22 03:55:15.834 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Acquired lock "refresh_cache-b70f7046-cee2-4015-8667-b06915b0d166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:55:15 compute-0 nova_compute[253461]: 2025-11-22 03:55:15.835 253465 DEBUG nova.network.neutron [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 03:55:15 compute-0 nova_compute[253461]: 2025-11-22 03:55:15.920 253465 DEBUG nova.compute.manager [req-5052fd71-601e-4817-b966-4008e7e3062f req-b7e86a3a-a4dd-471b-99d8-c4b0515121cd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Received event network-changed-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:55:15 compute-0 nova_compute[253461]: 2025-11-22 03:55:15.921 253465 DEBUG nova.compute.manager [req-5052fd71-601e-4817-b966-4008e7e3062f req-b7e86a3a-a4dd-471b-99d8-c4b0515121cd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Refreshing instance network info cache due to event network-changed-b9ec71b1-433a-40a2-ac23-41f05cdc37b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:55:15 compute-0 nova_compute[253461]: 2025-11-22 03:55:15.921 253465 DEBUG oslo_concurrency.lockutils [req-5052fd71-601e-4817-b966-4008e7e3062f req-b7e86a3a-a4dd-471b-99d8-c4b0515121cd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-b70f7046-cee2-4015-8667-b06915b0d166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:55:16 compute-0 nova_compute[253461]: 2025-11-22 03:55:16.143 253465 DEBUG nova.network.neutron [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 03:55:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 160 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.7 MiB/s wr, 137 op/s
Nov 22 03:55:16 compute-0 ovn_controller[152691]: 2025-11-22T03:55:16Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:23:f5:c4 10.100.0.3
Nov 22 03:55:16 compute-0 ovn_controller[152691]: 2025-11-22T03:55:16Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:23:f5:c4 10.100.0.3
Nov 22 03:55:16 compute-0 ceph-mon[75011]: osdmap e197: 3 total, 3 up, 3 in
Nov 22 03:55:16 compute-0 nova_compute[253461]: 2025-11-22 03:55:16.969 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.038 253465 DEBUG nova.network.neutron [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Updating instance_info_cache with network_info: [{"id": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "address": "fa:16:3e:d2:da:d1", "network": {"id": "96af9526-051c-4ab3-b059-f3c544d56cfe", "bridge": "br-int", "label": "tempest-VolumesActionsTest-470350589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b20891fb5a5430aaeceb5bfc8665af0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9ec71b1-43", "ovs_interfaceid": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.062 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Releasing lock "refresh_cache-b70f7046-cee2-4015-8667-b06915b0d166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.063 253465 DEBUG nova.compute.manager [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Instance network_info: |[{"id": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "address": "fa:16:3e:d2:da:d1", "network": {"id": "96af9526-051c-4ab3-b059-f3c544d56cfe", "bridge": "br-int", "label": "tempest-VolumesActionsTest-470350589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b20891fb5a5430aaeceb5bfc8665af0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9ec71b1-43", "ovs_interfaceid": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.063 253465 DEBUG oslo_concurrency.lockutils [req-5052fd71-601e-4817-b966-4008e7e3062f req-b7e86a3a-a4dd-471b-99d8-c4b0515121cd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-b70f7046-cee2-4015-8667-b06915b0d166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.064 253465 DEBUG nova.network.neutron [req-5052fd71-601e-4817-b966-4008e7e3062f req-b7e86a3a-a4dd-471b-99d8-c4b0515121cd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Refreshing network info cache for port b9ec71b1-433a-40a2-ac23-41f05cdc37b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.069 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Start _get_guest_xml network_info=[{"id": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "address": "fa:16:3e:d2:da:d1", "network": {"id": "96af9526-051c-4ab3-b059-f3c544d56cfe", "bridge": "br-int", "label": "tempest-VolumesActionsTest-470350589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b20891fb5a5430aaeceb5bfc8665af0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9ec71b1-43", "ovs_interfaceid": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.077 253465 WARNING nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.084 253465 DEBUG nova.virt.libvirt.host [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.085 253465 DEBUG nova.virt.libvirt.host [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.092 253465 DEBUG nova.virt.libvirt.host [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.093 253465 DEBUG nova.virt.libvirt.host [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.094 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.094 253465 DEBUG nova.virt.hardware [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.094 253465 DEBUG nova.virt.hardware [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.094 253465 DEBUG nova.virt.hardware [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.094 253465 DEBUG nova.virt.hardware [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.095 253465 DEBUG nova.virt.hardware [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.095 253465 DEBUG nova.virt.hardware [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.095 253465 DEBUG nova.virt.hardware [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.095 253465 DEBUG nova.virt.hardware [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.095 253465 DEBUG nova.virt.hardware [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.095 253465 DEBUG nova.virt.hardware [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.096 253465 DEBUG nova.virt.hardware [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.098 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:55:17 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532483699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Nov 22 03:55:17 compute-0 ceph-mon[75011]: pgmap v1107: 305 pgs: 305 active+clean; 160 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.7 MiB/s wr, 137 op/s
Nov 22 03:55:17 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3532483699' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Nov 22 03:55:17 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.601 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.629 253465 DEBUG nova.storage.rbd_utils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] rbd image b70f7046-cee2-4015-8667-b06915b0d166_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:17 compute-0 nova_compute[253461]: 2025-11-22 03:55:17.635 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:55:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2306205647' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.082 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
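[annotation] The two "ceph mon dump --format=json" calls above (one for the root disk, one for the config-drive image) are how nova learns the monitor addresses it embeds in the guest's RBD disk definitions. A sketch of the same query and the part of the reply nova cares about, under the same keyring assumption as earlier:

    import json
    import subprocess

    raw = subprocess.run(
        ["ceph", "mon", "dump", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    for mon in json.loads(raw)["mons"]:
        # Each monitor advertises its v1/v2 endpoints in public_addrs.addrvec.
        print(mon["name"],
              [a["addr"] for a in mon["public_addrs"]["addrvec"]])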
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.084 253465 DEBUG nova.virt.libvirt.vif [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:55:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2143574262',display_name='tempest-VolumesActionsTest-instance-2143574262',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2143574262',id=6,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b20891fb5a5430aaeceb5bfc8665af0',ramdisk_id='',reservation_id='r-f8llv2ns',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-904163775',owner_user_name='tempest-VolumesActionsTest-904163775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:55:14Z,user_data=None,user_id='39163977261c4620a968df05b33f3c6b',uuid=b70f7046-cee2-4015-8667-b06915b0d166,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "address": "fa:16:3e:d2:da:d1", "network": {"id": "96af9526-051c-4ab3-b059-f3c544d56cfe", "bridge": "br-int", "label": "tempest-VolumesActionsTest-470350589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b20891fb5a5430aaeceb5bfc8665af0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9ec71b1-43", "ovs_interfaceid": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.085 253465 DEBUG nova.network.os_vif_util [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Converting VIF {"id": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "address": "fa:16:3e:d2:da:d1", "network": {"id": "96af9526-051c-4ab3-b059-f3c544d56cfe", "bridge": "br-int", "label": "tempest-VolumesActionsTest-470350589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b20891fb5a5430aaeceb5bfc8665af0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9ec71b1-43", "ovs_interfaceid": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.086 253465 DEBUG nova.network.os_vif_util [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:da:d1,bridge_name='br-int',has_traffic_filtering=True,id=b9ec71b1-433a-40a2-ac23-41f05cdc37b7,network=Network(96af9526-051c-4ab3-b059-f3c544d56cfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9ec71b1-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.087 253465 DEBUG nova.objects.instance [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lazy-loading 'pci_devices' on Instance uuid b70f7046-cee2-4015-8667-b06915b0d166 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.104 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] End _get_guest_xml xml=<domain type="kvm">
Nov 22 03:55:18 compute-0 nova_compute[253461]:   <uuid>b70f7046-cee2-4015-8667-b06915b0d166</uuid>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   <name>instance-00000006</name>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   <metadata>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <nova:name>tempest-VolumesActionsTest-instance-2143574262</nova:name>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 03:55:17</nova:creationTime>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 03:55:18 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 03:55:18 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 03:55:18 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 03:55:18 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 03:55:18 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 03:55:18 compute-0 nova_compute[253461]:         <nova:user uuid="39163977261c4620a968df05b33f3c6b">tempest-VolumesActionsTest-904163775-project-member</nova:user>
Nov 22 03:55:18 compute-0 nova_compute[253461]:         <nova:project uuid="4b20891fb5a5430aaeceb5bfc8665af0">tempest-VolumesActionsTest-904163775</nova:project>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 03:55:18 compute-0 nova_compute[253461]:         <nova:port uuid="b9ec71b1-433a-40a2-ac23-41f05cdc37b7">
Nov 22 03:55:18 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   </metadata>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <system>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <entry name="serial">b70f7046-cee2-4015-8667-b06915b0d166</entry>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <entry name="uuid">b70f7046-cee2-4015-8667-b06915b0d166</entry>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     </system>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   <os>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   </os>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   <features>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <apic/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   </features>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   </clock>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/b70f7046-cee2-4015-8667-b06915b0d166_disk">
Nov 22 03:55:18 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       </source>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:55:18 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/b70f7046-cee2-4015-8667-b06915b0d166_disk.config">
Nov 22 03:55:18 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       </source>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:55:18 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:d2:da:d1"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <target dev="tapb9ec71b1-43"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/b70f7046-cee2-4015-8667-b06915b0d166/console.log" append="off"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     </serial>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <video>
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     </video>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 03:55:18 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 03:55:18 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 03:55:18 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:55:18 compute-0 nova_compute[253461]: </domain>
Nov 22 03:55:18 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
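The block above is the complete libvirt domain XML Nova generated for instance-00000006: an RBD-backed root disk on vda, the config-drive CD-ROM on sda, and a single virtio interface on tapb9ec71b1-43. A minimal sketch (Python standard library only; the file path is hypothetical) of pulling the disk and interface details out of such a dump when troubleshooting:

import xml.etree.ElementTree as ET

# Hypothetical path: a saved copy of the <domain> XML captured from the log above.
root = ET.parse("instance-00000006.xml").getroot()

for disk in root.findall("./devices/disk"):
    src, tgt = disk.find("source"), disk.find("target")
    # e.g.: disk vda rbd vms/b70f7046-cee2-4015-8667-b06915b0d166_disk
    print(disk.get("device"), tgt.get("dev"), src.get("protocol"), src.get("name"))

for iface in root.findall("./devices/interface"):
    # e.g.: ethernet fa:16:3e:d2:da:d1 tapb9ec71b1-43
    print(iface.get("type"), iface.find("mac").get("address"),
          iface.find("target").get("dev"))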
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.104 253465 DEBUG nova.compute.manager [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Preparing to wait for external event network-vif-plugged-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.105 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Acquiring lock "b70f7046-cee2-4015-8667-b06915b0d166-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.105 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.105 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.105 253465 DEBUG nova.virt.libvirt.vif [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:55:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2143574262',display_name='tempest-VolumesActionsTest-instance-2143574262',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2143574262',id=6,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4b20891fb5a5430aaeceb5bfc8665af0',ramdisk_id='',reservation_id='r-f8llv2ns',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-904163775',owner_user_name='tempest-VolumesActionsTest-904163775-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:55:14Z,user_data=None,user_id='39163977261c4620a968df05b33f3c6b',uuid=b70f7046-cee2-4015-8667-b06915b0d166,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "address": "fa:16:3e:d2:da:d1", "network": {"id": "96af9526-051c-4ab3-b059-f3c544d56cfe", "bridge": "br-int", "label": "tempest-VolumesActionsTest-470350589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b20891fb5a5430aaeceb5bfc8665af0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9ec71b1-43", "ovs_interfaceid": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.106 253465 DEBUG nova.network.os_vif_util [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Converting VIF {"id": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "address": "fa:16:3e:d2:da:d1", "network": {"id": "96af9526-051c-4ab3-b059-f3c544d56cfe", "bridge": "br-int", "label": "tempest-VolumesActionsTest-470350589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b20891fb5a5430aaeceb5bfc8665af0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9ec71b1-43", "ovs_interfaceid": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.106 253465 DEBUG nova.network.os_vif_util [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:da:d1,bridge_name='br-int',has_traffic_filtering=True,id=b9ec71b1-433a-40a2-ac23-41f05cdc37b7,network=Network(96af9526-051c-4ab3-b059-f3c544d56cfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9ec71b1-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.106 253465 DEBUG os_vif [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:da:d1,bridge_name='br-int',has_traffic_filtering=True,id=b9ec71b1-433a-40a2-ac23-41f05cdc37b7,network=Network(96af9526-051c-4ab3-b059-f3c544d56cfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9ec71b1-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.107 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.108 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.108 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.112 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.112 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb9ec71b1-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.112 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb9ec71b1-43, col_values=(('external_ids', {'iface-id': 'b9ec71b1-433a-40a2-ac23-41f05cdc37b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:da:d1', 'vm-uuid': 'b70f7046-cee2-4015-8667-b06915b0d166'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.114 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:18 compute-0 NetworkManager[48916]: <info>  [1763783718.1153] manager: (tapb9ec71b1-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.116 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.125 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.128 253465 INFO os_vif [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:da:d1,bridge_name='br-int',has_traffic_filtering=True,id=b9ec71b1-433a-40a2-ac23-41f05cdc37b7,network=Network(96af9526-051c-4ab3-b059-f3c544d56cfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9ec71b1-43')
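The plug just logged is three ovsdbapp commands: ensure br-int exists, add the tap port, and set the external_ids that let ovn-controller match the OVS interface to the Neutron port. A rough shell-level equivalent, sketched with subprocess and ovs-vsctl (assuming the CLI is installed; values copied from the transaction lines above):

import subprocess

PORT = "tapb9ec71b1-43"
EXT_IDS = {
    "iface-id": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7",  # Neutron port UUID
    "iface-status": "active",
    "attached-mac": "fa:16:3e:d2:da:d1",
    "vm-uuid": "b70f7046-cee2-4015-8667-b06915b0d166",
}

# add-br has no datapath_type argument, so it is set on the Bridge row afterwards.
cmd = ["ovs-vsctl",
       "--", "--may-exist", "add-br", "br-int",
       "--", "set", "Bridge", "br-int", "datapath_type=system",
       "--", "--may-exist", "add-port", "br-int", PORT]
for key, val in EXT_IDS.items():
    cmd += ["--", "set", "Interface", PORT, f"external_ids:{key}={val}"]
subprocess.run(cmd, check=True)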
Nov 22 03:55:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 210 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 578 KiB/s rd, 8.4 MiB/s wr, 229 op/s
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.197 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.198 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.199 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] No VIF found with MAC fa:16:3e:d2:da:d1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.200 253465 INFO nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Using config drive
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.231 253465 DEBUG nova.storage.rbd_utils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] rbd image b70f7046-cee2-4015-8667-b06915b0d166_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.304 253465 DEBUG nova.network.neutron [req-5052fd71-601e-4817-b966-4008e7e3062f req-b7e86a3a-a4dd-471b-99d8-c4b0515121cd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Updated VIF entry in instance network info cache for port b9ec71b1-433a-40a2-ac23-41f05cdc37b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.305 253465 DEBUG nova.network.neutron [req-5052fd71-601e-4817-b966-4008e7e3062f req-b7e86a3a-a4dd-471b-99d8-c4b0515121cd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Updating instance_info_cache with network_info: [{"id": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "address": "fa:16:3e:d2:da:d1", "network": {"id": "96af9526-051c-4ab3-b059-f3c544d56cfe", "bridge": "br-int", "label": "tempest-VolumesActionsTest-470350589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b20891fb5a5430aaeceb5bfc8665af0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9ec71b1-43", "ovs_interfaceid": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.330 253465 DEBUG oslo_concurrency.lockutils [req-5052fd71-601e-4817-b966-4008e7e3062f req-b7e86a3a-a4dd-471b-99d8-c4b0515121cd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-b70f7046-cee2-4015-8667-b06915b0d166" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:55:18 compute-0 ceph-mon[75011]: osdmap e198: 3 total, 3 up, 3 in
Nov 22 03:55:18 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2306205647' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.695 253465 INFO nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Creating config drive at /var/lib/nova/instances/b70f7046-cee2-4015-8667-b06915b0d166/disk.config
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.704 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b70f7046-cee2-4015-8667-b06915b0d166/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzxo6hqwh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.859 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b70f7046-cee2-4015-8667-b06915b0d166/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzxo6hqwh" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.899 253465 DEBUG nova.storage.rbd_utils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] rbd image b70f7046-cee2-4015-8667-b06915b0d166_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:18 compute-0 nova_compute[253461]: 2025-11-22 03:55:18.906 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b70f7046-cee2-4015-8667-b06915b0d166/disk.config b70f7046-cee2-4015-8667-b06915b0d166_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.088 253465 DEBUG oslo_concurrency.processutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b70f7046-cee2-4015-8667-b06915b0d166/disk.config b70f7046-cee2-4015-8667-b06915b0d166_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.089 253465 INFO nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Deleting local config drive /var/lib/nova/instances/b70f7046-cee2-4015-8667-b06915b0d166/disk.config because it was imported into RBD.
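The config-drive workflow above boils down to two subprocess calls: mkisofs packs a staging directory into an ISO9660 image with the volume label config-2 (the label cloud-init probes for), and rbd import copies it into the Ceph vms pool so the local file can be deleted. A condensed sketch of the same two commands (the staging directory is hypothetical; the other arguments are as logged):

import subprocess

UUID = "b70f7046-cee2-4015-8667-b06915b0d166"
ISO = f"/var/lib/nova/instances/{UUID}/disk.config"
STAGING = "/tmp/configdrive"  # hypothetical: directory holding the metadata tree

# Build the ISO9660 config drive.
subprocess.run(["mkisofs", "-o", ISO, "-ldots", "-allow-lowercase",
                "-allow-multidot", "-l", "-publisher",
                "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
                "-quiet", "-J", "-r", "-V", "config-2", STAGING], check=True)

# Import it into the 'vms' pool; the local copy can then be removed.
subprocess.run(["rbd", "import", "--pool", "vms", ISO, f"{UUID}_disk.config",
                "--image-format=2", "--id", "openstack",
                "--conf", "/etc/ceph/ceph.conf"], check=True)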
Nov 22 03:55:19 compute-0 kernel: tapb9ec71b1-43: entered promiscuous mode
Nov 22 03:55:19 compute-0 NetworkManager[48916]: <info>  [1763783719.1624] manager: (tapb9ec71b1-43): new Tun device (/org/freedesktop/NetworkManager/Devices/50)
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.165 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:19 compute-0 ovn_controller[152691]: 2025-11-22T03:55:19Z|00079|binding|INFO|Claiming lport b9ec71b1-433a-40a2-ac23-41f05cdc37b7 for this chassis.
Nov 22 03:55:19 compute-0 ovn_controller[152691]: 2025-11-22T03:55:19Z|00080|binding|INFO|b9ec71b1-433a-40a2-ac23-41f05cdc37b7: Claiming fa:16:3e:d2:da:d1 10.100.0.4
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.175 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:da:d1 10.100.0.4'], port_security=['fa:16:3e:d2:da:d1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b70f7046-cee2-4015-8667-b06915b0d166', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96af9526-051c-4ab3-b059-f3c544d56cfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b20891fb5a5430aaeceb5bfc8665af0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '51f310f2-d711-4924-991b-6e959a59c28d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04477da0-83d2-4d91-b201-d9ffbeca5f48, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=b9ec71b1-433a-40a2-ac23-41f05cdc37b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.178 162689 INFO neutron.agent.ovn.metadata.agent [-] Port b9ec71b1-433a-40a2-ac23-41f05cdc37b7 in datapath 96af9526-051c-4ab3-b059-f3c544d56cfe bound to our chassis
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.181 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 96af9526-051c-4ab3-b059-f3c544d56cfe
Nov 22 03:55:19 compute-0 ovn_controller[152691]: 2025-11-22T03:55:19Z|00081|binding|INFO|Setting lport b9ec71b1-433a-40a2-ac23-41f05cdc37b7 ovn-installed in OVS
Nov 22 03:55:19 compute-0 ovn_controller[152691]: 2025-11-22T03:55:19Z|00082|binding|INFO|Setting lport b9ec71b1-433a-40a2-ac23-41f05cdc37b7 up in Southbound
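At this point ovn-controller has claimed the logical port for this chassis and flipped it up in the Southbound database, which is what lets Neutron emit network-vif-plugged back to Nova. One way to verify the binding from the host, assuming ovn-sbctl can reach the SB DB:

import subprocess

LPORT = "b9ec71b1-433a-40a2-ac23-41f05cdc37b7"  # logical port from the log
print(subprocess.run(
    ["ovn-sbctl", "--bare", "--columns=chassis,up",
     "find", "Port_Binding", f"logical_port={LPORT}"],
    check=True, capture_output=True, text=True).stdout)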
Nov 22 03:55:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.185 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3239783251' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:55:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3239783251' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.190 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.195 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3fb20a-8003-4d81-b28d-daa2553cd2ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.197 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap96af9526-01 in ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.199 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap96af9526-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.199 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf6fb43-2264-41bd-8d9f-2db861deebdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.201 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[7856b7ed-6f51-4ae2-9e61-acebe3deae0a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 systemd-machined[215728]: New machine qemu-6-instance-00000006.
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.215 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[94908af6-c91b-4870-8747-e36777a260d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 22 03:55:19 compute-0 systemd-udevd[268364]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.233 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a558cf-e351-475e-b918-cf20099efcfe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 NetworkManager[48916]: <info>  [1763783719.2464] device (tapb9ec71b1-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:55:19 compute-0 NetworkManager[48916]: <info>  [1763783719.2478] device (tapb9ec71b1-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.273 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d929a7-103d-4dcd-8d70-12a6a90e6927]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 NetworkManager[48916]: <info>  [1763783719.2824] manager: (tap96af9526-00): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.281 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[091e27be-c0d4-4fa2-8e8d-a97079b7b26a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.320 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[5c685463-e2e7-4e09-b744-456d9ffe1019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.322 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[a579081a-df51-4442-902b-8c267d5af728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 NetworkManager[48916]: <info>  [1763783719.3446] device (tap96af9526-00): carrier: link connected
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.354 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[c0de9b2c-f0ff-4a6a-9a11-11b1f98abfed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.371 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd264fd-da05-4f0d-87de-13e2f7fe7791]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96af9526-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:78:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 397194, 'reachable_time': 26468, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268394, 'error': None, 'target': 'ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.396 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[74f03227-98d7-4455-b1db-09d2190d8383]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe59:7809'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 397194, 'tstamp': 397194}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268395, 'error': None, 'target': 'ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.401 253465 DEBUG nova.compute.manager [req-a7eba6a4-774f-4a16-b0be-f2a77c96d86c req-59429db7-2808-4458-b355-96158be67eb4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Received event network-vif-plugged-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.402 253465 DEBUG oslo_concurrency.lockutils [req-a7eba6a4-774f-4a16-b0be-f2a77c96d86c req-59429db7-2808-4458-b355-96158be67eb4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "b70f7046-cee2-4015-8667-b06915b0d166-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.403 253465 DEBUG oslo_concurrency.lockutils [req-a7eba6a4-774f-4a16-b0be-f2a77c96d86c req-59429db7-2808-4458-b355-96158be67eb4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.404 253465 DEBUG oslo_concurrency.lockutils [req-a7eba6a4-774f-4a16-b0be-f2a77c96d86c req-59429db7-2808-4458-b355-96158be67eb4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.404 253465 DEBUG nova.compute.manager [req-a7eba6a4-774f-4a16-b0be-f2a77c96d86c req-59429db7-2808-4458-b355-96158be67eb4 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Processing event network-vif-plugged-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
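Nova registered a waiter for this event before plugging the VIF (the prepare_for_instance_event lines at 03:55:18.104) precisely so a callback that races ahead of the guest launch cannot be lost. An illustrative sketch of that prepare-then-signal pattern, using plain threading rather than Nova's eventlet internals (names and timing hypothetical):

import threading

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(self, instance_uuid, name):
        # Register the waiter *before* triggering the action that causes
        # the event, so a fast callback cannot slip through unobserved.
        with self._lock:
            return self._events.setdefault((instance_uuid, name),
                                           threading.Event())

    def pop(self, instance_uuid, name):
        with self._lock:
            ev = self._events.pop((instance_uuid, name), None)
        if ev is not None:
            ev.set()

events = InstanceEvents()
UUID = "b70f7046-cee2-4015-8667-b06915b0d166"
NAME = "network-vif-plugged-b9ec71b1-433a-40a2-ac23-41f05cdc37b7"
waiter = events.prepare(UUID, NAME)
# ... plug the VIF and start the guest; Neutron's callback fires later:
threading.Timer(0.1, events.pop, args=(UUID, NAME)).start()
print("event received:", waiter.wait(timeout=300))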
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.418 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ae1224f8-85f2-4445-96f5-704ab098a2a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96af9526-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:78:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 397194, 'reachable_time': 26468, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268396, 'error': None, 'target': 'ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.474 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5c18609d-ed04-4300-a209-5edb6ae0caf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.567 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[28047bc9-f949-4f13-8532-76549ce9df19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.569 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96af9526-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.570 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.570 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96af9526-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:19 compute-0 kernel: tap96af9526-00: entered promiscuous mode
Nov 22 03:55:19 compute-0 NetworkManager[48916]: <info>  [1763783719.5735] manager: (tap96af9526-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.574 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.578 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.579 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap96af9526-00, col_values=(('external_ids', {'iface-id': 'b42d7165-4193-459d-b5f1-7b86492a22f4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:19 compute-0 ovn_controller[152691]: 2025-11-22T03:55:19Z|00083|binding|INFO|Releasing lport b42d7165-4193-459d-b5f1-7b86492a22f4 from this chassis (sb_readonly=0)
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.581 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:19 compute-0 nova_compute[253461]: 2025-11-22 03:55:19.611 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.612 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/96af9526-051c-4ab3-b059-f3c544d56cfe.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/96af9526-051c-4ab3-b059-f3c544d56cfe.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.613 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[13a04b9c-c03c-4895-a6f0-cddb06e5be63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.614 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: global
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-96af9526-051c-4ab3-b059-f3c544d56cfe
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/96af9526-051c-4ab3-b059-f3c544d56cfe.pid.haproxy
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 96af9526-051c-4ab3-b059-f3c544d56cfe
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 03:55:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:19.615 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe', 'env', 'PROCESS_TAG=haproxy-96af9526-051c-4ab3-b059-f3c544d56cfe', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/96af9526-051c-4ab3-b059-f3c544d56cfe.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 03:55:19 compute-0 ceph-mon[75011]: pgmap v1109: 305 pgs: 305 active+clean; 210 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 578 KiB/s rd, 8.4 MiB/s wr, 229 op/s
Nov 22 03:55:19 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3239783251' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:19 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3239783251' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:20 compute-0 podman[268428]: 2025-11-22 03:55:20.081450248 +0000 UTC m=+0.059896735 container create a3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 22 03:55:20 compute-0 systemd[1]: Started libpod-conmon-a3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f.scope.
Nov 22 03:55:20 compute-0 podman[268428]: 2025-11-22 03:55:20.051698618 +0000 UTC m=+0.030145115 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:55:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 210 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 6.7 MiB/s wr, 191 op/s
Nov 22 03:55:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:55:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/483f9ab5d6c6048b9ccefe23fd998219257dbb6b7682b99bf7f24344462e6787/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 03:55:20 compute-0 podman[268428]: 2025-11-22 03:55:20.199355094 +0000 UTC m=+0.177801631 container init a3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:55:20 compute-0 podman[268428]: 2025-11-22 03:55:20.208016326 +0000 UTC m=+0.186462833 container start a3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:55:20 compute-0 neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe[268461]: [NOTICE]   (268488) : New worker (268490) forked
Nov 22 03:55:20 compute-0 neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe[268461]: [NOTICE]   (268488) : Loading success.
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.294 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783720.2938654, b70f7046-cee2-4015-8667-b06915b0d166 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.295 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: b70f7046-cee2-4015-8667-b06915b0d166] VM Started (Lifecycle Event)
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.297 253465 DEBUG nova.compute.manager [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.301 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.306 253465 INFO nova.virt.libvirt.driver [-] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Instance spawned successfully.
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.306 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.320 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.331 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.341 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.343 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.343 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.344 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.344 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.345 253465 DEBUG nova.virt.libvirt.driver [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.351 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: b70f7046-cee2-4015-8667-b06915b0d166] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.351 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783720.2941182, b70f7046-cee2-4015-8667-b06915b0d166 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.352 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: b70f7046-cee2-4015-8667-b06915b0d166] VM Paused (Lifecycle Event)
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.374 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.377 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783720.3004076, b70f7046-cee2-4015-8667-b06915b0d166 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.377 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: b70f7046-cee2-4015-8667-b06915b0d166] VM Resumed (Lifecycle Event)
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.400 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.405 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.409 253465 INFO nova.compute.manager [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Took 6.43 seconds to spawn the instance on the hypervisor.
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.409 253465 DEBUG nova.compute.manager [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.434 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: b70f7046-cee2-4015-8667-b06915b0d166] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.473 253465 INFO nova.compute.manager [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Took 7.47 seconds to build instance.
Nov 22 03:55:20 compute-0 nova_compute[253461]: 2025-11-22 03:55:20.489 253465 DEBUG oslo_concurrency.lockutils [None req-a7449198-4015-41b6-9ffd-a3f25ab3cbf5 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:21 compute-0 ceph-mon[75011]: pgmap v1110: 305 pgs: 305 active+clean; 210 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 552 KiB/s rd, 6.7 MiB/s wr, 191 op/s
Nov 22 03:55:21 compute-0 nova_compute[253461]: 2025-11-22 03:55:21.628 253465 DEBUG nova.compute.manager [req-bd56846e-8d10-44db-a78b-0c013b76fb22 req-f79d24a5-a3df-4557-9760-6b6467a43cb3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Received event network-vif-plugged-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:55:21 compute-0 nova_compute[253461]: 2025-11-22 03:55:21.629 253465 DEBUG oslo_concurrency.lockutils [req-bd56846e-8d10-44db-a78b-0c013b76fb22 req-f79d24a5-a3df-4557-9760-6b6467a43cb3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "b70f7046-cee2-4015-8667-b06915b0d166-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:21 compute-0 nova_compute[253461]: 2025-11-22 03:55:21.630 253465 DEBUG oslo_concurrency.lockutils [req-bd56846e-8d10-44db-a78b-0c013b76fb22 req-f79d24a5-a3df-4557-9760-6b6467a43cb3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:21 compute-0 nova_compute[253461]: 2025-11-22 03:55:21.630 253465 DEBUG oslo_concurrency.lockutils [req-bd56846e-8d10-44db-a78b-0c013b76fb22 req-f79d24a5-a3df-4557-9760-6b6467a43cb3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:21 compute-0 nova_compute[253461]: 2025-11-22 03:55:21.631 253465 DEBUG nova.compute.manager [req-bd56846e-8d10-44db-a78b-0c013b76fb22 req-f79d24a5-a3df-4557-9760-6b6467a43cb3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] No waiting events found dispatching network-vif-plugged-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:55:21 compute-0 nova_compute[253461]: 2025-11-22 03:55:21.631 253465 WARNING nova.compute.manager [req-bd56846e-8d10-44db-a78b-0c013b76fb22 req-f79d24a5-a3df-4557-9760-6b6467a43cb3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Received unexpected event network-vif-plugged-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 for instance with vm_state active and task_state None.
Nov 22 03:55:21 compute-0 nova_compute[253461]: 2025-11-22 03:55:21.971 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 213 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.9 MiB/s wr, 256 op/s
Nov 22 03:55:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:23.007 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:23.007 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:23.008 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:23 compute-0 nova_compute[253461]: 2025-11-22 03:55:23.115 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:23 compute-0 nova_compute[253461]: 2025-11-22 03:55:23.335 253465 DEBUG oslo_concurrency.lockutils [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:23 compute-0 nova_compute[253461]: 2025-11-22 03:55:23.335 253465 DEBUG oslo_concurrency.lockutils [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:23 compute-0 nova_compute[253461]: 2025-11-22 03:55:23.364 253465 DEBUG nova.objects.instance [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lazy-loading 'flavor' on Instance uuid aba99a86-7eb3-4b04-b0a1-af00510f151c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:55:23 compute-0 nova_compute[253461]: 2025-11-22 03:55:23.392 253465 INFO nova.virt.libvirt.driver [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Ignoring supplied device name: /dev/vdb
Nov 22 03:55:23 compute-0 nova_compute[253461]: 2025-11-22 03:55:23.410 253465 DEBUG oslo_concurrency.lockutils [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:23 compute-0 ceph-mon[75011]: pgmap v1111: 305 pgs: 305 active+clean; 213 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 5.9 MiB/s wr, 256 op/s
Nov 22 03:55:23 compute-0 nova_compute[253461]: 2025-11-22 03:55:23.878 253465 DEBUG oslo_concurrency.lockutils [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:23 compute-0 nova_compute[253461]: 2025-11-22 03:55:23.878 253465 DEBUG oslo_concurrency.lockutils [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:23 compute-0 nova_compute[253461]: 2025-11-22 03:55:23.879 253465 INFO nova.compute.manager [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Attaching volume 1518a74a-225b-4087-aa7a-7c509cb8ecf3 to /dev/vdb
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.032 253465 DEBUG os_brick.utils [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.033 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.055 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.055 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[e29c8c1d-073c-45de-aedd-f4324f7ad895]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.057 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.066 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.067 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad63270-775d-4639-bc52-264ac0629783]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.069 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.083 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.084 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[6164de69-8a25-4c39-a7fd-e3c4af9e4daf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.088 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[5a61520d-bb43-47c9-a74e-5fe750a9224e]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.089 253465 DEBUG oslo_concurrency.processutils [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.124 253465 DEBUG oslo_concurrency.processutils [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.127 253465 DEBUG os_brick.initiator.connectors.lightos [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.127 253465 DEBUG os_brick.initiator.connectors.lightos [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.127 253465 DEBUG os_brick.initiator.connectors.lightos [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.128 253465 DEBUG os_brick.utils [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.128 253465 DEBUG nova.virt.block_device [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Updating existing volume attachment record: 236a74ff-d56d-4c51-9042-59614bcda3a0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 03:55:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 213 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.5 MiB/s wr, 262 op/s
Nov 22 03:55:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:55:24 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3618660570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:24 compute-0 ceph-mon[75011]: pgmap v1112: 305 pgs: 305 active+clean; 213 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.5 MiB/s wr, 262 op/s
Nov 22 03:55:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3618660570' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.916 253465 DEBUG nova.objects.instance [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lazy-loading 'flavor' on Instance uuid aba99a86-7eb3-4b04-b0a1-af00510f151c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.964 253465 DEBUG nova.virt.libvirt.driver [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Attempting to attach volume 1518a74a-225b-4087-aa7a-7c509cb8ecf3 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 22 03:55:24 compute-0 nova_compute[253461]: 2025-11-22 03:55:24.968 253465 DEBUG nova.virt.libvirt.guest [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 03:55:24 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:55:24 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-1518a74a-225b-4087-aa7a-7c509cb8ecf3">
Nov 22 03:55:24 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:55:24 compute-0 nova_compute[253461]:   </source>
Nov 22 03:55:24 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 03:55:24 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:55:24 compute-0 nova_compute[253461]:   </auth>
Nov 22 03:55:24 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:55:24 compute-0 nova_compute[253461]:   <serial>1518a74a-225b-4087-aa7a-7c509cb8ecf3</serial>
Nov 22 03:55:24 compute-0 nova_compute[253461]: </disk>
Nov 22 03:55:24 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 03:55:25 compute-0 nova_compute[253461]: 2025-11-22 03:55:25.106 253465 DEBUG nova.virt.libvirt.driver [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:55:25 compute-0 nova_compute[253461]: 2025-11-22 03:55:25.107 253465 DEBUG nova.virt.libvirt.driver [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:55:25 compute-0 nova_compute[253461]: 2025-11-22 03:55:25.108 253465 DEBUG nova.virt.libvirt.driver [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:55:25 compute-0 nova_compute[253461]: 2025-11-22 03:55:25.108 253465 DEBUG nova.virt.libvirt.driver [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No VIF found with MAC fa:16:3e:23:f5:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:55:25 compute-0 nova_compute[253461]: 2025-11-22 03:55:25.386 253465 DEBUG oslo_concurrency.lockutils [None req-00ce1c07-d28d-4d70-9433-bc8ec190b58e e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Nov 22 03:55:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Nov 22 03:55:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Nov 22 03:55:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:55:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3080245461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:55:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3080245461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 213 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 838 KiB/s wr, 144 op/s
Nov 22 03:55:26 compute-0 podman[268527]: 2025-11-22 03:55:26.385228366 +0000 UTC m=+0.064544825 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd)
Nov 22 03:55:26 compute-0 ceph-mon[75011]: osdmap e199: 3 total, 3 up, 3 in
Nov 22 03:55:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3080245461' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3080245461' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:26 compute-0 nova_compute[253461]: 2025-11-22 03:55:26.749 253465 DEBUG oslo_concurrency.lockutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Acquiring lock "f041318b-406c-4129-b5be-039a46ecc4a3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:26 compute-0 nova_compute[253461]: 2025-11-22 03:55:26.750 253465 DEBUG oslo_concurrency.lockutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "f041318b-406c-4129-b5be-039a46ecc4a3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:26 compute-0 nova_compute[253461]: 2025-11-22 03:55:26.878 253465 DEBUG nova.compute.manager [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 03:55:26 compute-0 nova_compute[253461]: 2025-11-22 03:55:26.973 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:55:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/644815773' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.183 253465 DEBUG oslo_concurrency.lockutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.184 253465 DEBUG oslo_concurrency.lockutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.193 253465 DEBUG nova.virt.hardware [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.193 253465 INFO nova.compute.claims [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Claim successful on node compute-0.ctlplane.example.com
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.241 253465 DEBUG oslo_concurrency.lockutils [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Acquiring lock "b70f7046-cee2-4015-8667-b06915b0d166" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.242 253465 DEBUG oslo_concurrency.lockutils [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.243 253465 DEBUG oslo_concurrency.lockutils [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Acquiring lock "b70f7046-cee2-4015-8667-b06915b0d166-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.243 253465 DEBUG oslo_concurrency.lockutils [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.244 253465 DEBUG oslo_concurrency.lockutils [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.246 253465 INFO nova.compute.manager [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Terminating instance
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.248 253465 DEBUG nova.compute.manager [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 03:55:27 compute-0 kernel: tapb9ec71b1-43 (unregistering): left promiscuous mode
Nov 22 03:55:27 compute-0 NetworkManager[48916]: <info>  [1763783727.3287] device (tapb9ec71b1-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 03:55:27 compute-0 ovn_controller[152691]: 2025-11-22T03:55:27Z|00084|binding|INFO|Releasing lport b9ec71b1-433a-40a2-ac23-41f05cdc37b7 from this chassis (sb_readonly=0)
Nov 22 03:55:27 compute-0 ovn_controller[152691]: 2025-11-22T03:55:27Z|00085|binding|INFO|Setting lport b9ec71b1-433a-40a2-ac23-41f05cdc37b7 down in Southbound
Nov 22 03:55:27 compute-0 ovn_controller[152691]: 2025-11-22T03:55:27Z|00086|binding|INFO|Removing iface tapb9ec71b1-43 ovn-installed in OVS
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.341 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.371 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.384 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:27 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 22 03:55:27 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 8.167s CPU time.
Nov 22 03:55:27 compute-0 systemd-machined[215728]: Machine qemu-6-instance-00000006 terminated.
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.484 253465 INFO nova.virt.libvirt.driver [-] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Instance destroyed successfully.
Nov 22 03:55:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:27.484 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:da:d1 10.100.0.4'], port_security=['fa:16:3e:d2:da:d1 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b70f7046-cee2-4015-8667-b06915b0d166', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96af9526-051c-4ab3-b059-f3c544d56cfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4b20891fb5a5430aaeceb5bfc8665af0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '51f310f2-d711-4924-991b-6e959a59c28d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=04477da0-83d2-4d91-b201-d9ffbeca5f48, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=b9ec71b1-433a-40a2-ac23-41f05cdc37b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.485 253465 DEBUG nova.objects.instance [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lazy-loading 'resources' on Instance uuid b70f7046-cee2-4015-8667-b06915b0d166 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:55:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:27.488 162689 INFO neutron.agent.ovn.metadata.agent [-] Port b9ec71b1-433a-40a2-ac23-41f05cdc37b7 in datapath 96af9526-051c-4ab3-b059-f3c544d56cfe unbound from our chassis
Nov 22 03:55:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:27.492 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96af9526-051c-4ab3-b059-f3c544d56cfe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:55:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:27.497 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[32a3892e-8caf-4dfe-84a1-118fff33cdfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:27.498 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe namespace which is not needed anymore
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.599 253465 DEBUG nova.virt.libvirt.vif [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T03:55:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-2143574262',display_name='tempest-VolumesActionsTest-instance-2143574262',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-2143574262',id=6,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T03:55:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4b20891fb5a5430aaeceb5bfc8665af0',ramdisk_id='',reservation_id='r-f8llv2ns',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-904163775',owner_user_name='tempest-VolumesActionsTest-904163775-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T03:55:20Z,user_data=None,user_id='39163977261c4620a968df05b33f3c6b',uuid=b70f7046-cee2-4015-8667-b06915b0d166,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "address": "fa:16:3e:d2:da:d1", "network": {"id": "96af9526-051c-4ab3-b059-f3c544d56cfe", "bridge": "br-int", "label": "tempest-VolumesActionsTest-470350589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b20891fb5a5430aaeceb5bfc8665af0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9ec71b1-43", "ovs_interfaceid": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.601 253465 DEBUG nova.network.os_vif_util [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Converting VIF {"id": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "address": "fa:16:3e:d2:da:d1", "network": {"id": "96af9526-051c-4ab3-b059-f3c544d56cfe", "bridge": "br-int", "label": "tempest-VolumesActionsTest-470350589-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4b20891fb5a5430aaeceb5bfc8665af0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb9ec71b1-43", "ovs_interfaceid": "b9ec71b1-433a-40a2-ac23-41f05cdc37b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.602 253465 DEBUG nova.network.os_vif_util [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:da:d1,bridge_name='br-int',has_traffic_filtering=True,id=b9ec71b1-433a-40a2-ac23-41f05cdc37b7,network=Network(96af9526-051c-4ab3-b059-f3c544d56cfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9ec71b1-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.603 253465 DEBUG os_vif [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:da:d1,bridge_name='br-int',has_traffic_filtering=True,id=b9ec71b1-433a-40a2-ac23-41f05cdc37b7,network=Network(96af9526-051c-4ab3-b059-f3c544d56cfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9ec71b1-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.606 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.607 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb9ec71b1-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.610 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.612 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.615 253465 INFO os_vif [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:da:d1,bridge_name='br-int',has_traffic_filtering=True,id=b9ec71b1-433a-40a2-ac23-41f05cdc37b7,network=Network(96af9526-051c-4ab3-b059-f3c544d56cfe),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb9ec71b1-43')
Nov 22 03:55:27 compute-0 ceph-mon[75011]: pgmap v1114: 305 pgs: 305 active+clean; 213 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 838 KiB/s wr, 144 op/s
Nov 22 03:55:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/644815773' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:27 compute-0 neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe[268461]: [NOTICE]   (268488) : haproxy version is 2.8.14-c23fe91
Nov 22 03:55:27 compute-0 neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe[268461]: [NOTICE]   (268488) : path to executable is /usr/sbin/haproxy
Nov 22 03:55:27 compute-0 neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe[268461]: [ALERT]    (268488) : Current worker (268490) exited with code 143 (Terminated)
Nov 22 03:55:27 compute-0 neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe[268461]: [WARNING]  (268488) : All workers exited. Exiting... (0)
Nov 22 03:55:27 compute-0 systemd[1]: libpod-a3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f.scope: Deactivated successfully.
Nov 22 03:55:27 compute-0 podman[268602]: 2025-11-22 03:55:27.752658292 +0000 UTC m=+0.109360126 container died a3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:55:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:55:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1820131277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.891 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.899 253465 DEBUG nova.compute.provider_tree [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f-userdata-shm.mount: Deactivated successfully.
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.992 253465 DEBUG nova.compute.manager [req-fea772fb-aea5-43fb-9324-5a9206019b3c req-8b8b84b3-dfa8-4786-b6be-e9dd99d8dd58 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Received event network-vif-unplugged-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.992 253465 DEBUG oslo_concurrency.lockutils [req-fea772fb-aea5-43fb-9324-5a9206019b3c req-8b8b84b3-dfa8-4786-b6be-e9dd99d8dd58 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "b70f7046-cee2-4015-8667-b06915b0d166-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.993 253465 DEBUG oslo_concurrency.lockutils [req-fea772fb-aea5-43fb-9324-5a9206019b3c req-8b8b84b3-dfa8-4786-b6be-e9dd99d8dd58 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.993 253465 DEBUG oslo_concurrency.lockutils [req-fea772fb-aea5-43fb-9324-5a9206019b3c req-8b8b84b3-dfa8-4786-b6be-e9dd99d8dd58 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.993 253465 DEBUG nova.compute.manager [req-fea772fb-aea5-43fb-9324-5a9206019b3c req-8b8b84b3-dfa8-4786-b6be-e9dd99d8dd58 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] No waiting events found dispatching network-vif-unplugged-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:55:27 compute-0 nova_compute[253461]: 2025-11-22 03:55:27.993 253465 DEBUG nova.compute.manager [req-fea772fb-aea5-43fb-9324-5a9206019b3c req-8b8b84b3-dfa8-4786-b6be-e9dd99d8dd58 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Received event network-vif-unplugged-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 03:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-483f9ab5d6c6048b9ccefe23fd998219257dbb6b7682b99bf7f24344462e6787-merged.mount: Deactivated successfully.
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.132 253465 DEBUG nova.scheduler.client.report [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:55:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 213 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 117 KiB/s wr, 145 op/s
Nov 22 03:55:28 compute-0 podman[268602]: 2025-11-22 03:55:28.179030194 +0000 UTC m=+0.535732038 container cleanup a3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.180 253465 DEBUG oslo_concurrency.lockutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.181 253465 DEBUG nova.compute.manager [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 03:55:28 compute-0 systemd[1]: libpod-conmon-a3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f.scope: Deactivated successfully.
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.391 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:28.390 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.398 253465 DEBUG nova.compute.manager [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.398 253465 DEBUG nova.network.neutron [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 03:55:28 compute-0 podman[268651]: 2025-11-22 03:55:28.481941226 +0000 UTC m=+0.271959320 container remove a3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.484 253465 INFO nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 03:55:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:28.494 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f74da028-4ea8-450e-8f69-af59b38a27f7]: (4, ('Sat Nov 22 03:55:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe (a3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f)\na3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f\nSat Nov 22 03:55:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe (a3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f)\na3c8880ca0799b736e9b7721dfd47183c6806a13be0d571b72a10ea256534b7f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:28.496 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4470d676-6fe4-4809-bdb2-80b193ecd222]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:28.497 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96af9526-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.499 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:28 compute-0 kernel: tap96af9526-00: left promiscuous mode
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.509 253465 DEBUG nova.compute.manager [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.534 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:28.538 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1fd4f6-c4d4-4378-96a6-0532f36def8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:28.551 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3294d500-5b9d-4a97-ba86-13cf28ceeb8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:28.552 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[697742e7-d8ea-412e-8ba3-bd93b55c7049]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:28.569 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[fe874a24-8387-4add-8b39-4a8fac8bbb6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 397186, 'reachable_time': 21379, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268667, 'error': None, 'target': 'ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:28.572 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-96af9526-051c-4ab3-b059-f3c544d56cfe deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 03:55:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:28.573 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[dae1b105-fa9e-4809-997b-d25ad6acff7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d96af9526\x2d051c\x2d4ab3\x2db059\x2df3c544d56cfe.mount: Deactivated successfully.
Nov 22 03:55:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:28.574 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.601 253465 DEBUG nova.compute.manager [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.602 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.602 253465 INFO nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Creating image(s)
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.625 253465 DEBUG nova.storage.rbd_utils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] rbd image f041318b-406c-4129-b5be-039a46ecc4a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.647 253465 DEBUG nova.storage.rbd_utils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] rbd image f041318b-406c-4129-b5be-039a46ecc4a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.669 253465 DEBUG nova.storage.rbd_utils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] rbd image f041318b-406c-4129-b5be-039a46ecc4a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.672 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.689 253465 DEBUG nova.network.neutron [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.690 253465 DEBUG nova.compute.manager [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 03:55:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.731 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.731 253465 DEBUG oslo_concurrency.lockutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.732 253465 DEBUG oslo_concurrency.lockutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.733 253465 DEBUG oslo_concurrency.lockutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:28 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1820131277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Nov 22 03:55:28 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.767 253465 DEBUG nova.storage.rbd_utils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] rbd image f041318b-406c-4129-b5be-039a46ecc4a3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.771 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d f041318b-406c-4129-b5be-039a46ecc4a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.932 253465 INFO nova.virt.libvirt.driver [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Deleting instance files /var/lib/nova/instances/b70f7046-cee2-4015-8667-b06915b0d166_del
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.934 253465 INFO nova.virt.libvirt.driver [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Deletion of /var/lib/nova/instances/b70f7046-cee2-4015-8667-b06915b0d166_del complete
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.993 253465 INFO nova.compute.manager [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Took 1.74 seconds to destroy the instance on the hypervisor.
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.994 253465 DEBUG oslo.service.loopingcall [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.995 253465 DEBUG nova.compute.manager [-] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 03:55:28 compute-0 nova_compute[253461]: 2025-11-22 03:55:28.996 253465 DEBUG nova.network.neutron [-] [instance: b70f7046-cee2-4015-8667-b06915b0d166] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.135 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d f041318b-406c-4129-b5be-039a46ecc4a3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.216 253465 DEBUG nova.storage.rbd_utils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] resizing rbd image f041318b-406c-4129-b5be-039a46ecc4a3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.335 253465 DEBUG nova.objects.instance [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lazy-loading 'migration_context' on Instance uuid f041318b-406c-4129-b5be-039a46ecc4a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.353 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.354 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Ensure instance console log exists: /var/lib/nova/instances/f041318b-406c-4129-b5be-039a46ecc4a3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.354 253465 DEBUG oslo_concurrency.lockutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.355 253465 DEBUG oslo_concurrency.lockutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.356 253465 DEBUG oslo_concurrency.lockutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.358 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.365 253465 WARNING nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.370 253465 DEBUG nova.virt.libvirt.host [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.372 253465 DEBUG nova.virt.libvirt.host [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.376 253465 DEBUG nova.virt.libvirt.host [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.377 253465 DEBUG nova.virt.libvirt.host [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.378 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.378 253465 DEBUG nova.virt.hardware [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.379 253465 DEBUG nova.virt.hardware [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.379 253465 DEBUG nova.virt.hardware [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.380 253465 DEBUG nova.virt.hardware [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.381 253465 DEBUG nova.virt.hardware [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.381 253465 DEBUG nova.virt.hardware [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.382 253465 DEBUG nova.virt.hardware [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.382 253465 DEBUG nova.virt.hardware [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.383 253465 DEBUG nova.virt.hardware [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.383 253465 DEBUG nova.virt.hardware [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.384 253465 DEBUG nova.virt.hardware [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.390 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.504 253465 DEBUG nova.network.neutron [-] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.527 253465 INFO nova.compute.manager [-] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Took 0.53 seconds to deallocate network for instance.
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.584 253465 DEBUG oslo_concurrency.lockutils [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.584 253465 DEBUG oslo_concurrency.lockutils [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.597 253465 DEBUG nova.compute.manager [req-b856cbb4-0e54-472b-9d1e-6cc918a43a80 req-c2a3ded7-88ad-477b-bf90-bbb943bf26f0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Received event network-vif-deleted-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.622 253465 DEBUG nova.scheduler.client.report [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Refreshing inventories for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.646 253465 DEBUG nova.scheduler.client.report [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Updating ProviderTree inventory for provider 62e18608-eaef-4f09-8e92-06d41e51d580 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.646 253465 DEBUG nova.compute.provider_tree [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Updating inventory in ProviderTree for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.668 253465 DEBUG nova.scheduler.client.report [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Refreshing aggregate associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.705 253465 DEBUG nova.scheduler.client.report [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Refreshing trait associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 03:55:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Nov 22 03:55:29 compute-0 ceph-mon[75011]: pgmap v1115: 305 pgs: 305 active+clean; 213 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 117 KiB/s wr, 145 op/s
Nov 22 03:55:29 compute-0 ceph-mon[75011]: osdmap e200: 3 total, 3 up, 3 in
Nov 22 03:55:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Nov 22 03:55:29 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.806 253465 DEBUG oslo_concurrency.processutils [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:55:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313181844' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:29 compute-0 nova_compute[253461]: 2025-11-22 03:55:29.986 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.023 253465 DEBUG nova.storage.rbd_utils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] rbd image f041318b-406c-4129-b5be-039a46ecc4a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.030 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.123 253465 DEBUG nova.compute.manager [req-184526a2-1ed8-4718-8a32-355fe878bfa2 req-205dee46-9997-45bc-8497-9728901a3c36 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Received event network-vif-plugged-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.123 253465 DEBUG oslo_concurrency.lockutils [req-184526a2-1ed8-4718-8a32-355fe878bfa2 req-205dee46-9997-45bc-8497-9728901a3c36 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "b70f7046-cee2-4015-8667-b06915b0d166-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.124 253465 DEBUG oslo_concurrency.lockutils [req-184526a2-1ed8-4718-8a32-355fe878bfa2 req-205dee46-9997-45bc-8497-9728901a3c36 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.124 253465 DEBUG oslo_concurrency.lockutils [req-184526a2-1ed8-4718-8a32-355fe878bfa2 req-205dee46-9997-45bc-8497-9728901a3c36 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.124 253465 DEBUG nova.compute.manager [req-184526a2-1ed8-4718-8a32-355fe878bfa2 req-205dee46-9997-45bc-8497-9728901a3c36 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] No waiting events found dispatching network-vif-plugged-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.125 253465 WARNING nova.compute.manager [req-184526a2-1ed8-4718-8a32-355fe878bfa2 req-205dee46-9997-45bc-8497-9728901a3c36 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Received unexpected event network-vif-plugged-b9ec71b1-433a-40a2-ac23-41f05cdc37b7 for instance with vm_state deleted and task_state None.
Nov 22 03:55:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 213 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 27 KiB/s wr, 91 op/s
Nov 22 03:55:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:55:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2915926221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.252 253465 DEBUG oslo_concurrency.processutils [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.260 253465 DEBUG nova.compute.provider_tree [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.276 253465 DEBUG nova.scheduler.client.report [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
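The inventory record above fixes the node's schedulable capacity: placement treats each resource class as (total - reserved) * allocation_ratio. A quick check of the numbers logged here, assuming that standard placement capacity formula:

# Inventory exactly as logged for provider 62e18608-eaef-4f09-8e92-06d41e51d580.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2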
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.297 253465 DEBUG oslo_concurrency.lockutils [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.336 253465 INFO nova.scheduler.client.report [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Deleted allocations for instance b70f7046-cee2-4015-8667-b06915b0d166
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.399 253465 DEBUG oslo_concurrency.lockutils [None req-738ba4a6-5929-4acf-bf82-11a8b41e2685 39163977261c4620a968df05b33f3c6b 4b20891fb5a5430aaeceb5bfc8665af0 - - default default] Lock "b70f7046-cee2-4015-8667-b06915b0d166" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
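Every Acquiring/acquired/released triple in these lines is oslo.concurrency's lockutils serializing work on one instance UUID; the terminate path held this lock for 3.157s. The same named-lock pattern as a minimal usage sketch, assuming oslo.concurrency is installed:

from oslo_concurrency import lockutils

# A process-local named lock, keyed the same way the log lines are:
@lockutils.synchronized("b70f7046-cee2-4015-8667-b06915b0d166")
def do_terminate_instance():
    # Critical section: only one termination for this UUID runs at a time.
    pass

do_terminate_instance()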
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.448 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.449 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.449 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:55:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:55:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2663757112' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.471 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.484 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
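The ceph df / mon dump round trips above are plain CLI calls with --format=json, which makes them easy to reproduce and parse outside Nova. A minimal sketch, assuming the same client id and conf path as in the log and a reachable cluster:

import json
import subprocess

def ceph_json(*args):
    """Run a ceph subcommand with JSON output and parse the result."""
    proc = subprocess.run(
        ["ceph", *args, "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True)
    return json.loads(proc.stdout)

df = ceph_json("df")                      # what the resource tracker polls
print(df["stats"]["total_avail_bytes"])

mons = ceph_json("mon", "dump")           # what the RBD driver polls
print([m["name"] for m in mons["mons"]])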
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.486 253465 DEBUG nova.objects.instance [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lazy-loading 'pci_devices' on Instance uuid f041318b-406c-4129-b5be-039a46ecc4a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.504 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] End _get_guest_xml xml=<domain type="kvm">
Nov 22 03:55:30 compute-0 nova_compute[253461]:   <uuid>f041318b-406c-4129-b5be-039a46ecc4a3</uuid>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   <name>instance-00000007</name>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   <metadata>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <nova:name>tempest-VolumesNegativeTest-instance-1806561160</nova:name>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 03:55:29</nova:creationTime>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 03:55:30 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 03:55:30 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 03:55:30 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 03:55:30 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 03:55:30 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 03:55:30 compute-0 nova_compute[253461]:         <nova:user uuid="896cddaa3dc7442a91fd7faef66f447e">tempest-VolumesNegativeTest-867150692-project-member</nova:user>
Nov 22 03:55:30 compute-0 nova_compute[253461]:         <nova:project uuid="97d8dc3a92894cd1935b910e178e786d">tempest-VolumesNegativeTest-867150692</nova:project>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <nova:ports/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   </metadata>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <system>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <entry name="serial">f041318b-406c-4129-b5be-039a46ecc4a3</entry>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <entry name="uuid">f041318b-406c-4129-b5be-039a46ecc4a3</entry>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     </system>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   <os>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   </os>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   <features>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <apic/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   </features>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   </clock>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/f041318b-406c-4129-b5be-039a46ecc4a3_disk">
Nov 22 03:55:30 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       </source>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:55:30 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/f041318b-406c-4129-b5be-039a46ecc4a3_disk.config">
Nov 22 03:55:30 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       </source>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:55:30 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/f041318b-406c-4129-b5be-039a46ecc4a3/console.log" append="off"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     </serial>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <video>
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     </video>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 03:55:30 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 03:55:30 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 03:55:30 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:55:30 compute-0 nova_compute[253461]: </domain>
Nov 22 03:55:30 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.579 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.580 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.581 253465 INFO nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Using config drive
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.617 253465 DEBUG nova.storage.rbd_utils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] rbd image f041318b-406c-4129-b5be-039a46ecc4a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.625 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "refresh_cache-aba99a86-7eb3-4b04-b0a1-af00510f151c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.625 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquired lock "refresh_cache-aba99a86-7eb3-4b04-b0a1-af00510f151c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.626 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.626 253465 DEBUG nova.objects.instance [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lazy-loading 'info_cache' on Instance uuid aba99a86-7eb3-4b04-b0a1-af00510f151c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:55:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:30 compute-0 ceph-mon[75011]: osdmap e201: 3 total, 3 up, 3 in
Nov 22 03:55:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3313181844' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2915926221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2663757112' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.826 253465 INFO nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Creating config drive at /var/lib/nova/instances/f041318b-406c-4129-b5be-039a46ecc4a3/disk.config
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.834 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f041318b-406c-4129-b5be-039a46ecc4a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp11ign2_o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:30 compute-0 nova_compute[253461]: 2025-11-22 03:55:30.978 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f041318b-406c-4129-b5be-039a46ecc4a3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp11ign2_o" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:31 compute-0 nova_compute[253461]: 2025-11-22 03:55:31.019 253465 DEBUG nova.storage.rbd_utils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] rbd image f041318b-406c-4129-b5be-039a46ecc4a3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:55:31 compute-0 nova_compute[253461]: 2025-11-22 03:55:31.024 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f041318b-406c-4129-b5be-039a46ecc4a3/disk.config f041318b-406c-4129-b5be-039a46ecc4a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:31 compute-0 nova_compute[253461]: 2025-11-22 03:55:31.218 253465 DEBUG oslo_concurrency.processutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f041318b-406c-4129-b5be-039a46ecc4a3/disk.config f041318b-406c-4129-b5be-039a46ecc4a3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:31 compute-0 nova_compute[253461]: 2025-11-22 03:55:31.220 253465 INFO nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Deleting local config drive /var/lib/nova/instances/f041318b-406c-4129-b5be-039a46ecc4a3/disk.config because it was imported into RBD.
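The config-drive sequence just logged is: build an ISO9660 image with mkisofs, rbd import it into the vms pool as <uuid>_disk.config, then delete the local copy once it lives in RBD. A minimal sketch of the same round trip with subprocess, assuming mkisofs and the rbd CLI are available and /tmp/metadata is a hypothetical staging directory holding the metadata files:

import os
import subprocess

instance = "f041318b-406c-4129-b5be-039a46ecc4a3"
iso_path = f"/var/lib/nova/instances/{instance}/disk.config"

# 1. Build the config-drive ISO (volume label config-2, as in the log).
subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso_path,
     "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
     "-J", "-r", "-V", "config-2", "/tmp/metadata"],
    check=True)

# 2. Import it into RBD so the guest can attach it as a cdrom.
subprocess.run(
    ["rbd", "import", "--pool", "vms", iso_path,
     f"{instance}_disk.config", "--image-format=2",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True)

# 3. The local copy is no longer needed once it was imported into RBD.
os.remove(iso_path)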
Nov 22 03:55:31 compute-0 systemd-machined[215728]: New machine qemu-7-instance-00000007.
Nov 22 03:55:31 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 22 03:55:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Nov 22 03:55:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Nov 22 03:55:31 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Nov 22 03:55:31 compute-0 ceph-mon[75011]: pgmap v1118: 305 pgs: 305 active+clean; 213 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 27 KiB/s wr, 91 op/s
Nov 22 03:55:31 compute-0 nova_compute[253461]: 2025-11-22 03:55:31.975 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.101 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Updating instance_info_cache with network_info: [{"id": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "address": "fa:16:3e:23:f5:c4", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef4eaf41-39", "ovs_interfaceid": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
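The instance_info_cache payload above is ordinary JSON, so listing the fixed and floating addresses it carries takes only a few lines. A self-contained sketch over a copy trimmed to the fields used:

# Trimmed from the logged info cache; structure preserved.
network_info = [{
    "id": "ef4eaf41-39f7-49af-bd4c-c963bd03246b",
    "address": "fa:16:3e:23:f5:c4",
    "network": {"subnets": [{
        "cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.3", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.242",
                                   "type": "floating"}]}],
    }]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print(vif["id"], ip["address"], "fixed")
            for fip in ip.get("floating_ips", []):
                print(vif["id"], fip["address"], "floating")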
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.122 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Releasing lock "refresh_cache-aba99a86-7eb3-4b04-b0a1-af00510f151c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.123 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.124 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.124 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.124 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:55:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 207 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 384 KiB/s rd, 2.4 MiB/s wr, 166 op/s
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.367 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783732.3669603, f041318b-406c-4129-b5be-039a46ecc4a3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.368 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] VM Resumed (Lifecycle Event)
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.372 253465 DEBUG nova.compute.manager [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.372 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.378 253465 INFO nova.virt.libvirt.driver [-] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Instance spawned successfully.
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.379 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.397 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.405 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.410 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.411 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.412 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.413 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.413 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.415 253465 DEBUG nova.virt.libvirt.driver [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.426 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.427 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783732.3715334, f041318b-406c-4129-b5be-039a46ecc4a3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.427 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] VM Started (Lifecycle Event)
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.462 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.466 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
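Both sync lines compare the DB's last recorded power state (0) against what libvirt reports (1). Those small integers come from nova.compute.power_state; a minimal decoding table for the values involved, assuming the constants as defined in that module:

# Integer states per nova.compute.power_state.
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

db_power_state, vm_power_state = 0, 1      # values from the sync line above
print(POWER_STATES[db_power_state], "->", POWER_STATES[vm_power_state])
# NOSTATE -> RUNNING: the guest is up before the DB catches up (still spawning)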
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.473 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.473 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.474 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.474 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.475 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.508 253465 INFO nova.compute.manager [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Took 3.91 seconds to spawn the instance on the hypervisor.
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.509 253465 DEBUG nova.compute.manager [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.510 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.576 253465 INFO nova.compute.manager [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Took 5.42 seconds to build instance.
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.597 253465 DEBUG oslo_concurrency.lockutils [None req-56156f37-8129-43ec-8ab6-a333c0acfa17 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "f041318b-406c-4129-b5be-039a46ecc4a3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.610 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:32 compute-0 ceph-mon[75011]: osdmap e202: 3 total, 3 up, 3 in
Nov 22 03:55:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:55:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/721073364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:32 compute-0 nova_compute[253461]: 2025-11-22 03:55:32.912 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.001 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.002 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.007 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.007 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.008 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000005 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.198 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.202 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4441MB free_disk=59.923797607421875GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
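The pci_devices field in the hypervisor view above is a JSON list, so ad hoc summaries are easy, e.g. grouping the host's functions by vendor (8086 = Intel chipset devices, 1af4 = virtio). A self-contained sketch over a trimmed copy:

import json
from collections import Counter

# Three of the eleven devices logged above, trimmed to the fields used.
pci_devices = json.loads("""[
  {"address": "0000:00:01.2", "vendor_id": "8086", "product_id": "7020"},
  {"address": "0000:00:07.0", "vendor_id": "1af4", "product_id": "1000"},
  {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"}
]""")

print(Counter(dev["vendor_id"] for dev in pci_devices))
# Counter({'1af4': 2, '8086': 1}) for this trimmed list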
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.203 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.204 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.321 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance aba99a86-7eb3-4b04-b0a1-af00510f151c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.322 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance f041318b-406c-4129-b5be-039a46ecc4a3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.323 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.324 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
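The final view's numbers reconcile with what this section shows: two m1.nano guests (aba99a86 and f041318b) at 128MB RAM / 1GB disk / 1 VCPU each, plus the 512MB host RAM reservation from the inventory. A quick arithmetic check:

reserved_ram_mb = 512            # host reservation from the MEMORY_MB inventory
guests = [{"ram": 128, "disk": 1, "vcpus": 1}] * 2   # two m1.nano instances

used_ram = reserved_ram_mb + sum(g["ram"] for g in guests)
used_disk = sum(g["disk"] for g in guests)
used_vcpus = sum(g["vcpus"] for g in guests)

assert (used_ram, used_disk, used_vcpus) == (768, 2, 2)  # matches the log line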
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.396 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:55:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2993112869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.882 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.890 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:55:33 compute-0 ceph-mon[75011]: pgmap v1120: 305 pgs: 305 active+clean; 207 MiB data, 356 MiB used, 60 GiB / 60 GiB avail; 384 KiB/s rd, 2.4 MiB/s wr, 166 op/s
Nov 22 03:55:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/721073364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.937 253465 DEBUG oslo_concurrency.lockutils [None req-b478e20b-f801-4fb7-846c-4bf83b22b0de e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.938 253465 DEBUG oslo_concurrency.lockutils [None req-b478e20b-f801-4fb7-846c-4bf83b22b0de e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.947 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:55:33 compute-0 nova_compute[253461]: 2025-11-22 03:55:33.981 253465 INFO nova.compute.manager [None req-b478e20b-f801-4fb7-846c-4bf83b22b0de e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Detaching volume 1518a74a-225b-4087-aa7a-7c509cb8ecf3
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.050 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.051 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.149 253465 INFO nova.virt.block_device [None req-b478e20b-f801-4fb7-846c-4bf83b22b0de e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Attempting to driver detach volume 1518a74a-225b-4087-aa7a-7c509cb8ecf3 from mountpoint /dev/vdb
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.160 253465 DEBUG nova.virt.libvirt.driver [None req-b478e20b-f801-4fb7-846c-4bf83b22b0de e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Attempting to detach device vdb from instance aba99a86-7eb3-4b04-b0a1-af00510f151c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.161 253465 DEBUG nova.virt.libvirt.guest [None req-b478e20b-f801-4fb7-846c-4bf83b22b0de e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:55:34 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:55:34 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-1518a74a-225b-4087-aa7a-7c509cb8ecf3">
Nov 22 03:55:34 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:55:34 compute-0 nova_compute[253461]:   </source>
Nov 22 03:55:34 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:55:34 compute-0 nova_compute[253461]:   <serial>1518a74a-225b-4087-aa7a-7c509cb8ecf3</serial>
Nov 22 03:55:34 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:55:34 compute-0 nova_compute[253461]: </disk>
Nov 22 03:55:34 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:55:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 213 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 3.6 MiB/s wr, 155 op/s
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.178 253465 INFO nova.virt.libvirt.driver [None req-b478e20b-f801-4fb7-846c-4bf83b22b0de e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Successfully detached device vdb from instance aba99a86-7eb3-4b04-b0a1-af00510f151c from the persistent domain config.
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.178 253465 DEBUG nova.virt.libvirt.driver [None req-b478e20b-f801-4fb7-846c-4bf83b22b0de e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance aba99a86-7eb3-4b04-b0a1-af00510f151c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.179 253465 DEBUG nova.virt.libvirt.guest [None req-b478e20b-f801-4fb7-846c-4bf83b22b0de e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:55:34 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:55:34 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-1518a74a-225b-4087-aa7a-7c509cb8ecf3">
Nov 22 03:55:34 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:55:34 compute-0 nova_compute[253461]:   </source>
Nov 22 03:55:34 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:55:34 compute-0 nova_compute[253461]:   <serial>1518a74a-225b-4087-aa7a-7c509cb8ecf3</serial>
Nov 22 03:55:34 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:55:34 compute-0 nova_compute[253461]: </disk>
Nov 22 03:55:34 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:55:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:55:34 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1651477550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:55:34 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1651477550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.370 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763783734.369766, aba99a86-7eb3-4b04-b0a1-af00510f151c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.372 253465 DEBUG nova.virt.libvirt.driver [None req-b478e20b-f801-4fb7-846c-4bf83b22b0de e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance aba99a86-7eb3-4b04-b0a1-af00510f151c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.375 253465 INFO nova.virt.libvirt.driver [None req-b478e20b-f801-4fb7-846c-4bf83b22b0de e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Successfully detached device vdb from instance aba99a86-7eb3-4b04-b0a1-af00510f151c from the live domain config.
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.558 253465 DEBUG nova.objects.instance [None req-b478e20b-f801-4fb7-846c-4bf83b22b0de e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lazy-loading 'flavor' on Instance uuid aba99a86-7eb3-4b04-b0a1-af00510f151c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:55:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:34.576 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.596 253465 DEBUG oslo_concurrency.lockutils [None req-b478e20b-f801-4fb7-846c-4bf83b22b0de e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.816 253465 DEBUG oslo_concurrency.lockutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Acquiring lock "f041318b-406c-4129-b5be-039a46ecc4a3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.818 253465 DEBUG oslo_concurrency.lockutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "f041318b-406c-4129-b5be-039a46ecc4a3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.819 253465 DEBUG oslo_concurrency.lockutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Acquiring lock "f041318b-406c-4129-b5be-039a46ecc4a3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.820 253465 DEBUG oslo_concurrency.lockutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "f041318b-406c-4129-b5be-039a46ecc4a3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.820 253465 DEBUG oslo_concurrency.lockutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "f041318b-406c-4129-b5be-039a46ecc4a3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.822 253465 INFO nova.compute.manager [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Terminating instance
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.824 253465 DEBUG oslo_concurrency.lockutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Acquiring lock "refresh_cache-f041318b-406c-4129-b5be-039a46ecc4a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.824 253465 DEBUG oslo_concurrency.lockutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Acquired lock "refresh_cache-f041318b-406c-4129-b5be-039a46ecc4a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.825 253465 DEBUG nova.network.neutron [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 03:55:34 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2993112869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:34 compute-0 ceph-mon[75011]: pgmap v1121: 305 pgs: 305 active+clean; 213 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 3.6 MiB/s wr, 155 op/s
Nov 22 03:55:34 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1651477550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:34 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1651477550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:34 compute-0 nova_compute[253461]: 2025-11-22 03:55:34.971 253465 DEBUG nova.network.neutron [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.157 253465 DEBUG oslo_concurrency.lockutils [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.160 253465 DEBUG oslo_concurrency.lockutils [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.160 253465 DEBUG oslo_concurrency.lockutils [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.161 253465 DEBUG oslo_concurrency.lockutils [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.161 253465 DEBUG oslo_concurrency.lockutils [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
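[annotation] The acquire/release pairs above are oslo.concurrency's named in-process locks serializing work per instance UUID. A minimal sketch of that pattern; the function body is illustrative, not Nova's actual do_terminate_instance.

```python
# Sketch: serialize per-instance operations with oslo.concurrency named
# locks; lockutils emits the "acquired"/"released" debug lines seen above.
from oslo_concurrency import lockutils

INSTANCE_UUID = "aba99a86-7eb3-4b04-b0a1-af00510f151c"

@lockutils.synchronized(INSTANCE_UUID)
def do_terminate_instance():
    # Only one thread holding this lock name runs here at a time.
    print("terminating", INSTANCE_UUID)

do_terminate_instance()
```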
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.162 253465 INFO nova.compute.manager [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Terminating instance
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.163 253465 DEBUG nova.compute.manager [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.204 253465 DEBUG nova.network.neutron [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:55:35 compute-0 kernel: tapef4eaf41-39 (unregistering): left promiscuous mode
Nov 22 03:55:35 compute-0 NetworkManager[48916]: <info>  [1763783735.2151] device (tapef4eaf41-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.219 253465 DEBUG oslo_concurrency.lockutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Releasing lock "refresh_cache-f041318b-406c-4129-b5be-039a46ecc4a3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.220 253465 DEBUG nova.compute.manager [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 03:55:35 compute-0 ovn_controller[152691]: 2025-11-22T03:55:35Z|00087|binding|INFO|Releasing lport ef4eaf41-39f7-49af-bd4c-c963bd03246b from this chassis (sb_readonly=0)
Nov 22 03:55:35 compute-0 ovn_controller[152691]: 2025-11-22T03:55:35Z|00088|binding|INFO|Setting lport ef4eaf41-39f7-49af-bd4c-c963bd03246b down in Southbound
Nov 22 03:55:35 compute-0 ovn_controller[152691]: 2025-11-22T03:55:35Z|00089|binding|INFO|Removing iface tapef4eaf41-39 ovn-installed in OVS
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.235 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:23:f5:c4 10.100.0.3'], port_security=['fa:16:3e:23:f5:c4 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'aba99a86-7eb3-4b04-b0a1-af00510f151c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e17fcbd6721457f93b2fe5018fb3534', 'neutron:revision_number': '4', 'neutron:security_group_ids': '92e09f3d-f050-4dc2-85f5-d034b683dde7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.242'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d20e9a4c-63a4-481f-abc2-5dcc033feed1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=ef4eaf41-39f7-49af-bd4c-c963bd03246b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.237 162689 INFO neutron.agent.ovn.metadata.agent [-] Port ef4eaf41-39f7-49af-bd4c-c963bd03246b in datapath b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb unbound from our chassis
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.232 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.239 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.241 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b5bb1598-0dde-446c-a520-853cbb7d06a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.242 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb namespace which is not needed anymore
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.257 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:35 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 22 03:55:35 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 14.508s CPU time.
Nov 22 03:55:35 compute-0 systemd-machined[215728]: Machine qemu-5-instance-00000005 terminated.
Nov 22 03:55:35 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 22 03:55:35 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 3.936s CPU time.
Nov 22 03:55:35 compute-0 systemd-machined[215728]: Machine qemu-7-instance-00000007 terminated.
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.383 253465 DEBUG nova.compute.manager [req-b4881463-415e-4c98-8f50-0973726b4ce1 req-cfe0de05-c01d-437d-8662-33e0daf1c311 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Received event network-vif-unplugged-ef4eaf41-39f7-49af-bd4c-c963bd03246b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.383 253465 DEBUG oslo_concurrency.lockutils [req-b4881463-415e-4c98-8f50-0973726b4ce1 req-cfe0de05-c01d-437d-8662-33e0daf1c311 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.383 253465 DEBUG oslo_concurrency.lockutils [req-b4881463-415e-4c98-8f50-0973726b4ce1 req-cfe0de05-c01d-437d-8662-33e0daf1c311 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.383 253465 DEBUG oslo_concurrency.lockutils [req-b4881463-415e-4c98-8f50-0973726b4ce1 req-cfe0de05-c01d-437d-8662-33e0daf1c311 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.384 253465 DEBUG nova.compute.manager [req-b4881463-415e-4c98-8f50-0973726b4ce1 req-cfe0de05-c01d-437d-8662-33e0daf1c311 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] No waiting events found dispatching network-vif-unplugged-ef4eaf41-39f7-49af-bd4c-c963bd03246b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.384 253465 DEBUG nova.compute.manager [req-b4881463-415e-4c98-8f50-0973726b4ce1 req-cfe0de05-c01d-437d-8662-33e0daf1c311 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Received event network-vif-unplugged-ef4eaf41-39f7-49af-bd4c-c963bd03246b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.400 253465 INFO nova.virt.libvirt.driver [-] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Instance destroyed successfully.
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.400 253465 DEBUG nova.objects.instance [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lazy-loading 'resources' on Instance uuid aba99a86-7eb3-4b04-b0a1-af00510f151c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:55:35 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[267383]: [NOTICE]   (267412) : haproxy version is 2.8.14-c23fe91
Nov 22 03:55:35 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[267383]: [NOTICE]   (267412) : path to executable is /usr/sbin/haproxy
Nov 22 03:55:35 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[267383]: [WARNING]  (267412) : Exiting Master process...
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.413 253465 DEBUG nova.virt.libvirt.vif [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T03:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1711662817',display_name='tempest-VolumesBackupsTest-instance-1711662817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1711662817',id=5,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHwXsHHy1xGYZXLg+LsUn3gWIkTsp4l4HY0nvRL+dD6i2yij/zKCBTxuexKjTFl9PGA59sNZ0i5+2v/21gKSsKAKbtEmi3JvcZN1AnAPr5oFBuv+gPNCQ9f9WOOcd/UJDg==',key_name='tempest-keypair-1336013874',keypairs=<?>,launch_index=0,launched_at=2025-11-22T03:55:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8e17fcbd6721457f93b2fe5018fb3534',ramdisk_id='',reservation_id='r-5q76p7u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-922932240',owner_user_name='tempest-VolumesBackupsTest-922932240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T03:55:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e45192a50149470daea6e26cfd2de3a9',uuid=aba99a86-7eb3-4b04-b0a1-af00510f151c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "address": "fa:16:3e:23:f5:c4", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef4eaf41-39", "ovs_interfaceid": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.413 253465 DEBUG nova.network.os_vif_util [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Converting VIF {"id": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "address": "fa:16:3e:23:f5:c4", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef4eaf41-39", "ovs_interfaceid": "ef4eaf41-39f7-49af-bd4c-c963bd03246b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:55:35 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[267383]: [ALERT]    (267412) : Current worker (267416) exited with code 143 (Terminated)
Nov 22 03:55:35 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[267383]: [WARNING]  (267412) : All workers exited. Exiting... (0)
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.414 253465 DEBUG nova.network.os_vif_util [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:23:f5:c4,bridge_name='br-int',has_traffic_filtering=True,id=ef4eaf41-39f7-49af-bd4c-c963bd03246b,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef4eaf41-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.414 253465 DEBUG os_vif [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:f5:c4,bridge_name='br-int',has_traffic_filtering=True,id=ef4eaf41-39f7-49af-bd4c-c963bd03246b,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef4eaf41-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.415 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.416 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef4eaf41-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
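[annotation] The DelPortCommand transaction above is os-vif removing the tap port over the OVSDB protocol. A rough command-line equivalent, run here via subprocess for illustration; os-vif itself does not shell out, and ovs-vsctl availability is assumed. --if-exists mirrors if_exists=True in the logged command.

```python
# Sketch: CLI equivalent of the DelPortCommand transaction above.
import subprocess

subprocess.run(
    ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapef4eaf41-39"],
    check=True,
)
```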
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.417 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:35 compute-0 systemd[1]: libpod-51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15.scope: Deactivated successfully.
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.420 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.421 253465 INFO os_vif [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:23:f5:c4,bridge_name='br-int',has_traffic_filtering=True,id=ef4eaf41-39f7-49af-bd4c-c963bd03246b,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef4eaf41-39')
Nov 22 03:55:35 compute-0 podman[269107]: 2025-11-22 03:55:35.423296715 +0000 UTC m=+0.061638155 container died 51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 03:55:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15-userdata-shm.mount: Deactivated successfully.
Nov 22 03:55:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c3833a7386fd88d1ec1dee65f72faa765e4fb1f52b598a096070a61b8cbdc6a-merged.mount: Deactivated successfully.
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.457 253465 INFO nova.virt.libvirt.driver [-] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Instance destroyed successfully.
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.457 253465 DEBUG nova.objects.instance [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lazy-loading 'resources' on Instance uuid f041318b-406c-4129-b5be-039a46ecc4a3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:55:35 compute-0 podman[269107]: 2025-11-22 03:55:35.468600516 +0000 UTC m=+0.106941946 container cleanup 51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 03:55:35 compute-0 systemd[1]: libpod-conmon-51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15.scope: Deactivated successfully.
Nov 22 03:55:35 compute-0 podman[269165]: 2025-11-22 03:55:35.537044238 +0000 UTC m=+0.050198176 container remove 51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.542 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[08610f64-d7df-403a-b479-e0dd8f8bb524]: (4, ('Sat Nov 22 03:55:35 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb (51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15)\n51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15\nSat Nov 22 03:55:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb (51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15)\n51af8ded661eca004bc9d30ed7ca6b4a05f64eb3374dd7d5c57d5a6e58606c15\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.544 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[80ad9e97-b34f-4122-a685-855c1adce7ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.545 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb33063bb-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:55:35 compute-0 kernel: tapb33063bb-90: left promiscuous mode
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.547 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.568 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.574 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f20809ee-c3d7-4dde-925c-f1f0ff480c64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.588 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[94a9029e-5913-455b-a1c1-3ee46870a803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.590 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5b52d0e0-4ad7-4de6-b137-5832f4587729]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.611 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[58c51895-6717-4ecd-a628-68c71a1a94bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 395417, 'reachable_time': 35317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269202, 'error': None, 'target': 'ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.613 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 03:55:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:55:35.613 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[da5328a8-9ae3-4902-95b5-114e33afd4cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:55:35 compute-0 systemd[1]: run-netns-ovnmeta\x2db33063bb\x2d98b7\x2d49c3\x2d9e0b\x2d1ae7b9fe02cb.mount: Deactivated successfully.
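[annotation] remove_netns above is the metadata agent deleting the now-empty ovnmeta namespace via its privsep daemon. A bare sketch of the underlying operation with pyroute2; it requires root (neutron runs it under privsep), and the namespace name is taken from the log.

```python
# Sketch: delete a named network namespace, as
# neutron.privileged.agent.linux.ip_lib.remove_netns ultimately does.
from pyroute2 import netns

NS = "ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb"
if NS in netns.listnetns():
    netns.remove(NS)  # unmounts /run/netns/<NS>, matching the systemd line above
```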
Nov 22 03:55:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:55:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2901 syncs, 3.59 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4648 writes, 16K keys, 4648 commit groups, 1.0 writes per commit group, ingest: 9.22 MB, 0.02 MB/s
                                           Interval WAL: 4647 writes, 1969 syncs, 2.36 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
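[annotation] The RocksDB stats dump above is internally consistent: "writes per sync" is just writes divided by syncs. A quick check of the interval WAL line:

```python
# Sketch: reproduce the "writes per sync" ratio from the Interval WAL line.
interval_writes = 4647
interval_syncs = 1969
print(round(interval_writes / interval_syncs, 2))  # 2.36, matching the log
```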
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.968 253465 INFO nova.virt.libvirt.driver [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Deleting instance files /var/lib/nova/instances/aba99a86-7eb3-4b04-b0a1-af00510f151c_del
Nov 22 03:55:35 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.969 253465 INFO nova.virt.libvirt.driver [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Deletion of /var/lib/nova/instances/aba99a86-7eb3-4b04-b0a1-af00510f151c_del complete
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.998 253465 INFO nova.virt.libvirt.driver [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Deleting instance files /var/lib/nova/instances/f041318b-406c-4129-b5be-039a46ecc4a3_del
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:35.999 253465 INFO nova.virt.libvirt.driver [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Deletion of /var/lib/nova/instances/f041318b-406c-4129-b5be-039a46ecc4a3_del complete
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.050 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.051 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.051 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.062 253465 INFO nova.compute.manager [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Took 0.90 seconds to destroy the instance on the hypervisor.
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.063 253465 DEBUG oslo.service.loopingcall [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.064 253465 DEBUG nova.compute.manager [-] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.065 253465 DEBUG nova.network.neutron [-] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.075 253465 INFO nova.compute.manager [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Took 0.85 seconds to destroy the instance on the hypervisor.
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.076 253465 DEBUG oslo.service.loopingcall [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.076 253465 DEBUG nova.compute.manager [-] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.076 253465 DEBUG nova.network.neutron [-] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 176 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 190 KiB/s rd, 2.9 MiB/s wr, 172 op/s
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:55:36
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['vms', '.mgr', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data']
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.308 253465 DEBUG nova.network.neutron [-] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.326 253465 DEBUG nova.network.neutron [-] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.346 253465 INFO nova.compute.manager [-] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Took 0.27 seconds to deallocate network for instance.
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.400 253465 DEBUG oslo_concurrency.lockutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.402 253465 DEBUG oslo_concurrency.lockutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:55:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.506 253465 DEBUG oslo_concurrency.processutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:55:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2761632384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:55:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2761632384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:55:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2642648014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.953 253465 DEBUG oslo_concurrency.processutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
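[annotation] The `ceph df --format=json` call logged above (0.447 s) is how Nova's RBD image backend samples pool capacity for the resource tracker. A minimal sketch of running and parsing it; the client id and conf path are copied from the logged command, and the JSON key names (stats/total_bytes, pools[].stats.max_avail) follow Ceph's schema but should be verified against your release.

```python
# Sketch: run the same command the log shows and pull capacity from the JSON.
import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, check=True, text=True,
).stdout
df = json.loads(out)

print("cluster total bytes:", df["stats"]["total_bytes"])
for pool in df["pools"]:
    print(pool["name"], "max_avail:", pool["stats"]["max_avail"])
```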
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.960 253465 DEBUG nova.compute.provider_tree [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.976 253465 DEBUG nova.scheduler.client.report [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
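[annotation] The inventory dict above maps to schedulable capacity through Placement's usual formula, capacity = (total - reserved) * allocation_ratio. Worked out for this host as a quick arithmetic check:

```python
# Sketch: Placement capacity from the inventory reported above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```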
Nov 22 03:55:36 compute-0 nova_compute[253461]: 2025-11-22 03:55:36.982 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.010 253465 DEBUG oslo_concurrency.lockutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.032 253465 INFO nova.scheduler.client.report [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Deleted allocations for instance f041318b-406c-4129-b5be-039a46ecc4a3
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.094 253465 DEBUG oslo_concurrency.lockutils [None req-dfc0e31b-99a3-4c67-8ddc-0dd4bac90717 896cddaa3dc7442a91fd7faef66f447e 97d8dc3a92894cd1935b910e178e786d - - default default] Lock "f041318b-406c-4129-b5be-039a46ecc4a3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:37 compute-0 ceph-mon[75011]: pgmap v1122: 305 pgs: 305 active+clean; 176 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 190 KiB/s rd, 2.9 MiB/s wr, 172 op/s
Nov 22 03:55:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2761632384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2761632384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2642648014' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.358 253465 DEBUG nova.network.neutron [-] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.378 253465 INFO nova.compute.manager [-] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Took 1.31 seconds to deallocate network for instance.
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.453 253465 DEBUG oslo_concurrency.lockutils [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.453 253465 DEBUG oslo_concurrency.lockutils [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.506 253465 DEBUG oslo_concurrency.processutils [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.531 253465 DEBUG nova.compute.manager [req-6dac9176-d6a9-4a34-a039-a54f1cceddda req-68397ffb-ce8d-4ef3-b145-047fe199c580 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Received event network-vif-plugged-ef4eaf41-39f7-49af-bd4c-c963bd03246b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.532 253465 DEBUG oslo_concurrency.lockutils [req-6dac9176-d6a9-4a34-a039-a54f1cceddda req-68397ffb-ce8d-4ef3-b145-047fe199c580 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.532 253465 DEBUG oslo_concurrency.lockutils [req-6dac9176-d6a9-4a34-a039-a54f1cceddda req-68397ffb-ce8d-4ef3-b145-047fe199c580 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.533 253465 DEBUG oslo_concurrency.lockutils [req-6dac9176-d6a9-4a34-a039-a54f1cceddda req-68397ffb-ce8d-4ef3-b145-047fe199c580 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.533 253465 DEBUG nova.compute.manager [req-6dac9176-d6a9-4a34-a039-a54f1cceddda req-68397ffb-ce8d-4ef3-b145-047fe199c580 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] No waiting events found dispatching network-vif-plugged-ef4eaf41-39f7-49af-bd4c-c963bd03246b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.534 253465 WARNING nova.compute.manager [req-6dac9176-d6a9-4a34-a039-a54f1cceddda req-68397ffb-ce8d-4ef3-b145-047fe199c580 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Received unexpected event network-vif-plugged-ef4eaf41-39f7-49af-bd4c-c963bd03246b for instance with vm_state deleted and task_state None.
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.534 253465 DEBUG nova.compute.manager [req-6dac9176-d6a9-4a34-a039-a54f1cceddda req-68397ffb-ce8d-4ef3-b145-047fe199c580 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Received event network-vif-deleted-ef4eaf41-39f7-49af-bd4c-c963bd03246b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:55:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:55:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4292239931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.926 253465 DEBUG oslo_concurrency.processutils [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:55:37 compute-0 nova_compute[253461]: 2025-11-22 03:55:37.932 253465 DEBUG nova.compute.provider_tree [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:55:38 compute-0 nova_compute[253461]: 2025-11-22 03:55:38.034 253465 DEBUG nova.scheduler.client.report [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:55:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 325 op/s
Nov 22 03:55:38 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4292239931' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:55:38 compute-0 nova_compute[253461]: 2025-11-22 03:55:38.389 253465 DEBUG oslo_concurrency.lockutils [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:38 compute-0 nova_compute[253461]: 2025-11-22 03:55:38.420 253465 INFO nova.scheduler.client.report [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Deleted allocations for instance aba99a86-7eb3-4b04-b0a1-af00510f151c
Nov 22 03:55:38 compute-0 nova_compute[253461]: 2025-11-22 03:55:38.512 253465 DEBUG oslo_concurrency.lockutils [None req-c143362f-7521-43ba-8ffa-7b9494b1e328 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "aba99a86-7eb3-4b04-b0a1-af00510f151c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:55:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Nov 22 03:55:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Nov 22 03:55:39 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Nov 22 03:55:39 compute-0 ceph-mon[75011]: pgmap v1123: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 325 op/s
Nov 22 03:55:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 887 KiB/s wr, 238 op/s
Nov 22 03:55:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Nov 22 03:55:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Nov 22 03:55:40 compute-0 ceph-mon[75011]: osdmap e203: 3 total, 3 up, 3 in
Nov 22 03:55:40 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Nov 22 03:55:40 compute-0 nova_compute[253461]: 2025-11-22 03:55:40.418 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:55:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2463039836' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:55:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2463039836' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Nov 22 03:55:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Nov 22 03:55:40 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Nov 22 03:55:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:55:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 11K writes, 46K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 2997 syncs, 3.75 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4266 writes, 16K keys, 4266 commit groups, 1.0 writes per commit group, ingest: 10.22 MB, 0.02 MB/s
                                           Interval WAL: 4266 writes, 1776 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 03:55:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:55:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/511219635' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:55:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/511219635' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:41 compute-0 ceph-mon[75011]: pgmap v1125: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 887 KiB/s wr, 238 op/s
Nov 22 03:55:41 compute-0 ceph-mon[75011]: osdmap e204: 3 total, 3 up, 3 in
Nov 22 03:55:41 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2463039836' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:41 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2463039836' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:41 compute-0 ceph-mon[75011]: osdmap e205: 3 total, 3 up, 3 in
Nov 22 03:55:41 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/511219635' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:41 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/511219635' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:42 compute-0 nova_compute[253461]: 2025-11-22 03:55:42.021 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.8 KiB/s wr, 338 op/s
Nov 22 03:55:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Nov 22 03:55:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Nov 22 03:55:42 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Nov 22 03:55:42 compute-0 nova_compute[253461]: 2025-11-22 03:55:42.483 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763783727.48176, b70f7046-cee2-4015-8667-b06915b0d166 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:55:42 compute-0 nova_compute[253461]: 2025-11-22 03:55:42.483 253465 INFO nova.compute.manager [-] [instance: b70f7046-cee2-4015-8667-b06915b0d166] VM Stopped (Lifecycle Event)
Nov 22 03:55:42 compute-0 nova_compute[253461]: 2025-11-22 03:55:42.515 253465 DEBUG nova.compute.manager [None req-0e25aeee-7d09-41a1-bcad-fa3cc54f6b8d - - - - - -] [instance: b70f7046-cee2-4015-8667-b06915b0d166] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:43 compute-0 podman[269249]: 2025-11-22 03:55:43.414047503 +0000 UTC m=+0.089704022 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 03:55:43 compute-0 ceph-mon[75011]: pgmap v1128: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.8 KiB/s wr, 338 op/s
Nov 22 03:55:43 compute-0 ceph-mon[75011]: osdmap e206: 3 total, 3 up, 3 in
Nov 22 03:55:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Nov 22 03:55:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Nov 22 03:55:43 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Nov 22 03:55:43 compute-0 podman[269250]: 2025-11-22 03:55:43.515451915 +0000 UTC m=+0.186832177 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 03:55:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:55:43 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1685771332' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:55:43 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1685771332' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 9.0 KiB/s wr, 201 op/s
Nov 22 03:55:44 compute-0 ceph-mon[75011]: osdmap e207: 3 total, 3 up, 3 in
Nov 22 03:55:44 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1685771332' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:44 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1685771332' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:45 compute-0 nova_compute[253461]: 2025-11-22 03:55:45.421 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Nov 22 03:55:45 compute-0 ceph-mon[75011]: pgmap v1131: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 9.0 KiB/s wr, 201 op/s
Nov 22 03:55:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Nov 22 03:55:45 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Nov 22 03:55:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e208 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Nov 22 03:55:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Nov 22 03:55:45 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Nov 22 03:55:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 03:55:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 9476 writes, 38K keys, 9476 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 9476 writes, 2517 syncs, 3.76 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3840 writes, 14K keys, 3840 commit groups, 1.0 writes per commit group, ingest: 8.81 MB, 0.01 MB/s
                                           Interval WAL: 3840 writes, 1648 syncs, 2.33 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 7.7 KiB/s wr, 149 op/s
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003465693576219314 of space, bias 1.0, pg target 0.10397080728657941 quantized to 32 (current 32)
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:55:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 03:55:46 compute-0 ceph-mon[75011]: osdmap e208: 3 total, 3 up, 3 in
Nov 22 03:55:46 compute-0 ceph-mon[75011]: osdmap e209: 3 total, 3 up, 3 in
Nov 22 03:55:47 compute-0 nova_compute[253461]: 2025-11-22 03:55:47.064 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 11 KiB/s wr, 187 op/s
Nov 22 03:55:48 compute-0 ceph-mon[75011]: pgmap v1134: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 7.7 KiB/s wr, 149 op/s
Nov 22 03:55:49 compute-0 ceph-mon[75011]: pgmap v1135: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 11 KiB/s wr, 187 op/s
Nov 22 03:55:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:55:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2971929688' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:55:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2971929688' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:55:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/363720303' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:55:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/363720303' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 7.9 KiB/s wr, 142 op/s
Nov 22 03:55:50 compute-0 nova_compute[253461]: 2025-11-22 03:55:50.396 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763783735.3953412, aba99a86-7eb3-4b04-b0a1-af00510f151c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:55:50 compute-0 nova_compute[253461]: 2025-11-22 03:55:50.397 253465 INFO nova.compute.manager [-] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] VM Stopped (Lifecycle Event)
Nov 22 03:55:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2971929688' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2971929688' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/363720303' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:55:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/363720303' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:55:50 compute-0 ceph-mon[75011]: pgmap v1136: 305 pgs: 305 active+clean; 88 MiB data, 287 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 7.9 KiB/s wr, 142 op/s
Nov 22 03:55:50 compute-0 nova_compute[253461]: 2025-11-22 03:55:50.423 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:50 compute-0 nova_compute[253461]: 2025-11-22 03:55:50.430 253465 DEBUG nova.compute.manager [None req-fea990db-eacf-4dcc-ad7c-d853273f4d48 - - - - - -] [instance: aba99a86-7eb3-4b04-b0a1-af00510f151c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:50 compute-0 nova_compute[253461]: 2025-11-22 03:55:50.457 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763783735.4561012, f041318b-406c-4129-b5be-039a46ecc4a3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:55:50 compute-0 nova_compute[253461]: 2025-11-22 03:55:50.457 253465 INFO nova.compute.manager [-] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] VM Stopped (Lifecycle Event)
Nov 22 03:55:50 compute-0 nova_compute[253461]: 2025-11-22 03:55:50.478 253465 DEBUG nova.compute.manager [None req-46633ea2-c97c-41f9-8ab5-a1c0bd67de01 - - - - - -] [instance: f041318b-406c-4129-b5be-039a46ecc4a3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:55:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Nov 22 03:55:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Nov 22 03:55:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Nov 22 03:55:51 compute-0 ceph-mon[75011]: osdmap e210: 3 total, 3 up, 3 in
Nov 22 03:55:51 compute-0 ceph-mgr[75294]: [devicehealth INFO root] Check health
Nov 22 03:55:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:55:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/724748617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:52 compute-0 nova_compute[253461]: 2025-11-22 03:55:52.107 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 109 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.4 MiB/s wr, 157 op/s
Nov 22 03:55:52 compute-0 nova_compute[253461]: 2025-11-22 03:55:52.339 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:52 compute-0 nova_compute[253461]: 2025-11-22 03:55:52.574 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Nov 22 03:55:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Nov 22 03:55:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Nov 22 03:55:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/724748617' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:52 compute-0 ceph-mon[75011]: pgmap v1138: 305 pgs: 305 active+clean; 109 MiB data, 296 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.4 MiB/s wr, 157 op/s
Nov 22 03:55:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Nov 22 03:55:53 compute-0 ceph-mon[75011]: osdmap e211: 3 total, 3 up, 3 in
Nov 22 03:55:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Nov 22 03:55:53 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Nov 22 03:55:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 166 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.6 MiB/s wr, 137 op/s
Nov 22 03:55:54 compute-0 ceph-mon[75011]: osdmap e212: 3 total, 3 up, 3 in
Nov 22 03:55:54 compute-0 ceph-mon[75011]: pgmap v1141: 305 pgs: 305 active+clean; 166 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.6 MiB/s wr, 137 op/s
Nov 22 03:55:55 compute-0 nova_compute[253461]: 2025-11-22 03:55:55.471 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:55:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 166 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.6 MiB/s wr, 118 op/s
Nov 22 03:55:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:55:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2169915025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:57 compute-0 nova_compute[253461]: 2025-11-22 03:55:57.109 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:55:57 compute-0 ceph-mon[75011]: pgmap v1142: 305 pgs: 305 active+clean; 166 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.6 MiB/s wr, 118 op/s
Nov 22 03:55:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2169915025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:55:57 compute-0 podman[269293]: 2025-11-22 03:55:57.427498743 +0000 UTC m=+0.093468635 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:55:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 180 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 125 op/s
Nov 22 03:55:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Nov 22 03:55:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Nov 22 03:55:58 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Nov 22 03:55:59 compute-0 ceph-mon[75011]: pgmap v1143: 305 pgs: 305 active+clean; 180 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 5.7 MiB/s wr, 125 op/s
Nov 22 03:55:59 compute-0 ceph-mon[75011]: osdmap e213: 3 total, 3 up, 3 in
Nov 22 03:56:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 203 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.7 MiB/s wr, 107 op/s
Nov 22 03:56:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:56:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1824152488' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:56:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1824152488' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Nov 22 03:56:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1824152488' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1824152488' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Nov 22 03:56:00 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Nov 22 03:56:00 compute-0 nova_compute[253461]: 2025-11-22 03:56:00.474 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:56:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3326354258' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:56:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3326354258' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:01 compute-0 ceph-mon[75011]: pgmap v1145: 305 pgs: 305 active+clean; 203 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 5.7 MiB/s wr, 107 op/s
Nov 22 03:56:01 compute-0 ceph-mon[75011]: osdmap e214: 3 total, 3 up, 3 in
Nov 22 03:56:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3326354258' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3326354258' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:02 compute-0 nova_compute[253461]: 2025-11-22 03:56:02.110 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 203 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.8 MiB/s wr, 127 op/s
Nov 22 03:56:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Nov 22 03:56:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Nov 22 03:56:02 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Nov 22 03:56:03 compute-0 ceph-mon[75011]: pgmap v1147: 305 pgs: 305 active+clean; 203 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.8 MiB/s wr, 127 op/s
Nov 22 03:56:03 compute-0 ceph-mon[75011]: osdmap e215: 3 total, 3 up, 3 in
Nov 22 03:56:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 148 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 150 op/s
Nov 22 03:56:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Nov 22 03:56:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Nov 22 03:56:04 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Nov 22 03:56:04 compute-0 ceph-mon[75011]: pgmap v1149: 305 pgs: 305 active+clean; 148 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.5 MiB/s wr, 150 op/s
Nov 22 03:56:04 compute-0 ceph-mon[75011]: osdmap e216: 3 total, 3 up, 3 in
Nov 22 03:56:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:56:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/233003640' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:56:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:56:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3887961192' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:56:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3887961192' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:05 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/233003640' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:56:05 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3887961192' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:05 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3887961192' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:05 compute-0 nova_compute[253461]: 2025-11-22 03:56:05.477 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Nov 22 03:56:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Nov 22 03:56:05 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Nov 22 03:56:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 148 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 MiB/s wr, 138 op/s
Nov 22 03:56:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:56:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:56:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:56:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:56:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:56:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:56:06 compute-0 ceph-mon[75011]: osdmap e217: 3 total, 3 up, 3 in
Nov 22 03:56:06 compute-0 ceph-mon[75011]: pgmap v1152: 305 pgs: 305 active+clean; 148 MiB data, 318 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 MiB/s wr, 138 op/s
Nov 22 03:56:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:56:06 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3122322415' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:56:06 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3122322415' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:07 compute-0 nova_compute[253461]: 2025-11-22 03:56:07.112 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:56:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/706111980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:56:07 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3122322415' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:07 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3122322415' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:07 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/706111980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:56:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 305 active+clean; 188 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 13 MiB/s wr, 212 op/s
Nov 22 03:56:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Nov 22 03:56:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Nov 22 03:56:08 compute-0 ceph-mon[75011]: pgmap v1153: 305 pgs: 305 active+clean; 188 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 13 MiB/s wr, 212 op/s
Nov 22 03:56:08 compute-0 sudo[269316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:56:08 compute-0 sudo[269316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Nov 22 03:56:08 compute-0 sudo[269316]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:08 compute-0 sudo[269341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:56:08 compute-0 sudo[269341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:08 compute-0 sudo[269341]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:08 compute-0 sudo[269366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:56:08 compute-0 sudo[269366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:08 compute-0 sudo[269366]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:09 compute-0 sudo[269391]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 03:56:09 compute-0 sudo[269391]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:09 compute-0 sudo[269391]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:56:09 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:56:09 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:09 compute-0 sudo[269436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:56:09 compute-0 sudo[269436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:09 compute-0 sudo[269436]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:09 compute-0 sudo[269461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:56:09 compute-0 sudo[269461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:09 compute-0 sudo[269461]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Nov 22 03:56:09 compute-0 sudo[269486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:56:09 compute-0 sudo[269486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:09 compute-0 sudo[269486]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Nov 22 03:56:09 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Nov 22 03:56:09 compute-0 sudo[269511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:56:09 compute-0 sudo[269511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:09 compute-0 ceph-mon[75011]: osdmap e218: 3 total, 3 up, 3 in
Nov 22 03:56:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 504 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 71 MiB/s wr, 279 op/s
Nov 22 03:56:10 compute-0 sudo[269511]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:10 compute-0 nova_compute[253461]: 2025-11-22 03:56:10.479 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:10 compute-0 sudo[269568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:56:10 compute-0 sudo[269568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:10 compute-0 sudo[269568]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:10 compute-0 sudo[269593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:56:10 compute-0 sudo[269593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:10 compute-0 sudo[269593]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Nov 22 03:56:10 compute-0 sudo[269618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:56:10 compute-0 sudo[269618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:10 compute-0 sudo[269618]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Nov 22 03:56:10 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Nov 22 03:56:10 compute-0 sudo[269643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- inventory --format=json-pretty --filter-for-batch
Nov 22 03:56:10 compute-0 sudo[269643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:10 compute-0 ceph-mon[75011]: osdmap e219: 3 total, 3 up, 3 in
Nov 22 03:56:10 compute-0 ceph-mon[75011]: pgmap v1156: 305 pgs: 305 active+clean; 504 MiB data, 635 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 71 MiB/s wr, 279 op/s
Nov 22 03:56:10 compute-0 ceph-mon[75011]: osdmap e220: 3 total, 3 up, 3 in
Nov 22 03:56:11 compute-0 podman[269709]: 2025-11-22 03:56:11.308298124 +0000 UTC m=+0.084007231 container create 9bb2fcc286d3f83aaeb07c72cb135f47669f2724c799952a8d1de3c526310892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_chandrasekhar, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:56:11 compute-0 podman[269709]: 2025-11-22 03:56:11.271560034 +0000 UTC m=+0.047269171 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:11 compute-0 systemd[1]: Started libpod-conmon-9bb2fcc286d3f83aaeb07c72cb135f47669f2724c799952a8d1de3c526310892.scope.
Nov 22 03:56:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:56:11 compute-0 podman[269709]: 2025-11-22 03:56:11.478610716 +0000 UTC m=+0.254319883 container init 9bb2fcc286d3f83aaeb07c72cb135f47669f2724c799952a8d1de3c526310892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:56:11 compute-0 podman[269709]: 2025-11-22 03:56:11.491124171 +0000 UTC m=+0.266833248 container start 9bb2fcc286d3f83aaeb07c72cb135f47669f2724c799952a8d1de3c526310892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_chandrasekhar, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:56:11 compute-0 podman[269709]: 2025-11-22 03:56:11.495741078 +0000 UTC m=+0.271450175 container attach 9bb2fcc286d3f83aaeb07c72cb135f47669f2724c799952a8d1de3c526310892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_chandrasekhar, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:56:11 compute-0 focused_chandrasekhar[269725]: 167 167
Nov 22 03:56:11 compute-0 systemd[1]: libpod-9bb2fcc286d3f83aaeb07c72cb135f47669f2724c799952a8d1de3c526310892.scope: Deactivated successfully.
Nov 22 03:56:11 compute-0 podman[269709]: 2025-11-22 03:56:11.50109494 +0000 UTC m=+0.276804017 container died 9bb2fcc286d3f83aaeb07c72cb135f47669f2724c799952a8d1de3c526310892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_chandrasekhar, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:56:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-596c73d0f9656cf811797caa853bd9f98511ac65c2c554ea39d5857cf37c5aef-merged.mount: Deactivated successfully.
Nov 22 03:56:11 compute-0 podman[269709]: 2025-11-22 03:56:11.785762983 +0000 UTC m=+0.561472090 container remove 9bb2fcc286d3f83aaeb07c72cb135f47669f2724c799952a8d1de3c526310892 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_chandrasekhar, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:56:11 compute-0 systemd[1]: libpod-conmon-9bb2fcc286d3f83aaeb07c72cb135f47669f2724c799952a8d1de3c526310892.scope: Deactivated successfully.
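
[note] The short-lived focused_chandrasekhar container above prints just "167 167": cephadm probing the ceph image for the uid/gid that owns the ceph directories before it writes files as that user. A sketch of an equivalent probe follows; the stat-on-/var/lib/ceph approach is an assumption here, since this log shows only the output.

# Sketch: reproduce the "167 167" probe -- ask the ceph image which
# uid/gid owns /var/lib/ceph. The stat target is an assumption; the
# log above only shows the resulting "167 167" line.
import subprocess

IMAGE = "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0"
out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True).stdout
uid, gid = out.split()
print(uid, gid)  # expected: 167 167
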
Nov 22 03:56:12 compute-0 podman[269749]: 2025-11-22 03:56:12.038915904 +0000 UTC m=+0.072045040 container create e8709487d778a98053a18ed4b0cc20fa500801cc8b37f0a90592a139a697966d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:56:12 compute-0 podman[269749]: 2025-11-22 03:56:11.991789326 +0000 UTC m=+0.024918502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:12 compute-0 systemd[1]: Started libpod-conmon-e8709487d778a98053a18ed4b0cc20fa500801cc8b37f0a90592a139a697966d.scope.
Nov 22 03:56:12 compute-0 nova_compute[253461]: 2025-11-22 03:56:12.114 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:12 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd6ed58624beb51e330140e005e1bc9e29513760acb3d2add3cf5f1284a651e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd6ed58624beb51e330140e005e1bc9e29513760acb3d2add3cf5f1284a651e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd6ed58624beb51e330140e005e1bc9e29513760acb3d2add3cf5f1284a651e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd6ed58624beb51e330140e005e1bc9e29513760acb3d2add3cf5f1284a651e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
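
[note] The four xfs lines above are the kernel flagging that these overlay bind mounts carry 32-bit inode timestamps capped at 0x7fffffff seconds since the epoch. A quick check that this cap is the usual Y2038 boundary:

# Sketch: confirm the 0x7fffffff cap in the xfs messages above is the
# classic Y2038 limit.
from datetime import datetime, timezone

cap = 0x7FFFFFFF  # seconds since the epoch, as logged
print(datetime.fromtimestamp(cap, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00
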
Nov 22 03:56:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 700 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 102 MiB/s wr, 483 op/s
Nov 22 03:56:12 compute-0 podman[269749]: 2025-11-22 03:56:12.196853124 +0000 UTC m=+0.229982319 container init e8709487d778a98053a18ed4b0cc20fa500801cc8b37f0a90592a139a697966d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 03:56:12 compute-0 podman[269749]: 2025-11-22 03:56:12.210946931 +0000 UTC m=+0.244076067 container start e8709487d778a98053a18ed4b0cc20fa500801cc8b37f0a90592a139a697966d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_elion, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:56:12 compute-0 podman[269749]: 2025-11-22 03:56:12.217293309 +0000 UTC m=+0.250422465 container attach e8709487d778a98053a18ed4b0cc20fa500801cc8b37f0a90592a139a697966d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:56:13 compute-0 ceph-mon[75011]: pgmap v1158: 305 pgs: 305 active+clean; 700 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 102 MiB/s wr, 483 op/s
Nov 22 03:56:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:56:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/948689826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
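
[note] The handle_command/audit pair above is client.openstack issuing {"prefix": "mon dump", "format": "json"} through librados. A minimal sketch of the same call with the Python rados binding; the conffile path is conventional and the client name comes from the audit entry, neither is confirmed by this log's configuration.

# Sketch: issue the same "mon dump" seen in the audit log via librados.
# conffile path and client name are assumptions for illustration.
import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
ret, outbuf, outs = cluster.mon_command(
    json.dumps({"prefix": "mon dump", "format": "json"}), b"")
mons = json.loads(outbuf)
print([m["name"] for m in mons["mons"]])
cluster.shutdown()
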
Nov 22 03:56:14 compute-0 great_elion[269765]: [
Nov 22 03:56:14 compute-0 great_elion[269765]:     {
Nov 22 03:56:14 compute-0 great_elion[269765]:         "available": false,
Nov 22 03:56:14 compute-0 great_elion[269765]:         "ceph_device": false,
Nov 22 03:56:14 compute-0 great_elion[269765]:         "device_id": "QEMU_DVD-ROM_QM00001",
Nov 22 03:56:14 compute-0 great_elion[269765]:         "lsm_data": {},
Nov 22 03:56:14 compute-0 great_elion[269765]:         "lvs": [],
Nov 22 03:56:14 compute-0 great_elion[269765]:         "path": "/dev/sr0",
Nov 22 03:56:14 compute-0 great_elion[269765]:         "rejected_reasons": [
Nov 22 03:56:14 compute-0 great_elion[269765]:             "Has a FileSystem",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "Insufficient space (<5GB)"
Nov 22 03:56:14 compute-0 great_elion[269765]:         ],
Nov 22 03:56:14 compute-0 great_elion[269765]:         "sys_api": {
Nov 22 03:56:14 compute-0 great_elion[269765]:             "actuators": null,
Nov 22 03:56:14 compute-0 great_elion[269765]:             "device_nodes": "sr0",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "devname": "sr0",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "human_readable_size": "482.00 KB",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "id_bus": "ata",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "model": "QEMU DVD-ROM",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "nr_requests": "2",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "parent": "/dev/sr0",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "partitions": {},
Nov 22 03:56:14 compute-0 great_elion[269765]:             "path": "/dev/sr0",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "removable": "1",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "rev": "2.5+",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "ro": "0",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "rotational": "1",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "sas_address": "",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "sas_device_handle": "",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "scheduler_mode": "mq-deadline",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "sectors": 0,
Nov 22 03:56:14 compute-0 great_elion[269765]:             "sectorsize": "2048",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "size": 493568.0,
Nov 22 03:56:14 compute-0 great_elion[269765]:             "support_discard": "2048",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "type": "disk",
Nov 22 03:56:14 compute-0 great_elion[269765]:             "vendor": "QEMU"
Nov 22 03:56:14 compute-0 great_elion[269765]:         }
Nov 22 03:56:14 compute-0 great_elion[269765]:     }
Nov 22 03:56:14 compute-0 great_elion[269765]: ]
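
[note] That JSON block is the output of the ceph-volume inventory --format=json-pretty --filter-for-batch run launched by the sudo command at 03:56:10: one entry per device, with "available" and "rejected_reasons" telling cephadm what can back an OSD (here /dev/sr0 is rejected for having a filesystem and being under 5 GB). A sketch of the filtering step performed on such a report; the input file name is an assumption.

# Sketch: filter a ceph-volume inventory report, keeping devices whose
# "available" flag is true. File name is an assumption; the JSON shape
# matches the log output above.
import json

with open("inventory.json") as f:
    devices = json.load(f)

for dev in devices:
    if dev["available"]:
        print(f'{dev["path"]}: usable')
    else:
        reasons = ", ".join(dev["rejected_reasons"])
        print(f'{dev["path"]}: rejected ({reasons})')
# For the report above this prints:
# /dev/sr0: rejected (Has a FileSystem, Insufficient space (<5GB))
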
Nov 22 03:56:14 compute-0 systemd[1]: libpod-e8709487d778a98053a18ed4b0cc20fa500801cc8b37f0a90592a139a697966d.scope: Deactivated successfully.
Nov 22 03:56:14 compute-0 systemd[1]: libpod-e8709487d778a98053a18ed4b0cc20fa500801cc8b37f0a90592a139a697966d.scope: Consumed 1.899s CPU time.
Nov 22 03:56:14 compute-0 podman[269749]: 2025-11-22 03:56:14.070329359 +0000 UTC m=+2.103458465 container died e8709487d778a98053a18ed4b0cc20fa500801cc8b37f0a90592a139a697966d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_elion, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:56:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fd6ed58624beb51e330140e005e1bc9e29513760acb3d2add3cf5f1284a651e-merged.mount: Deactivated successfully.
Nov 22 03:56:14 compute-0 podman[269749]: 2025-11-22 03:56:14.13564186 +0000 UTC m=+2.168770946 container remove e8709487d778a98053a18ed4b0cc20fa500801cc8b37f0a90592a139a697966d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:56:14 compute-0 systemd[1]: libpod-conmon-e8709487d778a98053a18ed4b0cc20fa500801cc8b37f0a90592a139a697966d.scope: Deactivated successfully.
Nov 22 03:56:14 compute-0 sudo[269643]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:56:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:56:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 1008 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 141 MiB/s wr, 345 op/s
Nov 22 03:56:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:56:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:56:14 compute-0 podman[271735]: 2025-11-22 03:56:14.191715809 +0000 UTC m=+0.086263644 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:56:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:56:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:56:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:56:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 92949698-2797-43cd-a455-4fffefc8f0b4 does not exist
Nov 22 03:56:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 606317a5-a4b0-46f8-aa81-3d300d6c22a4 does not exist
Nov 22 03:56:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 85ae34e6-09ac-4cd2-a85f-accedaa74306 does not exist
Nov 22 03:56:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:56:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:56:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:56:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:56:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:56:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:56:14 compute-0 podman[271742]: 2025-11-22 03:56:14.235464931 +0000 UTC m=+0.128159366 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
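
[note] The two health_status entries above are podman's periodic healthchecks for ovn_metadata_agent and ovn_controller; per their config_data, each test is simply the container's /openstack/healthcheck script mounted read-only from /var/lib/openstack/healthchecks. The same probe can be run on demand, sketched below with the container name taken from the log.

# Sketch: trigger the healthcheck that produced health_status=healthy
# above; "podman healthcheck run" exits 0 when the configured test passes.
import subprocess

result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"],
                        capture_output=True, text=True)
print("healthy" if result.returncode == 0
      else f"unhealthy: {result.stdout.strip()}")
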
Nov 22 03:56:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/948689826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:56:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:56:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:56:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:56:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:56:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:56:14 compute-0 sudo[271786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:56:14 compute-0 sudo[271786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:14 compute-0 sudo[271786]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:14 compute-0 sudo[271815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:56:14 compute-0 sudo[271815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:14 compute-0 sudo[271815]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:14 compute-0 sudo[271840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:56:14 compute-0 sudo[271840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:14 compute-0 sudo[271840]: pam_unix(sudo:session): session closed for user root
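
[note] The /bin/true and /bin/which python3 sudo entries that bracket every cephadm invocation in this log are the orchestrator's per-connection probes: first confirm passwordless sudo works at all, then locate python3 for the real command. A sketch of that sequence; the -n non-interactive flag is an assumption, since the audit lines do not show sudo's flags.

# Sketch: the probe pattern in the sudo lines above -- check sudo works,
# then find python3 before running the actual cephadm command.
import subprocess

subprocess.run(["sudo", "-n", "/bin/true"], check=True)          # sudo OK?
which = subprocess.run(["sudo", "-n", "/bin/which", "python3"],
                       capture_output=True, text=True, check=True)
print(which.stdout.strip())  # e.g. /usr/bin/python3
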
Nov 22 03:56:14 compute-0 sudo[271865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:56:14 compute-0 sudo[271865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
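
[note] Unlike the inventory run at 03:56:10, this lvm batch call passes --config-json -, i.e. cephadm expects ceph.conf and keyring material as a JSON document on stdin rather than as files on the host. A sketch of that contract; the "config"/"keyring" key names are an assumption, the payload contents are deliberately elided, and the --env/--image/--timeout flags from the log are omitted for brevity.

# Sketch: drive "cephadm ... ceph-volume --config-json -" by writing the
# config/keyring payload to stdin. Key names are assumptions; contents
# are elided rather than invented. Paths/fsid are taken from the log.
import json
import subprocess

payload = json.dumps({
    "config": "# minimal ceph.conf goes here",
    "keyring": "# client.bootstrap-osd keyring goes here",
})
subprocess.run(
    ["sudo", "/bin/python3",
     "/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d",
     "ceph-volume", "--fsid", "7adcc38b-6484-5de6-b879-33a0309153df",
     "--config-json", "-",
     "--", "lvm", "batch", "--no-auto",
     "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2",
     "--yes", "--no-systemd"],
    input=payload, text=True, check=True)
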
Nov 22 03:56:14 compute-0 podman[271930]: 2025-11-22 03:56:14.980280381 +0000 UTC m=+0.064368552 container create f24f787a2e6b478fa485b667d630258960099221089201f6528b942c61a77033 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:56:15 compute-0 systemd[1]: Started libpod-conmon-f24f787a2e6b478fa485b667d630258960099221089201f6528b942c61a77033.scope.
Nov 22 03:56:15 compute-0 podman[271930]: 2025-11-22 03:56:14.954304511 +0000 UTC m=+0.038392722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:56:15 compute-0 podman[271930]: 2025-11-22 03:56:15.088100404 +0000 UTC m=+0.172188645 container init f24f787a2e6b478fa485b667d630258960099221089201f6528b942c61a77033 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:56:15 compute-0 podman[271930]: 2025-11-22 03:56:15.099160305 +0000 UTC m=+0.183248496 container start f24f787a2e6b478fa485b667d630258960099221089201f6528b942c61a77033 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:56:15 compute-0 podman[271930]: 2025-11-22 03:56:15.104131469 +0000 UTC m=+0.188219730 container attach f24f787a2e6b478fa485b667d630258960099221089201f6528b942c61a77033 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:56:15 compute-0 hardcore_colden[271947]: 167 167
Nov 22 03:56:15 compute-0 systemd[1]: libpod-f24f787a2e6b478fa485b667d630258960099221089201f6528b942c61a77033.scope: Deactivated successfully.
Nov 22 03:56:15 compute-0 podman[271930]: 2025-11-22 03:56:15.10691934 +0000 UTC m=+0.191007541 container died f24f787a2e6b478fa485b667d630258960099221089201f6528b942c61a77033 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_colden, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:56:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b10c0a1cf7296681420ee665c7aee033aa966fce495bfa31a4ef967c935c1e20-merged.mount: Deactivated successfully.
Nov 22 03:56:15 compute-0 podman[271930]: 2025-11-22 03:56:15.161913404 +0000 UTC m=+0.246001605 container remove f24f787a2e6b478fa485b667d630258960099221089201f6528b942c61a77033 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:56:15 compute-0 systemd[1]: libpod-conmon-f24f787a2e6b478fa485b667d630258960099221089201f6528b942c61a77033.scope: Deactivated successfully.
Nov 22 03:56:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Nov 22 03:56:15 compute-0 ceph-mon[75011]: pgmap v1159: 305 pgs: 305 active+clean; 1008 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 205 KiB/s rd, 141 MiB/s wr, 345 op/s
Nov 22 03:56:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Nov 22 03:56:15 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Nov 22 03:56:15 compute-0 podman[271971]: 2025-11-22 03:56:15.436914888 +0000 UTC m=+0.077691604 container create 535821dfde92e70977891c4cd0c0e3ed6c08043e8ec97543c1bba94a6e1e60cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pasteur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 03:56:15 compute-0 nova_compute[253461]: 2025-11-22 03:56:15.482 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:15 compute-0 podman[271971]: 2025-11-22 03:56:15.406303996 +0000 UTC m=+0.047080722 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:15 compute-0 systemd[1]: Started libpod-conmon-535821dfde92e70977891c4cd0c0e3ed6c08043e8ec97543c1bba94a6e1e60cb.scope.
Nov 22 03:56:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321f0e376c95333273af951ee5f55f2788181efc1ba21edb4b42dca8aaf5531b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321f0e376c95333273af951ee5f55f2788181efc1ba21edb4b42dca8aaf5531b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321f0e376c95333273af951ee5f55f2788181efc1ba21edb4b42dca8aaf5531b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321f0e376c95333273af951ee5f55f2788181efc1ba21edb4b42dca8aaf5531b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321f0e376c95333273af951ee5f55f2788181efc1ba21edb4b42dca8aaf5531b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:15 compute-0 podman[271971]: 2025-11-22 03:56:15.565555776 +0000 UTC m=+0.206332532 container init 535821dfde92e70977891c4cd0c0e3ed6c08043e8ec97543c1bba94a6e1e60cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:56:15 compute-0 podman[271971]: 2025-11-22 03:56:15.578605511 +0000 UTC m=+0.219382207 container start 535821dfde92e70977891c4cd0c0e3ed6c08043e8ec97543c1bba94a6e1e60cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pasteur, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:56:15 compute-0 podman[271971]: 2025-11-22 03:56:15.582293473 +0000 UTC m=+0.223070189 container attach 535821dfde92e70977891c4cd0c0e3ed6c08043e8ec97543c1bba94a6e1e60cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pasteur, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 03:56:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Nov 22 03:56:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Nov 22 03:56:15 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.780563) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783775780600, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2329, "num_deletes": 262, "total_data_size": 3380348, "memory_usage": 3433024, "flush_reason": "Manual Compaction"}
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783775807471, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3293965, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21394, "largest_seqno": 23722, "table_properties": {"data_size": 3283241, "index_size": 6898, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22961, "raw_average_key_size": 21, "raw_value_size": 3261521, "raw_average_value_size": 3008, "num_data_blocks": 304, "num_entries": 1084, "num_filter_entries": 1084, "num_deletions": 262, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763783607, "oldest_key_time": 1763783607, "file_creation_time": 1763783775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 26968 microseconds, and 11489 cpu microseconds.
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.807526) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3293965 bytes OK
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.807551) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.811197) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.811223) EVENT_LOG_v1 {"time_micros": 1763783775811214, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.811271) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3370311, prev total WAL file size 3370311, number of live WAL files 2.
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.812752) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3216KB)], [50(7606KB)]
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783775812794, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11083524, "oldest_snapshot_seqno": -1}
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5106 keys, 9360865 bytes, temperature: kUnknown
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783775892150, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9360865, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9322718, "index_size": 24277, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 125795, "raw_average_key_size": 24, "raw_value_size": 9226619, "raw_average_value_size": 1807, "num_data_blocks": 1007, "num_entries": 5106, "num_filter_entries": 5106, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763783775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.892411) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9360865 bytes
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.897995) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.5 rd, 117.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 5636, records dropped: 530 output_compression: NoCompression
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.898025) EVENT_LOG_v1 {"time_micros": 1763783775898012, "job": 26, "event": "compaction_finished", "compaction_time_micros": 79437, "compaction_time_cpu_micros": 36222, "output_level": 6, "num_output_files": 1, "total_output_size": 9360865, "num_input_records": 5636, "num_output_records": 5106, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783775899284, "job": 26, "event": "table_file_deletion", "file_number": 52}
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783775902098, "job": 26, "event": "table_file_deletion", "file_number": 50}
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.812665) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.902345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.902352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.902355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.902357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:56:15 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:56:15.902360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
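
[note] The JOB 26 summary reports write-amplify(2.8) and read-write-amplify(6.2); both fall straight out of the byte counts logged in the surrounding EVENT_LOG_v1 entries, worked through below.

# Sketch: reproduce the JOB 26 amplification figures from byte counts
# logged above (table #52 file_size, job 26 input_data_size and
# total_output_size).
l0_bytes = 3_293_965     # L0 input: flush table #52
in_bytes = 11_083_524    # job 26 "input_data_size" (L0 + L6 inputs)
out_bytes = 9_360_865    # job 26 "total_output_size"

print(f"write-amplify      {out_bytes / l0_bytes:.1f}")              # 2.8
print(f"read-write-amplify {(in_bytes + out_bytes) / l0_bytes:.1f}")  # 6.2
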
Nov 22 03:56:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 1008 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 84 MiB/s wr, 265 op/s
Nov 22 03:56:16 compute-0 ceph-mon[75011]: osdmap e221: 3 total, 3 up, 3 in
Nov 22 03:56:16 compute-0 ceph-mon[75011]: osdmap e222: 3 total, 3 up, 3 in
Nov 22 03:56:16 compute-0 inspiring_pasteur[271987]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:56:16 compute-0 inspiring_pasteur[271987]: --> relative data size: 1.0
Nov 22 03:56:16 compute-0 inspiring_pasteur[271987]: --> All data devices are unavailable
Nov 22 03:56:16 compute-0 systemd[1]: libpod-535821dfde92e70977891c4cd0c0e3ed6c08043e8ec97543c1bba94a6e1e60cb.scope: Deactivated successfully.
Nov 22 03:56:16 compute-0 systemd[1]: libpod-535821dfde92e70977891c4cd0c0e3ed6c08043e8ec97543c1bba94a6e1e60cb.scope: Consumed 1.105s CPU time.
Nov 22 03:56:16 compute-0 podman[271971]: 2025-11-22 03:56:16.727616106 +0000 UTC m=+1.368392792 container died 535821dfde92e70977891c4cd0c0e3ed6c08043e8ec97543c1bba94a6e1e60cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:56:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-321f0e376c95333273af951ee5f55f2788181efc1ba21edb4b42dca8aaf5531b-merged.mount: Deactivated successfully.
Nov 22 03:56:16 compute-0 podman[271971]: 2025-11-22 03:56:16.791074479 +0000 UTC m=+1.431851165 container remove 535821dfde92e70977891c4cd0c0e3ed6c08043e8ec97543c1bba94a6e1e60cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_pasteur, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:56:16 compute-0 systemd[1]: libpod-conmon-535821dfde92e70977891c4cd0c0e3ed6c08043e8ec97543c1bba94a6e1e60cb.scope: Deactivated successfully.
Nov 22 03:56:16 compute-0 sudo[271865]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:16 compute-0 sudo[272030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:56:16 compute-0 sudo[272030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:16 compute-0 sudo[272030]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:16 compute-0 sudo[272055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:56:16 compute-0 sudo[272055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:16 compute-0 sudo[272055]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:17 compute-0 sudo[272080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:56:17 compute-0 sudo[272080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:17 compute-0 sudo[272080]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:17 compute-0 nova_compute[253461]: 2025-11-22 03:56:17.116 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:17 compute-0 sudo[272105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:56:17 compute-0 sudo[272105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
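
[note] After the batch run reported all three LVs unavailable (03:56:16 above, typically because they already carry OSDs), cephadm falls back to ceph-volume lvm list to reconcile what exists. A sketch of reading that report; the JSON shape assumed here (a dict keyed by OSD id whose entries carry lv_path and ceph.* tags) is not shown in this log and should be treated as illustrative.

# Sketch: summarize "ceph-volume lvm list --format json". The assumed
# shape (OSD id -> list of LV records with "lv_path" and "tags") is an
# assumption; this log shows only the invocation.
import json
import subprocess

raw = subprocess.run(["ceph-volume", "lvm", "list", "--format", "json"],
                     capture_output=True, text=True, check=True).stdout
for osd_id, lvs in json.loads(raw).items():
    for lv in lvs:
        print(f'osd.{osd_id}: {lv["lv_path"]} '
              f'(osd_fsid {lv["tags"].get("ceph.osd_fsid", "?")})')
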
Nov 22 03:56:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:56:17 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1457456663' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:56:17 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1457456663' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:17 compute-0 ceph-mon[75011]: pgmap v1162: 305 pgs: 305 active+clean; 1008 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 158 KiB/s rd, 84 MiB/s wr, 265 op/s
Nov 22 03:56:17 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1457456663' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:17 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1457456663' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:17 compute-0 podman[272171]: 2025-11-22 03:56:17.584474011 +0000 UTC m=+0.090652541 container create 96f461f09bf0cc7434019b4a5f981383d3e861e01878281720866e8d1287008c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dubinsky, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:56:17 compute-0 podman[272171]: 2025-11-22 03:56:17.534485121 +0000 UTC m=+0.040663711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:17 compute-0 systemd[1]: Started libpod-conmon-96f461f09bf0cc7434019b4a5f981383d3e861e01878281720866e8d1287008c.scope.
Nov 22 03:56:17 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:56:17 compute-0 podman[272171]: 2025-11-22 03:56:17.726536034 +0000 UTC m=+0.232714604 container init 96f461f09bf0cc7434019b4a5f981383d3e861e01878281720866e8d1287008c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dubinsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 03:56:17 compute-0 podman[272171]: 2025-11-22 03:56:17.737092539 +0000 UTC m=+0.243271058 container start 96f461f09bf0cc7434019b4a5f981383d3e861e01878281720866e8d1287008c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:56:17 compute-0 recursing_dubinsky[272188]: 167 167
Nov 22 03:56:17 compute-0 systemd[1]: libpod-96f461f09bf0cc7434019b4a5f981383d3e861e01878281720866e8d1287008c.scope: Deactivated successfully.
Nov 22 03:56:17 compute-0 podman[272171]: 2025-11-22 03:56:17.745993601 +0000 UTC m=+0.252172121 container attach 96f461f09bf0cc7434019b4a5f981383d3e861e01878281720866e8d1287008c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:56:17 compute-0 podman[272171]: 2025-11-22 03:56:17.747178227 +0000 UTC m=+0.253356757 container died 96f461f09bf0cc7434019b4a5f981383d3e861e01878281720866e8d1287008c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:56:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Nov 22 03:56:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Nov 22 03:56:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-87f9f1e9c6e85c814467754fa20f7e903ac3562bed1aebf7f3bd3e6952593efb-merged.mount: Deactivated successfully.
Nov 22 03:56:17 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Nov 22 03:56:17 compute-0 podman[272171]: 2025-11-22 03:56:17.839110128 +0000 UTC m=+0.345288618 container remove 96f461f09bf0cc7434019b4a5f981383d3e861e01878281720866e8d1287008c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_dubinsky, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 03:56:17 compute-0 systemd[1]: libpod-conmon-96f461f09bf0cc7434019b4a5f981383d3e861e01878281720866e8d1287008c.scope: Deactivated successfully.
Nov 22 03:56:18 compute-0 podman[272212]: 2025-11-22 03:56:18.079866332 +0000 UTC m=+0.076808138 container create 2cf07e93c2d94faa0d33afea330c1493f4d20b5543dec358319a22362289d8ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_banzai, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:56:18 compute-0 systemd[1]: Started libpod-conmon-2cf07e93c2d94faa0d33afea330c1493f4d20b5543dec358319a22362289d8ff.scope.
Nov 22 03:56:18 compute-0 podman[272212]: 2025-11-22 03:56:18.047657393 +0000 UTC m=+0.044599259 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1882a67a55258891bdc9fbf34df0ee17c6da5ef0cfb14c983379ffd7d108b091/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1882a67a55258891bdc9fbf34df0ee17c6da5ef0cfb14c983379ffd7d108b091/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1882a67a55258891bdc9fbf34df0ee17c6da5ef0cfb14c983379ffd7d108b091/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1882a67a55258891bdc9fbf34df0ee17c6da5ef0cfb14c983379ffd7d108b091/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 736 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 69 MiB/s wr, 174 op/s
Nov 22 03:56:18 compute-0 podman[272212]: 2025-11-22 03:56:18.189624552 +0000 UTC m=+0.186566318 container init 2cf07e93c2d94faa0d33afea330c1493f4d20b5543dec358319a22362289d8ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Nov 22 03:56:18 compute-0 podman[272212]: 2025-11-22 03:56:18.197980669 +0000 UTC m=+0.194922435 container start 2cf07e93c2d94faa0d33afea330c1493f4d20b5543dec358319a22362289d8ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:56:18 compute-0 podman[272212]: 2025-11-22 03:56:18.202226174 +0000 UTC m=+0.199167980 container attach 2cf07e93c2d94faa0d33afea330c1493f4d20b5543dec358319a22362289d8ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_banzai, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:56:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:56:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3663205247' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:56:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3663205247' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Nov 22 03:56:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Nov 22 03:56:18 compute-0 ceph-mon[75011]: osdmap e223: 3 total, 3 up, 3 in
Nov 22 03:56:18 compute-0 ceph-mon[75011]: pgmap v1164: 305 pgs: 305 active+clean; 736 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 69 MiB/s wr, 174 op/s
Nov 22 03:56:18 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3663205247' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:18 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3663205247' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:18 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Nov 22 03:56:18 compute-0 gifted_banzai[272228]: {
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:     "0": [
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:         {
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "devices": [
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "/dev/loop3"
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             ],
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_name": "ceph_lv0",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_size": "21470642176",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "name": "ceph_lv0",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "tags": {
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.cluster_name": "ceph",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.crush_device_class": "",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.encrypted": "0",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.osd_id": "0",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.type": "block",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.vdo": "0"
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             },
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "type": "block",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "vg_name": "ceph_vg0"
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:         }
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:     ],
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:     "1": [
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:         {
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "devices": [
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "/dev/loop4"
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             ],
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_name": "ceph_lv1",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_size": "21470642176",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "name": "ceph_lv1",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "tags": {
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.cluster_name": "ceph",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.crush_device_class": "",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.encrypted": "0",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.osd_id": "1",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.type": "block",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.vdo": "0"
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             },
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "type": "block",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "vg_name": "ceph_vg1"
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:         }
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:     ],
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:     "2": [
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:         {
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "devices": [
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "/dev/loop5"
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             ],
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_name": "ceph_lv2",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_size": "21470642176",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "name": "ceph_lv2",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "tags": {
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.cluster_name": "ceph",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.crush_device_class": "",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.encrypted": "0",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.osd_id": "2",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.type": "block",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:                 "ceph.vdo": "0"
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             },
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "type": "block",
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:             "vg_name": "ceph_vg2"
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:         }
Nov 22 03:56:18 compute-0 gifted_banzai[272228]:     ]
Nov 22 03:56:18 compute-0 gifted_banzai[272228]: }
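
The JSON block ending above is the output of the cephadm call issued at 03:56:17 (`ceph-volume --fsid 7adcc38b-... -- lvm list --format json`): a map from OSD id ("0", "1", "2") to the logical volume backing it, carrying the ceph.* LV tags that cephadm reads to reconstruct OSD metadata. A minimal Python sketch of consuming a captured copy of this payload; the file name and helper function are illustrative, not part of cephadm:

    import json

    def osd_devices(payload):
        """Return {osd_id: (lv_path, devices)} for LVs tagged type=block."""
        out = {}
        for osd_id, lvs in payload.items():
            for lv in lvs:
                if lv.get("type") == "block":
                    out[int(osd_id)] = (lv["lv_path"], lv["devices"])
        return out

    # Hypothetical capture of the payload printed above.
    with open("lvm_list.json") as f:
        for osd, (path, devs) in sorted(osd_devices(json.load(f)).items()):
            print(f"osd.{osd}: {path} on {', '.join(devs)}")

For the listing above this would print osd.0 on /dev/loop3, osd.1 on /dev/loop4 and osd.2 on /dev/loop5, matching the devices enumerated in the lv_tags.
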
Nov 22 03:56:18 compute-0 systemd[1]: libpod-2cf07e93c2d94faa0d33afea330c1493f4d20b5543dec358319a22362289d8ff.scope: Deactivated successfully.
Nov 22 03:56:18 compute-0 podman[272212]: 2025-11-22 03:56:18.945657071 +0000 UTC m=+0.942598847 container died 2cf07e93c2d94faa0d33afea330c1493f4d20b5543dec358319a22362289d8ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_banzai, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 03:56:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1882a67a55258891bdc9fbf34df0ee17c6da5ef0cfb14c983379ffd7d108b091-merged.mount: Deactivated successfully.
Nov 22 03:56:19 compute-0 podman[272212]: 2025-11-22 03:56:19.010820402 +0000 UTC m=+1.007762178 container remove 2cf07e93c2d94faa0d33afea330c1493f4d20b5543dec358319a22362289d8ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:56:19 compute-0 systemd[1]: libpod-conmon-2cf07e93c2d94faa0d33afea330c1493f4d20b5543dec358319a22362289d8ff.scope: Deactivated successfully.
Nov 22 03:56:19 compute-0 sudo[272105]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:19 compute-0 sudo[272251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:56:19 compute-0 sudo[272251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:19 compute-0 sudo[272251]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:19 compute-0 sudo[272276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:56:19 compute-0 sudo[272276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:19 compute-0 sudo[272276]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:19 compute-0 sudo[272301]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:56:19 compute-0 sudo[272301]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:19 compute-0 sudo[272301]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:19 compute-0 sudo[272326]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:56:19 compute-0 sudo[272326]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:19 compute-0 podman[272391]: 2025-11-22 03:56:19.788608998 +0000 UTC m=+0.058216261 container create 188b82a833b4159f734e615e69457125437cc5761d0f53d2bcf0f00a247922bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:56:19 compute-0 ceph-mon[75011]: osdmap e224: 3 total, 3 up, 3 in
Nov 22 03:56:19 compute-0 systemd[1]: Started libpod-conmon-188b82a833b4159f734e615e69457125437cc5761d0f53d2bcf0f00a247922bf.scope.
Nov 22 03:56:19 compute-0 podman[272391]: 2025-11-22 03:56:19.759177513 +0000 UTC m=+0.028784826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:56:19 compute-0 podman[272391]: 2025-11-22 03:56:19.884247778 +0000 UTC m=+0.153855081 container init 188b82a833b4159f734e615e69457125437cc5761d0f53d2bcf0f00a247922bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 03:56:19 compute-0 podman[272391]: 2025-11-22 03:56:19.89228189 +0000 UTC m=+0.161889113 container start 188b82a833b4159f734e615e69457125437cc5761d0f53d2bcf0f00a247922bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:56:19 compute-0 podman[272391]: 2025-11-22 03:56:19.895790311 +0000 UTC m=+0.165397584 container attach 188b82a833b4159f734e615e69457125437cc5761d0f53d2bcf0f00a247922bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:56:19 compute-0 vibrant_wu[272408]: 167 167
Nov 22 03:56:19 compute-0 systemd[1]: libpod-188b82a833b4159f734e615e69457125437cc5761d0f53d2bcf0f00a247922bf.scope: Deactivated successfully.
Nov 22 03:56:19 compute-0 conmon[272408]: conmon 188b82a833b4159f734e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-188b82a833b4159f734e615e69457125437cc5761d0f53d2bcf0f00a247922bf.scope/container/memory.events
Nov 22 03:56:19 compute-0 podman[272391]: 2025-11-22 03:56:19.90009707 +0000 UTC m=+0.169704293 container died 188b82a833b4159f734e615e69457125437cc5761d0f53d2bcf0f00a247922bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:56:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-65714b1c68afb43939b8df5679c3a87697433ad3231e75db945a72be9b26459d-merged.mount: Deactivated successfully.
Nov 22 03:56:19 compute-0 podman[272391]: 2025-11-22 03:56:19.951534567 +0000 UTC m=+0.221141800 container remove 188b82a833b4159f734e615e69457125437cc5761d0f53d2bcf0f00a247922bf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:56:19 compute-0 systemd[1]: libpod-conmon-188b82a833b4159f734e615e69457125437cc5761d0f53d2bcf0f00a247922bf.scope: Deactivated successfully.
Nov 22 03:56:20 compute-0 podman[272432]: 2025-11-22 03:56:20.138607308 +0000 UTC m=+0.047881389 container create d395f492325b34f96a23aad174e29c61764eda0a3a8e0e69f963134245e15dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 03:56:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 88 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 21 MiB/s wr, 344 op/s
Nov 22 03:56:20 compute-0 systemd[1]: Started libpod-conmon-d395f492325b34f96a23aad174e29c61764eda0a3a8e0e69f963134245e15dbd.scope.
Nov 22 03:56:20 compute-0 podman[272432]: 2025-11-22 03:56:20.117752813 +0000 UTC m=+0.027026964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:56:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5d88be8ec309bc59fc62c09d547950cd91513e6a57f05e48968890e7cffbd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5d88be8ec309bc59fc62c09d547950cd91513e6a57f05e48968890e7cffbd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5d88be8ec309bc59fc62c09d547950cd91513e6a57f05e48968890e7cffbd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa5d88be8ec309bc59fc62c09d547950cd91513e6a57f05e48968890e7cffbd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:20 compute-0 podman[272432]: 2025-11-22 03:56:20.24775283 +0000 UTC m=+0.157026941 container init d395f492325b34f96a23aad174e29c61764eda0a3a8e0e69f963134245e15dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 03:56:20 compute-0 podman[272432]: 2025-11-22 03:56:20.261317658 +0000 UTC m=+0.170591739 container start d395f492325b34f96a23aad174e29c61764eda0a3a8e0e69f963134245e15dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 03:56:20 compute-0 podman[272432]: 2025-11-22 03:56:20.265305012 +0000 UTC m=+0.174579093 container attach d395f492325b34f96a23aad174e29c61764eda0a3a8e0e69f963134245e15dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_montalcini, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:56:20 compute-0 nova_compute[253461]: 2025-11-22 03:56:20.485 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e224 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Nov 22 03:56:20 compute-0 ceph-mon[75011]: pgmap v1166: 305 pgs: 305 active+clean; 88 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 21 MiB/s wr, 344 op/s
Nov 22 03:56:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Nov 22 03:56:20 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Nov 22 03:56:21 compute-0 happy_montalcini[272448]: {
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "osd_id": 1,
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "type": "bluestore"
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:     },
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "osd_id": 0,
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "type": "bluestore"
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:     },
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "osd_id": 2,
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:         "type": "bluestore"
Nov 22 03:56:21 compute-0 happy_montalcini[272448]:     }
Nov 22 03:56:21 compute-0 happy_montalcini[272448]: }
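
The second JSON block (from the `ceph-volume ... raw list --format json` invocation at 03:56:19) reports the same three bluestore OSDs, this time keyed by osd_uuid and pointing at device-mapper paths rather than LV paths; cephadm runs both listings back to back when refreshing a host's device inventory. A short sketch cross-checking that the two views agree, assuming both payloads were captured to the illustrative file names below:

    import json

    # Illustrative capture files for the two payloads logged above.
    with open("lvm_list.json") as f:
        lvm = json.load(f)
    with open("raw_list.json") as f:
        raw = json.load(f)

    # raw list is keyed by osd_uuid; the matching lvm record carries the
    # same uuid in tags["ceph.osd_fsid"], so the two views join per OSD.
    for uuid, rec in raw.items():
        hits = [lv for lvs in lvm.values() for lv in lvs
                if lv["tags"].get("ceph.osd_fsid") == uuid]
        assert hits, f"osd.{rec['osd_id']} ({uuid}) missing from lvm list"
        print(f"osd.{rec['osd_id']}: {rec['device']} ({rec['type']})")
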
Nov 22 03:56:21 compute-0 systemd[1]: libpod-d395f492325b34f96a23aad174e29c61764eda0a3a8e0e69f963134245e15dbd.scope: Deactivated successfully.
Nov 22 03:56:21 compute-0 podman[272432]: 2025-11-22 03:56:21.346126848 +0000 UTC m=+1.255400929 container died d395f492325b34f96a23aad174e29c61764eda0a3a8e0e69f963134245e15dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:56:21 compute-0 systemd[1]: libpod-d395f492325b34f96a23aad174e29c61764eda0a3a8e0e69f963134245e15dbd.scope: Consumed 1.091s CPU time.
Nov 22 03:56:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:56:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1976753778' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:56:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1976753778' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa5d88be8ec309bc59fc62c09d547950cd91513e6a57f05e48968890e7cffbd9-merged.mount: Deactivated successfully.
Nov 22 03:56:21 compute-0 podman[272432]: 2025-11-22 03:56:21.412216414 +0000 UTC m=+1.321490485 container remove d395f492325b34f96a23aad174e29c61764eda0a3a8e0e69f963134245e15dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_montalcini, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 03:56:21 compute-0 systemd[1]: libpod-conmon-d395f492325b34f96a23aad174e29c61764eda0a3a8e0e69f963134245e15dbd.scope: Deactivated successfully.
Nov 22 03:56:21 compute-0 sudo[272326]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:56:21 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:56:21 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:21 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev d8a0d405-26a0-47ac-a466-45797aa4f168 does not exist
Nov 22 03:56:21 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 168d5f43-7ee6-4121-b329-fa085876448a does not exist
Nov 22 03:56:21 compute-0 sudo[272493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:56:21 compute-0 sudo[272493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:21 compute-0 sudo[272493]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:21 compute-0 sudo[272518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:56:21 compute-0 sudo[272518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:56:21 compute-0 sudo[272518]: pam_unix(sudo:session): session closed for user root
Nov 22 03:56:21 compute-0 ceph-mon[75011]: osdmap e225: 3 total, 3 up, 3 in
Nov 22 03:56:21 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1976753778' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:21 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1976753778' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:21 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:21 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:56:22 compute-0 nova_compute[253461]: 2025-11-22 03:56:22.119 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 17 MiB/s wr, 303 op/s
Nov 22 03:56:22 compute-0 ceph-mon[75011]: pgmap v1168: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 17 MiB/s wr, 303 op/s
Nov 22 03:56:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:23.007 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:23.008 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:23.008 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.0 KiB/s wr, 235 op/s
Nov 22 03:56:24 compute-0 ceph-mon[75011]: pgmap v1169: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.0 KiB/s wr, 235 op/s
Nov 22 03:56:25 compute-0 nova_compute[253461]: 2025-11-22 03:56:25.490 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Nov 22 03:56:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Nov 22 03:56:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Nov 22 03:56:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.5 KiB/s wr, 108 op/s
Nov 22 03:56:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:56:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/286074572' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:56:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/286074572' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:26 compute-0 ceph-mon[75011]: osdmap e226: 3 total, 3 up, 3 in
Nov 22 03:56:26 compute-0 ceph-mon[75011]: pgmap v1171: 305 pgs: 305 active+clean; 88 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.5 KiB/s wr, 108 op/s
Nov 22 03:56:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/286074572' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/286074572' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:27 compute-0 nova_compute[253461]: 2025-11-22 03:56:27.121 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:27 compute-0 ovn_controller[152691]: 2025-11-22T03:56:27Z|00090|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 22 03:56:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 112 op/s
Nov 22 03:56:28 compute-0 podman[272543]: 2025-11-22 03:56:28.409704552 +0000 UTC m=+0.076992433 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
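
The multipathd health_status=healthy event above is podman executing the container's configured healthcheck (the 'test': '/openstack/healthcheck' entry in config_data) on its timer. A minimal sketch of invoking the same check by hand with `podman healthcheck run`, assuming the container name shown in the log:

    import subprocess

    # `podman healthcheck run` executes the container's configured test
    # command (here /openstack/healthcheck) and returns 0 when healthy.
    res = subprocess.run(["podman", "healthcheck", "run", "multipathd"],
                         capture_output=True, text=True)
    print("healthy" if res.returncode == 0
          else f"unhealthy: {res.stderr.strip() or res.stdout.strip()}")
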
Nov 22 03:56:29 compute-0 ceph-mon[75011]: pgmap v1172: 305 pgs: 305 active+clean; 108 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 112 op/s
Nov 22 03:56:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 119 op/s
Nov 22 03:56:30 compute-0 nova_compute[253461]: 2025-11-22 03:56:30.493 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.015 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "c036cf5d-81f0-4f9e-9f31-67298476667c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.015 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.041 253465 DEBUG nova.compute.manager [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 03:56:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:56:31 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1704640301' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.125 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:56:31 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1704640301' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.126 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.135 253465 DEBUG nova.virt.hardware [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.135 253465 INFO nova.compute.claims [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Claim successful on node compute-0.ctlplane.example.com
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.237 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:31 compute-0 ceph-mon[75011]: pgmap v1173: 305 pgs: 305 active+clean; 134 MiB data, 313 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 119 op/s
Nov 22 03:56:31 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1704640301' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:31 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1704640301' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:56:31 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2589293871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.654 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.662 253465 DEBUG nova.compute.provider_tree [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.683 253465 DEBUG nova.scheduler.client.report [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.705 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.706 253465 DEBUG nova.compute.manager [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.773 253465 DEBUG nova.compute.manager [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.774 253465 DEBUG nova.network.neutron [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.799 253465 INFO nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.825 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.826 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.841 253465 DEBUG nova.compute.manager [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.848 253465 DEBUG nova.compute.manager [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.922 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.923 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.935 253465 DEBUG nova.virt.hardware [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.936 253465 INFO nova.compute.claims [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Claim successful on node compute-0.ctlplane.example.com
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.992 253465 DEBUG nova.compute.manager [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.994 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 03:56:31 compute-0 nova_compute[253461]: 2025-11-22 03:56:31.994 253465 INFO nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Creating image(s)
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.022 253465 DEBUG nova.storage.rbd_utils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image c036cf5d-81f0-4f9e-9f31-67298476667c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.046 253465 DEBUG nova.storage.rbd_utils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image c036cf5d-81f0-4f9e-9f31-67298476667c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.072 253465 DEBUG nova.storage.rbd_utils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image c036cf5d-81f0-4f9e-9f31-67298476667c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.076 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.098 253465 DEBUG nova.policy [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e45192a50149470daea6e26cfd2de3a9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8e17fcbd6721457f93b2fe5018fb3534', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.124 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.140 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.141 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.141 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.142 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.167 253465 DEBUG nova.storage.rbd_utils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image c036cf5d-81f0-4f9e-9f31-67298476667c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.171 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d c036cf5d-81f0-4f9e-9f31-67298476667c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 134 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 889 KiB/s rd, 2.1 MiB/s wr, 98 op/s
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.243 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:32 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2589293871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.425 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.467 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.467 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.468 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.470 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.470 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.494 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.541 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d c036cf5d-81f0-4f9e-9f31-67298476667c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.628 253465 DEBUG nova.storage.rbd_utils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] resizing rbd image c036cf5d-81f0-4f9e-9f31-67298476667c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 03:56:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:56:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1212359619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.742 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.749 253465 DEBUG nova.objects.instance [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lazy-loading 'migration_context' on Instance uuid c036cf5d-81f0-4f9e-9f31-67298476667c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.755 253465 DEBUG nova.compute.provider_tree [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.762 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.763 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Ensure instance console log exists: /var/lib/nova/instances/c036cf5d-81f0-4f9e-9f31-67298476667c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.764 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.764 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.765 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.769 253465 DEBUG nova.scheduler.client.report [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.792 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.793 253465 DEBUG nova.compute.manager [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.797 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.798 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.799 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.799 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.892 253465 DEBUG nova.compute.manager [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.894 253465 DEBUG nova.network.neutron [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.917 253465 INFO nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 03:56:32 compute-0 nova_compute[253461]: 2025-11-22 03:56:32.941 253465 DEBUG nova.compute.manager [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.014 253465 DEBUG nova.network.neutron [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Successfully created port: 77a13178-8559-4b2b-af3d-991a871b7351 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.027 253465 DEBUG nova.compute.manager [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.029 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.030 253465 INFO nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Creating image(s)
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.066 253465 DEBUG nova.storage.rbd_utils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.093 253465 DEBUG nova.storage.rbd_utils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.119 253465 DEBUG nova.storage.rbd_utils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.122 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.195 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.196 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.197 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.198 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:56:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3649301731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.223 253465 DEBUG nova.storage.rbd_utils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.227 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.252 253465 DEBUG nova.policy [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '323c39d407144b438e9fbcdc7c67710e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5846275e26354bb095399d10c8b52285', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.256 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:33 compute-0 ceph-mon[75011]: pgmap v1174: 305 pgs: 305 active+clean; 134 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 889 KiB/s rd, 2.1 MiB/s wr, 98 op/s
Nov 22 03:56:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1212359619' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:56:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3649301731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.477 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.479 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4654MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.479 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.480 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.588 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance c036cf5d-81f0-4f9e-9f31-67298476667c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.588 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance e11a8b93-8ac0-460e-8780-bd0a8525f6ad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.588 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.589 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.609 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.383s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.699 253465 DEBUG nova.storage.rbd_utils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] resizing rbd image e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 03:56:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:33.714 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:56:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:33.715 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.747 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.751 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.842 253465 DEBUG nova.network.neutron [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Successfully updated port: 77a13178-8559-4b2b-af3d-991a871b7351 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.851 253465 DEBUG nova.objects.instance [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lazy-loading 'migration_context' on Instance uuid e11a8b93-8ac0-460e-8780-bd0a8525f6ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.863 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "refresh_cache-c036cf5d-81f0-4f9e-9f31-67298476667c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.864 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquired lock "refresh_cache-c036cf5d-81f0-4f9e-9f31-67298476667c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.865 253465 DEBUG nova.network.neutron [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.878 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.879 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Ensure instance console log exists: /var/lib/nova/instances/e11a8b93-8ac0-460e-8780-bd0a8525f6ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.880 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.881 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.881 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.931 253465 DEBUG nova.compute.manager [req-19b046d7-8145-48c9-a6e9-64f9b7522f66 req-b3f28f02-194d-4e40-a8e3-f0eca23948ed f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Received event network-changed-77a13178-8559-4b2b-af3d-991a871b7351 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.931 253465 DEBUG nova.compute.manager [req-19b046d7-8145-48c9-a6e9-64f9b7522f66 req-b3f28f02-194d-4e40-a8e3-f0eca23948ed f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Refreshing instance network info cache due to event network-changed-77a13178-8559-4b2b-af3d-991a871b7351. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:56:33 compute-0 nova_compute[253461]: 2025-11-22 03:56:33.932 253465 DEBUG oslo_concurrency.lockutils [req-19b046d7-8145-48c9-a6e9-64f9b7522f66 req-b3f28f02-194d-4e40-a8e3-f0eca23948ed f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-c036cf5d-81f0-4f9e-9f31-67298476667c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:56:34 compute-0 nova_compute[253461]: 2025-11-22 03:56:34.000 253465 DEBUG nova.network.neutron [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 03:56:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 192 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.8 MiB/s wr, 122 op/s
Nov 22 03:56:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:56:34 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3668900072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:56:34 compute-0 nova_compute[253461]: 2025-11-22 03:56:34.212 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:34 compute-0 nova_compute[253461]: 2025-11-22 03:56:34.219 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:56:34 compute-0 nova_compute[253461]: 2025-11-22 03:56:34.239 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:56:34 compute-0 nova_compute[253461]: 2025-11-22 03:56:34.259 253465 DEBUG nova.network.neutron [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Successfully created port: e6035774-e683-4695-8e35-ced236ccbefb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 03:56:34 compute-0 nova_compute[253461]: 2025-11-22 03:56:34.272 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:56:34 compute-0 nova_compute[253461]: 2025-11-22 03:56:34.272 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:34 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3668900072' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:56:35 compute-0 nova_compute[253461]: 2025-11-22 03:56:35.231 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:56:35 compute-0 nova_compute[253461]: 2025-11-22 03:56:35.262 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:56:35 compute-0 nova_compute[253461]: 2025-11-22 03:56:35.263 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:56:35 compute-0 nova_compute[253461]: 2025-11-22 03:56:35.264 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:56:35 compute-0 nova_compute[253461]: 2025-11-22 03:56:35.264 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 03:56:35 compute-0 ceph-mon[75011]: pgmap v1175: 305 pgs: 305 active+clean; 192 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.8 MiB/s wr, 122 op/s
Nov 22 03:56:35 compute-0 nova_compute[253461]: 2025-11-22 03:56:35.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:56:35 compute-0 nova_compute[253461]: 2025-11-22 03:56:35.495 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 192 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.6 MiB/s wr, 117 op/s
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:56:36
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'images']
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:56:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.865 253465 DEBUG nova.network.neutron [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Updating instance_info_cache with network_info: [{"id": "77a13178-8559-4b2b-af3d-991a871b7351", "address": "fa:16:3e:3d:a7:62", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77a13178-85", "ovs_interfaceid": "77a13178-8559-4b2b-af3d-991a871b7351", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.890 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Releasing lock "refresh_cache-c036cf5d-81f0-4f9e-9f31-67298476667c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.891 253465 DEBUG nova.compute.manager [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Instance network_info: |[{"id": "77a13178-8559-4b2b-af3d-991a871b7351", "address": "fa:16:3e:3d:a7:62", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77a13178-85", "ovs_interfaceid": "77a13178-8559-4b2b-af3d-991a871b7351", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.891 253465 DEBUG oslo_concurrency.lockutils [req-19b046d7-8145-48c9-a6e9-64f9b7522f66 req-b3f28f02-194d-4e40-a8e3-f0eca23948ed f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-c036cf5d-81f0-4f9e-9f31-67298476667c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.892 253465 DEBUG nova.network.neutron [req-19b046d7-8145-48c9-a6e9-64f9b7522f66 req-b3f28f02-194d-4e40-a8e3-f0eca23948ed f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Refreshing network info cache for port 77a13178-8559-4b2b-af3d-991a871b7351 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.897 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Start _get_guest_xml network_info=[{"id": "77a13178-8559-4b2b-af3d-991a871b7351", "address": "fa:16:3e:3d:a7:62", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77a13178-85", "ovs_interfaceid": "77a13178-8559-4b2b-af3d-991a871b7351", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.904 253465 WARNING nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.910 253465 DEBUG nova.virt.libvirt.host [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.911 253465 DEBUG nova.virt.libvirt.host [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.922 253465 DEBUG nova.virt.libvirt.host [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.923 253465 DEBUG nova.virt.libvirt.host [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.924 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.924 253465 DEBUG nova.virt.hardware [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.925 253465 DEBUG nova.virt.hardware [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.926 253465 DEBUG nova.virt.hardware [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.926 253465 DEBUG nova.virt.hardware [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.927 253465 DEBUG nova.virt.hardware [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.927 253465 DEBUG nova.virt.hardware [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.928 253465 DEBUG nova.virt.hardware [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.928 253465 DEBUG nova.virt.hardware [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.929 253465 DEBUG nova.virt.hardware [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.929 253465 DEBUG nova.virt.hardware [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.930 253465 DEBUG nova.virt.hardware [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
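The topology lines above trace nova.virt.hardware settling on 1:1:1 for a single vCPU: with no flavor or image constraints (limits default to 65536 each), every sockets*cores*threads factorization of the vCPU count is a candidate. A simplified sketch of that enumeration (not nova's actual code, which also applies preference ordering):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Every (sockets, cores, threads) triple whose product equals vcpus."""
        return [
            (s, c, t)
            for s in range(1, min(vcpus, max_sockets) + 1)
            for c in range(1, min(vcpus, max_cores) + 1)
            for t in range(1, min(vcpus, max_threads) + 1)
            if s * c * t == vcpus
        ]

    print(possible_topologies(1))  # [(1, 1, 1)], matching the log above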
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.934 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.967 253465 DEBUG nova.network.neutron [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Successfully updated port: e6035774-e683-4695-8e35-ced236ccbefb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.990 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "refresh_cache-e11a8b93-8ac0-460e-8780-bd0a8525f6ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.991 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquired lock "refresh_cache-e11a8b93-8ac0-460e-8780-bd0a8525f6ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:56:36 compute-0 nova_compute[253461]: 2025-11-22 03:56:36.991 253465 DEBUG nova.network.neutron [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.067 253465 DEBUG nova.compute.manager [req-8364a1d8-90b9-4a83-9e54-86607edc94d2 req-fe1882ba-89a0-46a2-b138-51aaedae16dc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Received event network-changed-e6035774-e683-4695-8e35-ced236ccbefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.068 253465 DEBUG nova.compute.manager [req-8364a1d8-90b9-4a83-9e54-86607edc94d2 req-fe1882ba-89a0-46a2-b138-51aaedae16dc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Refreshing instance network info cache due to event network-changed-e6035774-e683-4695-8e35-ced236ccbefb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.068 253465 DEBUG oslo_concurrency.lockutils [req-8364a1d8-90b9-4a83-9e54-86607edc94d2 req-fe1882ba-89a0-46a2-b138-51aaedae16dc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-e11a8b93-8ac0-460e-8780-bd0a8525f6ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.125 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.145 253465 DEBUG nova.network.neutron [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 03:56:37 compute-0 ceph-mon[75011]: pgmap v1176: 305 pgs: 305 active+clean; 192 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.6 MiB/s wr, 117 op/s
Nov 22 03:56:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:56:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4216239403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.450 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.484 253465 DEBUG nova.storage.rbd_utils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image c036cf5d-81f0-4f9e-9f31-67298476667c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.489 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:56:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/493229520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.950 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.952 253465 DEBUG nova.virt.libvirt.vif [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1636563006',display_name='tempest-VolumesBackupsTest-instance-1636563006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1636563006',id=8,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAZmXPkDdEmsqmkz8kqe43oFlJFd4mtzRvqTzdxQlIJdrm+TXvJOWJqYTKuDnf/jPUfL2yoATyIDrwn8REyMFcza6x9HqKqJXWCV8Fo3TAsCRy7bVFpoGuwCDzGXSnxhKA==',key_name='tempest-keypair-1370992232',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e17fcbd6721457f93b2fe5018fb3534',ramdisk_id='',reservation_id='r-1iuw06nn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-922932240',owner_user_name='tempest-VolumesBackupsTest-922932240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:56:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e45192a50149470daea6e26cfd2de3a9',uuid=c036cf5d-81f0-4f9e-9f31-67298476667c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77a13178-8559-4b2b-af3d-991a871b7351", "address": "fa:16:3e:3d:a7:62", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77a13178-85", "ovs_interfaceid": "77a13178-8559-4b2b-af3d-991a871b7351", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.952 253465 DEBUG nova.network.os_vif_util [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Converting VIF {"id": "77a13178-8559-4b2b-af3d-991a871b7351", "address": "fa:16:3e:3d:a7:62", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77a13178-85", "ovs_interfaceid": "77a13178-8559-4b2b-af3d-991a871b7351", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.953 253465 DEBUG nova.network.os_vif_util [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:a7:62,bridge_name='br-int',has_traffic_filtering=True,id=77a13178-8559-4b2b-af3d-991a871b7351,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77a13178-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.954 253465 DEBUG nova.objects.instance [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lazy-loading 'pci_devices' on Instance uuid c036cf5d-81f0-4f9e-9f31-67298476667c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.976 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 03:56:37 compute-0 nova_compute[253461]:   <uuid>c036cf5d-81f0-4f9e-9f31-67298476667c</uuid>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   <name>instance-00000008</name>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   <metadata>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <nova:name>tempest-VolumesBackupsTest-instance-1636563006</nova:name>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 03:56:36</nova:creationTime>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 03:56:37 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 03:56:37 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 03:56:37 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 03:56:37 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 03:56:37 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 03:56:37 compute-0 nova_compute[253461]:         <nova:user uuid="e45192a50149470daea6e26cfd2de3a9">tempest-VolumesBackupsTest-922932240-project-member</nova:user>
Nov 22 03:56:37 compute-0 nova_compute[253461]:         <nova:project uuid="8e17fcbd6721457f93b2fe5018fb3534">tempest-VolumesBackupsTest-922932240</nova:project>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 03:56:37 compute-0 nova_compute[253461]:         <nova:port uuid="77a13178-8559-4b2b-af3d-991a871b7351">
Nov 22 03:56:37 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   </metadata>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <system>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <entry name="serial">c036cf5d-81f0-4f9e-9f31-67298476667c</entry>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <entry name="uuid">c036cf5d-81f0-4f9e-9f31-67298476667c</entry>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     </system>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   <os>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   </os>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   <features>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <apic/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   </features>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   </clock>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/c036cf5d-81f0-4f9e-9f31-67298476667c_disk">
Nov 22 03:56:37 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       </source>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:56:37 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/c036cf5d-81f0-4f9e-9f31-67298476667c_disk.config">
Nov 22 03:56:37 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       </source>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:56:37 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:3d:a7:62"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <target dev="tap77a13178-85"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/c036cf5d-81f0-4f9e-9f31-67298476667c/console.log" append="off"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     </serial>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <video>
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     </video>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 03:56:37 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 03:56:37 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 03:56:37 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:56:37 compute-0 nova_compute[253461]: </domain>
Nov 22 03:56:37 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
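When a spawn fails after this point, the domain XML dumped above is the first artifact worth mining. A small stdlib sketch for pulling the RBD-backed disks and their monitor endpoints out of such a dump (a hypothetical helper; it assumes the element layout shown above):

    import xml.etree.ElementTree as ET

    def rbd_disks(domain_xml):
        """Yield (rbd_image, mon_endpoint) pairs from a libvirt domain dump."""
        root = ET.fromstring(domain_xml)
        for disk in root.findall("./devices/disk"):
            src = disk.find("source")
            if src is not None and src.get("protocol") == "rbd":
                host = src.find("host")
                yield src.get("name"), host.get("name") + ":" + host.get("port")

    # Against the XML above this yields:
    #   ('vms/c036cf5d-81f0-4f9e-9f31-67298476667c_disk', '192.168.122.100:6789')
    #   ('vms/c036cf5d-81f0-4f9e-9f31-67298476667c_disk.config', '192.168.122.100:6789')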
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.978 253465 DEBUG nova.compute.manager [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Preparing to wait for external event network-vif-plugged-77a13178-8559-4b2b-af3d-991a871b7351 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.978 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.979 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.979 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.981 253465 DEBUG nova.virt.libvirt.vif [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1636563006',display_name='tempest-VolumesBackupsTest-instance-1636563006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1636563006',id=8,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAZmXPkDdEmsqmkz8kqe43oFlJFd4mtzRvqTzdxQlIJdrm+TXvJOWJqYTKuDnf/jPUfL2yoATyIDrwn8REyMFcza6x9HqKqJXWCV8Fo3TAsCRy7bVFpoGuwCDzGXSnxhKA==',key_name='tempest-keypair-1370992232',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8e17fcbd6721457f93b2fe5018fb3534',ramdisk_id='',reservation_id='r-1iuw06nn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-922932240',owner_user_name='tempest-VolumesBackupsTest-922932240-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:56:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e45192a50149470daea6e26cfd2de3a9',uuid=c036cf5d-81f0-4f9e-9f31-67298476667c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "77a13178-8559-4b2b-af3d-991a871b7351", "address": "fa:16:3e:3d:a7:62", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77a13178-85", "ovs_interfaceid": "77a13178-8559-4b2b-af3d-991a871b7351", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.981 253465 DEBUG nova.network.os_vif_util [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Converting VIF {"id": "77a13178-8559-4b2b-af3d-991a871b7351", "address": "fa:16:3e:3d:a7:62", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77a13178-85", "ovs_interfaceid": "77a13178-8559-4b2b-af3d-991a871b7351", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.982 253465 DEBUG nova.network.os_vif_util [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:a7:62,bridge_name='br-int',has_traffic_filtering=True,id=77a13178-8559-4b2b-af3d-991a871b7351,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77a13178-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.983 253465 DEBUG os_vif [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:a7:62,bridge_name='br-int',has_traffic_filtering=True,id=77a13178-8559-4b2b-af3d-991a871b7351,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77a13178-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.984 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.985 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.986 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.990 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.991 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap77a13178-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.992 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap77a13178-85, col_values=(('external_ids', {'iface-id': '77a13178-8559-4b2b-af3d-991a871b7351', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:a7:62', 'vm-uuid': 'c036cf5d-81f0-4f9e-9f31-67298476667c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:56:37 compute-0 NetworkManager[48916]: <info>  [1763783797.9962] manager: (tap77a13178-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 22 03:56:37 compute-0 nova_compute[253461]: 2025-11-22 03:56:37.999 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.003 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.004 253465 INFO os_vif [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:a7:62,bridge_name='br-int',has_traffic_filtering=True,id=77a13178-8559-4b2b-af3d-991a871b7351,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77a13178-85')
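The two ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus a DbSetCommand on the Interface row) are equivalent to the ovs-vsctl calls below, sketched via subprocess with the values copied from the log; run on the compute node itself, this reproduces the plug that os-vif just reported as successful:

    import subprocess

    IFACE = "tap77a13178-85"

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int",
                    "--", "set", "bridge", "br-int", "datapath_type=system"],
                   check=True)

    # AddPortCommand(may_exist=True) + DbSetCommand(table=Interface, external_ids=...)
    subprocess.run(["ovs-vsctl", "--may-exist", "add-port", "br-int", IFACE,
                    "--", "set", "Interface", IFACE,
                    "external_ids:iface-id=77a13178-8559-4b2b-af3d-991a871b7351",
                    "external_ids:iface-status=active",
                    "external_ids:attached-mac=fa:16:3e:3d:a7:62",
                    "external_ids:vm-uuid=c036cf5d-81f0-4f9e-9f31-67298476667c"],
                   check=True)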
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.066 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.067 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.067 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No VIF found with MAC fa:16:3e:3d:a7:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.068 253465 INFO nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Using config drive
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.100 253465 DEBUG nova.storage.rbd_utils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image c036cf5d-81f0-4f9e-9f31-67298476667c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 226 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 5.3 MiB/s wr, 115 op/s
Nov 22 03:56:38 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4216239403' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:56:38 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/493229520' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.336 253465 DEBUG nova.network.neutron [req-19b046d7-8145-48c9-a6e9-64f9b7522f66 req-b3f28f02-194d-4e40-a8e3-f0eca23948ed f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Updated VIF entry in instance network info cache for port 77a13178-8559-4b2b-af3d-991a871b7351. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.337 253465 DEBUG nova.network.neutron [req-19b046d7-8145-48c9-a6e9-64f9b7522f66 req-b3f28f02-194d-4e40-a8e3-f0eca23948ed f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Updating instance_info_cache with network_info: [{"id": "77a13178-8559-4b2b-af3d-991a871b7351", "address": "fa:16:3e:3d:a7:62", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77a13178-85", "ovs_interfaceid": "77a13178-8559-4b2b-af3d-991a871b7351", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.353 253465 DEBUG oslo_concurrency.lockutils [req-19b046d7-8145-48c9-a6e9-64f9b7522f66 req-b3f28f02-194d-4e40-a8e3-f0eca23948ed f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-c036cf5d-81f0-4f9e-9f31-67298476667c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.423 253465 DEBUG nova.network.neutron [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Updating instance_info_cache with network_info: [{"id": "e6035774-e683-4695-8e35-ced236ccbefb", "address": "fa:16:3e:9c:72:17", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6035774-e6", "ovs_interfaceid": "e6035774-e683-4695-8e35-ced236ccbefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.448 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Releasing lock "refresh_cache-e11a8b93-8ac0-460e-8780-bd0a8525f6ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.449 253465 DEBUG nova.compute.manager [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Instance network_info: |[{"id": "e6035774-e683-4695-8e35-ced236ccbefb", "address": "fa:16:3e:9c:72:17", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6035774-e6", "ovs_interfaceid": "e6035774-e683-4695-8e35-ced236ccbefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.449 253465 DEBUG oslo_concurrency.lockutils [req-8364a1d8-90b9-4a83-9e54-86607edc94d2 req-fe1882ba-89a0-46a2-b138-51aaedae16dc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-e11a8b93-8ac0-460e-8780-bd0a8525f6ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.450 253465 DEBUG nova.network.neutron [req-8364a1d8-90b9-4a83-9e54-86607edc94d2 req-fe1882ba-89a0-46a2-b138-51aaedae16dc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Refreshing network info cache for port e6035774-e683-4695-8e35-ced236ccbefb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.455 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Start _get_guest_xml network_info=[{"id": "e6035774-e683-4695-8e35-ced236ccbefb", "address": "fa:16:3e:9c:72:17", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6035774-e6", "ovs_interfaceid": "e6035774-e683-4695-8e35-ced236ccbefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.462 253465 WARNING nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.466 253465 DEBUG nova.virt.libvirt.host [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.467 253465 DEBUG nova.virt.libvirt.host [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.471 253465 DEBUG nova.virt.libvirt.host [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.471 253465 DEBUG nova.virt.libvirt.host [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
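Nova's cgroup probe above first looks for a v1 CPU controller (absent on this host) and then finds one on the cgroups-v2 unified hierarchy. A minimal sketch of the v2 check, assuming the standard unified mount at /sys/fs/cgroup; Nova's real probe lives in nova/virt/libvirt/host.py:

    # Sketch: the unified cgroup-v2 hierarchy advertises available
    # controllers in a single file; "cpu" appearing there is what the
    # log's "CPU controller found on host." corresponds to.
    with open("/sys/fs/cgroup/cgroup.controllers") as f:
        controllers = f.read().split()
    print("cpu" in controllers)  # True on this host, per the log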
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.472 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.472 253465 DEBUG nova.virt.hardware [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.473 253465 DEBUG nova.virt.hardware [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.474 253465 DEBUG nova.virt.hardware [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.474 253465 DEBUG nova.virt.hardware [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.474 253465 DEBUG nova.virt.hardware [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.475 253465 DEBUG nova.virt.hardware [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.475 253465 DEBUG nova.virt.hardware [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.476 253465 DEBUG nova.virt.hardware [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.476 253465 DEBUG nova.virt.hardware [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.477 253465 DEBUG nova.virt.hardware [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.477 253465 DEBUG nova.virt.hardware [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
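The hardware.py walk above takes unset flavor and image limits (0:0:0), caps every dimension at 65536, and enumerates factorizations of the vCPU count; with 1 vCPU the only candidate is sockets=1, cores=1, threads=1. A simplified sketch of that enumeration (hypothetical helper, not Nova's exact algorithm):

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, within per-dimension maxima -- the search space behind the
    # log's "Build topologies for 1 vcpu(s) 1:1:1" message.
    def possible_topologies(vcpus, max_s=65536, max_c=65536, max_t=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_s) + 1)
                for c in range(1, min(vcpus, max_c) + 1)
                for t in range(1, min(vcpus, max_t) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)] -- one possible topology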
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.482 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:56:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200225972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.949 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
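Each "ceph mon dump --format=json" round trip above is Nova's RBD backend resolving monitor addresses before touching the vms pool. A minimal sketch of the same call and its parsing, assuming the usual "mons"/"addr" fields of the mon-dump JSON; Nova's own handling sits in nova/storage/rbd_utils.py:

    import json
    import subprocess

    # Run the exact command the log shows and pull monitor addresses out
    # of the JSON; the "mons" list layout is the standard mon-dump format.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    for mon in json.loads(out).get("mons", []):
        print(mon.get("name"), mon.get("addr"))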
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.982 253465 DEBUG nova.storage.rbd_utils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:38 compute-0 nova_compute[253461]: 2025-11-22 03:56:38.987 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.234 253465 INFO nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Creating config drive at /var/lib/nova/instances/c036cf5d-81f0-4f9e-9f31-67298476667c/disk.config
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.240 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c036cf5d-81f0-4f9e-9f31-67298476667c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw81lk487 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:39 compute-0 ceph-mon[75011]: pgmap v1177: 305 pgs: 305 active+clean; 226 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 5.3 MiB/s wr, 115 op/s
Nov 22 03:56:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/200225972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.384 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c036cf5d-81f0-4f9e-9f31-67298476667c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw81lk487" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.412 253465 DEBUG nova.storage.rbd_utils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] rbd image c036cf5d-81f0-4f9e-9f31-67298476667c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.416 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c036cf5d-81f0-4f9e-9f31-67298476667c/disk.config c036cf5d-81f0-4f9e-9f31-67298476667c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
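The sequence above is the config-drive path for instance c036cf5d-81f0-4f9e-9f31-67298476667c: build the ISO locally with mkisofs, confirm no <uuid>_disk.config image exists in Ceph yet, then import the ISO into the vms pool (the local copy is deleted once the import returns, as logged at 03:56:39.628 below). A sketch of the same two commands with arguments copied from the log; /tmp/metadata stands in for Nova's temporary metadata directory (/tmp/tmpw81lk487 in the log):

    import subprocess

    instance = "c036cf5d-81f0-4f9e-9f31-67298476667c"
    iso = f"/var/lib/nova/instances/{instance}/disk.config"

    # Build the config-drive ISO (label "config-2", Joliet + Rock Ridge).
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2", "/tmp/metadata",
    ])

    # Import it into the Ceph "vms" pool as <uuid>_disk.config, format 2.
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])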
Nov 22 03:56:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:56:39 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570733990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.471 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.473 253465 DEBUG nova.virt.libvirt.vif [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:56:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1361792656',display_name='tempest-VolumesSnapshotTestJSON-instance-1361792656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1361792656',id=9,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAZaL9sMNYeLA8X2G3DMletzSSJ/2V4CMJuVkQvn0yEKu0ayBQxH4M7TkumL22T2fBpR0Jgyf4NabxDKgpmUkL2K6MULdzTCQ3NveNhT5jt1Nc412S33JpTt4iAhAEaIw==',key_name='tempest-keypair-1025410891',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5846275e26354bb095399d10c8b52285',ramdisk_id='',reservation_id='r-lmbuad4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-724001677',owner_user_name='tempest-VolumesSnapshotTestJSON-724001677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:56:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='323c39d407144b438e9fbcdc7c67710e',uuid=e11a8b93-8ac0-460e-8780-bd0a8525f6ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e6035774-e683-4695-8e35-ced236ccbefb", "address": "fa:16:3e:9c:72:17", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6035774-e6", "ovs_interfaceid": "e6035774-e683-4695-8e35-ced236ccbefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.473 253465 DEBUG nova.network.os_vif_util [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Converting VIF {"id": "e6035774-e683-4695-8e35-ced236ccbefb", "address": "fa:16:3e:9c:72:17", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6035774-e6", "ovs_interfaceid": "e6035774-e683-4695-8e35-ced236ccbefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.474 253465 DEBUG nova.network.os_vif_util [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:72:17,bridge_name='br-int',has_traffic_filtering=True,id=e6035774-e683-4695-8e35-ced236ccbefb,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6035774-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.475 253465 DEBUG nova.objects.instance [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lazy-loading 'pci_devices' on Instance uuid e11a8b93-8ac0-460e-8780-bd0a8525f6ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.509 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] End _get_guest_xml xml=<domain type="kvm">
Nov 22 03:56:39 compute-0 nova_compute[253461]:   <uuid>e11a8b93-8ac0-460e-8780-bd0a8525f6ad</uuid>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   <name>instance-00000009</name>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   <metadata>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <nova:name>tempest-VolumesSnapshotTestJSON-instance-1361792656</nova:name>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 03:56:38</nova:creationTime>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 03:56:39 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 03:56:39 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 03:56:39 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 03:56:39 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 03:56:39 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 03:56:39 compute-0 nova_compute[253461]:         <nova:user uuid="323c39d407144b438e9fbcdc7c67710e">tempest-VolumesSnapshotTestJSON-724001677-project-member</nova:user>
Nov 22 03:56:39 compute-0 nova_compute[253461]:         <nova:project uuid="5846275e26354bb095399d10c8b52285">tempest-VolumesSnapshotTestJSON-724001677</nova:project>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 03:56:39 compute-0 nova_compute[253461]:         <nova:port uuid="e6035774-e683-4695-8e35-ced236ccbefb">
Nov 22 03:56:39 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   </metadata>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <system>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <entry name="serial">e11a8b93-8ac0-460e-8780-bd0a8525f6ad</entry>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <entry name="uuid">e11a8b93-8ac0-460e-8780-bd0a8525f6ad</entry>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     </system>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   <os>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   </os>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   <features>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <apic/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   </features>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   </clock>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk">
Nov 22 03:56:39 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       </source>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:56:39 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk.config">
Nov 22 03:56:39 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       </source>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:56:39 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:9c:72:17"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <target dev="tape6035774-e6"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/e11a8b93-8ac0-460e-8780-bd0a8525f6ad/console.log" append="off"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     </serial>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <video>
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     </video>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 03:56:39 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 03:56:39 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 03:56:39 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:56:39 compute-0 nova_compute[253461]: </domain>
Nov 22 03:56:39 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
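The generated domain XML above is handed to libvirt when the guest is defined; once that happens it can be read back through the libvirt Python bindings. A minimal sketch, assuming the default qemu:///system URI served by virtqemud on this host:

    import libvirt

    # Look the guest up by the UUID Nova embedded in the XML and dump the
    # definition libvirt actually stored.
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("e11a8b93-8ac0-460e-8780-bd0a8525f6ad")
    print(dom.XMLDesc())
    conn.close()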
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.510 253465 DEBUG nova.compute.manager [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Preparing to wait for external event network-vif-plugged-e6035774-e683-4695-8e35-ced236ccbefb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.510 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.511 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.511 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.511 253465 DEBUG nova.virt.libvirt.vif [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:56:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1361792656',display_name='tempest-VolumesSnapshotTestJSON-instance-1361792656',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1361792656',id=9,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAZaL9sMNYeLA8X2G3DMletzSSJ/2V4CMJuVkQvn0yEKu0ayBQxH4M7TkumL22T2fBpR0Jgyf4NabxDKgpmUkL2K6MULdzTCQ3NveNhT5jt1Nc412S33JpTt4iAhAEaIw==',key_name='tempest-keypair-1025410891',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5846275e26354bb095399d10c8b52285',ramdisk_id='',reservation_id='r-lmbuad4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-724001677',owner_user_name='tempest-VolumesSnapshotTestJSON-724001677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:56:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='323c39d407144b438e9fbcdc7c67710e',uuid=e11a8b93-8ac0-460e-8780-bd0a8525f6ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e6035774-e683-4695-8e35-ced236ccbefb", "address": "fa:16:3e:9c:72:17", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6035774-e6", "ovs_interfaceid": "e6035774-e683-4695-8e35-ced236ccbefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.512 253465 DEBUG nova.network.os_vif_util [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Converting VIF {"id": "e6035774-e683-4695-8e35-ced236ccbefb", "address": "fa:16:3e:9c:72:17", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6035774-e6", "ovs_interfaceid": "e6035774-e683-4695-8e35-ced236ccbefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.512 253465 DEBUG nova.network.os_vif_util [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:72:17,bridge_name='br-int',has_traffic_filtering=True,id=e6035774-e683-4695-8e35-ced236ccbefb,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6035774-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.512 253465 DEBUG os_vif [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:72:17,bridge_name='br-int',has_traffic_filtering=True,id=e6035774-e683-4695-8e35-ced236ccbefb,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6035774-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.513 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.513 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.514 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.517 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.518 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6035774-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.518 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape6035774-e6, col_values=(('external_ids', {'iface-id': 'e6035774-e683-4695-8e35-ced236ccbefb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:72:17', 'vm-uuid': 'e11a8b93-8ac0-460e-8780-bd0a8525f6ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.520 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:39 compute-0 NetworkManager[48916]: <info>  [1763783799.5222] manager: (tape6035774-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.523 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.530 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.533 253465 INFO os_vif [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:72:17,bridge_name='br-int',has_traffic_filtering=True,id=e6035774-e683-4695-8e35-ced236ccbefb,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6035774-e6')
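The ovsdbapp transaction above (an AddBridgeCommand that is a no-op because br-int already exists, then AddPortCommand plus a DbSetCommand on the Interface row) has a direct ovs-vsctl equivalent. A sketch of that equivalence, with the port name and external_ids copied from the log; os-vif really drives the OVSDB IDL, so this is an illustration rather than the executed code path:

    import subprocess

    # One ovs-vsctl transaction: add the port to br-int (idempotent, like
    # may_exist=True) and set the external_ids OVN uses to bind the port.
    subprocess.check_call([
        "ovs-vsctl",
        "--may-exist", "add-port", "br-int", "tape6035774-e6",
        "--", "set", "Interface", "tape6035774-e6",
        "external_ids:iface-id=e6035774-e683-4695-8e35-ced236ccbefb",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:9c:72:17",
        "external_ids:vm-uuid=e11a8b93-8ac0-460e-8780-bd0a8525f6ad",
    ])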
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.594 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.594 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.594 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No VIF found with MAC fa:16:3e:9c:72:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.595 253465 INFO nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Using config drive
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.620 253465 DEBUG nova.storage.rbd_utils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.627 253465 DEBUG oslo_concurrency.processutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c036cf5d-81f0-4f9e-9f31-67298476667c/disk.config c036cf5d-81f0-4f9e-9f31-67298476667c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.628 253465 INFO nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Deleting local config drive /var/lib/nova/instances/c036cf5d-81f0-4f9e-9f31-67298476667c/disk.config because it was imported into RBD.
Nov 22 03:56:39 compute-0 virtqemud[253186]: End of file while reading data: Input/output error
Nov 22 03:56:39 compute-0 virtqemud[253186]: End of file while reading data: Input/output error
Nov 22 03:56:39 compute-0 kernel: tap77a13178-85: entered promiscuous mode
Nov 22 03:56:39 compute-0 NetworkManager[48916]: <info>  [1763783799.6915] manager: (tap77a13178-85): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.694 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:39 compute-0 ovn_controller[152691]: 2025-11-22T03:56:39Z|00091|binding|INFO|Claiming lport 77a13178-8559-4b2b-af3d-991a871b7351 for this chassis.
Nov 22 03:56:39 compute-0 ovn_controller[152691]: 2025-11-22T03:56:39Z|00092|binding|INFO|77a13178-8559-4b2b-af3d-991a871b7351: Claiming fa:16:3e:3d:a7:62 10.100.0.7
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.707 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:a7:62 10.100.0.7'], port_security=['fa:16:3e:3d:a7:62 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c036cf5d-81f0-4f9e-9f31-67298476667c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e17fcbd6721457f93b2fe5018fb3534', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ab74d7d-f920-4c91-ab55-8be5e55b4e62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d20e9a4c-63a4-481f-abc2-5dcc033feed1, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=77a13178-8559-4b2b-af3d-991a871b7351) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.710 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 77a13178-8559-4b2b-af3d-991a871b7351 in datapath b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb bound to our chassis
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.712 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.729 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e9207b70-8d29-4908-85db-07f4b9184aa1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.729 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb33063bb-91 in ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.732 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb33063bb-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.732 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c106f479-b0ca-4c04-aa64-b4976515c554]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.733 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3f41021f-27e9-4071-952d-88444d6f81bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
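Provisioning metadata for network b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb means giving the ovnmeta-<network> namespace a veth leg: tapb33063bb-90 stays in the root namespace (NetworkManager reports it as a new Veth device below) while tapb33063bb-91 is moved inside. A sketch of the equivalent ip(8) steps; the agent itself performs these through pyroute2 calls under its privsep daemon, not a shell:

    import subprocess

    ns = "ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb"

    # Create the namespace, create the veth pair, and move one end inside --
    # the same topology the agent's "Creating VETH" message describes.
    subprocess.check_call(["ip", "netns", "add", ns])
    subprocess.check_call(["ip", "link", "add", "tapb33063bb-90",
                           "type", "veth", "peer", "name", "tapb33063bb-91"])
    subprocess.check_call(["ip", "link", "set", "tapb33063bb-91", "netns", ns])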
Nov 22 03:56:39 compute-0 systemd-machined[215728]: New machine qemu-8-instance-00000008.
Nov 22 03:56:39 compute-0 systemd-udevd[273205]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.748 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[99bdc171-1b4f-437b-b1fc-b3ed2f1ca257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:39 compute-0 NetworkManager[48916]: <info>  [1763783799.7497] device (tap77a13178-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:56:39 compute-0 NetworkManager[48916]: <info>  [1763783799.7511] device (tap77a13178-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 03:56:39 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.776 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2e535d-d9b8-4ae4-94ed-fe1deaee7111]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:39 compute-0 ovn_controller[152691]: 2025-11-22T03:56:39Z|00093|binding|INFO|Setting lport 77a13178-8559-4b2b-af3d-991a871b7351 ovn-installed in OVS
Nov 22 03:56:39 compute-0 ovn_controller[152691]: 2025-11-22T03:56:39Z|00094|binding|INFO|Setting lport 77a13178-8559-4b2b-af3d-991a871b7351 up in Southbound
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.816 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.816 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[f355b008-b7c7-4561-ba96-b45a1d9c24d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.822 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[55e6c7ae-bd2c-42b0-9a66-c267dbc31043]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:39 compute-0 NetworkManager[48916]: <info>  [1763783799.8236] manager: (tapb33063bb-90): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.857 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[1e566985-b394-4dc8-a746-3e0aadb00ade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.860 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[592030f4-11ec-4238-b85c-35b7a267cd1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:39 compute-0 NetworkManager[48916]: <info>  [1763783799.8904] device (tapb33063bb-90): carrier: link connected
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.896 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[39d44c8a-f83e-4529-a2f2-51c7439f86da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.917 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f7ee288e-efbe-4fb7-8808-e58a63d2d379]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb33063bb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:2c:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405248, 'reachable_time': 42991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273241, 'error': None, 'target': 'ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.934 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1881e7-fcb3-46e1-bd78-6724c48af664]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:2c47'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 405248, 'tstamp': 405248}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273242, 'error': None, 'target': 'ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
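[annotation] The privsep replies above are pyroute2 netlink messages rendered as nested dicts, with attributes stored as [name, value] pairs under 'attrs'. A small helper for pulling a single attribute out of such a dict; 'msg' is assumed to be one of the reply payloads shown above:

    def get_attr(msg, name, default=None):
        # Linear scan over the [name, value] pairs seen in the replies.
        for key, value in msg.get('attrs', []):
            if key == name:
                return value
        return default

    # e.g. for the RTM_NEWADDR reply above:
    #   get_attr(addr_msg, 'IFA_ADDRESS') -> 'fe80::f816:3eff:fe7b:2c47'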
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.943 253465 INFO nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Creating config drive at /var/lib/nova/instances/e11a8b93-8ac0-460e-8780-bd0a8525f6ad/disk.config
Nov 22 03:56:39 compute-0 nova_compute[253461]: 2025-11-22 03:56:39.950 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e11a8b93-8ac0-460e-8780-bd0a8525f6ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp42568hzn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.956 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8180902f-13fb-4f7c-92c1-1143d2b359b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb33063bb-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:2c:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405248, 'reachable_time': 42991, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273243, 'error': None, 'target': 'ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:39.986 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4ad9c4c3-395b-4b92-bfdf-f2870373bae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:40.051 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d4607657-5bf2-47c6-887a-f6d5fbcf51f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:40.052 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb33063bb-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:40.053 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:40.053 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb33063bb-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:56:40 compute-0 NetworkManager[48916]: <info>  [1763783800.1014] manager: (tapb33063bb-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 22 03:56:40 compute-0 kernel: tapb33063bb-90: entered promiscuous mode
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.101 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e11a8b93-8ac0-460e-8780-bd0a8525f6ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp42568hzn" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
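[annotation] The config drive is a plain ISO 9660 image built by shelling out to mkisofs with the argv logged above; note that '-publisher' takes the whole version string as a single argument even though the flat log line shows it unquoted. A subprocess sketch of the same call; iso_path and staging_dir are illustrative stand-ins for the instance paths in the log:

    import subprocess

    def build_config_drive(iso_path, staging_dir):
        # Same argv nova logs above, volume label 'config-2'.
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', iso_path,
             '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
             '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
             '-quiet', '-J', '-r', '-V', 'config-2', staging_dir],
            check=True)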
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:40.108 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb33063bb-90, col_values=(('external_ids', {'iface-id': 'b719ddd2-762d-4b6d-9cf6-85878321092a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
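[annotation] The three transactions above rewire the namespace's outer veth end: delete it from br-ex if present, add it to br-int, then set external_ids:iface-id so ovn-controller can bind it to the metadata logical port. A sketch of the same sequence through ovsdbapp's Open_vSwitch API; the socket path is an assumption, and the connection boilerplate follows ovsdbapp's documented pattern:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Same three commands the agent logs: DelPort, AddPort, DbSet.
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tapb33063bb-90', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tapb33063bb-90', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tapb33063bb-90',
            ('external_ids', {'iface-id': 'b719ddd2-762d-4b6d-9cf6-85878321092a'})))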
Nov 22 03:56:40 compute-0 ovn_controller[152691]: 2025-11-22T03:56:40Z|00095|binding|INFO|Releasing lport b719ddd2-762d-4b6d-9cf6-85878321092a from this chassis (sb_readonly=0)
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.137 253465 DEBUG nova.storage.rbd_utils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.141 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e11a8b93-8ac0-460e-8780-bd0a8525f6ad/disk.config e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
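[annotation] With the image backend on RBD, the freshly built ISO is imported into the 'vms' pool as a format-2 image, and the local copy is deleted afterwards (see the 'Deleting local config drive' line further down). The equivalent call, paths copied from the log:

    import subprocess

    SRC = '/var/lib/nova/instances/e11a8b93-8ac0-460e-8780-bd0a8525f6ad/disk.config'

    subprocess.run(['rbd', 'import', '--pool', 'vms', SRC,
                    'e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk.config',
                    '--image-format=2', '--id', 'openstack',
                    '--conf', '/etc/ceph/ceph.conf'], check=True)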
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:40.148 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:40.150 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9c94a82d-4b64-4ed9-9a26-4862a83f3d92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:40.151 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: global
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb.pid.haproxy
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 03:56:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:40.151 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'env', 'PROCESS_TAG=haproxy-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
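[annotation] The rendered haproxy.cfg above binds 169.254.169.254:80 inside the namespace and forwards requests to the metadata socket, adding X-OVN-Network-ID so the backend can resolve the tenant network. The rootwrap invocation boils down to running haproxy inside that namespace; a direct sketch without the privilege separation, values from the log:

    import subprocess

    NETNS = 'ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb'
    CONF = ('/var/lib/neutron/ovn-metadata-proxy/'
            'b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb.conf')

    # haproxy daemonizes itself (the config sets 'daemon'), so this returns
    # once the master forks; the NOTICE lines below are the workers starting.
    subprocess.run(['ip', 'netns', 'exec', NETNS, 'env',
                    'PROCESS_TAG=haproxy-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb',
                    'haproxy', '-f', CONF], check=True)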
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.162 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 226 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 4.5 MiB/s wr, 85 op/s
Nov 22 03:56:40 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3570733990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:56:40 compute-0 podman[273351]: 2025-11-22 03:56:40.587002154 +0000 UTC m=+0.041703104 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.771 253465 DEBUG nova.compute.manager [req-b57222be-0fe9-4950-88fc-b402ebcbd5e6 req-b1ee0e9c-9299-40bf-a326-54ddae26e40f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Received event network-vif-plugged-77a13178-8559-4b2b-af3d-991a871b7351 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.772 253465 DEBUG oslo_concurrency.lockutils [req-b57222be-0fe9-4950-88fc-b402ebcbd5e6 req-b1ee0e9c-9299-40bf-a326-54ddae26e40f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.773 253465 DEBUG oslo_concurrency.lockutils [req-b57222be-0fe9-4950-88fc-b402ebcbd5e6 req-b1ee0e9c-9299-40bf-a326-54ddae26e40f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.773 253465 DEBUG oslo_concurrency.lockutils [req-b57222be-0fe9-4950-88fc-b402ebcbd5e6 req-b1ee0e9c-9299-40bf-a326-54ddae26e40f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
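[annotation] The Acquiring/acquired/released triplet above is oslo.concurrency's lockutils serializing access to the per-instance event list. The pattern in miniature; the lock name is copied from the log and the body is a placeholder:

    from oslo_concurrency import lockutils

    with lockutils.lock('c036cf5d-81f0-4f9e-9f31-67298476667c-events'):
        pass  # pop or record the instance event while holding the lock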
Nov 22 03:56:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.774 253465 DEBUG nova.compute.manager [req-b57222be-0fe9-4950-88fc-b402ebcbd5e6 req-b1ee0e9c-9299-40bf-a326-54ddae26e40f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Processing event network-vif-plugged-77a13178-8559-4b2b-af3d-991a871b7351 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.799 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783800.7991567, c036cf5d-81f0-4f9e-9f31-67298476667c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.802 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] VM Started (Lifecycle Event)
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.806 253465 DEBUG nova.compute.manager [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.811 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.818 253465 INFO nova.virt.libvirt.driver [-] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Instance spawned successfully.
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.818 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 03:56:40 compute-0 podman[273351]: 2025-11-22 03:56:40.834112144 +0000 UTC m=+0.288813024 container create d0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.844 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.851 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
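[annotation] 'current DB power_state: 0, VM power_state: 1' compares nova's stored state against what libvirt reports; a mismatch triggers the sync (skipped here because a spawn task is pending). The integer codes are nova.compute.power_state values; the mapping below is an assumption from that module, not taken from this log:

    # Assumed nova.compute.power_state codes; the log compares 0 vs 1.
    POWER_STATES = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                    4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    db_state, vm_state = 0, 1          # values from the log line above
    if db_state != vm_state:
        print('sync %s -> %s' % (POWER_STATES[db_state], POWER_STATES[vm_state]))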
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.873 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.874 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.875 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.876 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.877 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.878 253465 DEBUG nova.virt.libvirt.driver [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
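[annotation] The six 'Found default for ...' lines pin bus and model choices the image did not specify, so a later libvirt or machine-type change cannot silently alter the guest ABI. Collected as data, the registration is a setdefault pass; image_props here is an illustrative stand-in for the image metadata:

    # Defaults reported above for this guest.
    found = {'hw_cdrom_bus': 'sata', 'hw_disk_bus': 'virtio',
             'hw_input_bus': 'usb', 'hw_pointer_model': 'usbtablet',
             'hw_video_model': 'virtio', 'hw_vif_model': 'virtio'}
    image_props = {}                   # illustrative: nothing set by the image
    for prop, default in found.items():
        image_props.setdefault(prop, default)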
Nov 22 03:56:40 compute-0 systemd[1]: Started libpod-conmon-d0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc.scope.
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.885 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.886 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783800.800517, c036cf5d-81f0-4f9e-9f31-67298476667c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.887 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] VM Paused (Lifecycle Event)
Nov 22 03:56:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.926 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.931 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783800.8105981, c036cf5d-81f0-4f9e-9f31-67298476667c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.931 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] VM Resumed (Lifecycle Event)
Nov 22 03:56:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/397b58bf0d1ea36ea251b0a28d87ebb3295969f99251d3956123bb6a1a36726b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.948 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.951 253465 DEBUG oslo_concurrency.processutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e11a8b93-8ac0-460e-8780-bd0a8525f6ad/disk.config e11a8b93-8ac0-460e-8780-bd0a8525f6ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.810s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.952 253465 INFO nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Deleting local config drive /var/lib/nova/instances/e11a8b93-8ac0-460e-8780-bd0a8525f6ad/disk.config because it was imported into RBD.
Nov 22 03:56:40 compute-0 podman[273351]: 2025-11-22 03:56:40.954231527 +0000 UTC m=+0.408932417 container init d0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:56:40 compute-0 podman[273351]: 2025-11-22 03:56:40.964367444 +0000 UTC m=+0.419068324 container start d0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.971 253465 INFO nova.compute.manager [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Took 8.98 seconds to spawn the instance on the hypervisor.
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.972 253465 DEBUG nova.compute.manager [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:56:40 compute-0 nova_compute[253461]: 2025-11-22 03:56:40.976 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:56:41 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[273377]: [NOTICE]   (273382) : New worker (273388) forked
Nov 22 03:56:41 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[273377]: [NOTICE]   (273382) : Loading success.
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.013 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:56:41 compute-0 kernel: tape6035774-e6: entered promiscuous mode
Nov 22 03:56:41 compute-0 NetworkManager[48916]: <info>  [1763783801.0293] manager: (tape6035774-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/58)
Nov 22 03:56:41 compute-0 systemd-udevd[273224]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.030 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:41 compute-0 ovn_controller[152691]: 2025-11-22T03:56:41Z|00096|binding|INFO|Claiming lport e6035774-e683-4695-8e35-ced236ccbefb for this chassis.
Nov 22 03:56:41 compute-0 ovn_controller[152691]: 2025-11-22T03:56:41Z|00097|binding|INFO|e6035774-e683-4695-8e35-ced236ccbefb: Claiming fa:16:3e:9c:72:17 10.100.0.4
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.036 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.040 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:72:17 10.100.0.4'], port_security=['fa:16:3e:9c:72:17 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e11a8b93-8ac0-460e-8780-bd0a8525f6ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5846275e26354bb095399d10c8b52285', 'neutron:revision_number': '2', 'neutron:security_group_ids': '225d45e9-bf16-413b-bfdd-6faa7865b9ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3e3d611-3c95-4b51-bc26-a179069ce7f3, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=e6035774-e683-4695-8e35-ced236ccbefb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:56:41 compute-0 NetworkManager[48916]: <info>  [1763783801.0446] device (tape6035774-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:56:41 compute-0 NetworkManager[48916]: <info>  [1763783801.0453] device (tape6035774-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.047 162689 INFO neutron.agent.ovn.metadata.agent [-] Port e6035774-e683-4695-8e35-ced236ccbefb in datapath 4c32f371-ff20-4759-bfb3-24316a8c7a57 bound to our chassis
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.048 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c32f371-ff20-4759-bfb3-24316a8c7a57
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.052 253465 INFO nova.compute.manager [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Took 9.96 seconds to build instance.
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.064 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[94fe5a23-081f-46c3-8e3c-ebe56d621d92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.065 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c32f371-f1 in ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.066 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c32f371-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.066 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[853490d6-b81f-41cd-878a-49667cd00739]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.067 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3c5066-40ba-4a59-b5fa-569e2a37c26d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 systemd-machined[215728]: New machine qemu-9-instance-00000009.
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.084 253465 DEBUG oslo_concurrency.lockutils [None req-05f68241-73c3-4aa0-9ae1-27fb6031e311 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.089 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[73f0169d-2638-49d4-82f2-78f8cb011eb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.106 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b305168e-ca88-467a-98de-09f8c2f707de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.109 253465 DEBUG nova.network.neutron [req-8364a1d8-90b9-4a83-9e54-86607edc94d2 req-fe1882ba-89a0-46a2-b138-51aaedae16dc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Updated VIF entry in instance network info cache for port e6035774-e683-4695-8e35-ced236ccbefb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.109 253465 DEBUG nova.network.neutron [req-8364a1d8-90b9-4a83-9e54-86607edc94d2 req-fe1882ba-89a0-46a2-b138-51aaedae16dc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Updating instance_info_cache with network_info: [{"id": "e6035774-e683-4695-8e35-ced236ccbefb", "address": "fa:16:3e:9c:72:17", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6035774-e6", "ovs_interfaceid": "e6035774-e683-4695-8e35-ced236ccbefb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
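[annotation] The instance_info_cache payload above is plain JSON, so the interesting fields are a few key lookups away. A runnable sketch over a copy of that payload trimmed to the fields it reads:

    import json

    # Trimmed from the network_info logged above (one VIF for this instance).
    cache = '''[{"id": "e6035774-e683-4695-8e35-ced236ccbefb",
                 "address": "fa:16:3e:9c:72:17",
                 "network": {"subnets": [{"ips": [{"address": "10.100.0.4"}]}],
                             "meta": {"mtu": 1442}}}]'''
    vif = json.loads(cache)[0]
    print(vif['address'],                                       # MAC
          vif['network']['subnets'][0]['ips'][0]['address'],    # fixed IP
          vif['network']['meta']['mtu'])                        # 1442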
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.111 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:41 compute-0 ovn_controller[152691]: 2025-11-22T03:56:41Z|00098|binding|INFO|Setting lport e6035774-e683-4695-8e35-ced236ccbefb ovn-installed in OVS
Nov 22 03:56:41 compute-0 ovn_controller[152691]: 2025-11-22T03:56:41Z|00099|binding|INFO|Setting lport e6035774-e683-4695-8e35-ced236ccbefb up in Southbound
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.113 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.124 253465 DEBUG oslo_concurrency.lockutils [req-8364a1d8-90b9-4a83-9e54-86607edc94d2 req-fe1882ba-89a0-46a2-b138-51aaedae16dc f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-e11a8b93-8ac0-460e-8780-bd0a8525f6ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.143 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[942e252e-5e20-4709-8dbf-b206d41fff94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 NetworkManager[48916]: <info>  [1763783801.1495] manager: (tap4c32f371-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/59)
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.148 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[84257ca0-e8d5-400f-9937-cbbe9c1b0c65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.194 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[76ad67b1-b985-44c1-a893-1f4fe24a2041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.198 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[eab932a7-cf16-48ab-b97e-3e1a1f36fb2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 NetworkManager[48916]: <info>  [1763783801.2285] device (tap4c32f371-f0): carrier: link connected
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.236 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[b80f0cb5-9e7c-4a54-8dd7-6cc0e980c873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.261 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[cd75b65d-de67-450c-a804-df4dc35d8f7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c32f371-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:ec:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405382, 'reachable_time': 39724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273422, 'error': None, 'target': 'ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.279 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[49b4260f-ca63-4679-b828-29868f289005]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:ec95'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 405382, 'tstamp': 405382}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273423, 'error': None, 'target': 'ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.299 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[80d91149-f60b-40d4-ab20-cf1e5aaeb32e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c32f371-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:ec:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405382, 'reachable_time': 39724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273424, 'error': None, 'target': 'ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
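The two privsep replies above are RTM_NEWADDR/RTM_NEWLINK netlink dumps for the tap device inside the ovnmeta namespace, serialized by pyroute2, the library neutron's privsep daemon drives. A minimal sketch of reproducing such a dump, assuming root access and using the namespace/interface names taken from the log:

    # Sketch only: reproduce the RTM_NEWLINK/RTM_NEWADDR dumps above with
    # pyroute2. Requires root and the ovnmeta-* namespace from the log.
    from pyroute2 import NetNS

    NS = 'ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57'   # from the log
    with NetNS(NS) as ns:
        for msg in ns.link('get', ifname='tap4c32f371-f1'):   # RTM_NEWLINK
            print(msg.get_attr('IFLA_ADDRESS'), msg.get_attr('IFLA_OPERSTATE'))
        for msg in ns.get_addr(index=2):                      # RTM_NEWADDR
            print(msg.get_attr('IFA_ADDRESS'), msg['prefixlen'])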
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.342 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[abd44e08-1943-431c-ae2e-4c8c10785966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.427 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4c732ac9-5571-4525-81d9-7f7585755938]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.429 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c32f371-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.429 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.430 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c32f371-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
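The ovsdbapp commands logged in this window (DelPortCommand on br-ex, AddPortCommand on br-int, and the DbSetCommand on the Interface row a few lines below) map onto the public ovsdbapp API. A hedged sketch of the equivalent calls; the OVSDB socket path is an assumption for a typical host, not taken from the log:

    # Sketch, not the agent's code: ovsdbapp calls matching the logged
    # DelPortCommand/AddPortCommand/DbSetCommand transactions.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',  # assumed path
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap4c32f371-f0', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap4c32f371-f0', may_exist=True))
        txn.add(ovs.db_set('Interface', 'tap4c32f371-f0',
                           ('external_ids',
                            {'iface-id': '2af56aef-09c6-4f74-ad59-cabe02948eac'})))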
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.432 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:41 compute-0 kernel: tap4c32f371-f0: entered promiscuous mode
Nov 22 03:56:41 compute-0 NetworkManager[48916]: <info>  [1763783801.4336] manager: (tap4c32f371-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.436 253465 DEBUG nova.compute.manager [req-d164bebc-9b80-43ff-b36f-06838f85214f req-303f00d4-2314-45f4-a789-170d059d2921 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Received event network-vif-plugged-e6035774-e683-4695-8e35-ced236ccbefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.437 253465 DEBUG oslo_concurrency.lockutils [req-d164bebc-9b80-43ff-b36f-06838f85214f req-303f00d4-2314-45f4-a789-170d059d2921 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.437 253465 DEBUG oslo_concurrency.lockutils [req-d164bebc-9b80-43ff-b36f-06838f85214f req-303f00d4-2314-45f4-a789-170d059d2921 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.438 253465 DEBUG oslo_concurrency.lockutils [req-d164bebc-9b80-43ff-b36f-06838f85214f req-303f00d4-2314-45f4-a789-170d059d2921 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
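The Acquiring/acquired/released trio above is oslo.concurrency's logging around nova's per-instance event queue. The pattern, sketched with lockutils (lock name copied from the log; the body is a stand-in for nova's _pop_event work):

    # Sketch of the locking pattern behind the three lockutils lines above.
    from oslo_concurrency import lockutils

    INSTANCE = 'e11a8b93-8ac0-460e-8780-bd0a8525f6ad'   # from the log

    @lockutils.synchronized(f'{INSTANCE}-events')
    def _pop_event():
        # nova pops the pending network-vif-plugged event here
        return 'network-vif-plugged-e6035774-e683-4695-8e35-ced236ccbefb'

    _pop_event()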
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.438 253465 DEBUG nova.compute.manager [req-d164bebc-9b80-43ff-b36f-06838f85214f req-303f00d4-2314-45f4-a789-170d059d2921 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Processing event network-vif-plugged-e6035774-e683-4695-8e35-ced236ccbefb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.439 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.451 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c32f371-f0, col_values=(('external_ids', {'iface-id': '2af56aef-09c6-4f74-ad59-cabe02948eac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.452 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:41 compute-0 ovn_controller[152691]: 2025-11-22T03:56:41Z|00100|binding|INFO|Releasing lport 2af56aef-09c6-4f74-ad59-cabe02948eac from this chassis (sb_readonly=0)
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.454 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.474 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c32f371-ff20-4759-bfb3-24316a8c7a57.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c32f371-ff20-4759-bfb3-24316a8c7a57.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
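The "Unable to access ... .pid.haproxy" DEBUG line is the expected first-run path: no haproxy pidfile exists yet for this network, so the agent proceeds to render a config and spawn one. A simplified reconstruction of that helper's behavior (not neutron's exact code):

    # Simplified reconstruction of neutron.agent.linux.utils.get_value_from_file:
    # a missing pidfile is logged at DEBUG and treated as "no value", not an error.
    import logging

    LOG = logging.getLogger(__name__)

    def get_value_from_file(path, converter=None):
        try:
            with open(path) as f:
                value = f.read().strip()
            return converter(value) if converter else value
        except OSError as err:
            LOG.debug('Unable to access %s; Error: %s', path, err)
            return None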
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.476 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[a86e240b-6545-4427-9db5-0d72fb9d02a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.477 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: global
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-4c32f371-ff20-4759-bfb3-24316a8c7a57
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/4c32f371-ff20-4759-bfb3-24316a8c7a57.pid.haproxy
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 4c32f371-ff20-4759-bfb3-24316a8c7a57
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
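The rendered haproxy config above binds 169.254.169.254:80 inside the namespace, proxies to the unix-socket backend /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the metadata service can resolve the instance. From a guest on this network, the usual request looks like the sketch below (guest-side only; nothing host-specific assumed):

    # Sketch: a guest VM on network 4c32f371-... querying the proxied
    # metadata service that this haproxy instance fronts.
    import json
    import urllib.request

    URL = 'http://169.254.169.254/openstack/latest/meta_data.json'
    with urllib.request.urlopen(URL, timeout=10) as resp:
        print(json.load(resp).get('uuid'))   # instance UUID per the metadata API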
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.482 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'env', 'PROCESS_TAG=haproxy-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c32f371-ff20-4759-bfb3-24316a8c7a57.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.487 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:41 compute-0 ceph-mon[75011]: pgmap v1178: 305 pgs: 305 active+clean; 226 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 4.5 MiB/s wr, 85 op/s
Nov 22 03:56:41 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:56:41.719 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.867 253465 DEBUG nova.compute.manager [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.868 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783801.8669293, e11a8b93-8ac0-460e-8780-bd0a8525f6ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.868 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] VM Started (Lifecycle Event)
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.874 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.877 253465 INFO nova.virt.libvirt.driver [-] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Instance spawned successfully.
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.877 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 03:56:41 compute-0 podman[273496]: 2025-11-22 03:56:41.901663814 +0000 UTC m=+0.069054159 container create c917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.905 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.910 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.913 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.913 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.913 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.914 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.914 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.915 253465 DEBUG nova.virt.libvirt.driver [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:56:41 compute-0 systemd[1]: Started libpod-conmon-c917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7.scope.
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.961 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.962 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783801.867594, e11a8b93-8ac0-460e-8780-bd0a8525f6ad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:56:41 compute-0 nova_compute[253461]: 2025-11-22 03:56:41.962 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] VM Paused (Lifecycle Event)
Nov 22 03:56:41 compute-0 podman[273496]: 2025-11-22 03:56:41.87267851 +0000 UTC m=+0.040068905 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:56:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:56:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/602e680e9c136f8807b65504d019ce4232e9d23016e484821c54c8f11e9093e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 03:56:42 compute-0 nova_compute[253461]: 2025-11-22 03:56:42.000 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:56:42 compute-0 nova_compute[253461]: 2025-11-22 03:56:42.003 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783801.8723247, e11a8b93-8ac0-460e-8780-bd0a8525f6ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:56:42 compute-0 nova_compute[253461]: 2025-11-22 03:56:42.003 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] VM Resumed (Lifecycle Event)
Nov 22 03:56:42 compute-0 podman[273496]: 2025-11-22 03:56:42.005170215 +0000 UTC m=+0.172560580 container init c917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 03:56:42 compute-0 podman[273496]: 2025-11-22 03:56:42.015453821 +0000 UTC m=+0.182844166 container start c917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 03:56:42 compute-0 nova_compute[253461]: 2025-11-22 03:56:42.023 253465 INFO nova.compute.manager [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Took 9.00 seconds to spawn the instance on the hypervisor.
Nov 22 03:56:42 compute-0 nova_compute[253461]: 2025-11-22 03:56:42.025 253465 DEBUG nova.compute.manager [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:56:42 compute-0 nova_compute[253461]: 2025-11-22 03:56:42.029 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:56:42 compute-0 nova_compute[253461]: 2025-11-22 03:56:42.039 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:56:42 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[273512]: [NOTICE]   (273516) : New worker (273518) forked
Nov 22 03:56:42 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[273512]: [NOTICE]   (273516) : Loading success.
Nov 22 03:56:42 compute-0 nova_compute[253461]: 2025-11-22 03:56:42.075 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:56:42 compute-0 nova_compute[253461]: 2025-11-22 03:56:42.102 253465 INFO nova.compute.manager [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Took 10.21 seconds to build instance.
Nov 22 03:56:42 compute-0 nova_compute[253461]: 2025-11-22 03:56:42.119 253465 DEBUG oslo_concurrency.lockutils [None req-76c7b350-5da7-4680-b16f-fe780146b58b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:42 compute-0 nova_compute[253461]: 2025-11-22 03:56:42.127 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 227 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 3.6 MiB/s wr, 80 op/s
Nov 22 03:56:42 compute-0 ceph-mon[75011]: pgmap v1179: 305 pgs: 305 active+clean; 227 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 274 KiB/s rd, 3.6 MiB/s wr, 80 op/s
Nov 22 03:56:43 compute-0 nova_compute[253461]: 2025-11-22 03:56:43.229 253465 DEBUG nova.compute.manager [req-864bc53d-b8a4-49f9-a19d-af8550c698b7 req-23c954ba-7c08-443c-8f5b-598c1ef7f068 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Received event network-vif-plugged-77a13178-8559-4b2b-af3d-991a871b7351 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:56:43 compute-0 nova_compute[253461]: 2025-11-22 03:56:43.230 253465 DEBUG oslo_concurrency.lockutils [req-864bc53d-b8a4-49f9-a19d-af8550c698b7 req-23c954ba-7c08-443c-8f5b-598c1ef7f068 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:43 compute-0 nova_compute[253461]: 2025-11-22 03:56:43.230 253465 DEBUG oslo_concurrency.lockutils [req-864bc53d-b8a4-49f9-a19d-af8550c698b7 req-23c954ba-7c08-443c-8f5b-598c1ef7f068 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:43 compute-0 nova_compute[253461]: 2025-11-22 03:56:43.230 253465 DEBUG oslo_concurrency.lockutils [req-864bc53d-b8a4-49f9-a19d-af8550c698b7 req-23c954ba-7c08-443c-8f5b-598c1ef7f068 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:43 compute-0 nova_compute[253461]: 2025-11-22 03:56:43.230 253465 DEBUG nova.compute.manager [req-864bc53d-b8a4-49f9-a19d-af8550c698b7 req-23c954ba-7c08-443c-8f5b-598c1ef7f068 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] No waiting events found dispatching network-vif-plugged-77a13178-8559-4b2b-af3d-991a871b7351 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:56:43 compute-0 nova_compute[253461]: 2025-11-22 03:56:43.231 253465 WARNING nova.compute.manager [req-864bc53d-b8a4-49f9-a19d-af8550c698b7 req-23c954ba-7c08-443c-8f5b-598c1ef7f068 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Received unexpected event network-vif-plugged-77a13178-8559-4b2b-af3d-991a871b7351 for instance with vm_state active and task_state None.
Nov 22 03:56:43 compute-0 nova_compute[253461]: 2025-11-22 03:56:43.640 253465 DEBUG nova.compute.manager [req-a1da9e70-18d7-4196-a01f-dc6a06759c52 req-834c2042-d08c-4d04-aa93-c8631825842c f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Received event network-vif-plugged-e6035774-e683-4695-8e35-ced236ccbefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:56:43 compute-0 nova_compute[253461]: 2025-11-22 03:56:43.641 253465 DEBUG oslo_concurrency.lockutils [req-a1da9e70-18d7-4196-a01f-dc6a06759c52 req-834c2042-d08c-4d04-aa93-c8631825842c f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:56:43 compute-0 nova_compute[253461]: 2025-11-22 03:56:43.641 253465 DEBUG oslo_concurrency.lockutils [req-a1da9e70-18d7-4196-a01f-dc6a06759c52 req-834c2042-d08c-4d04-aa93-c8631825842c f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:56:43 compute-0 nova_compute[253461]: 2025-11-22 03:56:43.641 253465 DEBUG oslo_concurrency.lockutils [req-a1da9e70-18d7-4196-a01f-dc6a06759c52 req-834c2042-d08c-4d04-aa93-c8631825842c f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:56:43 compute-0 nova_compute[253461]: 2025-11-22 03:56:43.642 253465 DEBUG nova.compute.manager [req-a1da9e70-18d7-4196-a01f-dc6a06759c52 req-834c2042-d08c-4d04-aa93-c8631825842c f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] No waiting events found dispatching network-vif-plugged-e6035774-e683-4695-8e35-ced236ccbefb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:56:43 compute-0 nova_compute[253461]: 2025-11-22 03:56:43.642 253465 WARNING nova.compute.manager [req-a1da9e70-18d7-4196-a01f-dc6a06759c52 req-834c2042-d08c-4d04-aa93-c8631825842c f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Received unexpected event network-vif-plugged-e6035774-e683-4695-8e35-ced236ccbefb for instance with vm_state active and task_state None.
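Both "Received unexpected event" WARNINGs are benign: the network-vif-plugged events arrived after the instances were already active, so no waiter was registered and nova drops them. A sketch of the dispatch logic that produces this message (simplified, not nova's actual code):

    # Simplified sketch of nova's external-event dispatch: events with no
    # registered waiter are logged as unexpected and discarded.
    import logging

    LOG = logging.getLogger(__name__)
    _waiters = {}   # (instance_uuid, event_name) -> callback

    def external_instance_event(uuid, event, vm_state, task_state):
        waiter = _waiters.pop((uuid, event), None)
        if waiter is None:
            LOG.warning('Received unexpected event %s for instance with '
                        'vm_state %s and task_state %s.', event, vm_state, task_state)
        else:
            waiter(event)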
Nov 22 03:56:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 227 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 194 op/s
Nov 22 03:56:44 compute-0 podman[273527]: 2025-11-22 03:56:44.422339424 +0000 UTC m=+0.088743003 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 03:56:44 compute-0 podman[273528]: 2025-11-22 03:56:44.46772815 +0000 UTC m=+0.129444337 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
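The health_status=healthy entries above come from podman's periodic healthcheck runs of the configured test command (/openstack/healthcheck). The same probe can be run by hand; a sketch driving the podman CLI from Python:

    # Sketch: manually running the container healthcheck that produced the
    # health_status=healthy events above (exit code 0 means healthy).
    import subprocess

    r = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'],
                       capture_output=True, text=True)
    print('healthy' if r.returncode == 0 else 'unhealthy', r.stdout, r.stderr)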
Nov 22 03:56:44 compute-0 NetworkManager[48916]: <info>  [1763783804.4827] manager: (patch-br-int-to-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Nov 22 03:56:44 compute-0 nova_compute[253461]: 2025-11-22 03:56:44.482 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:44 compute-0 NetworkManager[48916]: <info>  [1763783804.4839] manager: (patch-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 22 03:56:44 compute-0 ovn_controller[152691]: 2025-11-22T03:56:44Z|00101|binding|INFO|Releasing lport 2af56aef-09c6-4f74-ad59-cabe02948eac from this chassis (sb_readonly=0)
Nov 22 03:56:44 compute-0 ovn_controller[152691]: 2025-11-22T03:56:44Z|00102|binding|INFO|Releasing lport b719ddd2-762d-4b6d-9cf6-85878321092a from this chassis (sb_readonly=0)
Nov 22 03:56:44 compute-0 nova_compute[253461]: 2025-11-22 03:56:44.520 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:44 compute-0 ovn_controller[152691]: 2025-11-22T03:56:44Z|00103|binding|INFO|Releasing lport 2af56aef-09c6-4f74-ad59-cabe02948eac from this chassis (sb_readonly=0)
Nov 22 03:56:44 compute-0 ovn_controller[152691]: 2025-11-22T03:56:44Z|00104|binding|INFO|Releasing lport b719ddd2-762d-4b6d-9cf6-85878321092a from this chassis (sb_readonly=0)
Nov 22 03:56:44 compute-0 nova_compute[253461]: 2025-11-22 03:56:44.545 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:44 compute-0 nova_compute[253461]: 2025-11-22 03:56:44.560 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:45 compute-0 ceph-mon[75011]: pgmap v1180: 305 pgs: 305 active+clean; 227 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.6 MiB/s wr, 194 op/s
Nov 22 03:56:45 compute-0 nova_compute[253461]: 2025-11-22 03:56:45.740 253465 DEBUG nova.compute.manager [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Received event network-changed-e6035774-e683-4695-8e35-ced236ccbefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:56:45 compute-0 nova_compute[253461]: 2025-11-22 03:56:45.741 253465 DEBUG nova.compute.manager [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Refreshing instance network info cache due to event network-changed-e6035774-e683-4695-8e35-ced236ccbefb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:56:45 compute-0 nova_compute[253461]: 2025-11-22 03:56:45.741 253465 DEBUG oslo_concurrency.lockutils [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-e11a8b93-8ac0-460e-8780-bd0a8525f6ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:56:45 compute-0 nova_compute[253461]: 2025-11-22 03:56:45.741 253465 DEBUG oslo_concurrency.lockutils [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-e11a8b93-8ac0-460e-8780-bd0a8525f6ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:56:45 compute-0 nova_compute[253461]: 2025-11-22 03:56:45.741 253465 DEBUG nova.network.neutron [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Refreshing network info cache for port e6035774-e683-4695-8e35-ced236ccbefb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:56:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 227 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.3 MiB/s wr, 142 op/s
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006975264740834798 of space, bias 1.0, pg target 0.20925794222504393 quantized to 32 (current 32)
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006927571709969395 of space, bias 1.0, pg target 0.20782715129908186 quantized to 32 (current 32)
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:56:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
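The pg_autoscaler targets above are reproducible: each pool's pg target equals its capacity ratio x bias x a cluster-wide PG budget of 300, consistent with mon_target_pg_per_osd (default 100) x the 3 OSDs behind this 60 GiB cluster, before quantizing to a power of two with per-pool floors. The 300 figure is an inference, not stated in the log. A worked check:

    # Worked check of the pg_autoscaler arithmetic above; ratios and biases
    # are copied from the log, PG_BUDGET = 300 is inferred (100 PGs/OSD x 3 OSDs).
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),   # -> quantized to 1
        'vms':                (0.0006975264740834798, 1.0),   # -> quantized to 32
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),   # -> quantized to 16
    }
    PG_BUDGET = 300
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * PG_BUDGET)
    # Reproduces the logged targets, e.g. vms -> 0.209257942225..., matching
    # "pg target 0.20925794222504393 quantized to 32 (current 32)".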
Nov 22 03:56:46 compute-0 ceph-mon[75011]: pgmap v1181: 305 pgs: 305 active+clean; 227 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.3 MiB/s wr, 142 op/s
Nov 22 03:56:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:56:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/363066632' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:56:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/363066632' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:46 compute-0 nova_compute[253461]: 2025-11-22 03:56:46.976 253465 DEBUG nova.network.neutron [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Updated VIF entry in instance network info cache for port e6035774-e683-4695-8e35-ced236ccbefb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:56:46 compute-0 nova_compute[253461]: 2025-11-22 03:56:46.976 253465 DEBUG nova.network.neutron [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Updating instance_info_cache with network_info: [{"id": "e6035774-e683-4695-8e35-ced236ccbefb", "address": "fa:16:3e:9c:72:17", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6035774-e6", "ovs_interfaceid": "e6035774-e683-4695-8e35-ced236ccbefb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
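The network_info blob nova caches above is plain JSON; for debugging, the useful fields are the fixed and floating addresses per VIF. A sketch that pulls them out (structure abbreviated to the fields used; values copied from the log line):

    # Sketch: extracting fixed/floating IPs from the cached network_info
    # JSON above (abbreviated; values taken from the log).
    import json

    blob = '''[{"id": "e6035774-e683-4695-8e35-ced236ccbefb",
                "network": {"subnets": [{"ips": [{"address": "10.100.0.4",
                  "floating_ips": [{"address": "192.168.122.245"}]}]}]}}]'''
    for vif in json.loads(blob):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], ip['address'],
                      [f['address'] for f in ip.get('floating_ips', [])])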
Nov 22 03:56:47 compute-0 nova_compute[253461]: 2025-11-22 03:56:47.129 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:47 compute-0 nova_compute[253461]: 2025-11-22 03:56:47.261 253465 DEBUG oslo_concurrency.lockutils [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-e11a8b93-8ac0-460e-8780-bd0a8525f6ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:56:47 compute-0 nova_compute[253461]: 2025-11-22 03:56:47.262 253465 DEBUG nova.compute.manager [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Received event network-changed-77a13178-8559-4b2b-af3d-991a871b7351 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:56:47 compute-0 nova_compute[253461]: 2025-11-22 03:56:47.262 253465 DEBUG nova.compute.manager [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Refreshing instance network info cache due to event network-changed-77a13178-8559-4b2b-af3d-991a871b7351. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:56:47 compute-0 nova_compute[253461]: 2025-11-22 03:56:47.263 253465 DEBUG oslo_concurrency.lockutils [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-c036cf5d-81f0-4f9e-9f31-67298476667c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:56:47 compute-0 nova_compute[253461]: 2025-11-22 03:56:47.263 253465 DEBUG oslo_concurrency.lockutils [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-c036cf5d-81f0-4f9e-9f31-67298476667c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:56:47 compute-0 nova_compute[253461]: 2025-11-22 03:56:47.264 253465 DEBUG nova.network.neutron [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Refreshing network info cache for port 77a13178-8559-4b2b-af3d-991a871b7351 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:56:47 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/363066632' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:47 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/363066632' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 227 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 213 op/s
Nov 22 03:56:48 compute-0 ceph-mon[75011]: pgmap v1182: 305 pgs: 305 active+clean; 227 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 213 op/s
Nov 22 03:56:49 compute-0 nova_compute[253461]: 2025-11-22 03:56:49.004 253465 DEBUG nova.network.neutron [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Updated VIF entry in instance network info cache for port 77a13178-8559-4b2b-af3d-991a871b7351. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:56:49 compute-0 nova_compute[253461]: 2025-11-22 03:56:49.005 253465 DEBUG nova.network.neutron [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Updating instance_info_cache with network_info: [{"id": "77a13178-8559-4b2b-af3d-991a871b7351", "address": "fa:16:3e:3d:a7:62", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77a13178-85", "ovs_interfaceid": "77a13178-8559-4b2b-af3d-991a871b7351", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:56:49 compute-0 nova_compute[253461]: 2025-11-22 03:56:49.030 253465 DEBUG oslo_concurrency.lockutils [req-93970e7b-d940-4182-a5a8-fb87b23dea15 req-3a683bf6-f082-4256-866b-42213c330e7e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-c036cf5d-81f0-4f9e-9f31-67298476667c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:56:49 compute-0 nova_compute[253461]: 2025-11-22 03:56:49.523 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:56:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2389504530' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:56:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2389504530' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2389504530' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:56:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2389504530' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:56:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 227 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 222 op/s
Nov 22 03:56:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:51 compute-0 ceph-mon[75011]: pgmap v1183: 305 pgs: 305 active+clean; 227 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 222 op/s
Nov 22 03:56:52 compute-0 nova_compute[253461]: 2025-11-22 03:56:52.132 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 227 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 233 op/s
Nov 22 03:56:52 compute-0 ceph-mon[75011]: pgmap v1184: 305 pgs: 305 active+clean; 227 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 233 op/s
Nov 22 03:56:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 242 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.1 MiB/s wr, 248 op/s
Nov 22 03:56:54 compute-0 nova_compute[253461]: 2025-11-22 03:56:54.527 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:54 compute-0 ovn_controller[152691]: 2025-11-22T03:56:54Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3d:a7:62 10.100.0.7
Nov 22 03:56:54 compute-0 ovn_controller[152691]: 2025-11-22T03:56:54Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3d:a7:62 10.100.0.7
Nov 22 03:56:54 compute-0 ovn_controller[152691]: 2025-11-22T03:56:54Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:72:17 10.100.0.4
Nov 22 03:56:54 compute-0 ovn_controller[152691]: 2025-11-22T03:56:54Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:72:17 10.100.0.4
Nov 22 03:56:55 compute-0 ceph-mon[75011]: pgmap v1185: 305 pgs: 305 active+clean; 242 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.1 MiB/s wr, 248 op/s
Nov 22 03:56:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:56:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 242 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 134 op/s
Nov 22 03:56:56 compute-0 ceph-mon[75011]: pgmap v1186: 305 pgs: 305 active+clean; 242 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.1 MiB/s wr, 134 op/s
Nov 22 03:56:57 compute-0 nova_compute[253461]: 2025-11-22 03:56:57.134 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:56:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 268 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.3 MiB/s wr, 195 op/s
Nov 22 03:56:59 compute-0 ceph-mon[75011]: pgmap v1187: 305 pgs: 305 active+clean; 268 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.3 MiB/s wr, 195 op/s
Nov 22 03:56:59 compute-0 podman[273571]: 2025-11-22 03:56:59.396763695 +0000 UTC m=+0.074141072 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
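
[editor's note] The container health_status events come from podman's built-in healthcheck (the '/openstack/healthcheck' test visible in config_data). The same status and failing streak can be read back programmatically; a sketch via podman inspect, whose State.Health fields are standard for containers that define a healthcheck:

    import json
    import subprocess

    out = subprocess.run(["podman", "inspect", "multipathd"],
                         capture_output=True, text=True, check=True).stdout
    health = json.loads(out)[0]["State"]["Health"]
    print(health["Status"], health["FailingStreak"])  # e.g. healthy 0
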
Nov 22 03:56:59 compute-0 nova_compute[253461]: 2025-11-22 03:56:59.529 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 292 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 755 KiB/s rd, 4.3 MiB/s wr, 166 op/s
Nov 22 03:57:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1522238496' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1522238496' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1522238496' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1522238496' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:57:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1882285223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:01 compute-0 ceph-mon[75011]: pgmap v1188: 305 pgs: 305 active+clean; 292 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 755 KiB/s rd, 4.3 MiB/s wr, 166 op/s
Nov 22 03:57:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1882285223' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Nov 22 03:57:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Nov 22 03:57:01 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.622 253465 DEBUG oslo_concurrency.lockutils [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.622 253465 DEBUG oslo_concurrency.lockutils [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.643 253465 DEBUG nova.objects.instance [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lazy-loading 'flavor' on Instance uuid e11a8b93-8ac0-460e-8780-bd0a8525f6ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.664 253465 INFO nova.virt.libvirt.driver [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Ignoring supplied device name: /dev/vdb
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.678 253465 DEBUG oslo_concurrency.lockutils [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.693 253465 DEBUG oslo_concurrency.lockutils [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "c036cf5d-81f0-4f9e-9f31-67298476667c" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.693 253465 DEBUG oslo_concurrency.lockutils [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:01 compute-0 anacron[29935]: Job `cron.monthly' started
Nov 22 03:57:01 compute-0 anacron[29935]: Job `cron.monthly' terminated
Nov 22 03:57:01 compute-0 anacron[29935]: Normal exit (3 jobs run)
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.710 253465 DEBUG nova.objects.instance [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lazy-loading 'flavor' on Instance uuid c036cf5d-81f0-4f9e-9f31-67298476667c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.729 253465 INFO nova.virt.libvirt.driver [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Ignoring supplied device name: /dev/vdb
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.743 253465 DEBUG oslo_concurrency.lockutils [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
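
[editor's note] Both reserve_block_device_name calls above serialize on the instance UUID through oslo.concurrency, which is what emits the Acquiring/acquired/released triplets with wait and hold times (and the "inner" wrapper in the log path). The shape of that pattern, sketched — do_reserve here is a stand-in for nova's inner function, not its actual body:

    from oslo_concurrency import lockutils

    instance_uuid = "e11a8b93-8ac0-460e-8780-bd0a8525f6ad"

    @lockutils.synchronized(instance_uuid)
    def do_reserve():
        # pick the next free device name for this instance, e.g. /dev/vdb;
        # concurrent attach requests for the same instance queue here
        ...

    do_reserve()

Two requests for different instances (as with e11a8b93... and c036cf5d... above) use different lock names and proceed in parallel.
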
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.888 253465 DEBUG oslo_concurrency.lockutils [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.888 253465 DEBUG oslo_concurrency.lockutils [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.889 253465 INFO nova.compute.manager [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Attaching volume ee58bbe5-d9ea-4d33-9f85-b75e2aa4ec1c to /dev/vdb
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.976 253465 DEBUG oslo_concurrency.lockutils [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "c036cf5d-81f0-4f9e-9f31-67298476667c" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.977 253465 DEBUG oslo_concurrency.lockutils [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:01 compute-0 nova_compute[253461]: 2025-11-22 03:57:01.978 253465 INFO nova.compute.manager [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Attaching volume 8dd7be21-cc11-4318-98e2-3ffa1436ab12 to /dev/vdb
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.086 253465 DEBUG os_brick.utils [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.088 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.103 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.104 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf6058c-7751-4a17-b87b-fe1c671946b7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.105 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.119 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.119 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9305d2-ad43-4406-98cc-72a78fb21e0d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
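
[editor's note] The multipathd and initiator-name probes run inside nova's privsep daemon (pid 261287) via oslo.concurrency's processutils, which produces the paired "Running cmd"/"returned" lines above. Outside privsep the same probes look like this (a sketch; requires the tools to be present and readable):

    from oslo_concurrency import processutils

    out, err = processutils.execute("multipathd", "show", "status")
    out, err = processutils.execute("cat", "/etc/iscsi/initiatorname.iscsi")
    print(out)  # e.g. InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9
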
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.123 253465 DEBUG os_brick.utils [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.122 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.126 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.136 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.136 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.136 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[84522e09-8d8c-4daf-aeff-bfb39e7f1c8e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.140 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[389fcee9-d1ce-4523-acf6-182ce6582190]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.140 253465 DEBUG oslo_concurrency.processutils [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.143 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.144 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[92849146-2f95-4378-910e-41eab8bd2b47]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.169 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.172 253465 DEBUG oslo_concurrency.processutils [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.174 253465 DEBUG os_brick.initiator.connectors.lightos [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.174 253465 DEBUG os_brick.initiator.connectors.lightos [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.175 253465 DEBUG os_brick.initiator.connectors.lightos [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.175 253465 DEBUG os_brick.utils [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] <== get_connector_properties: return (87ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
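
[editor's note] The ==>/<== trace wrapper above is os_brick collecting the host's connector properties (iSCSI IQN, NVMe NQN/host ID, multipath flags) so Cinder knows how this host can attach volumes. The same call, sketched with exactly the arguments visible in the trace:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.100",
        multipath=True,
        enforce_multipath=True,
        host="compute-0.ctlplane.example.com")
    print(props["initiator"], props["nqn"])

The LIGHTOS ECONNREFUSED lines are benign here: the lightos connector probes for its discovery client, finds none, and continues, as its own log message says.
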
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.175 253465 DEBUG nova.virt.block_device [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Updating existing volume attachment record: b9569d65-e9c5-49d9-9572-c37ed6321405 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.182 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.182 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[a3328fb0-1029-4d8e-a94e-555a8e080aee]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.184 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.197 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.197 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[ef859172-6757-4063-acfb-8ae4a9c89640]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.198 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[52ed65a9-3bb8-41bf-a77f-25ec30f0df85]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.199 253465 DEBUG oslo_concurrency.processutils [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 949 KiB/s rd, 5.1 MiB/s wr, 170 op/s
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.230 253465 DEBUG oslo_concurrency.processutils [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.233 253465 DEBUG os_brick.initiator.connectors.lightos [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.233 253465 DEBUG os_brick.initiator.connectors.lightos [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.234 253465 DEBUG os_brick.initiator.connectors.lightos [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.234 253465 DEBUG os_brick.utils [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] <== get_connector_properties: return (109ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.235 253465 DEBUG nova.virt.block_device [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Updating existing volume attachment record: f201146d-e512-43ec-b2b5-7c39c5d9bab5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 03:57:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Nov 22 03:57:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Nov 22 03:57:02 compute-0 ceph-mon[75011]: osdmap e227: 3 total, 3 up, 3 in
Nov 22 03:57:02 compute-0 ceph-mon[75011]: pgmap v1190: 305 pgs: 305 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 949 KiB/s rd, 5.1 MiB/s wr, 170 op/s
Nov 22 03:57:02 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Nov 22 03:57:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:57:02 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/544006633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:57:02 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2287316728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.963 253465 DEBUG nova.objects.instance [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lazy-loading 'flavor' on Instance uuid e11a8b93-8ac0-460e-8780-bd0a8525f6ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:57:02 compute-0 nova_compute[253461]: 2025-11-22 03:57:02.995 253465 DEBUG nova.virt.libvirt.driver [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Attempting to attach volume ee58bbe5-d9ea-4d33-9f85-b75e2aa4ec1c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.000 253465 DEBUG nova.virt.libvirt.guest [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 03:57:03 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:57:03 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-ee58bbe5-d9ea-4d33-9f85-b75e2aa4ec1c">
Nov 22 03:57:03 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:57:03 compute-0 nova_compute[253461]:   </source>
Nov 22 03:57:03 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 03:57:03 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:57:03 compute-0 nova_compute[253461]:   </auth>
Nov 22 03:57:03 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:57:03 compute-0 nova_compute[253461]:   <serial>ee58bbe5-d9ea-4d33-9f85-b75e2aa4ec1c</serial>
Nov 22 03:57:03 compute-0 nova_compute[253461]: </disk>
Nov 22 03:57:03 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
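
[editor's note] nova hands that disk XML to libvirt with both the live and persistent flags, so the RBD volume appears in the running guest and survives a guest restart. A condensed libvirt-python sketch using the same XML (abbreviated from the block above; assumes the libvirt secret for the ceph auth already exists):

    import libvirt

    disk_xml = """<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-ee58bbe5-d9ea-4d33-9f85-b75e2aa4ec1c">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <auth username="openstack">
        <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
      </auth>
      <target dev="vdb" bus="virtio"/>
      <serial>ee58bbe5-d9ea-4d33-9f85-b75e2aa4ec1c</serial>
    </disk>"""

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("e11a8b93-8ac0-460e-8780-bd0a8525f6ad")
    # affect both the live guest and the persistent definition, as nova does
    dom.attachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE |
                                    libvirt.VIR_DOMAIN_AFFECT_CONFIG)

The "discard support enabled ... target_bus = virtio" warning just above notes that unmap is configured in the driver element but the virtio-blk bus in use here will not pass trim commands through.
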
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.009 253465 DEBUG nova.objects.instance [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lazy-loading 'flavor' on Instance uuid c036cf5d-81f0-4f9e-9f31-67298476667c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.029 253465 DEBUG nova.virt.libvirt.driver [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Attempting to attach volume 8dd7be21-cc11-4318-98e2-3ffa1436ab12 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.033 253465 DEBUG nova.virt.libvirt.guest [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 03:57:03 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:57:03 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-8dd7be21-cc11-4318-98e2-3ffa1436ab12">
Nov 22 03:57:03 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:57:03 compute-0 nova_compute[253461]:   </source>
Nov 22 03:57:03 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 03:57:03 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:57:03 compute-0 nova_compute[253461]:   </auth>
Nov 22 03:57:03 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:57:03 compute-0 nova_compute[253461]:   <serial>8dd7be21-cc11-4318-98e2-3ffa1436ab12</serial>
Nov 22 03:57:03 compute-0 nova_compute[253461]: </disk>
Nov 22 03:57:03 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.152 253465 DEBUG nova.virt.libvirt.driver [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.152 253465 DEBUG nova.virt.libvirt.driver [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.153 253465 DEBUG nova.virt.libvirt.driver [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.153 253465 DEBUG nova.virt.libvirt.driver [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No VIF found with MAC fa:16:3e:9c:72:17, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.163 253465 DEBUG nova.virt.libvirt.driver [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.164 253465 DEBUG nova.virt.libvirt.driver [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.165 253465 DEBUG nova.virt.libvirt.driver [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.165 253465 DEBUG nova.virt.libvirt.driver [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] No VIF found with MAC fa:16:3e:3d:a7:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:57:03 compute-0 ceph-mon[75011]: osdmap e228: 3 total, 3 up, 3 in
Nov 22 03:57:03 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/544006633' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:03 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2287316728' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.397 253465 DEBUG oslo_concurrency.lockutils [None req-dffbce60-f0d2-4f2d-b557-a4f7b537dfaa e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:03 compute-0 nova_compute[253461]: 2025-11-22 03:57:03.415 253465 DEBUG oslo_concurrency.lockutils [None req-64bb14cf-a7f0-4d82-bb06-c8b4842701c2 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.7 MiB/s wr, 193 op/s
Nov 22 03:57:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Nov 22 03:57:04 compute-0 ceph-mon[75011]: pgmap v1192: 305 pgs: 305 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.7 MiB/s wr, 193 op/s
Nov 22 03:57:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Nov 22 03:57:04 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Nov 22 03:57:04 compute-0 nova_compute[253461]: 2025-11-22 03:57:04.531 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:57:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3569921799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Nov 22 03:57:05 compute-0 ceph-mon[75011]: osdmap e229: 3 total, 3 up, 3 in
Nov 22 03:57:05 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3569921799' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Nov 22 03:57:05 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Nov 22 03:57:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 60 KiB/s wr, 61 op/s
Nov 22 03:57:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:57:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:57:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:57:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:57:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:57:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:57:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Nov 22 03:57:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Nov 22 03:57:06 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Nov 22 03:57:06 compute-0 ceph-mon[75011]: osdmap e230: 3 total, 3 up, 3 in
Nov 22 03:57:06 compute-0 ceph-mon[75011]: pgmap v1195: 305 pgs: 305 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 60 KiB/s wr, 61 op/s
Nov 22 03:57:07 compute-0 nova_compute[253461]: 2025-11-22 03:57:07.138 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Nov 22 03:57:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Nov 22 03:57:07 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Nov 22 03:57:07 compute-0 ceph-mon[75011]: osdmap e231: 3 total, 3 up, 3 in
Nov 22 03:57:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3516056000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3516056000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.7 KiB/s wr, 79 op/s
Nov 22 03:57:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Nov 22 03:57:08 compute-0 ceph-mon[75011]: osdmap e232: 3 total, 3 up, 3 in
Nov 22 03:57:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3516056000' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3516056000' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:08 compute-0 ceph-mon[75011]: pgmap v1198: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 3.7 KiB/s wr, 79 op/s
Nov 22 03:57:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Nov 22 03:57:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Nov 22 03:57:09 compute-0 nova_compute[253461]: 2025-11-22 03:57:09.534 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:57:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/474870171' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Nov 22 03:57:09 compute-0 ceph-mon[75011]: osdmap e233: 3 total, 3 up, 3 in
Nov 22 03:57:09 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/474870171' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Nov 22 03:57:09 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Nov 22 03:57:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 47 KiB/s wr, 179 op/s
Nov 22 03:57:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Nov 22 03:57:11 compute-0 ceph-mon[75011]: osdmap e234: 3 total, 3 up, 3 in
Nov 22 03:57:11 compute-0 ceph-mon[75011]: pgmap v1201: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 47 KiB/s wr, 179 op/s
Nov 22 03:57:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Nov 22 03:57:11 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Nov 22 03:57:12 compute-0 nova_compute[253461]: 2025-11-22 03:57:12.141 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 43 KiB/s wr, 172 op/s
Nov 22 03:57:12 compute-0 ceph-mon[75011]: osdmap e235: 3 total, 3 up, 3 in
Nov 22 03:57:12 compute-0 ceph-mon[75011]: pgmap v1203: 305 pgs: 305 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 43 KiB/s wr, 172 op/s
Nov 22 03:57:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Nov 22 03:57:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Nov 22 03:57:13 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Nov 22 03:57:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 8.0 KiB/s wr, 112 op/s
Nov 22 03:57:14 compute-0 ovn_controller[152691]: 2025-11-22T03:57:14Z|00105|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 22 03:57:14 compute-0 nova_compute[253461]: 2025-11-22 03:57:14.538 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:14 compute-0 ceph-mon[75011]: osdmap e236: 3 total, 3 up, 3 in
Nov 22 03:57:14 compute-0 ceph-mon[75011]: pgmap v1205: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 8.0 KiB/s wr, 112 op/s
Nov 22 03:57:15 compute-0 podman[273648]: 2025-11-22 03:57:15.426759071 +0000 UTC m=+0.085561666 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 03:57:15 compute-0 podman[273649]: 2025-11-22 03:57:15.473289138 +0000 UTC m=+0.133047093 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:57:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Nov 22 03:57:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Nov 22 03:57:15 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Nov 22 03:57:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 6.3 KiB/s wr, 88 op/s
Nov 22 03:57:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:16 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2149296088' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:16 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2149296088' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Nov 22 03:57:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Nov 22 03:57:16 compute-0 ceph-mon[75011]: osdmap e237: 3 total, 3 up, 3 in
Nov 22 03:57:16 compute-0 ceph-mon[75011]: pgmap v1207: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 6.3 KiB/s wr, 88 op/s
Nov 22 03:57:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2149296088' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2149296088' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:17 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Nov 22 03:57:17 compute-0 nova_compute[253461]: 2025-11-22 03:57:17.144 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:17 compute-0 ceph-mon[75011]: osdmap e238: 3 total, 3 up, 3 in
Nov 22 03:57:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 6.3 KiB/s wr, 77 op/s
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.267 253465 DEBUG oslo_concurrency.lockutils [None req-516d9b16-1e35-4e11-8d2c-5736368ada09 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.268 253465 DEBUG oslo_concurrency.lockutils [None req-516d9b16-1e35-4e11-8d2c-5736368ada09 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.283 253465 INFO nova.compute.manager [None req-516d9b16-1e35-4e11-8d2c-5736368ada09 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Detaching volume ee58bbe5-d9ea-4d33-9f85-b75e2aa4ec1c
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.475 253465 INFO nova.virt.block_device [None req-516d9b16-1e35-4e11-8d2c-5736368ada09 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Attempting to driver detach volume ee58bbe5-d9ea-4d33-9f85-b75e2aa4ec1c from mountpoint /dev/vdb
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.487 253465 DEBUG nova.virt.libvirt.driver [None req-516d9b16-1e35-4e11-8d2c-5736368ada09 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Attempting to detach device vdb from instance e11a8b93-8ac0-460e-8780-bd0a8525f6ad from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.488 253465 DEBUG nova.virt.libvirt.guest [None req-516d9b16-1e35-4e11-8d2c-5736368ada09 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:57:18 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:57:18 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-ee58bbe5-d9ea-4d33-9f85-b75e2aa4ec1c">
Nov 22 03:57:18 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:57:18 compute-0 nova_compute[253461]:   </source>
Nov 22 03:57:18 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:57:18 compute-0 nova_compute[253461]:   <serial>ee58bbe5-d9ea-4d33-9f85-b75e2aa4ec1c</serial>
Nov 22 03:57:18 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:57:18 compute-0 nova_compute[253461]: </disk>
Nov 22 03:57:18 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.521 253465 INFO nova.virt.libvirt.driver [None req-516d9b16-1e35-4e11-8d2c-5736368ada09 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Successfully detached device vdb from instance e11a8b93-8ac0-460e-8780-bd0a8525f6ad from the persistent domain config.
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.522 253465 DEBUG nova.virt.libvirt.driver [None req-516d9b16-1e35-4e11-8d2c-5736368ada09 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance e11a8b93-8ac0-460e-8780-bd0a8525f6ad from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.522 253465 DEBUG nova.virt.libvirt.guest [None req-516d9b16-1e35-4e11-8d2c-5736368ada09 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:57:18 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:57:18 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-ee58bbe5-d9ea-4d33-9f85-b75e2aa4ec1c">
Nov 22 03:57:18 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:57:18 compute-0 nova_compute[253461]:   </source>
Nov 22 03:57:18 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:57:18 compute-0 nova_compute[253461]:   <serial>ee58bbe5-d9ea-4d33-9f85-b75e2aa4ec1c</serial>
Nov 22 03:57:18 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:57:18 compute-0 nova_compute[253461]: </disk>
Nov 22 03:57:18 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.620 253465 DEBUG oslo_concurrency.lockutils [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.659 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763783838.6589262, e11a8b93-8ac0-460e-8780-bd0a8525f6ad => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.662 253465 DEBUG nova.virt.libvirt.driver [None req-516d9b16-1e35-4e11-8d2c-5736368ada09 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance e11a8b93-8ac0-460e-8780-bd0a8525f6ad _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.664 253465 INFO nova.virt.libvirt.driver [None req-516d9b16-1e35-4e11-8d2c-5736368ada09 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Successfully detached device vdb from instance e11a8b93-8ac0-460e-8780-bd0a8525f6ad from the live domain config.
Nov 22 03:57:18 compute-0 nova_compute[253461]: 2025-11-22 03:57:18.966 253465 DEBUG nova.objects.instance [None req-516d9b16-1e35-4e11-8d2c-5736368ada09 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lazy-loading 'flavor' on Instance uuid e11a8b93-8ac0-460e-8780-bd0a8525f6ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:57:19 compute-0 ceph-mon[75011]: pgmap v1209: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 293 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 6.3 KiB/s wr, 77 op/s
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.028 253465 DEBUG oslo_concurrency.lockutils [None req-516d9b16-1e35-4e11-8d2c-5736368ada09 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.029 253465 DEBUG oslo_concurrency.lockutils [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.030 253465 DEBUG oslo_concurrency.lockutils [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.030 253465 DEBUG oslo_concurrency.lockutils [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.030 253465 DEBUG oslo_concurrency.lockutils [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.031 253465 INFO nova.compute.manager [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Terminating instance
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.033 253465 DEBUG nova.compute.manager [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 03:57:19 compute-0 kernel: tape6035774-e6 (unregistering): left promiscuous mode
Nov 22 03:57:19 compute-0 NetworkManager[48916]: <info>  [1763783839.0924] device (tape6035774-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 03:57:19 compute-0 ovn_controller[152691]: 2025-11-22T03:57:19Z|00106|binding|INFO|Releasing lport e6035774-e683-4695-8e35-ced236ccbefb from this chassis (sb_readonly=0)
Nov 22 03:57:19 compute-0 ovn_controller[152691]: 2025-11-22T03:57:19Z|00107|binding|INFO|Setting lport e6035774-e683-4695-8e35-ced236ccbefb down in Southbound
Nov 22 03:57:19 compute-0 ovn_controller[152691]: 2025-11-22T03:57:19Z|00108|binding|INFO|Removing iface tape6035774-e6 ovn-installed in OVS
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.104 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.106 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.115 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:72:17 10.100.0.4'], port_security=['fa:16:3e:9c:72:17 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e11a8b93-8ac0-460e-8780-bd0a8525f6ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5846275e26354bb095399d10c8b52285', 'neutron:revision_number': '4', 'neutron:security_group_ids': '225d45e9-bf16-413b-bfdd-6faa7865b9ba', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3e3d611-3c95-4b51-bc26-a179069ce7f3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=e6035774-e683-4695-8e35-ced236ccbefb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.117 162689 INFO neutron.agent.ovn.metadata.agent [-] Port e6035774-e683-4695-8e35-ced236ccbefb in datapath 4c32f371-ff20-4759-bfb3-24316a8c7a57 unbound from our chassis
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.119 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c32f371-ff20-4759-bfb3-24316a8c7a57, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.120 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[678e940e-35d6-4b85-953b-cb98060a1207]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.120 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57 namespace which is not needed anymore
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.127 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:19 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 22 03:57:19 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 15.410s CPU time.
Nov 22 03:57:19 compute-0 systemd-machined[215728]: Machine qemu-9-instance-00000009 terminated.
Nov 22 03:57:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:57:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3761410129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.270 253465 INFO nova.virt.libvirt.driver [-] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Instance destroyed successfully.
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.270 253465 DEBUG nova.objects.instance [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lazy-loading 'resources' on Instance uuid e11a8b93-8ac0-460e-8780-bd0a8525f6ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:57:19 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[273512]: [NOTICE]   (273516) : haproxy version is 2.8.14-c23fe91
Nov 22 03:57:19 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[273512]: [NOTICE]   (273516) : path to executable is /usr/sbin/haproxy
Nov 22 03:57:19 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[273512]: [WARNING]  (273516) : Exiting Master process...
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.281 253465 DEBUG nova.virt.libvirt.vif [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T03:56:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1361792656',display_name='tempest-VolumesSnapshotTestJSON-instance-1361792656',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1361792656',id=9,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAAZaL9sMNYeLA8X2G3DMletzSSJ/2V4CMJuVkQvn0yEKu0ayBQxH4M7TkumL22T2fBpR0Jgyf4NabxDKgpmUkL2K6MULdzTCQ3NveNhT5jt1Nc412S33JpTt4iAhAEaIw==',key_name='tempest-keypair-1025410891',keypairs=<?>,launch_index=0,launched_at=2025-11-22T03:56:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5846275e26354bb095399d10c8b52285',ramdisk_id='',reservation_id='r-lmbuad4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-724001677',owner_user_name='tempest-VolumesSnapshotTestJSON-724001677-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T03:56:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='323c39d407144b438e9fbcdc7c67710e',uuid=e11a8b93-8ac0-460e-8780-bd0a8525f6ad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e6035774-e683-4695-8e35-ced236ccbefb", "address": "fa:16:3e:9c:72:17", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6035774-e6", "ovs_interfaceid": "e6035774-e683-4695-8e35-ced236ccbefb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.282 253465 DEBUG nova.network.os_vif_util [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Converting VIF {"id": "e6035774-e683-4695-8e35-ced236ccbefb", "address": "fa:16:3e:9c:72:17", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6035774-e6", "ovs_interfaceid": "e6035774-e683-4695-8e35-ced236ccbefb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:57:19 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[273512]: [ALERT]    (273516) : Current worker (273518) exited with code 143 (Terminated)
Nov 22 03:57:19 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[273512]: [WARNING]  (273516) : All workers exited. Exiting... (0)
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.282 253465 DEBUG nova.network.os_vif_util [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9c:72:17,bridge_name='br-int',has_traffic_filtering=True,id=e6035774-e683-4695-8e35-ced236ccbefb,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6035774-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.282 253465 DEBUG os_vif [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:72:17,bridge_name='br-int',has_traffic_filtering=True,id=e6035774-e683-4695-8e35-ced236ccbefb,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6035774-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.284 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.284 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6035774-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.286 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:19 compute-0 systemd[1]: libpod-c917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7.scope: Deactivated successfully.
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.288 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:57:19 compute-0 conmon[273512]: conmon c917a9ba3391376db401 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7.scope/container/memory.events
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.292 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:19 compute-0 podman[273719]: 2025-11-22 03:57:19.292728602 +0000 UTC m=+0.059050749 container died c917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.295 253465 INFO os_vif [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9c:72:17,bridge_name='br-int',has_traffic_filtering=True,id=e6035774-e683-4695-8e35-ced236ccbefb,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6035774-e6')
Nov 22 03:57:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7-userdata-shm.mount: Deactivated successfully.
Nov 22 03:57:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-602e680e9c136f8807b65504d019ce4232e9d23016e484821c54c8f11e9093e6-merged.mount: Deactivated successfully.
Nov 22 03:57:19 compute-0 podman[273719]: 2025-11-22 03:57:19.343952736 +0000 UTC m=+0.110274893 container cleanup c917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:57:19 compute-0 systemd[1]: libpod-conmon-c917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7.scope: Deactivated successfully.
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.426 253465 DEBUG nova.compute.manager [req-4f702cdd-5252-415c-a557-c586d6bdefdf req-dcc43055-8eb8-4972-bd0e-0dd44e03e169 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Received event network-vif-unplugged-e6035774-e683-4695-8e35-ced236ccbefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.428 253465 DEBUG oslo_concurrency.lockutils [req-4f702cdd-5252-415c-a557-c586d6bdefdf req-dcc43055-8eb8-4972-bd0e-0dd44e03e169 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.429 253465 DEBUG oslo_concurrency.lockutils [req-4f702cdd-5252-415c-a557-c586d6bdefdf req-dcc43055-8eb8-4972-bd0e-0dd44e03e169 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.429 253465 DEBUG oslo_concurrency.lockutils [req-4f702cdd-5252-415c-a557-c586d6bdefdf req-dcc43055-8eb8-4972-bd0e-0dd44e03e169 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.430 253465 DEBUG nova.compute.manager [req-4f702cdd-5252-415c-a557-c586d6bdefdf req-dcc43055-8eb8-4972-bd0e-0dd44e03e169 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] No waiting events found dispatching network-vif-unplugged-e6035774-e683-4695-8e35-ced236ccbefb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.430 253465 DEBUG nova.compute.manager [req-4f702cdd-5252-415c-a557-c586d6bdefdf req-dcc43055-8eb8-4972-bd0e-0dd44e03e169 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Received event network-vif-unplugged-e6035774-e683-4695-8e35-ced236ccbefb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 03:57:19 compute-0 podman[273779]: 2025-11-22 03:57:19.434707689 +0000 UTC m=+0.051847368 container remove c917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.443 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[bb8f521a-864e-44cc-8fb7-d987f440d951]: (4, ('Sat Nov 22 03:57:19 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57 (c917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7)\nc917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7\nSat Nov 22 03:57:19 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57 (c917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7)\nc917a9ba3391376db401c014a2bf0d06510346d0ef4e1df356025ebf9fb657a7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.446 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f30bb177-443b-4dbc-9d0c-abedad3b7d16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.447 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c32f371-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.449 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:19 compute-0 kernel: tap4c32f371-f0: left promiscuous mode
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.473 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.477 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[985e40e9-f951-4eb4-a474-f18a40130e3c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.492 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[adfd9a1c-7e4c-4751-afa7-f97316c31f6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.494 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d34b70a6-ea2f-4403-9a30-7e96bba5ceb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.510 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[21a2aa99-454b-4e5d-b490-1631cec7a965]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405373, 'reachable_time': 44400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273794, 'error': None, 'target': 'ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.516 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 03:57:19 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:19.516 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[30c91997-3ce6-4c3a-b1be-c497eae0d138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d4c32f371\x2dff20\x2d4759\x2dbfb3\x2d24316a8c7a57.mount: Deactivated successfully.
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.755 253465 INFO nova.virt.libvirt.driver [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Deleting instance files /var/lib/nova/instances/e11a8b93-8ac0-460e-8780-bd0a8525f6ad_del
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.756 253465 INFO nova.virt.libvirt.driver [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Deletion of /var/lib/nova/instances/e11a8b93-8ac0-460e-8780-bd0a8525f6ad_del complete
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.839 253465 INFO nova.compute.manager [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Took 0.81 seconds to destroy the instance on the hypervisor.
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.840 253465 DEBUG oslo.service.loopingcall [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.840 253465 DEBUG nova.compute.manager [-] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 03:57:19 compute-0 nova_compute[253461]: 2025-11-22 03:57:19.841 253465 DEBUG nova.network.neutron [-] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 03:57:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Nov 22 03:57:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3761410129' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Nov 22 03:57:20 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Nov 22 03:57:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 293 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 6.5 KiB/s wr, 92 op/s
Nov 22 03:57:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Nov 22 03:57:21 compute-0 ceph-mon[75011]: osdmap e239: 3 total, 3 up, 3 in
Nov 22 03:57:21 compute-0 ceph-mon[75011]: pgmap v1211: 305 pgs: 305 active+clean; 293 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 6.5 KiB/s wr, 92 op/s
Nov 22 03:57:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Nov 22 03:57:21 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.055 253465 DEBUG nova.network.neutron [-] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.084 253465 INFO nova.compute.manager [-] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Took 1.24 seconds to deallocate network for instance.
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.241 253465 DEBUG nova.compute.manager [req-c1fcf791-292c-4182-bb88-868e94379adf req-7b37906e-9eb0-4823-a122-927421b74773 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Received event network-vif-deleted-e6035774-e683-4695-8e35-ced236ccbefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.262 253465 WARNING nova.volume.cinder [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Attachment b9569d65-e9c5-49d9-9572-c37ed6321405 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = b9569d65-e9c5-49d9-9572-c37ed6321405. (HTTP 404) (Request-ID: req-6395e368-ef6a-41c9-a02b-9568f7a541c9)
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.263 253465 INFO nova.compute.manager [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Took 0.18 seconds to detach 1 volumes for instance.
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.321 253465 DEBUG oslo_concurrency.lockutils [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.322 253465 DEBUG oslo_concurrency.lockutils [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.412 253465 DEBUG oslo_concurrency.processutils [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.513 253465 DEBUG nova.compute.manager [req-c063626a-f90c-4953-b6ad-22c3b7e4fb25 req-c4731410-b2f8-4711-b013-8a8f3de5859b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Received event network-vif-plugged-e6035774-e683-4695-8e35-ced236ccbefb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.515 253465 DEBUG oslo_concurrency.lockutils [req-c063626a-f90c-4953-b6ad-22c3b7e4fb25 req-c4731410-b2f8-4711-b013-8a8f3de5859b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.515 253465 DEBUG oslo_concurrency.lockutils [req-c063626a-f90c-4953-b6ad-22c3b7e4fb25 req-c4731410-b2f8-4711-b013-8a8f3de5859b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.516 253465 DEBUG oslo_concurrency.lockutils [req-c063626a-f90c-4953-b6ad-22c3b7e4fb25 req-c4731410-b2f8-4711-b013-8a8f3de5859b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.516 253465 DEBUG nova.compute.manager [req-c063626a-f90c-4953-b6ad-22c3b7e4fb25 req-c4731410-b2f8-4711-b013-8a8f3de5859b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] No waiting events found dispatching network-vif-plugged-e6035774-e683-4695-8e35-ced236ccbefb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.517 253465 WARNING nova.compute.manager [req-c063626a-f90c-4953-b6ad-22c3b7e4fb25 req-c4731410-b2f8-4711-b013-8a8f3de5859b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Received unexpected event network-vif-plugged-e6035774-e683-4695-8e35-ced236ccbefb for instance with vm_state deleted and task_state None.
Nov 22 03:57:21 compute-0 sudo[273816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:57:21 compute-0 sudo[273816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:21 compute-0 sudo[273816]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:21 compute-0 sudo[273841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:57:21 compute-0 sudo[273841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:21 compute-0 sudo[273841]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:57:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1286924682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.891 253465 DEBUG oslo_concurrency.processutils [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:21 compute-0 nova_compute[253461]: 2025-11-22 03:57:21.899 253465 DEBUG nova.compute.provider_tree [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:57:21 compute-0 sudo[273866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:57:21 compute-0 sudo[273866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:21 compute-0 sudo[273866]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:22 compute-0 sudo[273893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:57:22 compute-0 sudo[273893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:22 compute-0 nova_compute[253461]: 2025-11-22 03:57:22.048 253465 DEBUG nova.scheduler.client.report [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:57:22 compute-0 ceph-mon[75011]: osdmap e240: 3 total, 3 up, 3 in
Nov 22 03:57:22 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1286924682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:57:22 compute-0 nova_compute[253461]: 2025-11-22 03:57:22.099 253465 DEBUG oslo_concurrency.lockutils [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:22 compute-0 nova_compute[253461]: 2025-11-22 03:57:22.143 253465 INFO nova.scheduler.client.report [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Deleted allocations for instance e11a8b93-8ac0-460e-8780-bd0a8525f6ad
Nov 22 03:57:22 compute-0 nova_compute[253461]: 2025-11-22 03:57:22.146 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 264 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 7.2 KiB/s wr, 147 op/s
Nov 22 03:57:22 compute-0 nova_compute[253461]: 2025-11-22 03:57:22.254 253465 DEBUG oslo_concurrency.lockutils [None req-dcbedbc6-755e-4a7c-a0af-a107ba25786b 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "e11a8b93-8ac0-460e-8780-bd0a8525f6ad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:22 compute-0 sudo[273893]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 03:57:22 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:57:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:57:22 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:57:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:57:22 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:57:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:57:22 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:57:22 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 8e160366-8dde-4b48-ace8-346da2eda0fe does not exist
Nov 22 03:57:22 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev edb34510-ec24-4d98-8b15-6dd32063ff51 does not exist
Nov 22 03:57:22 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 493cb637-8c39-4da9-8c81-3e70fd3531db does not exist
Nov 22 03:57:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:57:22 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:57:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:57:22 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:57:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:57:22 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:57:22 compute-0 sudo[273949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:57:22 compute-0 sudo[273949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:22 compute-0 sudo[273949]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:22 compute-0 sudo[273974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:57:22 compute-0 sudo[273974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:22 compute-0 sudo[273974]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:22 compute-0 sudo[273999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:57:22 compute-0 sudo[273999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:22 compute-0 sudo[273999]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:23.010 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:23.011 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:23.011 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Nov 22 03:57:23 compute-0 sudo[274024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:57:23 compute-0 sudo[274024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:23 compute-0 ceph-mon[75011]: pgmap v1213: 305 pgs: 305 active+clean; 264 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 7.2 KiB/s wr, 147 op/s
Nov 22 03:57:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 03:57:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:57:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:57:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:57:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:57:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:57:23 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:57:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Nov 22 03:57:23 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Nov 22 03:57:23 compute-0 podman[274089]: 2025-11-22 03:57:23.522392824 +0000 UTC m=+0.069455606 container create f3b87808b8510202dfb67fdfe159b7fd1059663888832f0e11d5af22b6c13590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cannon, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:57:23 compute-0 podman[274089]: 2025-11-22 03:57:23.48712558 +0000 UTC m=+0.034188332 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:57:23 compute-0 systemd[1]: Started libpod-conmon-f3b87808b8510202dfb67fdfe159b7fd1059663888832f0e11d5af22b6c13590.scope.
Nov 22 03:57:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:57:23 compute-0 podman[274089]: 2025-11-22 03:57:23.716686448 +0000 UTC m=+0.263749240 container init f3b87808b8510202dfb67fdfe159b7fd1059663888832f0e11d5af22b6c13590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:57:23 compute-0 podman[274089]: 2025-11-22 03:57:23.741653699 +0000 UTC m=+0.288716451 container start f3b87808b8510202dfb67fdfe159b7fd1059663888832f0e11d5af22b6c13590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:57:23 compute-0 hardcore_cannon[274105]: 167 167
Nov 22 03:57:23 compute-0 systemd[1]: libpod-f3b87808b8510202dfb67fdfe159b7fd1059663888832f0e11d5af22b6c13590.scope: Deactivated successfully.
Nov 22 03:57:23 compute-0 conmon[274105]: conmon f3b87808b8510202dfb6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f3b87808b8510202dfb67fdfe159b7fd1059663888832f0e11d5af22b6c13590.scope/container/memory.events
Nov 22 03:57:23 compute-0 podman[274089]: 2025-11-22 03:57:23.777645482 +0000 UTC m=+0.324708254 container attach f3b87808b8510202dfb67fdfe159b7fd1059663888832f0e11d5af22b6c13590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cannon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 03:57:23 compute-0 podman[274089]: 2025-11-22 03:57:23.778851425 +0000 UTC m=+0.325914207 container died f3b87808b8510202dfb67fdfe159b7fd1059663888832f0e11d5af22b6c13590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cannon, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:57:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5885480946f1287ca7b59f82c859b1f67fafeb02df4c72368dfed05028e352f-merged.mount: Deactivated successfully.
Nov 22 03:57:23 compute-0 podman[274089]: 2025-11-22 03:57:23.862865221 +0000 UTC m=+0.409928003 container remove f3b87808b8510202dfb67fdfe159b7fd1059663888832f0e11d5af22b6c13590 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Nov 22 03:57:23 compute-0 systemd[1]: libpod-conmon-f3b87808b8510202dfb67fdfe159b7fd1059663888832f0e11d5af22b6c13590.scope: Deactivated successfully.
Nov 22 03:57:24 compute-0 podman[274131]: 2025-11-22 03:57:24.100275104 +0000 UTC m=+0.073420729 container create 435cbd32d2f2aa8cb944d4345ef20ab7ab5d3bf38b60ac8d059d7b7b7673eb3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhaskara, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:57:24 compute-0 ceph-mon[75011]: osdmap e241: 3 total, 3 up, 3 in
Nov 22 03:57:24 compute-0 systemd[1]: Started libpod-conmon-435cbd32d2f2aa8cb944d4345ef20ab7ab5d3bf38b60ac8d059d7b7b7673eb3f.scope.
Nov 22 03:57:24 compute-0 podman[274131]: 2025-11-22 03:57:24.077118494 +0000 UTC m=+0.050264139 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:57:24 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cdadd40cb98ab7a802b0a83d3c5720a5ea978ecf6ab6eaa9d6c01251e7e50d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cdadd40cb98ab7a802b0a83d3c5720a5ea978ecf6ab6eaa9d6c01251e7e50d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cdadd40cb98ab7a802b0a83d3c5720a5ea978ecf6ab6eaa9d6c01251e7e50d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cdadd40cb98ab7a802b0a83d3c5720a5ea978ecf6ab6eaa9d6c01251e7e50d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cdadd40cb98ab7a802b0a83d3c5720a5ea978ecf6ab6eaa9d6c01251e7e50d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:24 compute-0 podman[274131]: 2025-11-22 03:57:24.203114268 +0000 UTC m=+0.176259873 container init 435cbd32d2f2aa8cb944d4345ef20ab7ab5d3bf38b60ac8d059d7b7b7673eb3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhaskara, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:57:24 compute-0 podman[274131]: 2025-11-22 03:57:24.211151162 +0000 UTC m=+0.184296787 container start 435cbd32d2f2aa8cb944d4345ef20ab7ab5d3bf38b60ac8d059d7b7b7673eb3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhaskara, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:57:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 213 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 7.3 KiB/s wr, 146 op/s
Nov 22 03:57:24 compute-0 podman[274131]: 2025-11-22 03:57:24.21663112 +0000 UTC m=+0.189776795 container attach 435cbd32d2f2aa8cb944d4345ef20ab7ab5d3bf38b60ac8d059d7b7b7673eb3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:57:24 compute-0 nova_compute[253461]: 2025-11-22 03:57:24.303 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Nov 22 03:57:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Nov 22 03:57:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Nov 22 03:57:25 compute-0 ceph-mon[75011]: pgmap v1215: 305 pgs: 305 active+clean; 213 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 7.3 KiB/s wr, 146 op/s
Nov 22 03:57:25 compute-0 jovial_bhaskara[274148]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:57:25 compute-0 jovial_bhaskara[274148]: --> relative data size: 1.0
Nov 22 03:57:25 compute-0 jovial_bhaskara[274148]: --> All data devices are unavailable
Nov 22 03:57:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/923677825' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/923677825' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:25 compute-0 systemd[1]: libpod-435cbd32d2f2aa8cb944d4345ef20ab7ab5d3bf38b60ac8d059d7b7b7673eb3f.scope: Deactivated successfully.
Nov 22 03:57:25 compute-0 podman[274131]: 2025-11-22 03:57:25.519749512 +0000 UTC m=+1.492895177 container died 435cbd32d2f2aa8cb944d4345ef20ab7ab5d3bf38b60ac8d059d7b7b7673eb3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhaskara, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:57:25 compute-0 systemd[1]: libpod-435cbd32d2f2aa8cb944d4345ef20ab7ab5d3bf38b60ac8d059d7b7b7673eb3f.scope: Consumed 1.235s CPU time.
Nov 22 03:57:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cdadd40cb98ab7a802b0a83d3c5720a5ea978ecf6ab6eaa9d6c01251e7e50d1-merged.mount: Deactivated successfully.
Nov 22 03:57:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3321118257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3321118257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:25 compute-0 podman[274131]: 2025-11-22 03:57:25.583148137 +0000 UTC m=+1.556293722 container remove 435cbd32d2f2aa8cb944d4345ef20ab7ab5d3bf38b60ac8d059d7b7b7673eb3f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:57:25 compute-0 systemd[1]: libpod-conmon-435cbd32d2f2aa8cb944d4345ef20ab7ab5d3bf38b60ac8d059d7b7b7673eb3f.scope: Deactivated successfully.
Nov 22 03:57:25 compute-0 sudo[274024]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:25 compute-0 sudo[274190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:57:25 compute-0 sudo[274190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:25 compute-0 sudo[274190]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:25 compute-0 sudo[274215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:57:25 compute-0 sudo[274215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:25 compute-0 sudo[274215]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:25 compute-0 sudo[274240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:57:25 compute-0 sudo[274240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:25 compute-0 sudo[274240]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Nov 22 03:57:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Nov 22 03:57:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Nov 22 03:57:25 compute-0 sudo[274265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:57:25 compute-0 sudo[274265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.090 253465 DEBUG oslo_concurrency.lockutils [None req-5477fbb2-705f-4965-acd0-df903faae90a e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "c036cf5d-81f0-4f9e-9f31-67298476667c" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.092 253465 DEBUG oslo_concurrency.lockutils [None req-5477fbb2-705f-4965-acd0-df903faae90a e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.113 253465 INFO nova.compute.manager [None req-5477fbb2-705f-4965-acd0-df903faae90a e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Detaching volume 8dd7be21-cc11-4318-98e2-3ffa1436ab12
Nov 22 03:57:26 compute-0 ceph-mon[75011]: osdmap e242: 3 total, 3 up, 3 in
Nov 22 03:57:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/923677825' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/923677825' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3321118257' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3321118257' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:26 compute-0 ceph-mon[75011]: osdmap e243: 3 total, 3 up, 3 in
Nov 22 03:57:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 213 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 5.2 KiB/s wr, 124 op/s
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.316 253465 INFO nova.virt.block_device [None req-5477fbb2-705f-4965-acd0-df903faae90a e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Attempting to driver detach volume 8dd7be21-cc11-4318-98e2-3ffa1436ab12 from mountpoint /dev/vdb
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.331 253465 DEBUG nova.virt.libvirt.driver [None req-5477fbb2-705f-4965-acd0-df903faae90a e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Attempting to detach device vdb from instance c036cf5d-81f0-4f9e-9f31-67298476667c from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.332 253465 DEBUG nova.virt.libvirt.guest [None req-5477fbb2-705f-4965-acd0-df903faae90a e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:57:26 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:57:26 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-8dd7be21-cc11-4318-98e2-3ffa1436ab12">
Nov 22 03:57:26 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:57:26 compute-0 nova_compute[253461]:   </source>
Nov 22 03:57:26 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:57:26 compute-0 nova_compute[253461]:   <serial>8dd7be21-cc11-4318-98e2-3ffa1436ab12</serial>
Nov 22 03:57:26 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:57:26 compute-0 nova_compute[253461]: </disk>
Nov 22 03:57:26 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.342 253465 INFO nova.virt.libvirt.driver [None req-5477fbb2-705f-4965-acd0-df903faae90a e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Successfully detached device vdb from instance c036cf5d-81f0-4f9e-9f31-67298476667c from the persistent domain config.
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.343 253465 DEBUG nova.virt.libvirt.driver [None req-5477fbb2-705f-4965-acd0-df903faae90a e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance c036cf5d-81f0-4f9e-9f31-67298476667c from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.346 253465 DEBUG nova.virt.libvirt.guest [None req-5477fbb2-705f-4965-acd0-df903faae90a e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:57:26 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:57:26 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-8dd7be21-cc11-4318-98e2-3ffa1436ab12">
Nov 22 03:57:26 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:57:26 compute-0 nova_compute[253461]:   </source>
Nov 22 03:57:26 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:57:26 compute-0 nova_compute[253461]:   <serial>8dd7be21-cc11-4318-98e2-3ffa1436ab12</serial>
Nov 22 03:57:26 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:57:26 compute-0 nova_compute[253461]: </disk>
Nov 22 03:57:26 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:57:26 compute-0 podman[274328]: 2025-11-22 03:57:26.360065194 +0000 UTC m=+0.057006118 container create ed57af37bce3877403b6ce2176ee4df434fb0bbdbe9e3da2771e8d7334732e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 03:57:26 compute-0 systemd[1]: Started libpod-conmon-ed57af37bce3877403b6ce2176ee4df434fb0bbdbe9e3da2771e8d7334732e75.scope.
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.412 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763783846.4122794, c036cf5d-81f0-4f9e-9f31-67298476667c => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.415 253465 DEBUG nova.virt.libvirt.driver [None req-5477fbb2-705f-4965-acd0-df903faae90a e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance c036cf5d-81f0-4f9e-9f31-67298476667c _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.419 253465 INFO nova.virt.libvirt.driver [None req-5477fbb2-705f-4965-acd0-df903faae90a e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Successfully detached device vdb from instance c036cf5d-81f0-4f9e-9f31-67298476667c from the live domain config.
Nov 22 03:57:26 compute-0 podman[274328]: 2025-11-22 03:57:26.333390421 +0000 UTC m=+0.030331445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:57:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:57:26 compute-0 podman[274328]: 2025-11-22 03:57:26.457536134 +0000 UTC m=+0.154477058 container init ed57af37bce3877403b6ce2176ee4df434fb0bbdbe9e3da2771e8d7334732e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclaren, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:57:26 compute-0 podman[274328]: 2025-11-22 03:57:26.468646868 +0000 UTC m=+0.165587792 container start ed57af37bce3877403b6ce2176ee4df434fb0bbdbe9e3da2771e8d7334732e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 03:57:26 compute-0 podman[274328]: 2025-11-22 03:57:26.473243902 +0000 UTC m=+0.170184826 container attach ed57af37bce3877403b6ce2176ee4df434fb0bbdbe9e3da2771e8d7334732e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclaren, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:57:26 compute-0 serene_mclaren[274345]: 167 167
Nov 22 03:57:26 compute-0 systemd[1]: libpod-ed57af37bce3877403b6ce2176ee4df434fb0bbdbe9e3da2771e8d7334732e75.scope: Deactivated successfully.
Nov 22 03:57:26 compute-0 podman[274328]: 2025-11-22 03:57:26.47582602 +0000 UTC m=+0.172766984 container died ed57af37bce3877403b6ce2176ee4df434fb0bbdbe9e3da2771e8d7334732e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclaren, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:57:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f01b75ecc9c7c89d4e8be6187dfa3841e5e872b1e0760d930931196f01cae5e-merged.mount: Deactivated successfully.
Nov 22 03:57:26 compute-0 podman[274328]: 2025-11-22 03:57:26.530752828 +0000 UTC m=+0.227693792 container remove ed57af37bce3877403b6ce2176ee4df434fb0bbdbe9e3da2771e8d7334732e75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 03:57:26 compute-0 systemd[1]: libpod-conmon-ed57af37bce3877403b6ce2176ee4df434fb0bbdbe9e3da2771e8d7334732e75.scope: Deactivated successfully.
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.620 253465 DEBUG nova.objects.instance [None req-5477fbb2-705f-4965-acd0-df903faae90a e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lazy-loading 'flavor' on Instance uuid c036cf5d-81f0-4f9e-9f31-67298476667c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:57:26 compute-0 nova_compute[253461]: 2025-11-22 03:57:26.686 253465 DEBUG oslo_concurrency.lockutils [None req-5477fbb2-705f-4965-acd0-df903faae90a e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3470437539' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3470437539' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:26 compute-0 podman[274371]: 2025-11-22 03:57:26.783260392 +0000 UTC m=+0.061765215 container create 67ae53aca43ddd23604f6d638bfdf24ed2ec36893db68c3e59da3be9b1f1ecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:57:26 compute-0 systemd[1]: Started libpod-conmon-67ae53aca43ddd23604f6d638bfdf24ed2ec36893db68c3e59da3be9b1f1ecea.scope.
Nov 22 03:57:26 compute-0 podman[274371]: 2025-11-22 03:57:26.757643266 +0000 UTC m=+0.036148099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:57:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe37705ca9c2f1fbffa91d36e8c933b0ed8b77729960a9143dbc05ac5341261/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe37705ca9c2f1fbffa91d36e8c933b0ed8b77729960a9143dbc05ac5341261/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe37705ca9c2f1fbffa91d36e8c933b0ed8b77729960a9143dbc05ac5341261/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abe37705ca9c2f1fbffa91d36e8c933b0ed8b77729960a9143dbc05ac5341261/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:26 compute-0 podman[274371]: 2025-11-22 03:57:26.885986441 +0000 UTC m=+0.164491254 container init 67ae53aca43ddd23604f6d638bfdf24ed2ec36893db68c3e59da3be9b1f1ecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 03:57:26 compute-0 podman[274371]: 2025-11-22 03:57:26.904986381 +0000 UTC m=+0.183491204 container start 67ae53aca43ddd23604f6d638bfdf24ed2ec36893db68c3e59da3be9b1f1ecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:57:26 compute-0 podman[274371]: 2025-11-22 03:57:26.909270214 +0000 UTC m=+0.187775037 container attach 67ae53aca43ddd23604f6d638bfdf24ed2ec36893db68c3e59da3be9b1f1ecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.148 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Nov 22 03:57:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Nov 22 03:57:27 compute-0 ceph-mon[75011]: pgmap v1218: 305 pgs: 305 active+clean; 213 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 5.2 KiB/s wr, 124 op/s
Nov 22 03:57:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3470437539' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3470437539' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:27 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.423 253465 DEBUG oslo_concurrency.lockutils [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "c036cf5d-81f0-4f9e-9f31-67298476667c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.424 253465 DEBUG oslo_concurrency.lockutils [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.425 253465 DEBUG oslo_concurrency.lockutils [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.425 253465 DEBUG oslo_concurrency.lockutils [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.426 253465 DEBUG oslo_concurrency.lockutils [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.427 253465 INFO nova.compute.manager [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Terminating instance
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.429 253465 DEBUG nova.compute.manager [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 03:57:27 compute-0 kernel: tap77a13178-85 (unregistering): left promiscuous mode
Nov 22 03:57:27 compute-0 NetworkManager[48916]: <info>  [1763783847.4882] device (tap77a13178-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 03:57:27 compute-0 ovn_controller[152691]: 2025-11-22T03:57:27Z|00109|binding|INFO|Releasing lport 77a13178-8559-4b2b-af3d-991a871b7351 from this chassis (sb_readonly=0)
Nov 22 03:57:27 compute-0 ovn_controller[152691]: 2025-11-22T03:57:27Z|00110|binding|INFO|Setting lport 77a13178-8559-4b2b-af3d-991a871b7351 down in Southbound
Nov 22 03:57:27 compute-0 ovn_controller[152691]: 2025-11-22T03:57:27Z|00111|binding|INFO|Removing iface tap77a13178-85 ovn-installed in OVS
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.506 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:27.512 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:a7:62 10.100.0.7'], port_security=['fa:16:3e:3d:a7:62 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'c036cf5d-81f0-4f9e-9f31-67298476667c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8e17fcbd6721457f93b2fe5018fb3534', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ab74d7d-f920-4c91-ab55-8be5e55b4e62', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d20e9a4c-63a4-481f-abc2-5dcc033feed1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=77a13178-8559-4b2b-af3d-991a871b7351) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:57:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:27.513 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 77a13178-8559-4b2b-af3d-991a871b7351 in datapath b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb unbound from our chassis
Nov 22 03:57:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:27.515 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:57:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:27.515 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[035e6a7f-ca16-4113-80d2-3e04cd52e8af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:27.516 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb namespace which is not needed anymore
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.542 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:27 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 22 03:57:27 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 15.096s CPU time.
Nov 22 03:57:27 compute-0 systemd-machined[215728]: Machine qemu-8-instance-00000008 terminated.
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.666 253465 INFO nova.virt.libvirt.driver [-] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Instance destroyed successfully.
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.667 253465 DEBUG nova.objects.instance [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lazy-loading 'resources' on Instance uuid c036cf5d-81f0-4f9e-9f31-67298476667c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]: {
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:     "0": [
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:         {
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "devices": [
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "/dev/loop3"
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             ],
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_name": "ceph_lv0",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_size": "21470642176",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "name": "ceph_lv0",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "tags": {
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.693 253465 DEBUG nova.virt.libvirt.vif [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T03:56:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1636563006',display_name='tempest-VolumesBackupsTest-instance-1636563006',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1636563006',id=8,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAZmXPkDdEmsqmkz8kqe43oFlJFd4mtzRvqTzdxQlIJdrm+TXvJOWJqYTKuDnf/jPUfL2yoATyIDrwn8REyMFcza6x9HqKqJXWCV8Fo3TAsCRy7bVFpoGuwCDzGXSnxhKA==',key_name='tempest-keypair-1370992232',keypairs=<?>,launch_index=0,launched_at=2025-11-22T03:56:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8e17fcbd6721457f93b2fe5018fb3534',ramdisk_id='',reservation_id='r-1iuw06nn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-922932240',owner_user_name='tempest-VolumesBackupsTest-922932240-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T03:56:41Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e45192a50149470daea6e26cfd2de3a9',uuid=c036cf5d-81f0-4f9e-9f31-67298476667c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "77a13178-8559-4b2b-af3d-991a871b7351", "address": "fa:16:3e:3d:a7:62", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77a13178-85", "ovs_interfaceid": "77a13178-8559-4b2b-af3d-991a871b7351", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.cluster_name": "ceph",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.crush_device_class": "",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.encrypted": "0",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.osd_id": "0",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.694 253465 DEBUG nova.network.os_vif_util [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Converting VIF {"id": "77a13178-8559-4b2b-af3d-991a871b7351", "address": "fa:16:3e:3d:a7:62", "network": {"id": "b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1574989340-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8e17fcbd6721457f93b2fe5018fb3534", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap77a13178-85", "ovs_interfaceid": "77a13178-8559-4b2b-af3d-991a871b7351", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.type": "block",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.vdo": "0"
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             },
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "type": "block",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "vg_name": "ceph_vg0"
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:         }
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:     ],
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:     "1": [
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:         {
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "devices": [
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "/dev/loop4"
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             ],
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_name": "ceph_lv1",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_size": "21470642176",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "name": "ceph_lv1",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "tags": {
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.cluster_name": "ceph",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.crush_device_class": "",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.encrypted": "0",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.osd_id": "1",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.type": "block",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.vdo": "0"
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             },
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "type": "block",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "vg_name": "ceph_vg1"
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:         }
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:     ],
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:     "2": [
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:         {
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "devices": [
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "/dev/loop5"
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             ],
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_name": "ceph_lv2",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_size": "21470642176",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "name": "ceph_lv2",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "tags": {
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.cluster_name": "ceph",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.crush_device_class": "",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.encrypted": "0",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.osd_id": "2",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.695 253465 DEBUG nova.network.os_vif_util [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3d:a7:62,bridge_name='br-int',has_traffic_filtering=True,id=77a13178-8559-4b2b-af3d-991a871b7351,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77a13178-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.type": "block",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:                 "ceph.vdo": "0"
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             },
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "type": "block",
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:             "vg_name": "ceph_vg2"
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:         }
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]:     ]
Nov 22 03:57:27 compute-0 suspicious_thompson[274387]: }
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.695 253465 DEBUG os_vif [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:a7:62,bridge_name='br-int',has_traffic_filtering=True,id=77a13178-8559-4b2b-af3d-991a871b7351,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77a13178-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 03:57:27 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[273377]: [NOTICE]   (273382) : haproxy version is 2.8.14-c23fe91
Nov 22 03:57:27 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[273377]: [NOTICE]   (273382) : path to executable is /usr/sbin/haproxy
Nov 22 03:57:27 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[273377]: [WARNING]  (273382) : Exiting Master process...
Nov 22 03:57:27 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[273377]: [ALERT]    (273382) : Current worker (273388) exited with code 143 (Terminated)
Nov 22 03:57:27 compute-0 neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb[273377]: [WARNING]  (273382) : All workers exited. Exiting... (0)
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.698 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.698 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap77a13178-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:57:27 compute-0 systemd[1]: libpod-d0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc.scope: Deactivated successfully.
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.702 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.705 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:57:27 compute-0 podman[274419]: 2025-11-22 03:57:27.706917938 +0000 UTC m=+0.064723032 container died d0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.707 253465 INFO os_vif [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3d:a7:62,bridge_name='br-int',has_traffic_filtering=True,id=77a13178-8559-4b2b-af3d-991a871b7351,network=Network(b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap77a13178-85')
Nov 22 03:57:27 compute-0 systemd[1]: libpod-67ae53aca43ddd23604f6d638bfdf24ed2ec36893db68c3e59da3be9b1f1ecea.scope: Deactivated successfully.
Nov 22 03:57:27 compute-0 conmon[274387]: conmon 67ae53aca43ddd23604f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67ae53aca43ddd23604f6d638bfdf24ed2ec36893db68c3e59da3be9b1f1ecea.scope/container/memory.events
Nov 22 03:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc-userdata-shm.mount: Deactivated successfully.
Nov 22 03:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-397b58bf0d1ea36ea251b0a28d87ebb3295969f99251d3956123bb6a1a36726b-merged.mount: Deactivated successfully.
Nov 22 03:57:27 compute-0 podman[274371]: 2025-11-22 03:57:27.745410987 +0000 UTC m=+1.023915810 container died 67ae53aca43ddd23604f6d638bfdf24ed2ec36893db68c3e59da3be9b1f1ecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.783 253465 DEBUG nova.compute.manager [req-79aa4a92-5959-437f-a67d-cdccab89e3dd req-8bf46da2-5442-4efc-927a-731db76614f7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Received event network-vif-unplugged-77a13178-8559-4b2b-af3d-991a871b7351 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.783 253465 DEBUG oslo_concurrency.lockutils [req-79aa4a92-5959-437f-a67d-cdccab89e3dd req-8bf46da2-5442-4efc-927a-731db76614f7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.784 253465 DEBUG oslo_concurrency.lockutils [req-79aa4a92-5959-437f-a67d-cdccab89e3dd req-8bf46da2-5442-4efc-927a-731db76614f7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.785 253465 DEBUG oslo_concurrency.lockutils [req-79aa4a92-5959-437f-a67d-cdccab89e3dd req-8bf46da2-5442-4efc-927a-731db76614f7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.785 253465 DEBUG nova.compute.manager [req-79aa4a92-5959-437f-a67d-cdccab89e3dd req-8bf46da2-5442-4efc-927a-731db76614f7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] No waiting events found dispatching network-vif-unplugged-77a13178-8559-4b2b-af3d-991a871b7351 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.786 253465 DEBUG nova.compute.manager [req-79aa4a92-5959-437f-a67d-cdccab89e3dd req-8bf46da2-5442-4efc-927a-731db76614f7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Received event network-vif-unplugged-77a13178-8559-4b2b-af3d-991a871b7351 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 03:57:27 compute-0 podman[274419]: 2025-11-22 03:57:27.818237456 +0000 UTC m=+0.176042540 container cleanup d0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:57:27 compute-0 systemd[1]: libpod-conmon-d0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc.scope: Deactivated successfully.
Nov 22 03:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-abe37705ca9c2f1fbffa91d36e8c933b0ed8b77729960a9143dbc05ac5341261-merged.mount: Deactivated successfully.
Nov 22 03:57:27 compute-0 podman[274487]: 2025-11-22 03:57:27.924126057 +0000 UTC m=+0.063290709 container remove d0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:57:27 compute-0 podman[274371]: 2025-11-22 03:57:27.916848716 +0000 UTC m=+1.195353499 container remove 67ae53aca43ddd23604f6d638bfdf24ed2ec36893db68c3e59da3be9b1f1ecea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:57:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:27.934 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[738d0cb7-2ce9-4935-a429-7f620fc6b060]: (4, ('Sat Nov 22 03:57:27 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb (d0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc)\nd0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc\nSat Nov 22 03:57:27 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb (d0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc)\nd0ccae870b5a997dff2451a288ffdaf8f950eeac2bfa372c00fc6e0c4b45befc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:27 compute-0 systemd[1]: libpod-conmon-67ae53aca43ddd23604f6d638bfdf24ed2ec36893db68c3e59da3be9b1f1ecea.scope: Deactivated successfully.
Nov 22 03:57:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:27.938 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c93efd-7001-4278-a312-62bcce94c3a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:27.939 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb33063bb-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.941 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:27 compute-0 kernel: tapb33063bb-90: left promiscuous mode
Nov 22 03:57:27 compute-0 sudo[274265]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:27 compute-0 nova_compute[253461]: 2025-11-22 03:57:27.974 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:27.979 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[be486b81-802c-4cb2-9ec1-9d0445880bc8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:27.999 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1b46f4d7-fe5d-4978-af37-c1e47665fee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:28.000 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[60380cb4-f164-45a3-bd30-a0bd0d26c06d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:28.017 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[11d365d7-8ae2-4bd6-b570-d2436c804dd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405240, 'reachable_time': 20513, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274514, 'error': None, 'target': 'ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:28.019 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b33063bb-98b7-49c3-9e0b-1ae7b9fe02cb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 03:57:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:28.019 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[6da23d27-a50e-455d-8c04-a2c0d91b36f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:28 compute-0 systemd[1]: run-netns-ovnmeta\x2db33063bb\x2d98b7\x2d49c3\x2d9e0b\x2d1ae7b9fe02cb.mount: Deactivated successfully.
Nov 22 03:57:28 compute-0 sudo[274500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:57:28 compute-0 sudo[274500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:28 compute-0 sudo[274500]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:28 compute-0 sudo[274529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:57:28 compute-0 sudo[274529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:28 compute-0 sudo[274529]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:28 compute-0 sudo[274554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:57:28 compute-0 sudo[274554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Nov 22 03:57:28 compute-0 sudo[274554]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Nov 22 03:57:28 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Nov 22 03:57:28 compute-0 ceph-mon[75011]: osdmap e244: 3 total, 3 up, 3 in
Nov 22 03:57:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 192 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 10 KiB/s wr, 146 op/s
Nov 22 03:57:28 compute-0 sudo[274579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:57:28 compute-0 sudo[274579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:28 compute-0 nova_compute[253461]: 2025-11-22 03:57:28.330 253465 INFO nova.virt.libvirt.driver [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Deleting instance files /var/lib/nova/instances/c036cf5d-81f0-4f9e-9f31-67298476667c_del
Nov 22 03:57:28 compute-0 nova_compute[253461]: 2025-11-22 03:57:28.332 253465 INFO nova.virt.libvirt.driver [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Deletion of /var/lib/nova/instances/c036cf5d-81f0-4f9e-9f31-67298476667c_del complete
Nov 22 03:57:28 compute-0 nova_compute[253461]: 2025-11-22 03:57:28.417 253465 INFO nova.compute.manager [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Took 0.99 seconds to destroy the instance on the hypervisor.
Nov 22 03:57:28 compute-0 nova_compute[253461]: 2025-11-22 03:57:28.418 253465 DEBUG oslo.service.loopingcall [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 03:57:28 compute-0 nova_compute[253461]: 2025-11-22 03:57:28.418 253465 DEBUG nova.compute.manager [-] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 03:57:28 compute-0 nova_compute[253461]: 2025-11-22 03:57:28.419 253465 DEBUG nova.network.neutron [-] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 03:57:28 compute-0 podman[274644]: 2025-11-22 03:57:28.763599304 +0000 UTC m=+0.055922408 container create aea77dbde00c19109aa59c73828404961f54c651e3d8ee939cae1ac8bf0227fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kapitsa, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:57:28 compute-0 systemd[1]: Started libpod-conmon-aea77dbde00c19109aa59c73828404961f54c651e3d8ee939cae1ac8bf0227fd.scope.
Nov 22 03:57:28 compute-0 podman[274644]: 2025-11-22 03:57:28.742849671 +0000 UTC m=+0.035172775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:57:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:57:28 compute-0 podman[274644]: 2025-11-22 03:57:28.859762944 +0000 UTC m=+0.152086078 container init aea77dbde00c19109aa59c73828404961f54c651e3d8ee939cae1ac8bf0227fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:57:28 compute-0 podman[274644]: 2025-11-22 03:57:28.872465496 +0000 UTC m=+0.164788630 container start aea77dbde00c19109aa59c73828404961f54c651e3d8ee939cae1ac8bf0227fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:57:28 compute-0 podman[274644]: 2025-11-22 03:57:28.876464031 +0000 UTC m=+0.168787165 container attach aea77dbde00c19109aa59c73828404961f54c651e3d8ee939cae1ac8bf0227fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kapitsa, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:57:28 compute-0 nervous_kapitsa[274660]: 167 167
Nov 22 03:57:28 compute-0 systemd[1]: libpod-aea77dbde00c19109aa59c73828404961f54c651e3d8ee939cae1ac8bf0227fd.scope: Deactivated successfully.
Nov 22 03:57:28 compute-0 podman[274644]: 2025-11-22 03:57:28.87868746 +0000 UTC m=+0.171010594 container died aea77dbde00c19109aa59c73828404961f54c651e3d8ee939cae1ac8bf0227fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:57:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a8747f2b06d816699186894a4e5f8b83f41412c14beff306b74ecb86173d5f4-merged.mount: Deactivated successfully.
Nov 22 03:57:28 compute-0 podman[274644]: 2025-11-22 03:57:28.930861785 +0000 UTC m=+0.223184908 container remove aea77dbde00c19109aa59c73828404961f54c651e3d8ee939cae1ac8bf0227fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_kapitsa, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:57:28 compute-0 systemd[1]: libpod-conmon-aea77dbde00c19109aa59c73828404961f54c651e3d8ee939cae1ac8bf0227fd.scope: Deactivated successfully.
Nov 22 03:57:29 compute-0 podman[274685]: 2025-11-22 03:57:29.152595934 +0000 UTC m=+0.064588502 container create 963ac45e99df8774021df1f59118b3878ff178ac564bf4453e4a68295c96e158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:57:29 compute-0 systemd[1]: Started libpod-conmon-963ac45e99df8774021df1f59118b3878ff178ac564bf4453e4a68295c96e158.scope.
Nov 22 03:57:29 compute-0 podman[274685]: 2025-11-22 03:57:29.126809324 +0000 UTC m=+0.038801932 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:57:29 compute-0 ceph-mon[75011]: osdmap e245: 3 total, 3 up, 3 in
Nov 22 03:57:29 compute-0 ceph-mon[75011]: pgmap v1221: 305 pgs: 305 active+clean; 192 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 10 KiB/s wr, 146 op/s
Nov 22 03:57:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d78439fda3bea5484ea5858811b5f7a6e6e375178a86b327c78316256daedc96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d78439fda3bea5484ea5858811b5f7a6e6e375178a86b327c78316256daedc96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d78439fda3bea5484ea5858811b5f7a6e6e375178a86b327c78316256daedc96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d78439fda3bea5484ea5858811b5f7a6e6e375178a86b327c78316256daedc96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Nov 22 03:57:29 compute-0 podman[274685]: 2025-11-22 03:57:29.263568759 +0000 UTC m=+0.175561327 container init 963ac45e99df8774021df1f59118b3878ff178ac564bf4453e4a68295c96e158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hofstadter, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:57:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Nov 22 03:57:29 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Nov 22 03:57:29 compute-0 podman[274685]: 2025-11-22 03:57:29.27980618 +0000 UTC m=+0.191798738 container start 963ac45e99df8774021df1f59118b3878ff178ac564bf4453e4a68295c96e158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 03:57:29 compute-0 podman[274685]: 2025-11-22 03:57:29.283716014 +0000 UTC m=+0.195708582 container attach 963ac45e99df8774021df1f59118b3878ff178ac564bf4453e4a68295c96e158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hofstadter, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 03:57:29 compute-0 nova_compute[253461]: 2025-11-22 03:57:29.697 253465 DEBUG nova.network.neutron [-] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:57:29 compute-0 nova_compute[253461]: 2025-11-22 03:57:29.728 253465 INFO nova.compute.manager [-] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Took 1.31 seconds to deallocate network for instance.
Nov 22 03:57:29 compute-0 nova_compute[253461]: 2025-11-22 03:57:29.801 253465 DEBUG oslo_concurrency.lockutils [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:29 compute-0 nova_compute[253461]: 2025-11-22 03:57:29.802 253465 DEBUG oslo_concurrency.lockutils [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:29 compute-0 nova_compute[253461]: 2025-11-22 03:57:29.826 253465 DEBUG nova.compute.manager [req-1ce566bc-686d-4145-8457-c01e6bb1d0a8 req-115683c1-99fb-46b1-a013-b9b7a2350b01 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Received event network-vif-deleted-77a13178-8559-4b2b-af3d-991a871b7351 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:57:29 compute-0 nova_compute[253461]: 2025-11-22 03:57:29.850 253465 DEBUG oslo_concurrency.processutils [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:29 compute-0 nova_compute[253461]: 2025-11-22 03:57:29.905 253465 DEBUG nova.compute.manager [req-29ed85b2-1ab7-4a7c-b389-1cbda6736252 req-b8e115f8-8c67-47da-87e9-1ef5a7b5592e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Received event network-vif-plugged-77a13178-8559-4b2b-af3d-991a871b7351 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:57:29 compute-0 nova_compute[253461]: 2025-11-22 03:57:29.906 253465 DEBUG oslo_concurrency.lockutils [req-29ed85b2-1ab7-4a7c-b389-1cbda6736252 req-b8e115f8-8c67-47da-87e9-1ef5a7b5592e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:29 compute-0 nova_compute[253461]: 2025-11-22 03:57:29.906 253465 DEBUG oslo_concurrency.lockutils [req-29ed85b2-1ab7-4a7c-b389-1cbda6736252 req-b8e115f8-8c67-47da-87e9-1ef5a7b5592e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:29 compute-0 nova_compute[253461]: 2025-11-22 03:57:29.907 253465 DEBUG oslo_concurrency.lockutils [req-29ed85b2-1ab7-4a7c-b389-1cbda6736252 req-b8e115f8-8c67-47da-87e9-1ef5a7b5592e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:29 compute-0 nova_compute[253461]: 2025-11-22 03:57:29.907 253465 DEBUG nova.compute.manager [req-29ed85b2-1ab7-4a7c-b389-1cbda6736252 req-b8e115f8-8c67-47da-87e9-1ef5a7b5592e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] No waiting events found dispatching network-vif-plugged-77a13178-8559-4b2b-af3d-991a871b7351 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:57:29 compute-0 nova_compute[253461]: 2025-11-22 03:57:29.907 253465 WARNING nova.compute.manager [req-29ed85b2-1ab7-4a7c-b389-1cbda6736252 req-b8e115f8-8c67-47da-87e9-1ef5a7b5592e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Received unexpected event network-vif-plugged-77a13178-8559-4b2b-af3d-991a871b7351 for instance with vm_state deleted and task_state None.
Nov 22 03:57:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 197 KiB/s rd, 16 KiB/s wr, 271 op/s
Nov 22 03:57:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]: {
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "osd_id": 1,
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "type": "bluestore"
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:     },
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "osd_id": 0,
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "type": "bluestore"
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:     },
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "osd_id": 2,
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:         "type": "bluestore"
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]:     }
Nov 22 03:57:30 compute-0 cranky_hofstadter[274702]: }
Nov 22 03:57:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Nov 22 03:57:30 compute-0 ceph-mon[75011]: osdmap e246: 3 total, 3 up, 3 in
Nov 22 03:57:30 compute-0 systemd[1]: libpod-963ac45e99df8774021df1f59118b3878ff178ac564bf4453e4a68295c96e158.scope: Deactivated successfully.
Nov 22 03:57:30 compute-0 systemd[1]: libpod-963ac45e99df8774021df1f59118b3878ff178ac564bf4453e4a68295c96e158.scope: Consumed 1.027s CPU time.
Nov 22 03:57:30 compute-0 conmon[274702]: conmon 963ac45e99df8774021d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-963ac45e99df8774021df1f59118b3878ff178ac564bf4453e4a68295c96e158.scope/container/memory.events
Nov 22 03:57:30 compute-0 podman[274685]: 2025-11-22 03:57:30.338669092 +0000 UTC m=+1.250661660 container died 963ac45e99df8774021df1f59118b3878ff178ac564bf4453e4a68295c96e158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 03:57:30 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Nov 22 03:57:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:57:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1077551973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:57:30 compute-0 nova_compute[253461]: 2025-11-22 03:57:30.393 253465 DEBUG oslo_concurrency.processutils [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-d78439fda3bea5484ea5858811b5f7a6e6e375178a86b327c78316256daedc96-merged.mount: Deactivated successfully.
Nov 22 03:57:30 compute-0 nova_compute[253461]: 2025-11-22 03:57:30.402 253465 DEBUG nova.compute.provider_tree [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:57:30 compute-0 nova_compute[253461]: 2025-11-22 03:57:30.426 253465 DEBUG nova.scheduler.client.report [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:57:30 compute-0 podman[274685]: 2025-11-22 03:57:30.44484441 +0000 UTC m=+1.356836968 container remove 963ac45e99df8774021df1f59118b3878ff178ac564bf4453e4a68295c96e158 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 03:57:30 compute-0 nova_compute[253461]: 2025-11-22 03:57:30.457 253465 DEBUG oslo_concurrency.lockutils [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:30 compute-0 podman[274755]: 2025-11-22 03:57:30.460642007 +0000 UTC m=+0.115134227 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 03:57:30 compute-0 systemd[1]: libpod-conmon-963ac45e99df8774021df1f59118b3878ff178ac564bf4453e4a68295c96e158.scope: Deactivated successfully.
Nov 22 03:57:30 compute-0 sudo[274579]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:57:30 compute-0 nova_compute[253461]: 2025-11-22 03:57:30.499 253465 INFO nova.scheduler.client.report [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Deleted allocations for instance c036cf5d-81f0-4f9e-9f31-67298476667c
Nov 22 03:57:30 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:57:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:57:30 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:57:30 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 8801121c-99ec-4fc4-8692-4898ff83db57 does not exist
Nov 22 03:57:30 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 831f1aae-af9d-43d1-998b-47a08ceb014e does not exist
Nov 22 03:57:30 compute-0 sudo[274789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:57:30 compute-0 sudo[274789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:30 compute-0 sudo[274789]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:30 compute-0 nova_compute[253461]: 2025-11-22 03:57:30.647 253465 DEBUG oslo_concurrency.lockutils [None req-4b5c17a4-fba1-4565-9492-b116dcd53af7 e45192a50149470daea6e26cfd2de3a9 8e17fcbd6721457f93b2fe5018fb3534 - - default default] Lock "c036cf5d-81f0-4f9e-9f31-67298476667c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:30 compute-0 sudo[274814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:57:30 compute-0 sudo[274814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:57:30 compute-0 sudo[274814]: pam_unix(sudo:session): session closed for user root
Nov 22 03:57:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e247 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Nov 22 03:57:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Nov 22 03:57:30 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Nov 22 03:57:31 compute-0 ceph-mon[75011]: pgmap v1223: 305 pgs: 305 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 197 KiB/s rd, 16 KiB/s wr, 271 op/s
Nov 22 03:57:31 compute-0 ceph-mon[75011]: osdmap e247: 3 total, 3 up, 3 in
Nov 22 03:57:31 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1077551973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:57:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:57:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:57:31 compute-0 ceph-mon[75011]: osdmap e248: 3 total, 3 up, 3 in
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.153 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 12 KiB/s wr, 215 op/s
Nov 22 03:57:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Nov 22 03:57:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Nov 22 03:57:32 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.457 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.458 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.458 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.479 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.480 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.480 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.480 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.481 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.739 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/413216158' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/413216158' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:57:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4058775112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:57:32 compute-0 nova_compute[253461]: 2025-11-22 03:57:32.963 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:33 compute-0 nova_compute[253461]: 2025-11-22 03:57:33.218 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:57:33 compute-0 nova_compute[253461]: 2025-11-22 03:57:33.220 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4550MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:57:33 compute-0 nova_compute[253461]: 2025-11-22 03:57:33.220 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:33 compute-0 nova_compute[253461]: 2025-11-22 03:57:33.221 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:33 compute-0 nova_compute[253461]: 2025-11-22 03:57:33.312 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:57:33 compute-0 nova_compute[253461]: 2025-11-22 03:57:33.313 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:57:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Nov 22 03:57:33 compute-0 nova_compute[253461]: 2025-11-22 03:57:33.342 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:33 compute-0 ceph-mon[75011]: pgmap v1226: 305 pgs: 305 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 147 KiB/s rd, 12 KiB/s wr, 215 op/s
Nov 22 03:57:33 compute-0 ceph-mon[75011]: osdmap e249: 3 total, 3 up, 3 in
Nov 22 03:57:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/413216158' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/413216158' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4058775112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:57:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Nov 22 03:57:33 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Nov 22 03:57:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:57:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3388162476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:57:33 compute-0 nova_compute[253461]: 2025-11-22 03:57:33.816 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:33 compute-0 nova_compute[253461]: 2025-11-22 03:57:33.823 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:57:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 11 KiB/s wr, 157 op/s
Nov 22 03:57:34 compute-0 nova_compute[253461]: 2025-11-22 03:57:34.268 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763783839.2671897, e11a8b93-8ac0-460e-8780-bd0a8525f6ad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:57:34 compute-0 nova_compute[253461]: 2025-11-22 03:57:34.269 253465 INFO nova.compute.manager [-] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] VM Stopped (Lifecycle Event)
Nov 22 03:57:34 compute-0 nova_compute[253461]: 2025-11-22 03:57:34.312 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:57:34 compute-0 nova_compute[253461]: 2025-11-22 03:57:34.333 253465 DEBUG nova.compute.manager [None req-f675e7f3-2347-4af2-a853-6755cac85da4 - - - - - -] [instance: e11a8b93-8ac0-460e-8780-bd0a8525f6ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:57:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Nov 22 03:57:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Nov 22 03:57:34 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Nov 22 03:57:34 compute-0 nova_compute[253461]: 2025-11-22 03:57:34.532 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:57:34 compute-0 nova_compute[253461]: 2025-11-22 03:57:34.533 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:34 compute-0 ceph-mon[75011]: osdmap e250: 3 total, 3 up, 3 in
Nov 22 03:57:34 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3388162476' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:57:34 compute-0 ceph-mon[75011]: pgmap v1229: 305 pgs: 305 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 11 KiB/s wr, 157 op/s
Nov 22 03:57:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:35 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1386131611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:35 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1386131611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:35 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1192919694' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:35 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1192919694' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:35 compute-0 nova_compute[253461]: 2025-11-22 03:57:35.503 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:57:35 compute-0 nova_compute[253461]: 2025-11-22 03:57:35.504 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:57:35 compute-0 nova_compute[253461]: 2025-11-22 03:57:35.504 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:57:35 compute-0 ceph-mon[75011]: osdmap e251: 3 total, 3 up, 3 in
Nov 22 03:57:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1386131611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1386131611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1192919694' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1192919694' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Nov 22 03:57:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Nov 22 03:57:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:35.901802) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783855901855, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1370, "num_deletes": 270, "total_data_size": 1782093, "memory_usage": 1807152, "flush_reason": "Manual Compaction"}
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783855937144, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1747876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23723, "largest_seqno": 25092, "table_properties": {"data_size": 1741255, "index_size": 3696, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14870, "raw_average_key_size": 20, "raw_value_size": 1727624, "raw_average_value_size": 2392, "num_data_blocks": 163, "num_entries": 722, "num_filter_entries": 722, "num_deletions": 270, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763783776, "oldest_key_time": 1763783776, "file_creation_time": 1763783855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 35408 microseconds, and 8524 cpu microseconds.
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:35.937206) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1747876 bytes OK
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:35.937262) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:35.941967) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:35.941990) EVENT_LOG_v1 {"time_micros": 1763783855941983, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:35.942013) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1775569, prev total WAL file size 1775569, number of live WAL files 2.
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:35.943038) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1706KB)], [53(9141KB)]
Nov 22 03:57:35 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783855943076, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11108741, "oldest_snapshot_seqno": -1}
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5281 keys, 11012477 bytes, temperature: kUnknown
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783856096577, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 11012477, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10970466, "index_size": 27693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13253, "raw_key_size": 131032, "raw_average_key_size": 24, "raw_value_size": 10868745, "raw_average_value_size": 2058, "num_data_blocks": 1149, "num_entries": 5281, "num_filter_entries": 5281, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763783855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:36.097335) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 11012477 bytes
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:36.099942) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.3 rd, 71.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.9 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(12.7) write-amplify(6.3) OK, records in: 5828, records dropped: 547 output_compression: NoCompression
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:36.099974) EVENT_LOG_v1 {"time_micros": 1763783856099960, "job": 28, "event": "compaction_finished", "compaction_time_micros": 153626, "compaction_time_cpu_micros": 45368, "output_level": 6, "num_output_files": 1, "total_output_size": 11012477, "num_input_records": 5828, "num_output_records": 5281, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783856101136, "job": 28, "event": "table_file_deletion", "file_number": 55}
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783856104863, "job": 28, "event": "table_file_deletion", "file_number": 53}
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:35.942945) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:36.105002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:36.105012) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:36.105016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:36.105020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:57:36 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:57:36.105024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:57:36
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'images', 'volumes', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'backups']
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 6.0 KiB/s wr, 88 op/s
Nov 22 03:57:36 compute-0 nova_compute[253461]: 2025-11-22 03:57:36.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:57:36 compute-0 nova_compute[253461]: 2025-11-22 03:57:36.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:57:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:57:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4201154890' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4201154890' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:36 compute-0 ceph-mon[75011]: osdmap e252: 3 total, 3 up, 3 in
Nov 22 03:57:36 compute-0 ceph-mon[75011]: pgmap v1232: 305 pgs: 305 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 6.0 KiB/s wr, 88 op/s
Nov 22 03:57:36 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4201154890' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:36 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4201154890' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:37 compute-0 nova_compute[253461]: 2025-11-22 03:57:37.155 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:37 compute-0 nova_compute[253461]: 2025-11-22 03:57:37.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:57:37 compute-0 nova_compute[253461]: 2025-11-22 03:57:37.741 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 5.6 KiB/s wr, 95 op/s
Nov 22 03:57:38 compute-0 nova_compute[253461]: 2025-11-22 03:57:38.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:57:38 compute-0 nova_compute[253461]: 2025-11-22 03:57:38.457 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:38 compute-0 nova_compute[253461]: 2025-11-22 03:57:38.458 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:38 compute-0 nova_compute[253461]: 2025-11-22 03:57:38.487 253465 DEBUG nova.compute.manager [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 03:57:38 compute-0 nova_compute[253461]: 2025-11-22 03:57:38.580 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:38 compute-0 nova_compute[253461]: 2025-11-22 03:57:38.580 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:38 compute-0 nova_compute[253461]: 2025-11-22 03:57:38.589 253465 DEBUG nova.virt.hardware [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 03:57:38 compute-0 nova_compute[253461]: 2025-11-22 03:57:38.589 253465 INFO nova.compute.claims [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Claim successful on node compute-0.ctlplane.example.com
Nov 22 03:57:38 compute-0 nova_compute[253461]: 2025-11-22 03:57:38.721 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.072 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:57:39 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4034832745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.220 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.225 253465 DEBUG nova.compute.provider_tree [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.237 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.243 253465 DEBUG nova.scheduler.client.report [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.269 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.271 253465 DEBUG nova.compute.manager [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 03:57:39 compute-0 ceph-mon[75011]: pgmap v1233: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 5.6 KiB/s wr, 95 op/s
Nov 22 03:57:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4034832745' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.349 253465 DEBUG nova.compute.manager [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.349 253465 DEBUG nova.network.neutron [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.369 253465 INFO nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.393 253465 DEBUG nova.compute.manager [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 03:57:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:39.398 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.398 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:39 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:39.400 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.480 253465 DEBUG nova.compute.manager [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.482 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.482 253465 INFO nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Creating image(s)
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.505 253465 DEBUG nova.storage.rbd_utils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.530 253465 DEBUG nova.storage.rbd_utils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.561 253465 DEBUG nova.storage.rbd_utils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.565 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.599 253465 DEBUG nova.policy [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '323c39d407144b438e9fbcdc7c67710e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5846275e26354bb095399d10c8b52285', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.656 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.657 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.658 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.659 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.691 253465 DEBUG nova.storage.rbd_utils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:57:39 compute-0 nova_compute[253461]: 2025-11-22 03:57:39.696 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 6.0 KiB/s wr, 147 op/s
Nov 22 03:57:40 compute-0 nova_compute[253461]: 2025-11-22 03:57:40.314 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:40 compute-0 nova_compute[253461]: 2025-11-22 03:57:40.400 253465 DEBUG nova.storage.rbd_utils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] resizing rbd image aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 03:57:40 compute-0 nova_compute[253461]: 2025-11-22 03:57:40.518 253465 DEBUG nova.objects.instance [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lazy-loading 'migration_context' on Instance uuid aaadb0bf-8f4d-4eb7-9688-60999c8129dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:57:40 compute-0 nova_compute[253461]: 2025-11-22 03:57:40.548 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 03:57:40 compute-0 nova_compute[253461]: 2025-11-22 03:57:40.549 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Ensure instance console log exists: /var/lib/nova/instances/aaadb0bf-8f4d-4eb7-9688-60999c8129dd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 03:57:40 compute-0 nova_compute[253461]: 2025-11-22 03:57:40.550 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:40 compute-0 nova_compute[253461]: 2025-11-22 03:57:40.550 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:40 compute-0 nova_compute[253461]: 2025-11-22 03:57:40.550 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e252 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Nov 22 03:57:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Nov 22 03:57:40 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Nov 22 03:57:41 compute-0 nova_compute[253461]: 2025-11-22 03:57:41.008 253465 DEBUG nova.network.neutron [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Successfully created port: efd1a6a8-37bb-4721-9db8-ab78b987ebb0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 03:57:41 compute-0 ceph-mon[75011]: pgmap v1234: 305 pgs: 305 active+clean; 88 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 6.0 KiB/s wr, 147 op/s
Nov 22 03:57:41 compute-0 ceph-mon[75011]: osdmap e253: 3 total, 3 up, 3 in
Nov 22 03:57:42 compute-0 nova_compute[253461]: 2025-11-22 03:57:42.032 253465 DEBUG nova.network.neutron [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Successfully updated port: efd1a6a8-37bb-4721-9db8-ab78b987ebb0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 03:57:42 compute-0 nova_compute[253461]: 2025-11-22 03:57:42.055 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "refresh_cache-aaadb0bf-8f4d-4eb7-9688-60999c8129dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:57:42 compute-0 nova_compute[253461]: 2025-11-22 03:57:42.056 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquired lock "refresh_cache-aaadb0bf-8f4d-4eb7-9688-60999c8129dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:57:42 compute-0 nova_compute[253461]: 2025-11-22 03:57:42.056 253465 DEBUG nova.network.neutron [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 03:57:42 compute-0 nova_compute[253461]: 2025-11-22 03:57:42.144 253465 DEBUG nova.compute.manager [req-80a9416d-aeeb-43ef-95c7-5f91e9188118 req-ec13f40f-c098-4b42-9ffc-470532dde62a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Received event network-changed-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:57:42 compute-0 nova_compute[253461]: 2025-11-22 03:57:42.145 253465 DEBUG nova.compute.manager [req-80a9416d-aeeb-43ef-95c7-5f91e9188118 req-ec13f40f-c098-4b42-9ffc-470532dde62a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Refreshing instance network info cache due to event network-changed-efd1a6a8-37bb-4721-9db8-ab78b987ebb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:57:42 compute-0 nova_compute[253461]: 2025-11-22 03:57:42.146 253465 DEBUG oslo_concurrency.lockutils [req-80a9416d-aeeb-43ef-95c7-5f91e9188118 req-ec13f40f-c098-4b42-9ffc-470532dde62a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-aaadb0bf-8f4d-4eb7-9688-60999c8129dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:57:42 compute-0 nova_compute[253461]: 2025-11-22 03:57:42.156 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:42 compute-0 nova_compute[253461]: 2025-11-22 03:57:42.222 253465 DEBUG nova.network.neutron [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 03:57:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1236: 305 pgs: 305 active+clean; 107 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 90 op/s
Nov 22 03:57:42 compute-0 ceph-mon[75011]: pgmap v1236: 305 pgs: 305 active+clean; 107 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.4 MiB/s wr, 90 op/s
Nov 22 03:57:42 compute-0 nova_compute[253461]: 2025-11-22 03:57:42.664 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763783847.6633766, c036cf5d-81f0-4f9e-9f31-67298476667c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:57:42 compute-0 nova_compute[253461]: 2025-11-22 03:57:42.666 253465 INFO nova.compute.manager [-] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] VM Stopped (Lifecycle Event)
Nov 22 03:57:42 compute-0 nova_compute[253461]: 2025-11-22 03:57:42.696 253465 DEBUG nova.compute.manager [None req-ae3e1f00-2705-441b-8ece-06bfb2167c51 - - - - - -] [instance: c036cf5d-81f0-4f9e-9f31-67298476667c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:57:42 compute-0 nova_compute[253461]: 2025-11-22 03:57:42.776 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.349 253465 DEBUG nova.network.neutron [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Updating instance_info_cache with network_info: [{"id": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "address": "fa:16:3e:09:9d:7b", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd1a6a8-37", "ovs_interfaceid": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.388 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Releasing lock "refresh_cache-aaadb0bf-8f4d-4eb7-9688-60999c8129dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.389 253465 DEBUG nova.compute.manager [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Instance network_info: |[{"id": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "address": "fa:16:3e:09:9d:7b", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd1a6a8-37", "ovs_interfaceid": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.390 253465 DEBUG oslo_concurrency.lockutils [req-80a9416d-aeeb-43ef-95c7-5f91e9188118 req-ec13f40f-c098-4b42-9ffc-470532dde62a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-aaadb0bf-8f4d-4eb7-9688-60999c8129dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.390 253465 DEBUG nova.network.neutron [req-80a9416d-aeeb-43ef-95c7-5f91e9188118 req-ec13f40f-c098-4b42-9ffc-470532dde62a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Refreshing network info cache for port efd1a6a8-37bb-4721-9db8-ab78b987ebb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.396 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Start _get_guest_xml network_info=[{"id": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "address": "fa:16:3e:09:9d:7b", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd1a6a8-37", "ovs_interfaceid": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.403 253465 WARNING nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.408 253465 DEBUG nova.virt.libvirt.host [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.408 253465 DEBUG nova.virt.libvirt.host [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.412 253465 DEBUG nova.virt.libvirt.host [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.413 253465 DEBUG nova.virt.libvirt.host [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.414 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.414 253465 DEBUG nova.virt.hardware [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.415 253465 DEBUG nova.virt.hardware [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.416 253465 DEBUG nova.virt.hardware [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.416 253465 DEBUG nova.virt.hardware [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.417 253465 DEBUG nova.virt.hardware [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.417 253465 DEBUG nova.virt.hardware [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.418 253465 DEBUG nova.virt.hardware [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.418 253465 DEBUG nova.virt.hardware [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.419 253465 DEBUG nova.virt.hardware [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.419 253465 DEBUG nova.virt.hardware [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.420 253465 DEBUG nova.virt.hardware [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.425 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:57:43 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2740596974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.925 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.961 253465 DEBUG nova.storage.rbd_utils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:57:43 compute-0 nova_compute[253461]: 2025-11-22 03:57:43.967 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:44 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2740596974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 128 op/s
Nov 22 03:57:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:57:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2588844625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.440 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.443 253465 DEBUG nova.virt.libvirt.vif [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1790647099',display_name='tempest-VolumesSnapshotTestJSON-instance-1790647099',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1790647099',id=10,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNqrpIPGFLG6Qx4n5YxknoazZOZGTcxQjQwzckhjY+ySVgDzYzPRtC/1lU1gb8R0Aq/kYIiuFEqZqNOPYUHs/HYmhJFFwcFQWglGzDim0t9caZbWXc0Kf+g1y6udhWy48Q==',key_name='tempest-keypair-627494330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5846275e26354bb095399d10c8b52285',ramdisk_id='',reservation_id='r-wgho1gzf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-724001677',owner_user_name='tempest-VolumesSnapshotTestJSON-724001677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:57:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='323c39d407144b438e9fbcdc7c67710e',uuid=aaadb0bf-8f4d-4eb7-9688-60999c8129dd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "address": "fa:16:3e:09:9d:7b", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd1a6a8-37", "ovs_interfaceid": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.444 253465 DEBUG nova.network.os_vif_util [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Converting VIF {"id": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "address": "fa:16:3e:09:9d:7b", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd1a6a8-37", "ovs_interfaceid": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.445 253465 DEBUG nova.network.os_vif_util [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:9d:7b,bridge_name='br-int',has_traffic_filtering=True,id=efd1a6a8-37bb-4721-9db8-ab78b987ebb0,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd1a6a8-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.447 253465 DEBUG nova.objects.instance [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lazy-loading 'pci_devices' on Instance uuid aaadb0bf-8f4d-4eb7-9688-60999c8129dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.478 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] End _get_guest_xml xml=<domain type="kvm">
Nov 22 03:57:44 compute-0 nova_compute[253461]:   <uuid>aaadb0bf-8f4d-4eb7-9688-60999c8129dd</uuid>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   <name>instance-0000000a</name>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   <metadata>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <nova:name>tempest-VolumesSnapshotTestJSON-instance-1790647099</nova:name>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 03:57:43</nova:creationTime>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 03:57:44 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 03:57:44 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 03:57:44 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 03:57:44 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 03:57:44 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 03:57:44 compute-0 nova_compute[253461]:         <nova:user uuid="323c39d407144b438e9fbcdc7c67710e">tempest-VolumesSnapshotTestJSON-724001677-project-member</nova:user>
Nov 22 03:57:44 compute-0 nova_compute[253461]:         <nova:project uuid="5846275e26354bb095399d10c8b52285">tempest-VolumesSnapshotTestJSON-724001677</nova:project>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 03:57:44 compute-0 nova_compute[253461]:         <nova:port uuid="efd1a6a8-37bb-4721-9db8-ab78b987ebb0">
Nov 22 03:57:44 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   </metadata>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <system>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <entry name="serial">aaadb0bf-8f4d-4eb7-9688-60999c8129dd</entry>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <entry name="uuid">aaadb0bf-8f4d-4eb7-9688-60999c8129dd</entry>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     </system>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   <os>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   </os>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   <features>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <apic/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   </features>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   </clock>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk">
Nov 22 03:57:44 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       </source>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:57:44 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk.config">
Nov 22 03:57:44 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       </source>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:57:44 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:09:9d:7b"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <target dev="tapefd1a6a8-37"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/aaadb0bf-8f4d-4eb7-9688-60999c8129dd/console.log" append="off"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     </serial>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <video>
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     </video>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 03:57:44 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 03:57:44 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 03:57:44 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:57:44 compute-0 nova_compute[253461]: </domain>
Nov 22 03:57:44 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.479 253465 DEBUG nova.compute.manager [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Preparing to wait for external event network-vif-plugged-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.480 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.480 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.481 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.482 253465 DEBUG nova.virt.libvirt.vif [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1790647099',display_name='tempest-VolumesSnapshotTestJSON-instance-1790647099',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1790647099',id=10,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNqrpIPGFLG6Qx4n5YxknoazZOZGTcxQjQwzckhjY+ySVgDzYzPRtC/1lU1gb8R0Aq/kYIiuFEqZqNOPYUHs/HYmhJFFwcFQWglGzDim0t9caZbWXc0Kf+g1y6udhWy48Q==',key_name='tempest-keypair-627494330',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5846275e26354bb095399d10c8b52285',ramdisk_id='',reservation_id='r-wgho1gzf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-724001677',owner_user_name='tempest-VolumesSnapshotTestJSON-724001677-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:57:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='323c39d407144b438e9fbcdc7c67710e',uuid=aaadb0bf-8f4d-4eb7-9688-60999c8129dd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "address": "fa:16:3e:09:9d:7b", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd1a6a8-37", "ovs_interfaceid": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.483 253465 DEBUG nova.network.os_vif_util [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Converting VIF {"id": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "address": "fa:16:3e:09:9d:7b", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd1a6a8-37", "ovs_interfaceid": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.484 253465 DEBUG nova.network.os_vif_util [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:9d:7b,bridge_name='br-int',has_traffic_filtering=True,id=efd1a6a8-37bb-4721-9db8-ab78b987ebb0,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd1a6a8-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.484 253465 DEBUG os_vif [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:9d:7b,bridge_name='br-int',has_traffic_filtering=True,id=efd1a6a8-37bb-4721-9db8-ab78b987ebb0,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd1a6a8-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.485 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.486 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.487 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.491 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.491 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapefd1a6a8-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.492 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapefd1a6a8-37, col_values=(('external_ids', {'iface-id': 'efd1a6a8-37bb-4721-9db8-ab78b987ebb0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:9d:7b', 'vm-uuid': 'aaadb0bf-8f4d-4eb7-9688-60999c8129dd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.495 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:44 compute-0 NetworkManager[48916]: <info>  [1763783864.4960] manager: (tapefd1a6a8-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.498 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.505 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.508 253465 INFO os_vif [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:9d:7b,bridge_name='br-int',has_traffic_filtering=True,id=efd1a6a8-37bb-4721-9db8-ab78b987ebb0,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd1a6a8-37')
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.585 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.586 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.586 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No VIF found with MAC fa:16:3e:09:9d:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.586 253465 INFO nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Using config drive
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.609 253465 DEBUG nova.storage.rbd_utils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.712 253465 DEBUG nova.network.neutron [req-80a9416d-aeeb-43ef-95c7-5f91e9188118 req-ec13f40f-c098-4b42-9ffc-470532dde62a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Updated VIF entry in instance network info cache for port efd1a6a8-37bb-4721-9db8-ab78b987ebb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.713 253465 DEBUG nova.network.neutron [req-80a9416d-aeeb-43ef-95c7-5f91e9188118 req-ec13f40f-c098-4b42-9ffc-470532dde62a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Updating instance_info_cache with network_info: [{"id": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "address": "fa:16:3e:09:9d:7b", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd1a6a8-37", "ovs_interfaceid": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.729 253465 DEBUG oslo_concurrency.lockutils [req-80a9416d-aeeb-43ef-95c7-5f91e9188118 req-ec13f40f-c098-4b42-9ffc-470532dde62a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-aaadb0bf-8f4d-4eb7-9688-60999c8129dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
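
The instance_info_cache payload two entries up is plain JSON, so the addressing reads back out with a few lines of Python; the sketch assumes the blob has been saved to a local file first.

    # Sketch: extract the addressing from the network_info blob logged
    # above. Assumes the JSON list was saved to network_info.json.
    import json

    with open('network_info.json') as f:   # the blob from the log line
        nw_info = json.load(f)

    for entry in nw_info:
        net = entry['network']
        for subnet in net['subnets']:
            fixed = [ip['address'] for ip in subnet['ips']]
            print(entry['id'], entry['address'], subnet['cidr'], fixed,
                  'gw', subnet['gateway']['address'],
                  'mtu', net['meta']['mtu'])
    # -> efd1a6a8-... fa:16:3e:09:9d:7b 10.100.0.0/28 ['10.100.0.3'] gw 10.100.0.1 mtu 1442
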
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.890 253465 INFO nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Creating config drive at /var/lib/nova/instances/aaadb0bf-8f4d-4eb7-9688-60999c8129dd/disk.config
Nov 22 03:57:44 compute-0 nova_compute[253461]: 2025-11-22 03:57:44.901 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aaadb0bf-8f4d-4eb7-9688-60999c8129dd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmw1x0s2h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:45 compute-0 ceph-mon[75011]: pgmap v1237: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.6 MiB/s wr, 128 op/s
Nov 22 03:57:45 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2588844625' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:57:45 compute-0 nova_compute[253461]: 2025-11-22 03:57:45.032 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aaadb0bf-8f4d-4eb7-9688-60999c8129dd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmw1x0s2h" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
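
The config drive is a throwaway ISO9660 image built with mkisofs from a temporary staging directory of metadata files; the command above returned 0 in 0.131s. A sketch of the same invocation through oslo.concurrency's processutils; the helper name is invented here, the arguments are the logged ones.

    # Sketch: rebuild the logged mkisofs call with oslo.concurrency.
    # mkisofs must be installed; the wrapper function is illustrative.
    from oslo_concurrency import processutils

    def make_config_drive(iso_path, staging_dir, publisher):
        # Equivalent to the CMD logged above; returns (stdout, stderr)
        # and raises ProcessExecutionError on a nonzero exit.
        return processutils.execute(
            '/usr/bin/mkisofs', '-o', iso_path,
            '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
            '-publisher', publisher, '-quiet', '-J', '-r',
            '-V', 'config-2', staging_dir)

    make_config_drive(
        '/var/lib/nova/instances/aaadb0bf-8f4d-4eb7-9688-60999c8129dd/disk.config',
        '/tmp/tmpmw1x0s2h',
        'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9')
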
Nov 22 03:57:45 compute-0 nova_compute[253461]: 2025-11-22 03:57:45.070 253465 DEBUG nova.storage.rbd_utils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] rbd image aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:57:45 compute-0 nova_compute[253461]: 2025-11-22 03:57:45.073 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aaadb0bf-8f4d-4eb7-9688-60999c8129dd/disk.config aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:57:45 compute-0 nova_compute[253461]: 2025-11-22 03:57:45.675 253465 DEBUG oslo_concurrency.processutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aaadb0bf-8f4d-4eb7-9688-60999c8129dd/disk.config aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:57:45 compute-0 nova_compute[253461]: 2025-11-22 03:57:45.677 253465 INFO nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Deleting local config drive /var/lib/nova/instances/aaadb0bf-8f4d-4eb7-9688-60999c8129dd/disk.config because it was imported into RBD.
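
Because this deployment keeps instance disks in Ceph, the freshly built ISO is imported into the vms pool and the local copy deleted. A sketch of the check-then-import sequence: the python-rados/python-rbd bindings for the existence probe (the "does not exist" DEBUG lines above) and the same rbd CLI for the import; pool, client ID and conf path mirror the log.

    # Sketch: probe for the RBD image, import it only if missing,
    # roughly what nova.storage.rbd_utils plus the logged CLI call do.
    import subprocess
    import rados
    import rbd

    name = 'aaadb0bf-8f4d-4eb7-9688-60999c8129dd_disk.config'
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            try:
                rbd.Image(ioctx, name).close()
                exists = True
            except rbd.ImageNotFound:  # the "does not exist" case above
                exists = False

    if not exists:
        subprocess.run(
            ['rbd', 'import', '--pool', 'vms',
             '/var/lib/nova/instances/aaadb0bf-8f4d-4eb7-9688-60999c8129dd/disk.config',
             name, '--image-format=2', '--id', 'openstack',
             '--conf', '/etc/ceph/ceph.conf'],
            check=True)
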
Nov 22 03:57:45 compute-0 kernel: tapefd1a6a8-37: entered promiscuous mode
Nov 22 03:57:45 compute-0 NetworkManager[48916]: <info>  [1763783865.7505] manager: (tapefd1a6a8-37): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Nov 22 03:57:45 compute-0 ovn_controller[152691]: 2025-11-22T03:57:45Z|00112|binding|INFO|Claiming lport efd1a6a8-37bb-4721-9db8-ab78b987ebb0 for this chassis.
Nov 22 03:57:45 compute-0 ovn_controller[152691]: 2025-11-22T03:57:45Z|00113|binding|INFO|efd1a6a8-37bb-4721-9db8-ab78b987ebb0: Claiming fa:16:3e:09:9d:7b 10.100.0.3
Nov 22 03:57:45 compute-0 nova_compute[253461]: 2025-11-22 03:57:45.752 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.767 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:9d:7b 10.100.0.3'], port_security=['fa:16:3e:09:9d:7b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'aaadb0bf-8f4d-4eb7-9688-60999c8129dd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5846275e26354bb095399d10c8b52285', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bdf45843-e259-4187-812e-2f8dc06dc437', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3e3d611-3c95-4b51-bc26-a179069ce7f3, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=efd1a6a8-37bb-4721-9db8-ab78b987ebb0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
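
The metadata agent reacts to that Port_Binding update through ovsdbapp's row-event framework: an event class declares the table and event types it watches plus a match predicate, and the IDL calls matches() for each row change (event.py:43 above). A condensed, illustrative sketch of such a class; the handler body and the agent hook are assumptions, not neutron's exact code.

    # Sketch of an ovsdbapp row event like the PortBindingUpdatedEvent
    # matched above: fire when a Port_Binding row gains a chassis.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Old row had no chassis, new row does: just bound here.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            self.agent.provision_datapath(row)  # assumed agent hook
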
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.768 162689 INFO neutron.agent.ovn.metadata.agent [-] Port efd1a6a8-37bb-4721-9db8-ab78b987ebb0 in datapath 4c32f371-ff20-4759-bfb3-24316a8c7a57 bound to our chassis
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.769 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c32f371-ff20-4759-bfb3-24316a8c7a57
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.782 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e97629-4352-4b18-af13-556622f869fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.783 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c32f371-f1 in ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
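
Creating the VETH pair inside the ovnmeta- namespace goes through neutron's privsep daemon (the reply[...] lines), which ultimately drives pyroute2. Stripped of the privsep layer, the operation reduces to roughly the sketch below; root is required and the interface and namespace names are copied from the log.

    # Sketch: create the veth pair and move the peer end into the ovnmeta
    # namespace, approximating neutron.privileged.agent.linux.ip_lib.
    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57'
    try:
        netns.create(ns)
    except FileExistsError:
        pass  # namespace already provisioned

    with IPRoute() as ipr:
        ipr.link('add', ifname='tap4c32f371-f0', kind='veth',
                 peer={'ifname': 'tap4c32f371-f1'})
        peer_idx = ipr.link_lookup(ifname='tap4c32f371-f1')[0]
        ipr.link('set', index=peer_idx, net_ns_fd=ns)  # peer into namespace
        host_idx = ipr.link_lookup(ifname='tap4c32f371-f0')[0]
        ipr.link('set', index=host_idx, state='up')
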
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.786 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c32f371-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.786 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b6a3c339-eec6-450b-9fec-7bc8fb92d535]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.787 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[91c0c14e-d89b-4023-a4b4-3d81d4351a41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.802 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[d29ddf00-9e4c-4b74-b790-9498f614df90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:45 compute-0 systemd-machined[215728]: New machine qemu-10-instance-0000000a.
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.829 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[978495a0-f1ee-4275-b134-1dcbcc87aacd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.861 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[9c854ef7-2ab1-4d9b-abac-2bf831cb29bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:45 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.867 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c478cc46-3628-486c-b16d-37863707ed97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:45 compute-0 NetworkManager[48916]: <info>  [1763783865.8690] manager: (tap4c32f371-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/65)
Nov 22 03:57:45 compute-0 nova_compute[253461]: 2025-11-22 03:57:45.874 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:45 compute-0 ovn_controller[152691]: 2025-11-22T03:57:45Z|00114|binding|INFO|Setting lport efd1a6a8-37bb-4721-9db8-ab78b987ebb0 ovn-installed in OVS
Nov 22 03:57:45 compute-0 ovn_controller[152691]: 2025-11-22T03:57:45Z|00115|binding|INFO|Setting lport efd1a6a8-37bb-4721-9db8-ab78b987ebb0 up in Southbound
Nov 22 03:57:45 compute-0 systemd-udevd[275250]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:57:45 compute-0 systemd-udevd[275249]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:57:45 compute-0 nova_compute[253461]: 2025-11-22 03:57:45.881 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:45 compute-0 NetworkManager[48916]: <info>  [1763783865.8970] device (tapefd1a6a8-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:57:45 compute-0 NetworkManager[48916]: <info>  [1763783865.8977] device (tapefd1a6a8-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.898 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[ab60bce3-a1dc-4a90-bbb7-c211cc0ebb31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.902 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[464ac4d7-538f-4c12-bc86-b5f564a0b80e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:45 compute-0 podman[275205]: 2025-11-22 03:57:45.914494381 +0000 UTC m=+0.115432276 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 03:57:45 compute-0 NetworkManager[48916]: <info>  [1763783865.9280] device (tap4c32f371-f0): carrier: link connected
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.934 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[9d63f8c4-e64c-4461-b486-263c2a8b45c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:45 compute-0 podman[275207]: 2025-11-22 03:57:45.946863413 +0000 UTC m=+0.144329768 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.950 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ebc9469f-1028-40e7-857c-0874d037f35d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c32f371-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:ec:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411852, 'reachable_time': 44725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275283, 'error': None, 'target': 'ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.969 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6a86b69a-c283-4dec-876d-7239fabf2cf5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:ec95'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411852, 'tstamp': 411852}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275286, 'error': None, 'target': 'ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:45 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:45.987 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[04185f5c-8768-4e9b-ba80-9d28e94e648e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c32f371-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:ec:95'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 37], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411852, 'reachable_time': 44725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275287, 'error': None, 'target': 'ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Nov 22 03:57:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Nov 22 03:57:46 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:46.026 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[0f701bf6-7420-4da9-a1fd-f64714501377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.074 253465 DEBUG nova.compute.manager [req-07809fd9-20e2-40a5-ab3d-3df6492793ac req-a408c3a5-f15c-4d8c-9a6d-25e97ccb2d56 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Received event network-vif-plugged-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.074 253465 DEBUG oslo_concurrency.lockutils [req-07809fd9-20e2-40a5-ab3d-3df6492793ac req-a408c3a5-f15c-4d8c-9a6d-25e97ccb2d56 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.075 253465 DEBUG oslo_concurrency.lockutils [req-07809fd9-20e2-40a5-ab3d-3df6492793ac req-a408c3a5-f15c-4d8c-9a6d-25e97ccb2d56 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.075 253465 DEBUG oslo_concurrency.lockutils [req-07809fd9-20e2-40a5-ab3d-3df6492793ac req-a408c3a5-f15c-4d8c-9a6d-25e97ccb2d56 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
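
The Acquiring/acquired/"released" triple above is oslo.concurrency's standard logging around a named lock; nova serializes per-instance event delivery this way. A sketch of the primitive, with an illustrative event map standing in for nova's internal state:

    # Sketch: the oslo.concurrency named-lock pattern behind the
    # Acquiring/acquired/released lines above. pending_events is invented.
    from oslo_concurrency import lockutils

    pending_events = {}

    @lockutils.synchronized('aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events')
    def pop_instance_event(name):
        # Critical section: one thread at a time may mutate the event map.
        return pending_events.pop(name, None)

    pop_instance_event('network-vif-plugged-efd1a6a8-37bb-4721-9db8-ab78b987ebb0')
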
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.075 253465 DEBUG nova.compute.manager [req-07809fd9-20e2-40a5-ab3d-3df6492793ac req-a408c3a5-f15c-4d8c-9a6d-25e97ccb2d56 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Processing event network-vif-plugged-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:46.089 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[153191eb-7d97-4824-80b6-3631f28017bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:46.090 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c32f371-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:46.090 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:46.090 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c32f371-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.092 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:46 compute-0 NetworkManager[48916]: <info>  [1763783866.0931] manager: (tap4c32f371-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Nov 22 03:57:46 compute-0 kernel: tap4c32f371-f0: entered promiscuous mode
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.095 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:46.096 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c32f371-f0, col_values=(('external_ids', {'iface-id': '2af56aef-09c6-4f74-ad59-cabe02948eac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.097 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:46 compute-0 ovn_controller[152691]: 2025-11-22T03:57:46Z|00116|binding|INFO|Releasing lport 2af56aef-09c6-4f74-ad59-cabe02948eac from this chassis (sb_readonly=0)
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.097 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:46.098 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c32f371-ff20-4759-bfb3-24316a8c7a57.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c32f371-ff20-4759-bfb3-24316a8c7a57.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
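
That ENOENT DEBUG line is the expected first-run path: the agent probes for an existing haproxy pidfile and treats a missing file as "no proxy running yet". A sketch of such a tolerant read; the helper name is invented here.

    # Sketch: read a pidfile but treat a missing file as "no process",
    # mirroring the get_value_from_file behaviour logged above.
    def read_pidfile(path):
        try:
            with open(path) as f:
                return int(f.read().strip())
        except FileNotFoundError:   # the [Errno 2] case logged above
            return None
        except ValueError:          # empty or corrupt pidfile
            return None
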
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:46.099 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c79c1580-ebe9-4eb3-96ed-d2a65e70c2f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:46.099 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: global
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-4c32f371-ff20-4759-bfb3-24316a8c7a57
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/4c32f371-ff20-4759-bfb3-24316a8c7a57.pid.haproxy
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 4c32f371-ff20-4759-bfb3-24316a8c7a57
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 03:57:46 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:46.100 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'env', 'PROCESS_TAG=haproxy-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c32f371-ff20-4759-bfb3-24316a8c7a57.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
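
After rendering the config above, the agent launches haproxy inside the ovnmeta namespace via neutron-rootwrap. With the rootwrap layer removed, the launch reduces to roughly the following; root is required, and the paths, namespace and PROCESS_TAG value are copied from the logged command.

    # Sketch: the logged haproxy launch without the neutron-rootwrap layer.
    import os
    import subprocess

    ns = 'ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57'
    net_id = ns.removeprefix('ovnmeta-')
    cfg = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % net_id
    env = dict(os.environ, PROCESS_TAG='haproxy-' + net_id)
    subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-f', cfg],
                   env=env, check=True)
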
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.113 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 108 op/s
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034688731116103405 of space, bias 1.0, pg target 0.10406619334831022 quantized to 32 (current 32)
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:57:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
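
The pg_autoscaler arithmetic in the block above is: pg target = (pool's share of raw space) x bias x (target PGs per OSD x OSD count), then quantized to a power of two. With the 3 OSDs shown and the default mon_target_pg_per_osd of 100 the multiplier is 300, which reproduces the logged numbers; a small sketch follows, with the quantization deliberately simplified.

    # Sketch: reproduce the pg_autoscaler "pg target" values logged above.
    # Assumes default mon_target_pg_per_osd=100 and the 3-OSD cluster shown.
    def pg_target(usage_ratio, bias, osds=3, target_pg_per_osd=100):
        return usage_ratio * bias * target_pg_per_osd * osds

    def quantize_pow2(n, minimum=1):
        # Round up to a power of two, never below the minimum.
        p = minimum
        while p < n:
            p *= 2
        return p

    print(pg_target(0.000665858301588852, 1.0))   # ~0.1998, pool 'images'
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061, 'cephfs.cephfs.meta'
    # Such tiny targets quantize to 1; the autoscaler still reports
    # "current 32" because it only resizes when target and current
    # pg_num differ by a large factor.
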
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.444 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783866.443893, aaadb0bf-8f4d-4eb7-9688-60999c8129dd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.445 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] VM Started (Lifecycle Event)
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.448 253465 DEBUG nova.compute.manager [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.452 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.457 253465 INFO nova.virt.libvirt.driver [-] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Instance spawned successfully.
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.458 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 03:57:46 compute-0 podman[275359]: 2025-11-22 03:57:46.45855425 +0000 UTC m=+0.062658946 container create 70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.476 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.487 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
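
In the sync message above, the power states are nova's numeric codes from nova.compute.power_state: the database still holds 0 (NOSTATE) while libvirt already reports 1 (RUNNING), which is why the pending spawn is skipped a few entries later. For reference:

    # Sketch: nova's power-state codes as used in the sync message above
    # (values per nova.compute.power_state).
    STATE_MAP = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                 4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}
    print(STATE_MAP[0], '->', STATE_MAP[1])  # DB says NOSTATE, VM is RUNNING
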
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.495 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.496 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.497 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.498 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.499 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.500 253465 DEBUG nova.virt.libvirt.driver [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:57:46 compute-0 podman[275359]: 2025-11-22 03:57:46.420158979 +0000 UTC m=+0.024263665 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.515 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.516 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783866.4448261, aaadb0bf-8f4d-4eb7-9688-60999c8129dd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.516 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] VM Paused (Lifecycle Event)
Nov 22 03:57:46 compute-0 systemd[1]: Started libpod-conmon-70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3.scope.
Nov 22 03:57:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:57:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974ef768f35f378d8906cdb8bf2062d1c28ab9f9e2e182bc22a8eace3e6109cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.565 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.570 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783866.4515371, aaadb0bf-8f4d-4eb7-9688-60999c8129dd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.570 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] VM Resumed (Lifecycle Event)
Nov 22 03:57:46 compute-0 podman[275359]: 2025-11-22 03:57:46.574391919 +0000 UTC m=+0.178496605 container init 70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 03:57:46 compute-0 podman[275359]: 2025-11-22 03:57:46.585373365 +0000 UTC m=+0.189478001 container start 70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.599 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.603 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.609 253465 INFO nova.compute.manager [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Took 7.13 seconds to spawn the instance on the hypervisor.
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.609 253465 DEBUG nova.compute.manager [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:57:46 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[275375]: [NOTICE]   (275379) : New worker (275381) forked
Nov 22 03:57:46 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[275375]: [NOTICE]   (275379) : Loading success.
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.629 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.688 253465 INFO nova.compute.manager [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Took 8.15 seconds to build instance.
Nov 22 03:57:46 compute-0 nova_compute[253461]: 2025-11-22 03:57:46.705 253465 DEBUG oslo_concurrency.lockutils [None req-e7b9164a-65f1-4ae0-9036-f99d296c38c5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:47 compute-0 ceph-mon[75011]: osdmap e254: 3 total, 3 up, 3 in
Nov 22 03:57:47 compute-0 ceph-mon[75011]: pgmap v1239: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 108 op/s
Nov 22 03:57:47 compute-0 nova_compute[253461]: 2025-11-22 03:57:47.159 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:47 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:57:47.405 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:57:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Nov 22 03:57:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Nov 22 03:57:48 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Nov 22 03:57:48 compute-0 nova_compute[253461]: 2025-11-22 03:57:48.180 253465 DEBUG nova.compute.manager [req-c968a52d-5636-49cc-ad4d-5bec9029564a req-fe8fff25-11fa-4c27-800e-1506f41713ee f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Received event network-vif-plugged-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:57:48 compute-0 nova_compute[253461]: 2025-11-22 03:57:48.181 253465 DEBUG oslo_concurrency.lockutils [req-c968a52d-5636-49cc-ad4d-5bec9029564a req-fe8fff25-11fa-4c27-800e-1506f41713ee f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:57:48 compute-0 nova_compute[253461]: 2025-11-22 03:57:48.182 253465 DEBUG oslo_concurrency.lockutils [req-c968a52d-5636-49cc-ad4d-5bec9029564a req-fe8fff25-11fa-4c27-800e-1506f41713ee f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:57:48 compute-0 nova_compute[253461]: 2025-11-22 03:57:48.182 253465 DEBUG oslo_concurrency.lockutils [req-c968a52d-5636-49cc-ad4d-5bec9029564a req-fe8fff25-11fa-4c27-800e-1506f41713ee f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:57:48 compute-0 nova_compute[253461]: 2025-11-22 03:57:48.184 253465 DEBUG nova.compute.manager [req-c968a52d-5636-49cc-ad4d-5bec9029564a req-fe8fff25-11fa-4c27-800e-1506f41713ee f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] No waiting events found dispatching network-vif-plugged-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:57:48 compute-0 nova_compute[253461]: 2025-11-22 03:57:48.184 253465 WARNING nova.compute.manager [req-c968a52d-5636-49cc-ad4d-5bec9029564a req-fe8fff25-11fa-4c27-800e-1506f41713ee f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Received unexpected event network-vif-plugged-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 for instance with vm_state active and task_state None.
Nov 22 03:57:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 148 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 85 op/s
Nov 22 03:57:48 compute-0 ovn_controller[152691]: 2025-11-22T03:57:48Z|00117|binding|INFO|Releasing lport 2af56aef-09c6-4f74-ad59-cabe02948eac from this chassis (sb_readonly=0)
Nov 22 03:57:48 compute-0 nova_compute[253461]: 2025-11-22 03:57:48.490 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:48 compute-0 NetworkManager[48916]: <info>  [1763783868.4913] manager: (patch-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Nov 22 03:57:48 compute-0 NetworkManager[48916]: <info>  [1763783868.4926] manager: (patch-br-int-to-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 22 03:57:48 compute-0 nova_compute[253461]: 2025-11-22 03:57:48.569 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:48 compute-0 ovn_controller[152691]: 2025-11-22T03:57:48Z|00118|binding|INFO|Releasing lport 2af56aef-09c6-4f74-ad59-cabe02948eac from this chassis (sb_readonly=0)
Nov 22 03:57:48 compute-0 nova_compute[253461]: 2025-11-22 03:57:48.575 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:49 compute-0 ceph-mon[75011]: osdmap e255: 3 total, 3 up, 3 in
Nov 22 03:57:49 compute-0 ceph-mon[75011]: pgmap v1241: 305 pgs: 305 active+clean; 148 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.1 MiB/s wr, 85 op/s
Nov 22 03:57:49 compute-0 nova_compute[253461]: 2025-11-22 03:57:49.513 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345134376' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345134376' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1630640879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1630640879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 196 op/s
Nov 22 03:57:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/345134376' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/345134376' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1630640879' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1630640879' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:50 compute-0 nova_compute[253461]: 2025-11-22 03:57:50.330 253465 DEBUG nova.compute.manager [req-890c050c-7402-419f-85f2-f3768ece6f1c req-f5ce1d05-5505-4c50-8a0e-3416a4f64f05 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Received event network-changed-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:57:50 compute-0 nova_compute[253461]: 2025-11-22 03:57:50.330 253465 DEBUG nova.compute.manager [req-890c050c-7402-419f-85f2-f3768ece6f1c req-f5ce1d05-5505-4c50-8a0e-3416a4f64f05 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Refreshing instance network info cache due to event network-changed-efd1a6a8-37bb-4721-9db8-ab78b987ebb0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:57:50 compute-0 nova_compute[253461]: 2025-11-22 03:57:50.331 253465 DEBUG oslo_concurrency.lockutils [req-890c050c-7402-419f-85f2-f3768ece6f1c req-f5ce1d05-5505-4c50-8a0e-3416a4f64f05 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-aaadb0bf-8f4d-4eb7-9688-60999c8129dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:57:50 compute-0 nova_compute[253461]: 2025-11-22 03:57:50.331 253465 DEBUG oslo_concurrency.lockutils [req-890c050c-7402-419f-85f2-f3768ece6f1c req-f5ce1d05-5505-4c50-8a0e-3416a4f64f05 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-aaadb0bf-8f4d-4eb7-9688-60999c8129dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:57:50 compute-0 nova_compute[253461]: 2025-11-22 03:57:50.331 253465 DEBUG nova.network.neutron [req-890c050c-7402-419f-85f2-f3768ece6f1c req-f5ce1d05-5505-4c50-8a0e-3416a4f64f05 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Refreshing network info cache for port efd1a6a8-37bb-4721-9db8-ab78b987ebb0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:57:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:51 compute-0 ceph-mon[75011]: pgmap v1242: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.0 MiB/s wr, 196 op/s
Nov 22 03:57:52 compute-0 nova_compute[253461]: 2025-11-22 03:57:52.165 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1243: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 203 op/s
Nov 22 03:57:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:57:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2705353893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:57:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2705353893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:52 compute-0 ceph-mon[75011]: pgmap v1243: 305 pgs: 305 active+clean; 180 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 203 op/s
Nov 22 03:57:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2705353893' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:57:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2705353893' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:57:52 compute-0 nova_compute[253461]: 2025-11-22 03:57:52.703 253465 DEBUG nova.network.neutron [req-890c050c-7402-419f-85f2-f3768ece6f1c req-f5ce1d05-5505-4c50-8a0e-3416a4f64f05 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Updated VIF entry in instance network info cache for port efd1a6a8-37bb-4721-9db8-ab78b987ebb0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:57:52 compute-0 nova_compute[253461]: 2025-11-22 03:57:52.704 253465 DEBUG nova.network.neutron [req-890c050c-7402-419f-85f2-f3768ece6f1c req-f5ce1d05-5505-4c50-8a0e-3416a4f64f05 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Updating instance_info_cache with network_info: [{"id": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "address": "fa:16:3e:09:9d:7b", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd1a6a8-37", "ovs_interfaceid": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:57:52 compute-0 nova_compute[253461]: 2025-11-22 03:57:52.734 253465 DEBUG oslo_concurrency.lockutils [req-890c050c-7402-419f-85f2-f3768ece6f1c req-f5ce1d05-5505-4c50-8a0e-3416a4f64f05 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-aaadb0bf-8f4d-4eb7-9688-60999c8129dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:57:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 148 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 244 op/s
Nov 22 03:57:54 compute-0 nova_compute[253461]: 2025-11-22 03:57:54.518 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:55 compute-0 ceph-mon[75011]: pgmap v1244: 305 pgs: 305 active+clean; 148 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.6 MiB/s wr, 244 op/s
Nov 22 03:57:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:57:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Nov 22 03:57:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Nov 22 03:57:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Nov 22 03:57:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1246: 305 pgs: 305 active+clean; 148 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 223 op/s
Nov 22 03:57:56 compute-0 ceph-mon[75011]: osdmap e256: 3 total, 3 up, 3 in
Nov 22 03:57:56 compute-0 ceph-mon[75011]: pgmap v1246: 305 pgs: 305 active+clean; 148 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.6 MiB/s wr, 223 op/s
Nov 22 03:57:57 compute-0 nova_compute[253461]: 2025-11-22 03:57:57.163 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.3 MiB/s wr, 180 op/s
Nov 22 03:57:58 compute-0 nova_compute[253461]: 2025-11-22 03:57:58.586 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:57:59 compute-0 ceph-mon[75011]: pgmap v1247: 305 pgs: 305 active+clean; 134 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.3 MiB/s wr, 180 op/s
Nov 22 03:57:59 compute-0 ovn_controller[152691]: 2025-11-22T03:57:59Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:9d:7b 10.100.0.3
Nov 22 03:57:59 compute-0 ovn_controller[152691]: 2025-11-22T03:57:59Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:9d:7b 10.100.0.3
Nov 22 03:57:59 compute-0 nova_compute[253461]: 2025-11-22 03:57:59.519 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 145 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Nov 22 03:58:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3058877627' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3058877627' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:01 compute-0 ceph-mon[75011]: pgmap v1248: 305 pgs: 305 active+clean; 145 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Nov 22 03:58:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3058877627' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3058877627' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:01 compute-0 podman[275392]: 2025-11-22 03:58:01.424988644 +0000 UTC m=+0.097633716 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 03:58:01 compute-0 nova_compute[253461]: 2025-11-22 03:58:01.453 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:02 compute-0 nova_compute[253461]: 2025-11-22 03:58:02.167 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 159 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 241 KiB/s rd, 2.5 MiB/s wr, 97 op/s
Nov 22 03:58:03 compute-0 ceph-mon[75011]: pgmap v1249: 305 pgs: 305 active+clean; 159 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 241 KiB/s rd, 2.5 MiB/s wr, 97 op/s
Nov 22 03:58:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 248 KiB/s rd, 2.6 MiB/s wr, 73 op/s
Nov 22 03:58:04 compute-0 nova_compute[253461]: 2025-11-22 03:58:04.522 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:05 compute-0 ceph-mon[75011]: pgmap v1250: 305 pgs: 305 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 248 KiB/s rd, 2.6 MiB/s wr, 73 op/s
Nov 22 03:58:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:58:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:58:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:58:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:58:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:58:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:58:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 2.5 MiB/s wr, 70 op/s
Nov 22 03:58:06 compute-0 nova_compute[253461]: 2025-11-22 03:58:06.247 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:07 compute-0 nova_compute[253461]: 2025-11-22 03:58:07.168 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:07 compute-0 ceph-mon[75011]: pgmap v1251: 305 pgs: 305 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 240 KiB/s rd, 2.5 MiB/s wr, 70 op/s
Nov 22 03:58:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 22 03:58:08 compute-0 nova_compute[253461]: 2025-11-22 03:58:08.294 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:09 compute-0 ceph-mon[75011]: pgmap v1252: 305 pgs: 305 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Nov 22 03:58:09 compute-0 nova_compute[253461]: 2025-11-22 03:58:09.524 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1844391935' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1844391935' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1253: 305 pgs: 305 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 22 03:58:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1844391935' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1844391935' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:10 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4079657907' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:10 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4079657907' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:11 compute-0 ceph-mon[75011]: pgmap v1253: 305 pgs: 305 active+clean; 167 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 207 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Nov 22 03:58:11 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4079657907' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:11 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4079657907' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:12 compute-0 nova_compute[253461]: 2025-11-22 03:58:12.171 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 167 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 691 KiB/s wr, 52 op/s
Nov 22 03:58:12 compute-0 ceph-mon[75011]: pgmap v1254: 305 pgs: 305 active+clean; 167 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 112 KiB/s rd, 691 KiB/s wr, 52 op/s
Nov 22 03:58:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2647267384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2647267384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:13 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2647267384' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:13 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2647267384' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 167 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 75 KiB/s wr, 53 op/s
Nov 22 03:58:14 compute-0 ceph-mon[75011]: pgmap v1255: 305 pgs: 305 active+clean; 167 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 75 KiB/s wr, 53 op/s
Nov 22 03:58:14 compute-0 nova_compute[253461]: 2025-11-22 03:58:14.481 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:14 compute-0 nova_compute[253461]: 2025-11-22 03:58:14.482 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:14 compute-0 nova_compute[253461]: 2025-11-22 03:58:14.506 253465 DEBUG nova.compute.manager [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 03:58:14 compute-0 nova_compute[253461]: 2025-11-22 03:58:14.528 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:14 compute-0 nova_compute[253461]: 2025-11-22 03:58:14.608 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:14 compute-0 nova_compute[253461]: 2025-11-22 03:58:14.609 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:14 compute-0 nova_compute[253461]: 2025-11-22 03:58:14.619 253465 DEBUG nova.virt.hardware [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 03:58:14 compute-0 nova_compute[253461]: 2025-11-22 03:58:14.620 253465 INFO nova.compute.claims [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Claim successful on node compute-0.ctlplane.example.com
Nov 22 03:58:14 compute-0 nova_compute[253461]: 2025-11-22 03:58:14.773 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:58:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2999441610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.233 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.243 253465 DEBUG nova.compute.provider_tree [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.265 253465 DEBUG nova.scheduler.client.report [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.296 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.297 253465 DEBUG nova.compute.manager [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.366 253465 DEBUG nova.compute.manager [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.366 253465 DEBUG nova.network.neutron [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 03:58:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2999441610' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.390 253465 INFO nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.408 253465 DEBUG nova.compute.manager [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.512 253465 DEBUG nova.compute.manager [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.513 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.515 253465 INFO nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Creating image(s)
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.540 253465 DEBUG nova.storage.rbd_utils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] rbd image 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.567 253465 DEBUG nova.storage.rbd_utils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] rbd image 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.599 253465 DEBUG nova.storage.rbd_utils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] rbd image 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.604 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.690 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.691 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.692 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.693 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.728 253465 DEBUG nova.storage.rbd_utils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] rbd image 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.733 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:15 compute-0 nova_compute[253461]: 2025-11-22 03:58:15.819 253465 DEBUG nova.policy [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '591897fd5144401c810090ba1c0bf667', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4426ce11629e407f98cae838e2e3e2cc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 03:58:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:16 compute-0 nova_compute[253461]: 2025-11-22 03:58:16.017 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:58:16 compute-0 nova_compute[253461]: 2025-11-22 03:58:16.082 253465 DEBUG nova.storage.rbd_utils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] resizing rbd image 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 03:58:16 compute-0 nova_compute[253461]: 2025-11-22 03:58:16.183 253465 DEBUG nova.objects.instance [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lazy-loading 'migration_context' on Instance uuid 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:58:16 compute-0 nova_compute[253461]: 2025-11-22 03:58:16.205 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 03:58:16 compute-0 nova_compute[253461]: 2025-11-22 03:58:16.205 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Ensure instance console log exists: /var/lib/nova/instances/18ad6aa8-f2c4-484c-82c5-d369b6f5af5f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 03:58:16 compute-0 nova_compute[253461]: 2025-11-22 03:58:16.206 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:16 compute-0 nova_compute[253461]: 2025-11-22 03:58:16.207 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:16 compute-0 nova_compute[253461]: 2025-11-22 03:58:16.208 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 167 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 15 KiB/s wr, 41 op/s
Nov 22 03:58:16 compute-0 ceph-mon[75011]: pgmap v1256: 305 pgs: 305 active+clean; 167 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 15 KiB/s wr, 41 op/s
Nov 22 03:58:16 compute-0 podman[275600]: 2025-11-22 03:58:16.409457786 +0000 UTC m=+0.078366760 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 03:58:16 compute-0 podman[275601]: 2025-11-22 03:58:16.473120769 +0000 UTC m=+0.135742779 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible)
Nov 22 03:58:16 compute-0 nova_compute[253461]: 2025-11-22 03:58:16.695 253465 DEBUG nova.network.neutron [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Successfully created port: 9aa4306f-f805-476d-840c-1580581292f0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 03:58:17 compute-0 nova_compute[253461]: 2025-11-22 03:58:17.173 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:17 compute-0 nova_compute[253461]: 2025-11-22 03:58:17.341 253465 DEBUG nova.network.neutron [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Successfully updated port: 9aa4306f-f805-476d-840c-1580581292f0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 03:58:17 compute-0 nova_compute[253461]: 2025-11-22 03:58:17.357 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Acquiring lock "refresh_cache-18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:58:17 compute-0 nova_compute[253461]: 2025-11-22 03:58:17.357 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Acquired lock "refresh_cache-18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:58:17 compute-0 nova_compute[253461]: 2025-11-22 03:58:17.358 253465 DEBUG nova.network.neutron [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 03:58:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Nov 22 03:58:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Nov 22 03:58:17 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Nov 22 03:58:17 compute-0 nova_compute[253461]: 2025-11-22 03:58:17.428 253465 DEBUG nova.compute.manager [req-27618141-022a-40d0-964c-661f3ae17710 req-52f3563f-8759-41c4-b015-decca760a940 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received event network-changed-9aa4306f-f805-476d-840c-1580581292f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:58:17 compute-0 nova_compute[253461]: 2025-11-22 03:58:17.429 253465 DEBUG nova.compute.manager [req-27618141-022a-40d0-964c-661f3ae17710 req-52f3563f-8759-41c4-b015-decca760a940 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Refreshing instance network info cache due to event network-changed-9aa4306f-f805-476d-840c-1580581292f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:58:17 compute-0 nova_compute[253461]: 2025-11-22 03:58:17.429 253465 DEBUG oslo_concurrency.lockutils [req-27618141-022a-40d0-964c-661f3ae17710 req-52f3563f-8759-41c4-b015-decca760a940 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:58:17 compute-0 nova_compute[253461]: 2025-11-22 03:58:17.489 253465 DEBUG nova.network.neutron [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 03:58:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 173 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 434 KiB/s wr, 54 op/s
Nov 22 03:58:18 compute-0 ceph-mon[75011]: osdmap e257: 3 total, 3 up, 3 in
Nov 22 03:58:18 compute-0 ceph-mon[75011]: pgmap v1258: 305 pgs: 305 active+clean; 173 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 434 KiB/s wr, 54 op/s
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.570 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.747 253465 DEBUG nova.network.neutron [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Updating instance_info_cache with network_info: [{"id": "9aa4306f-f805-476d-840c-1580581292f0", "address": "fa:16:3e:70:b9:45", "network": {"id": "dcb0f91f-b5dc-48b6-805a-0fe3231189f2", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2066429546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4426ce11629e407f98cae838e2e3e2cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aa4306f-f8", "ovs_interfaceid": "9aa4306f-f805-476d-840c-1580581292f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
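
The network_info payload logged above is plain JSON, so the interesting fields (device name, fixed IP, MTU) can be pulled out directly. A sketch over a trimmed copy of that payload:

    import json

    # Trimmed copy of the network_info payload logged above.
    network_info = json.loads("""
    [{"id": "9aa4306f-f805-476d-840c-1580581292f0",
      "address": "fa:16:3e:70:b9:45",
      "devname": "tap9aa4306f-f8",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.9",
                                        "type": "fixed", "version": 4}]}],
                  "meta": {"mtu": 1442, "tunneled": true}}}]
    """)

    vif = network_info[0]
    fixed_ip = vif["network"]["subnets"][0]["ips"][0]["address"]
    mtu = vif["network"]["meta"]["mtu"]
    print(vif["devname"], fixed_ip, mtu)  # tap9aa4306f-f8 10.100.0.9 1442
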
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.793 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Releasing lock "refresh_cache-18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.794 253465 DEBUG nova.compute.manager [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Instance network_info: |[{"id": "9aa4306f-f805-476d-840c-1580581292f0", "address": "fa:16:3e:70:b9:45", "network": {"id": "dcb0f91f-b5dc-48b6-805a-0fe3231189f2", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2066429546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4426ce11629e407f98cae838e2e3e2cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aa4306f-f8", "ovs_interfaceid": "9aa4306f-f805-476d-840c-1580581292f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.794 253465 DEBUG oslo_concurrency.lockutils [req-27618141-022a-40d0-964c-661f3ae17710 req-52f3563f-8759-41c4-b015-decca760a940 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.795 253465 DEBUG nova.network.neutron [req-27618141-022a-40d0-964c-661f3ae17710 req-52f3563f-8759-41c4-b015-decca760a940 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Refreshing network info cache for port 9aa4306f-f805-476d-840c-1580581292f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.801 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Start _get_guest_xml network_info=[{"id": "9aa4306f-f805-476d-840c-1580581292f0", "address": "fa:16:3e:70:b9:45", "network": {"id": "dcb0f91f-b5dc-48b6-805a-0fe3231189f2", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2066429546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4426ce11629e407f98cae838e2e3e2cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aa4306f-f8", "ovs_interfaceid": "9aa4306f-f805-476d-840c-1580581292f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.810 253465 WARNING nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.816 253465 DEBUG nova.virt.libvirt.host [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.817 253465 DEBUG nova.virt.libvirt.host [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.823 253465 DEBUG nova.virt.libvirt.host [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.824 253465 DEBUG nova.virt.libvirt.host [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.824 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.825 253465 DEBUG nova.virt.hardware [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.826 253465 DEBUG nova.virt.hardware [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.826 253465 DEBUG nova.virt.hardware [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.827 253465 DEBUG nova.virt.hardware [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.827 253465 DEBUG nova.virt.hardware [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.828 253465 DEBUG nova.virt.hardware [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.828 253465 DEBUG nova.virt.hardware [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.828 253465 DEBUG nova.virt.hardware [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.829 253465 DEBUG nova.virt.hardware [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.829 253465 DEBUG nova.virt.hardware [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.830 253465 DEBUG nova.virt.hardware [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
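
The topology lines above show nova reducing 1 vCPU under 65536:65536:65536 limits to the single candidate 1:1:1. Illustrative only (not nova's exact code), the enumeration amounts to listing the (sockets, cores, threads) triples whose product equals the vCPU count:

    # Illustrative only, not nova's exact algorithm: enumerate the
    # (sockets, cores, threads) triples whose product equals the vCPU
    # count, within the logged per-dimension limit of 65536.
    def possible_topologies(vcpus, limit=65536):
        for sockets in range(1, min(vcpus, limit) + 1):
            if vcpus % sockets:
                continue
            rest = vcpus // sockets
            for cores in range(1, min(rest, limit) + 1):
                if rest % cores:
                    continue
                threads = rest // cores
                if threads <= limit:
                    yield (sockets, cores, threads)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
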
Nov 22 03:58:19 compute-0 nova_compute[253461]: 2025-11-22 03:58:19.835 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 214 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Nov 22 03:58:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:58:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3737141581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.349 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
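
The "ceph mon dump" invocations above go through oslo.concurrency's processutils, which nova uses for its subprocess calls. A minimal equivalent using the exact command line from the log:

    import json

    from oslo_concurrency import processutils

    # processutils.execute() returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    mon_map = json.loads(out)
    print([mon["name"] for mon in mon_map["mons"]])
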
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.372 253465 DEBUG nova.storage.rbd_utils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] rbd image 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.376 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:58:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2329309337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.793 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.795 253465 DEBUG nova.virt.libvirt.vif [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:58:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-541169332',display_name='tempest-TestEncryptedCinderVolumes-server-541169332',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-541169332',id=11,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxjrGAjJ8fj1C8a62zTUzuU6GsxwTL3F2ybuXPp1Sl/wPUQoANOSNs3maBd4JEnOAKFKQej+9qnQoLgCjrw1UKq0wBLK7Lpk3EV+2Jz/99GGAoOyYRRBTCdklsTvE8+WA==',key_name='tempest-keypair-1526528794',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4426ce11629e407f98cae838e2e3e2cc',ramdisk_id='',reservation_id='r-sn7j743k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-268735114',owner_user_name='tempest-TestEncryptedCinderVolumes-268735114-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:58:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='591897fd5144401c810090ba1c0bf667',uuid=18ad6aa8-f2c4-484c-82c5-d369b6f5af5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9aa4306f-f805-476d-840c-1580581292f0", "address": "fa:16:3e:70:b9:45", "network": {"id": "dcb0f91f-b5dc-48b6-805a-0fe3231189f2", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2066429546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4426ce11629e407f98cae838e2e3e2cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aa4306f-f8", "ovs_interfaceid": "9aa4306f-f805-476d-840c-1580581292f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.795 253465 DEBUG nova.network.os_vif_util [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Converting VIF {"id": "9aa4306f-f805-476d-840c-1580581292f0", "address": "fa:16:3e:70:b9:45", "network": {"id": "dcb0f91f-b5dc-48b6-805a-0fe3231189f2", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2066429546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4426ce11629e407f98cae838e2e3e2cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aa4306f-f8", "ovs_interfaceid": "9aa4306f-f805-476d-840c-1580581292f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.796 253465 DEBUG nova.network.os_vif_util [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:b9:45,bridge_name='br-int',has_traffic_filtering=True,id=9aa4306f-f805-476d-840c-1580581292f0,network=Network(dcb0f91f-b5dc-48b6-805a-0fe3231189f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aa4306f-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
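
The "Converted object" line above is os-vif's versioned-object form of the Neutron port. A hand-built equivalent covering the fields shown in that repr (values copied from the log; field subset only, and the exact constructor kwargs are an assumption based on the repr):

    from os_vif.objects import network, vif

    PORT_ID = "9aa4306f-f805-476d-840c-1580581292f0"

    # Field values copied from the "Converted object" repr above.
    osvif_vif = vif.VIFOpenVSwitch(
        id=PORT_ID,
        address="fa:16:3e:70:b9:45",
        bridge_name="br-int",
        vif_name="tap9aa4306f-f8",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False,
        plugin="ovs",
        network=network.Network(id="dcb0f91f-b5dc-48b6-805a-0fe3231189f2"),
        port_profile=vif.VIFPortProfileOpenVSwitch(interface_id=PORT_ID),
    )
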
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.799 253465 DEBUG nova.objects.instance [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lazy-loading 'pci_devices' on Instance uuid 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:58:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.910 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] End _get_guest_xml xml=<domain type="kvm">
Nov 22 03:58:20 compute-0 nova_compute[253461]:   <uuid>18ad6aa8-f2c4-484c-82c5-d369b6f5af5f</uuid>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   <name>instance-0000000b</name>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   <metadata>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-541169332</nova:name>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 03:58:19</nova:creationTime>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 03:58:20 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 03:58:20 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 03:58:20 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 03:58:20 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 03:58:20 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 03:58:20 compute-0 nova_compute[253461]:         <nova:user uuid="591897fd5144401c810090ba1c0bf667">tempest-TestEncryptedCinderVolumes-268735114-project-member</nova:user>
Nov 22 03:58:20 compute-0 nova_compute[253461]:         <nova:project uuid="4426ce11629e407f98cae838e2e3e2cc">tempest-TestEncryptedCinderVolumes-268735114</nova:project>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 03:58:20 compute-0 nova_compute[253461]:         <nova:port uuid="9aa4306f-f805-476d-840c-1580581292f0">
Nov 22 03:58:20 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   </metadata>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <system>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <entry name="serial">18ad6aa8-f2c4-484c-82c5-d369b6f5af5f</entry>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <entry name="uuid">18ad6aa8-f2c4-484c-82c5-d369b6f5af5f</entry>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     </system>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   <os>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   </os>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   <features>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <apic/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   </features>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   </clock>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk">
Nov 22 03:58:20 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       </source>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:58:20 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk.config">
Nov 22 03:58:20 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       </source>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:58:20 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:70:b9:45"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <target dev="tap9aa4306f-f8"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/18ad6aa8-f2c4-484c-82c5-d369b6f5af5f/console.log" append="off"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     </serial>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <video>
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     </video>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 03:58:20 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 03:58:20 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 03:58:20 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:58:20 compute-0 nova_compute[253461]: </domain>
Nov 22 03:58:20 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
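
With the guest XML rendered, nova hands it to libvirt to define and boot the domain. A hedged sketch with the libvirt Python bindings (nova goes through its own host/guest wrappers rather than these literal calls; "domain.xml" is a hypothetical file holding the XML dumped above):

    import libvirt

    # Underlying libvirt-python calls; nova wraps these in its own
    # host/guest abstraction rather than calling them directly.
    conn = libvirt.open("qemu:///system")
    with open("domain.xml") as f:   # hypothetical file holding the XML
        xml = f.read()
    dom = conn.defineXML(xml)       # persist the domain definition
    dom.create()                    # start the guest
    print(dom.name(), dom.UUIDString())
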
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.913 253465 DEBUG nova.compute.manager [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Preparing to wait for external event network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.914 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.915 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.915 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.917 253465 DEBUG nova.virt.libvirt.vif [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:58:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-541169332',display_name='tempest-TestEncryptedCinderVolumes-server-541169332',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-541169332',id=11,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxjrGAjJ8fj1C8a62zTUzuU6GsxwTL3F2ybuXPp1Sl/wPUQoANOSNs3maBd4JEnOAKFKQej+9qnQoLgCjrw1UKq0wBLK7Lpk3EV+2Jz/99GGAoOyYRRBTCdklsTvE8+WA==',key_name='tempest-keypair-1526528794',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4426ce11629e407f98cae838e2e3e2cc',ramdisk_id='',reservation_id='r-sn7j743k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-268735114',owner_user_name='tempest-TestEncryptedCinderVolumes-268735114-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:58:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='591897fd5144401c810090ba1c0bf667',uuid=18ad6aa8-f2c4-484c-82c5-d369b6f5af5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9aa4306f-f805-476d-840c-1580581292f0", "address": "fa:16:3e:70:b9:45", "network": {"id": "dcb0f91f-b5dc-48b6-805a-0fe3231189f2", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2066429546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4426ce11629e407f98cae838e2e3e2cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aa4306f-f8", "ovs_interfaceid": "9aa4306f-f805-476d-840c-1580581292f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.917 253465 DEBUG nova.network.os_vif_util [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Converting VIF {"id": "9aa4306f-f805-476d-840c-1580581292f0", "address": "fa:16:3e:70:b9:45", "network": {"id": "dcb0f91f-b5dc-48b6-805a-0fe3231189f2", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2066429546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4426ce11629e407f98cae838e2e3e2cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aa4306f-f8", "ovs_interfaceid": "9aa4306f-f805-476d-840c-1580581292f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.918 253465 DEBUG nova.network.os_vif_util [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:b9:45,bridge_name='br-int',has_traffic_filtering=True,id=9aa4306f-f805-476d-840c-1580581292f0,network=Network(dcb0f91f-b5dc-48b6-805a-0fe3231189f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aa4306f-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.919 253465 DEBUG os_vif [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b9:45,bridge_name='br-int',has_traffic_filtering=True,id=9aa4306f-f805-476d-840c-1580581292f0,network=Network(dcb0f91f-b5dc-48b6-805a-0fe3231189f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aa4306f-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.923 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.924 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.925 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.929 253465 DEBUG oslo_concurrency.lockutils [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.930 253465 DEBUG oslo_concurrency.lockutils [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.934 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.934 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9aa4306f-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.935 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9aa4306f-f8, col_values=(('external_ids', {'iface-id': '9aa4306f-f805-476d-840c-1580581292f0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:b9:45', 'vm-uuid': '18ad6aa8-f2c4-484c-82c5-d369b6f5af5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.937 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:20 compute-0 NetworkManager[48916]: <info>  [1763783900.9386] manager: (tap9aa4306f-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.940 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.950 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:20 compute-0 nova_compute[253461]: 2025-11-22 03:58:20.952 253465 INFO os_vif [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:b9:45,bridge_name='br-int',has_traffic_filtering=True,id=9aa4306f-f805-476d-840c-1580581292f0,network=Network(dcb0f91f-b5dc-48b6-805a-0fe3231189f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aa4306f-f8')
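
The os-vif plug above executed as one ovsdbapp transaction (AddPortCommand plus DbSetCommand, logged at 03:58:20.934-935). A sketch of the same transaction through ovsdbapp's public API, assuming a local ovsdb-server socket at /run/openvswitch/db.sock:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction, mirroring AddPortCommand + DbSetCommand above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap9aa4306f-f8", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap9aa4306f-f8",
            ("external_ids", {
                "iface-id": "9aa4306f-f805-476d-840c-1580581292f0",
                "iface-status": "active",
                "attached-mac": "fa:16:3e:70:b9:45",
                "vm-uuid": "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f"})))
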
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.030 253465 DEBUG nova.objects.instance [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lazy-loading 'flavor' on Instance uuid aaadb0bf-8f4d-4eb7-9688-60999c8129dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.113 253465 INFO nova.virt.libvirt.driver [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Ignoring supplied device name: /dev/vdb
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.172 253465 DEBUG oslo_concurrency.lockutils [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.175 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.176 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.176 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] No VIF found with MAC fa:16:3e:70:b9:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.177 253465 INFO nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Using config drive
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.208 253465 DEBUG nova.storage.rbd_utils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] rbd image 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
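
The "rbd image ... does not exist" probe above is how nova's rbd_utils discovers it must build a fresh config-drive image. A sketch of the same existence check with the Python rbd/rados bindings (pool and image names from the log; the client name matches the auth element in the guest XML above):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        # Opening a nonexistent image raises rbd.ImageNotFound, which is
        # the case the log line above reports.
        with rbd.Image(ioctx, "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk.config"):
            print("image exists")
    except rbd.ImageNotFound:
        print("image does not exist")
    finally:
        ioctx.close()
        cluster.shutdown()
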
Nov 22 03:58:21 compute-0 ceph-mon[75011]: pgmap v1259: 305 pgs: 305 active+clean; 214 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Nov 22 03:58:21 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3737141581' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:58:21 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2329309337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.554 253465 DEBUG oslo_concurrency.lockutils [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.555 253465 DEBUG oslo_concurrency.lockutils [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.555 253465 INFO nova.compute.manager [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Attaching volume 4b8d7c5d-90e5-4caa-87aa-7177962ed2da to /dev/vdb
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.798 253465 DEBUG os_brick.utils [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.800 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.817 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.817 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[17ec2b79-9b96-4177-9d68-2a612811210f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.819 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.831 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.831 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[e401880d-0893-4603-abb0-d129205696a2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.835 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.848 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.849 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[3917b178-8721-4677-ace0-eccb069dcfa3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.851 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[f7b7ff1a-66fb-44e6-8777-70711498e240]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.852 253465 DEBUG oslo_concurrency.processutils [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.889 253465 DEBUG oslo_concurrency.processutils [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.893 253465 DEBUG os_brick.initiator.connectors.lightos [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.894 253465 DEBUG os_brick.initiator.connectors.lightos [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.894 253465 DEBUG os_brick.initiator.connectors.lightos [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.895 253465 DEBUG os_brick.utils [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 03:58:21 compute-0 nova_compute[253461]: 2025-11-22 03:58:21.896 253465 DEBUG nova.virt.block_device [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Updating existing volume attachment record: 4f201155-cd59-4001-a2e2-9e748ec21598 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.174 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 214 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.522 253465 INFO nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Creating config drive at /var/lib/nova/instances/18ad6aa8-f2c4-484c-82c5-d369b6f5af5f/disk.config
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.533 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/18ad6aa8-f2c4-484c-82c5-d369b6f5af5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiizya4su execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.572 253465 DEBUG nova.network.neutron [req-27618141-022a-40d0-964c-661f3ae17710 req-52f3563f-8759-41c4-b015-decca760a940 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Updated VIF entry in instance network info cache for port 9aa4306f-f805-476d-840c-1580581292f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.574 253465 DEBUG nova.network.neutron [req-27618141-022a-40d0-964c-661f3ae17710 req-52f3563f-8759-41c4-b015-decca760a940 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Updating instance_info_cache with network_info: [{"id": "9aa4306f-f805-476d-840c-1580581292f0", "address": "fa:16:3e:70:b9:45", "network": {"id": "dcb0f91f-b5dc-48b6-805a-0fe3231189f2", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2066429546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4426ce11629e407f98cae838e2e3e2cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aa4306f-f8", "ovs_interfaceid": "9aa4306f-f805-476d-840c-1580581292f0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:58:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:58:22 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3621626449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.596 253465 DEBUG oslo_concurrency.lockutils [req-27618141-022a-40d0-964c-661f3ae17710 req-52f3563f-8759-41c4-b015-decca760a940 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.685 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/18ad6aa8-f2c4-484c-82c5-d369b6f5af5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiizya4su" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.726 253465 DEBUG nova.storage.rbd_utils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] rbd image 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.731 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/18ad6aa8-f2c4-484c-82c5-d369b6f5af5f/disk.config 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.773 253465 DEBUG nova.objects.instance [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lazy-loading 'flavor' on Instance uuid aaadb0bf-8f4d-4eb7-9688-60999c8129dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.804 253465 DEBUG nova.virt.libvirt.driver [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Attempting to attach volume 4b8d7c5d-90e5-4caa-87aa-7177962ed2da with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.810 253465 DEBUG nova.virt.libvirt.guest [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 03:58:22 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:58:22 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-4b8d7c5d-90e5-4caa-87aa-7177962ed2da">
Nov 22 03:58:22 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:58:22 compute-0 nova_compute[253461]:   </source>
Nov 22 03:58:22 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 03:58:22 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:58:22 compute-0 nova_compute[253461]:   </auth>
Nov 22 03:58:22 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:58:22 compute-0 nova_compute[253461]:   <serial>4b8d7c5d-90e5-4caa-87aa-7177962ed2da</serial>
Nov 22 03:58:22 compute-0 nova_compute[253461]: </disk>
Nov 22 03:58:22 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.941 253465 DEBUG oslo_concurrency.processutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/18ad6aa8-f2c4-484c-82c5-d369b6f5af5f/disk.config 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.209s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.942 253465 INFO nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Deleting local config drive /var/lib/nova/instances/18ad6aa8-f2c4-484c-82c5-d369b6f5af5f/disk.config because it was imported into RBD.
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.953 253465 DEBUG nova.virt.libvirt.driver [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.953 253465 DEBUG nova.virt.libvirt.driver [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.954 253465 DEBUG nova.virt.libvirt.driver [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:58:22 compute-0 nova_compute[253461]: 2025-11-22 03:58:22.954 253465 DEBUG nova.virt.libvirt.driver [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] No VIF found with MAC fa:16:3e:09:9d:7b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:58:23 compute-0 kernel: tap9aa4306f-f8: entered promiscuous mode
Nov 22 03:58:23 compute-0 NetworkManager[48916]: <info>  [1763783903.0040] manager: (tap9aa4306f-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Nov 22 03:58:23 compute-0 ovn_controller[152691]: 2025-11-22T03:58:23Z|00119|binding|INFO|Claiming lport 9aa4306f-f805-476d-840c-1580581292f0 for this chassis.
Nov 22 03:58:23 compute-0 ovn_controller[152691]: 2025-11-22T03:58:23Z|00120|binding|INFO|9aa4306f-f805-476d-840c-1580581292f0: Claiming fa:16:3e:70:b9:45 10.100.0.9
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.007 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.010 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.011 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.011 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.014 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:b9:45 10.100.0.9'], port_security=['fa:16:3e:70:b9:45 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '18ad6aa8-f2c4-484c-82c5-d369b6f5af5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dcb0f91f-b5dc-48b6-805a-0fe3231189f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4426ce11629e407f98cae838e2e3e2cc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '61bae170-9bbe-46b6-9dcc-ab0c87d6de4f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b0dfdb4-bf10-486b-b28e-52cb821a7c4f, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=9aa4306f-f805-476d-840c-1580581292f0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.016 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 9aa4306f-f805-476d-840c-1580581292f0 in datapath dcb0f91f-b5dc-48b6-805a-0fe3231189f2 bound to our chassis
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.017 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network dcb0f91f-b5dc-48b6-805a-0fe3231189f2
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.034 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:23 compute-0 ovn_controller[152691]: 2025-11-22T03:58:23Z|00121|binding|INFO|Setting lport 9aa4306f-f805-476d-840c-1580581292f0 up in Southbound
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.037 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:23 compute-0 ovn_controller[152691]: 2025-11-22T03:58:23Z|00122|binding|INFO|Setting lport 9aa4306f-f805-476d-840c-1580581292f0 ovn-installed in OVS
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.039 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.038 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4a74127d-c3b5-4b90-9bd4-3e65d77b27d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.040 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdcb0f91f-b1 in ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.043 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdcb0f91f-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.043 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a74fa0-7493-4713-a188-0993c60d63ce]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 systemd-machined[215728]: New machine qemu-11-instance-0000000b.
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.045 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[7439d370-9b29-46c0-a8b0-907db2b9814f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Nov 22 03:58:23 compute-0 systemd-udevd[275808]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.060 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[1936552d-41e2-4ec3-99ab-9f39b1f04fd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 NetworkManager[48916]: <info>  [1763783903.0768] device (tap9aa4306f-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:58:23 compute-0 NetworkManager[48916]: <info>  [1763783903.0779] device (tap9aa4306f-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.089 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[16d614b7-064f-476c-8d37-44c4cda64542]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.118 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[6a8e68b4-7ca9-4b3e-87cf-16906046bc0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.124 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6b100cea-a83a-43dd-8078-db5111c0b62c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 systemd-udevd[275811]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:58:23 compute-0 NetworkManager[48916]: <info>  [1763783903.1262] manager: (tapdcb0f91f-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.152 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[1776a015-47b7-46b9-9445-b7e38ddc0851]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.156 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[72c2f624-8908-4fb4-9e0a-6348056f1d6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.176 253465 DEBUG oslo_concurrency.lockutils [None req-ebe8bf8b-d5ac-4404-9ba1-a727e63419f5 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:23 compute-0 NetworkManager[48916]: <info>  [1763783903.1918] device (tapdcb0f91f-b0): carrier: link connected
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.200 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[7baddc1e-6483-4cc9-817f-47c1fde784f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.217 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[0c569d98-5950-49ed-b90e-3806baf97d56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdcb0f91f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:dc:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 415579, 'reachable_time': 18787, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275839, 'error': None, 'target': 'ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.242 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0d4861-7b73-40b3-8357-dd864958fb76]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe85:dc9e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 415579, 'tstamp': 415579}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275846, 'error': None, 'target': 'ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.271 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b2f65fa3-39de-4fd4-8aab-9082ea3ccdf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdcb0f91f-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:85:dc:9e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 415579, 'reachable_time': 18787, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275859, 'error': None, 'target': 'ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1387093524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1387093524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:23 compute-0 ceph-mon[75011]: pgmap v1260: 305 pgs: 305 active+clean; 214 MiB data, 358 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Nov 22 03:58:23 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3621626449' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:58:23 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1387093524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:23 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1387093524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.316 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd9124f-09ee-40d3-8555-bb64c8081ac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.343 253465 DEBUG nova.compute.manager [req-55d6853a-ac69-44a5-a4c6-ec4277cf0d22 req-e337c752-7cfb-4241-b02f-a0399f1346ab f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received event network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.344 253465 DEBUG oslo_concurrency.lockutils [req-55d6853a-ac69-44a5-a4c6-ec4277cf0d22 req-e337c752-7cfb-4241-b02f-a0399f1346ab f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.344 253465 DEBUG oslo_concurrency.lockutils [req-55d6853a-ac69-44a5-a4c6-ec4277cf0d22 req-e337c752-7cfb-4241-b02f-a0399f1346ab f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.344 253465 DEBUG oslo_concurrency.lockutils [req-55d6853a-ac69-44a5-a4c6-ec4277cf0d22 req-e337c752-7cfb-4241-b02f-a0399f1346ab f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.344 253465 DEBUG nova.compute.manager [req-55d6853a-ac69-44a5-a4c6-ec4277cf0d22 req-e337c752-7cfb-4241-b02f-a0399f1346ab f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Processing event network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.379 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783903.379297, 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.380 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] VM Started (Lifecycle Event)
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.382 253465 DEBUG nova.compute.manager [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.383 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b00100f8-7d07-436b-8865-237c54eaa829]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.386 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdcb0f91f-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.386 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.387 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.388 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdcb0f91f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.389 253465 INFO nova.virt.libvirt.driver [-] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Instance spawned successfully.
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.389 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 03:58:23 compute-0 NetworkManager[48916]: <info>  [1763783903.3915] manager: (tapdcb0f91f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.391 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:23 compute-0 kernel: tapdcb0f91f-b0: entered promiscuous mode
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.394 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.397 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.397 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdcb0f91f-b0, col_values=(('external_ids', {'iface-id': 'b589b132-ca33-4c3a-b65c-a4a6033a69d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.399 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:23 compute-0 ovn_controller[152691]: 2025-11-22T03:58:23Z|00123|binding|INFO|Releasing lport b589b132-ca33-4c3a-b65c-a4a6033a69d2 from this chassis (sb_readonly=0)
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.403 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.406 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.406 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.406 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.407 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.407 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.407 253465 DEBUG nova.virt.libvirt.driver [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.420 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.422 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/dcb0f91f-b5dc-48b6-805a-0fe3231189f2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/dcb0f91f-b5dc-48b6-805a-0fe3231189f2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.422 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2accb6d1-6ead-49e3-83c1-68bc350987d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.423 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: global
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-dcb0f91f-b5dc-48b6-805a-0fe3231189f2
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/dcb0f91f-b5dc-48b6-805a-0fe3231189f2.pid.haproxy
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID dcb0f91f-b5dc-48b6-805a-0fe3231189f2
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.424 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2', 'env', 'PROCESS_TAG=haproxy-dcb0f91f-b5dc-48b6-805a-0fe3231189f2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/dcb0f91f-b5dc-48b6-805a-0fe3231189f2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.424 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.431 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.431 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783903.3815813, 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.432 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] VM Paused (Lifecycle Event)
Nov 22 03:58:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:23.439 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.460 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.464 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783903.3858707, 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.464 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] VM Resumed (Lifecycle Event)
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.469 253465 INFO nova.compute.manager [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Took 7.96 seconds to spawn the instance on the hypervisor.
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.469 253465 DEBUG nova.compute.manager [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.492 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.494 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.521 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.535 253465 INFO nova.compute.manager [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Took 8.98 seconds to build instance.
Nov 22 03:58:23 compute-0 nova_compute[253461]: 2025-11-22 03:58:23.550 253465 DEBUG oslo_concurrency.lockutils [None req-75080c05-315d-4296-a063-b726dd1b7d8d 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:23 compute-0 podman[275914]: 2025-11-22 03:58:23.870181355 +0000 UTC m=+0.067787576 container create cfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 03:58:23 compute-0 podman[275914]: 2025-11-22 03:58:23.837647271 +0000 UTC m=+0.035253472 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:58:23 compute-0 systemd[1]: Started libpod-conmon-cfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c.scope.
Nov 22 03:58:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:58:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/696df3737fcdaa3df0196d9cc309edbc341c5a3f2ba7e99df235a43dd0188f85/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:23 compute-0 podman[275914]: 2025-11-22 03:58:23.990214454 +0000 UTC m=+0.187820705 container init cfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:58:23 compute-0 podman[275914]: 2025-11-22 03:58:23.997177089 +0000 UTC m=+0.194783270 container start cfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:58:24 compute-0 neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2[275929]: [NOTICE]   (275933) : New worker (275935) forked
Nov 22 03:58:24 compute-0 neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2[275929]: [NOTICE]   (275933) : Loading success.
Nov 22 03:58:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:24.106 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 03:58:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 305 active+clean; 214 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 300 KiB/s rd, 2.2 MiB/s wr, 117 op/s
Nov 22 03:58:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Nov 22 03:58:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Nov 22 03:58:24 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Nov 22 03:58:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Nov 22 03:58:25 compute-0 ceph-mon[75011]: pgmap v1261: 305 pgs: 305 active+clean; 214 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 300 KiB/s rd, 2.2 MiB/s wr, 117 op/s
Nov 22 03:58:25 compute-0 ceph-mon[75011]: osdmap e258: 3 total, 3 up, 3 in
Nov 22 03:58:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Nov 22 03:58:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Nov 22 03:58:25 compute-0 nova_compute[253461]: 2025-11-22 03:58:25.479 253465 DEBUG nova.compute.manager [req-1a2c9bba-1121-4486-9972-72c4f94ef2b4 req-27918e2b-77b8-4376-82bc-0a0d2e45086c f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received event network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:58:25 compute-0 nova_compute[253461]: 2025-11-22 03:58:25.479 253465 DEBUG oslo_concurrency.lockutils [req-1a2c9bba-1121-4486-9972-72c4f94ef2b4 req-27918e2b-77b8-4376-82bc-0a0d2e45086c f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:25 compute-0 nova_compute[253461]: 2025-11-22 03:58:25.479 253465 DEBUG oslo_concurrency.lockutils [req-1a2c9bba-1121-4486-9972-72c4f94ef2b4 req-27918e2b-77b8-4376-82bc-0a0d2e45086c f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:25 compute-0 nova_compute[253461]: 2025-11-22 03:58:25.479 253465 DEBUG oslo_concurrency.lockutils [req-1a2c9bba-1121-4486-9972-72c4f94ef2b4 req-27918e2b-77b8-4376-82bc-0a0d2e45086c f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:25 compute-0 nova_compute[253461]: 2025-11-22 03:58:25.479 253465 DEBUG nova.compute.manager [req-1a2c9bba-1121-4486-9972-72c4f94ef2b4 req-27918e2b-77b8-4376-82bc-0a0d2e45086c f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] No waiting events found dispatching network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:58:25 compute-0 nova_compute[253461]: 2025-11-22 03:58:25.480 253465 WARNING nova.compute.manager [req-1a2c9bba-1121-4486-9972-72c4f94ef2b4 req-27918e2b-77b8-4376-82bc-0a0d2e45086c f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received unexpected event network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 for instance with vm_state active and task_state None.
Nov 22 03:58:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:25 compute-0 nova_compute[253461]: 2025-11-22 03:58:25.939 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 214 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 371 KiB/s rd, 2.2 MiB/s wr, 138 op/s
Nov 22 03:58:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Nov 22 03:58:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Nov 22 03:58:26 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Nov 22 03:58:26 compute-0 ceph-mon[75011]: osdmap e259: 3 total, 3 up, 3 in
Nov 22 03:58:27 compute-0 nova_compute[253461]: 2025-11-22 03:58:27.178 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Nov 22 03:58:27 compute-0 ceph-mon[75011]: pgmap v1264: 305 pgs: 305 active+clean; 214 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 371 KiB/s rd, 2.2 MiB/s wr, 138 op/s
Nov 22 03:58:27 compute-0 ceph-mon[75011]: osdmap e260: 3 total, 3 up, 3 in
Nov 22 03:58:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Nov 22 03:58:27 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Nov 22 03:58:27 compute-0 nova_compute[253461]: 2025-11-22 03:58:27.604 253465 DEBUG nova.compute.manager [req-c63772b3-2be7-4110-88de-a9c3b7275a54 req-6fdc23b3-4380-4556-8991-33d53908a08f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received event network-changed-9aa4306f-f805-476d-840c-1580581292f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:58:27 compute-0 nova_compute[253461]: 2025-11-22 03:58:27.605 253465 DEBUG nova.compute.manager [req-c63772b3-2be7-4110-88de-a9c3b7275a54 req-6fdc23b3-4380-4556-8991-33d53908a08f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Refreshing instance network info cache due to event network-changed-9aa4306f-f805-476d-840c-1580581292f0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:58:27 compute-0 nova_compute[253461]: 2025-11-22 03:58:27.605 253465 DEBUG oslo_concurrency.lockutils [req-c63772b3-2be7-4110-88de-a9c3b7275a54 req-6fdc23b3-4380-4556-8991-33d53908a08f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:58:27 compute-0 nova_compute[253461]: 2025-11-22 03:58:27.605 253465 DEBUG oslo_concurrency.lockutils [req-c63772b3-2be7-4110-88de-a9c3b7275a54 req-6fdc23b3-4380-4556-8991-33d53908a08f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:58:27 compute-0 nova_compute[253461]: 2025-11-22 03:58:27.605 253465 DEBUG nova.network.neutron [req-c63772b3-2be7-4110-88de-a9c3b7275a54 req-6fdc23b3-4380-4556-8991-33d53908a08f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Refreshing network info cache for port 9aa4306f-f805-476d-840c-1580581292f0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:58:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 214 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 897 KiB/s rd, 4.0 KiB/s wr, 100 op/s
Nov 22 03:58:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Nov 22 03:58:28 compute-0 ceph-mon[75011]: osdmap e261: 3 total, 3 up, 3 in
Nov 22 03:58:28 compute-0 ceph-mon[75011]: pgmap v1267: 305 pgs: 305 active+clean; 214 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 897 KiB/s rd, 4.0 KiB/s wr, 100 op/s
Nov 22 03:58:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Nov 22 03:58:28 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Nov 22 03:58:28 compute-0 nova_compute[253461]: 2025-11-22 03:58:28.582 253465 DEBUG nova.network.neutron [req-c63772b3-2be7-4110-88de-a9c3b7275a54 req-6fdc23b3-4380-4556-8991-33d53908a08f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Updated VIF entry in instance network info cache for port 9aa4306f-f805-476d-840c-1580581292f0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:58:28 compute-0 nova_compute[253461]: 2025-11-22 03:58:28.584 253465 DEBUG nova.network.neutron [req-c63772b3-2be7-4110-88de-a9c3b7275a54 req-6fdc23b3-4380-4556-8991-33d53908a08f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Updating instance_info_cache with network_info: [{"id": "9aa4306f-f805-476d-840c-1580581292f0", "address": "fa:16:3e:70:b9:45", "network": {"id": "dcb0f91f-b5dc-48b6-805a-0fe3231189f2", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2066429546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4426ce11629e407f98cae838e2e3e2cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aa4306f-f8", "ovs_interfaceid": "9aa4306f-f805-476d-840c-1580581292f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:58:28 compute-0 nova_compute[253461]: 2025-11-22 03:58:28.602 253465 DEBUG oslo_concurrency.lockutils [req-c63772b3-2be7-4110-88de-a9c3b7275a54 req-6fdc23b3-4380-4556-8991-33d53908a08f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:58:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Nov 22 03:58:29 compute-0 ceph-mon[75011]: osdmap e262: 3 total, 3 up, 3 in
Nov 22 03:58:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Nov 22 03:58:29 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Nov 22 03:58:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 214 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 19 KiB/s wr, 425 op/s
Nov 22 03:58:30 compute-0 ceph-mon[75011]: osdmap e263: 3 total, 3 up, 3 in
Nov 22 03:58:30 compute-0 ceph-mon[75011]: pgmap v1270: 305 pgs: 305 active+clean; 214 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 19 KiB/s wr, 425 op/s
Nov 22 03:58:30 compute-0 sudo[275944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:58:30 compute-0 sudo[275944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:30 compute-0 sudo[275944]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Nov 22 03:58:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Nov 22 03:58:30 compute-0 sudo[275969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:58:30 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Nov 22 03:58:30 compute-0 sudo[275969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:30 compute-0 sudo[275969]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:30 compute-0 nova_compute[253461]: 2025-11-22 03:58:30.942 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:31 compute-0 sudo[275994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:58:31 compute-0 sudo[275994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:31 compute-0 sudo[275994]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:31 compute-0 sudo[276019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 03:58:31 compute-0 sudo[276019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:31.108 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:58:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:31 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2702601098' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:31 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2702601098' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:31 compute-0 podman[276089]: 2025-11-22 03:58:31.552295425 +0000 UTC m=+0.082296430 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Nov 22 03:58:31 compute-0 podman[276134]: 2025-11-22 03:58:31.706232102 +0000 UTC m=+0.083349923 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 03:58:31 compute-0 podman[276134]: 2025-11-22 03:58:31.818786347 +0000 UTC m=+0.195904188 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:58:31 compute-0 ceph-mon[75011]: osdmap e264: 3 total, 3 up, 3 in
Nov 22 03:58:31 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2702601098' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:31 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2702601098' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.180 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.191 253465 DEBUG oslo_concurrency.lockutils [None req-a522d1de-2e24-49e3-bea8-14ac3e432732 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.192 253465 DEBUG oslo_concurrency.lockutils [None req-a522d1de-2e24-49e3-bea8-14ac3e432732 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.210 253465 INFO nova.compute.manager [None req-a522d1de-2e24-49e3-bea8-14ac3e432732 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Detaching volume 4b8d7c5d-90e5-4caa-87aa-7177962ed2da
Nov 22 03:58:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 214 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 20 KiB/s wr, 343 op/s
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.365 253465 DEBUG oslo_concurrency.lockutils [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.394 253465 INFO nova.virt.block_device [None req-a522d1de-2e24-49e3-bea8-14ac3e432732 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Attempting to driver detach volume 4b8d7c5d-90e5-4caa-87aa-7177962ed2da from mountpoint /dev/vdb
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.410 253465 DEBUG nova.virt.libvirt.driver [None req-a522d1de-2e24-49e3-bea8-14ac3e432732 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Attempting to detach device vdb from instance aaadb0bf-8f4d-4eb7-9688-60999c8129dd from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.413 253465 DEBUG nova.virt.libvirt.guest [None req-a522d1de-2e24-49e3-bea8-14ac3e432732 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:58:32 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:58:32 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-4b8d7c5d-90e5-4caa-87aa-7177962ed2da">
Nov 22 03:58:32 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:58:32 compute-0 nova_compute[253461]:   </source>
Nov 22 03:58:32 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:58:32 compute-0 nova_compute[253461]:   <serial>4b8d7c5d-90e5-4caa-87aa-7177962ed2da</serial>
Nov 22 03:58:32 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:58:32 compute-0 nova_compute[253461]: </disk>
Nov 22 03:58:32 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.424 253465 INFO nova.virt.libvirt.driver [None req-a522d1de-2e24-49e3-bea8-14ac3e432732 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Successfully detached device vdb from instance aaadb0bf-8f4d-4eb7-9688-60999c8129dd from the persistent domain config.
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.425 253465 DEBUG nova.virt.libvirt.driver [None req-a522d1de-2e24-49e3-bea8-14ac3e432732 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance aaadb0bf-8f4d-4eb7-9688-60999c8129dd from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.426 253465 DEBUG nova.virt.libvirt.guest [None req-a522d1de-2e24-49e3-bea8-14ac3e432732 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:58:32 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:58:32 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-4b8d7c5d-90e5-4caa-87aa-7177962ed2da">
Nov 22 03:58:32 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:58:32 compute-0 nova_compute[253461]:   </source>
Nov 22 03:58:32 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:58:32 compute-0 nova_compute[253461]:   <serial>4b8d7c5d-90e5-4caa-87aa-7177962ed2da</serial>
Nov 22 03:58:32 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:58:32 compute-0 nova_compute[253461]: </disk>
Nov 22 03:58:32 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.448 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.565 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763783912.5646653, aaadb0bf-8f4d-4eb7-9688-60999c8129dd => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.567 253465 DEBUG nova.virt.libvirt.driver [None req-a522d1de-2e24-49e3-bea8-14ac3e432732 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance aaadb0bf-8f4d-4eb7-9688-60999c8129dd _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.571 253465 INFO nova.virt.libvirt.driver [None req-a522d1de-2e24-49e3-bea8-14ac3e432732 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Successfully detached device vdb from instance aaadb0bf-8f4d-4eb7-9688-60999c8129dd from the live domain config.
Nov 22 03:58:32 compute-0 sudo[276019]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:58:32 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:58:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:58:32 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:58:32 compute-0 sudo[276294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:58:32 compute-0 sudo[276294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:32 compute-0 sudo[276294]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.759 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "refresh_cache-18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.760 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquired lock "refresh_cache-18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.760 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.760 253465 DEBUG nova.objects.instance [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.795 253465 DEBUG nova.objects.instance [None req-a522d1de-2e24-49e3-bea8-14ac3e432732 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lazy-loading 'flavor' on Instance uuid aaadb0bf-8f4d-4eb7-9688-60999c8129dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:58:32 compute-0 sudo[276319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:58:32 compute-0 sudo[276319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:32 compute-0 sudo[276319]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.859 253465 DEBUG oslo_concurrency.lockutils [None req-a522d1de-2e24-49e3-bea8-14ac3e432732 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.861 253465 DEBUG oslo_concurrency.lockutils [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.862 253465 DEBUG oslo_concurrency.lockutils [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.863 253465 DEBUG oslo_concurrency.lockutils [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.864 253465 DEBUG oslo_concurrency.lockutils [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.866 253465 INFO nova.compute.manager [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Terminating instance
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.868 253465 DEBUG nova.compute.manager [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 03:58:32 compute-0 ceph-mon[75011]: pgmap v1272: 305 pgs: 305 active+clean; 214 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 20 KiB/s wr, 343 op/s
Nov 22 03:58:32 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:58:32 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:58:32 compute-0 kernel: tapefd1a6a8-37 (unregistering): left promiscuous mode
Nov 22 03:58:32 compute-0 NetworkManager[48916]: <info>  [1763783912.9282] device (tapefd1a6a8-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.942 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:32 compute-0 ovn_controller[152691]: 2025-11-22T03:58:32Z|00124|binding|INFO|Releasing lport efd1a6a8-37bb-4721-9db8-ab78b987ebb0 from this chassis (sb_readonly=0)
Nov 22 03:58:32 compute-0 ovn_controller[152691]: 2025-11-22T03:58:32Z|00125|binding|INFO|Setting lport efd1a6a8-37bb-4721-9db8-ab78b987ebb0 down in Southbound
Nov 22 03:58:32 compute-0 ovn_controller[152691]: 2025-11-22T03:58:32Z|00126|binding|INFO|Removing iface tapefd1a6a8-37 ovn-installed in OVS
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.946 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:32 compute-0 sudo[276344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:58:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:32.954 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:9d:7b 10.100.0.3'], port_security=['fa:16:3e:09:9d:7b 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'aaadb0bf-8f4d-4eb7-9688-60999c8129dd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5846275e26354bb095399d10c8b52285', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bdf45843-e259-4187-812e-2f8dc06dc437', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3e3d611-3c95-4b51-bc26-a179069ce7f3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=efd1a6a8-37bb-4721-9db8-ab78b987ebb0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:58:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:32.958 162689 INFO neutron.agent.ovn.metadata.agent [-] Port efd1a6a8-37bb-4721-9db8-ab78b987ebb0 in datapath 4c32f371-ff20-4759-bfb3-24316a8c7a57 unbound from our chassis
Nov 22 03:58:32 compute-0 sudo[276344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:32 compute-0 sudo[276344]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:32.964 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c32f371-ff20-4759-bfb3-24316a8c7a57, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:58:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:32.970 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[21962225-131f-431f-90f1-12ad64a70529]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:32.971 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57 namespace which is not needed anymore
Nov 22 03:58:32 compute-0 nova_compute[253461]: 2025-11-22 03:58:32.973 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:32 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 22 03:58:32 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 15.115s CPU time.
Nov 22 03:58:32 compute-0 systemd-machined[215728]: Machine qemu-10-instance-0000000a terminated.
Nov 22 03:58:33 compute-0 sudo[276373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:58:33 compute-0 sudo[276373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.110 253465 INFO nova.virt.libvirt.driver [-] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Instance destroyed successfully.
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.110 253465 DEBUG nova.objects.instance [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lazy-loading 'resources' on Instance uuid aaadb0bf-8f4d-4eb7-9688-60999c8129dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.128 253465 DEBUG nova.virt.libvirt.vif [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T03:57:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1790647099',display_name='tempest-VolumesSnapshotTestJSON-instance-1790647099',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1790647099',id=10,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNqrpIPGFLG6Qx4n5YxknoazZOZGTcxQjQwzckhjY+ySVgDzYzPRtC/1lU1gb8R0Aq/kYIiuFEqZqNOPYUHs/HYmhJFFwcFQWglGzDim0t9caZbWXc0Kf+g1y6udhWy48Q==',key_name='tempest-keypair-627494330',keypairs=<?>,launch_index=0,launched_at=2025-11-22T03:57:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5846275e26354bb095399d10c8b52285',ramdisk_id='',reservation_id='r-wgho1gzf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-724001677',owner_user_name='tempest-VolumesSnapshotTestJSON-724001677-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T03:57:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='323c39d407144b438e9fbcdc7c67710e',uuid=aaadb0bf-8f4d-4eb7-9688-60999c8129dd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "address": "fa:16:3e:09:9d:7b", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd1a6a8-37", "ovs_interfaceid": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.128 253465 DEBUG nova.network.os_vif_util [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Converting VIF {"id": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "address": "fa:16:3e:09:9d:7b", "network": {"id": "4c32f371-ff20-4759-bfb3-24316a8c7a57", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-34683241-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5846275e26354bb095399d10c8b52285", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapefd1a6a8-37", "ovs_interfaceid": "efd1a6a8-37bb-4721-9db8-ab78b987ebb0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.130 253465 DEBUG nova.network.os_vif_util [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:9d:7b,bridge_name='br-int',has_traffic_filtering=True,id=efd1a6a8-37bb-4721-9db8-ab78b987ebb0,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd1a6a8-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.130 253465 DEBUG os_vif [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:9d:7b,bridge_name='br-int',has_traffic_filtering=True,id=efd1a6a8-37bb-4721-9db8-ab78b987ebb0,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd1a6a8-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 03:58:33 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[275375]: [NOTICE]   (275379) : haproxy version is 2.8.14-c23fe91
Nov 22 03:58:33 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[275375]: [NOTICE]   (275379) : path to executable is /usr/sbin/haproxy
Nov 22 03:58:33 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[275375]: [WARNING]  (275379) : Exiting Master process...
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.133 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:33 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[275375]: [ALERT]    (275379) : Current worker (275381) exited with code 143 (Terminated)
Nov 22 03:58:33 compute-0 neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57[275375]: [WARNING]  (275379) : All workers exited. Exiting... (0)
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.136 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapefd1a6a8-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:58:33 compute-0 systemd[1]: libpod-70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3.scope: Deactivated successfully.
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.141 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.144 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:58:33 compute-0 podman[276416]: 2025-11-22 03:58:33.145006094 +0000 UTC m=+0.068780836 container died 70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118)
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.146 253465 INFO os_vif [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:9d:7b,bridge_name='br-int',has_traffic_filtering=True,id=efd1a6a8-37bb-4721-9db8-ab78b987ebb0,network=Network(4c32f371-ff20-4759-bfb3-24316a8c7a57),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapefd1a6a8-37')
Nov 22 03:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3-userdata-shm.mount: Deactivated successfully.
Nov 22 03:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-974ef768f35f378d8906cdb8bf2062d1c28ab9f9e2e182bc22a8eace3e6109cb-merged.mount: Deactivated successfully.
Nov 22 03:58:33 compute-0 podman[276416]: 2025-11-22 03:58:33.191839279 +0000 UTC m=+0.115614021 container cleanup 70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 03:58:33 compute-0 systemd[1]: libpod-conmon-70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3.scope: Deactivated successfully.
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.239 253465 DEBUG nova.compute.manager [req-7242df78-db4b-4f69-bddc-2e380a5c9bf4 req-8d087a03-0a10-4ccf-8224-8b452ae1a153 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Received event network-vif-unplugged-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.241 253465 DEBUG oslo_concurrency.lockutils [req-7242df78-db4b-4f69-bddc-2e380a5c9bf4 req-8d087a03-0a10-4ccf-8224-8b452ae1a153 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.241 253465 DEBUG oslo_concurrency.lockutils [req-7242df78-db4b-4f69-bddc-2e380a5c9bf4 req-8d087a03-0a10-4ccf-8224-8b452ae1a153 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.242 253465 DEBUG oslo_concurrency.lockutils [req-7242df78-db4b-4f69-bddc-2e380a5c9bf4 req-8d087a03-0a10-4ccf-8224-8b452ae1a153 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.242 253465 DEBUG nova.compute.manager [req-7242df78-db4b-4f69-bddc-2e380a5c9bf4 req-8d087a03-0a10-4ccf-8224-8b452ae1a153 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] No waiting events found dispatching network-vif-unplugged-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.243 253465 DEBUG nova.compute.manager [req-7242df78-db4b-4f69-bddc-2e380a5c9bf4 req-8d087a03-0a10-4ccf-8224-8b452ae1a153 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Received event network-vif-unplugged-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 03:58:33 compute-0 podman[276480]: 2025-11-22 03:58:33.294048032 +0000 UTC m=+0.050752345 container remove 70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 03:58:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:33.303 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c1077fd9-43f6-4790-8540-8c91c4c52dcb]: (4, ('Sat Nov 22 03:58:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57 (70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3)\n70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3\nSat Nov 22 03:58:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57 (70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3)\n70a0836ed0623dae1cfb973e9b616843c8975d07d5f22ae20367e01fccd737e3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:33.305 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[144293e5-eb22-4e0e-bd67-14f61c6ace3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:33.306 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c32f371-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:58:33 compute-0 kernel: tap4c32f371-f0: left promiscuous mode
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.307 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.335 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.336 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:33.339 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab57825-06b2-497e-893f-6633bab54bdf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:33.353 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[422d02c7-462f-4877-98a6-7640c37d2487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:33.354 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[89d54f46-6f3b-4f4b-bc92-a2b84a7cc0e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:33.369 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[34b3b9db-5efb-46c2-9299-5c5f5753993e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411845, 'reachable_time': 43651, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276499, 'error': None, 'target': 'ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d4c32f371\x2dff20\x2d4759\x2dbfb3\x2d24316a8c7a57.mount: Deactivated successfully.
Nov 22 03:58:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:33.372 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c32f371-ff20-4759-bfb3-24316a8c7a57 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 03:58:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:58:33.372 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[33fab6aa-7683-4848-8c61-0cb1dcdb973c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:58:33 compute-0 sudo[276373]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:58:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:58:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:58:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:58:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:58:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:58:33 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 095dabe1-4873-4cf5-b7ec-af5a8222a734 does not exist
Nov 22 03:58:33 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev e92a6bc8-881d-465e-8eed-a37bbbdcc80d does not exist
Nov 22 03:58:33 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev fc5f7316-64cb-4f1f-a97e-d6af2d5d1f69 does not exist
Nov 22 03:58:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:58:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:58:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:58:33 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:58:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:58:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.647 253465 INFO nova.virt.libvirt.driver [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Deleting instance files /var/lib/nova/instances/aaadb0bf-8f4d-4eb7-9688-60999c8129dd_del
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.648 253465 INFO nova.virt.libvirt.driver [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Deletion of /var/lib/nova/instances/aaadb0bf-8f4d-4eb7-9688-60999c8129dd_del complete
Nov 22 03:58:33 compute-0 sudo[276516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:58:33 compute-0 sudo[276516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:33 compute-0 sudo[276516]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.729 253465 INFO nova.compute.manager [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Took 0.86 seconds to destroy the instance on the hypervisor.
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.730 253465 DEBUG oslo.service.loopingcall [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.730 253465 DEBUG nova.compute.manager [-] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 03:58:33 compute-0 nova_compute[253461]: 2025-11-22 03:58:33.731 253465 DEBUG nova.network.neutron [-] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 03:58:33 compute-0 sudo[276541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:58:33 compute-0 sudo[276541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:33 compute-0 sudo[276541]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:33 compute-0 sudo[276566]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:58:33 compute-0 sudo[276566]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:33 compute-0 sudo[276566]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Nov 22 03:58:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:58:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:58:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:58:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:58:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:58:33 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:58:33 compute-0 sudo[276591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:58:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Nov 22 03:58:33 compute-0 sudo[276591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:33 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Nov 22 03:58:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 26 KiB/s wr, 462 op/s
Nov 22 03:58:34 compute-0 podman[276659]: 2025-11-22 03:58:34.396657334 +0000 UTC m=+0.055682037 container create bd207283a5352007c72d6bc7a2e52702123b313ea82599e2551a5505da7c2b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yonath, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 03:58:34 compute-0 systemd[1]: Started libpod-conmon-bd207283a5352007c72d6bc7a2e52702123b313ea82599e2551a5505da7c2b0b.scope.
Nov 22 03:58:34 compute-0 podman[276659]: 2025-11-22 03:58:34.37676986 +0000 UTC m=+0.035794593 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:58:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:58:34 compute-0 podman[276659]: 2025-11-22 03:58:34.51254788 +0000 UTC m=+0.171572583 container init bd207283a5352007c72d6bc7a2e52702123b313ea82599e2551a5505da7c2b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:58:34 compute-0 podman[276659]: 2025-11-22 03:58:34.524087425 +0000 UTC m=+0.183112118 container start bd207283a5352007c72d6bc7a2e52702123b313ea82599e2551a5505da7c2b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yonath, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:58:34 compute-0 podman[276659]: 2025-11-22 03:58:34.531927712 +0000 UTC m=+0.190952445 container attach bd207283a5352007c72d6bc7a2e52702123b313ea82599e2551a5505da7c2b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 03:58:34 compute-0 nostalgic_yonath[276675]: 167 167
Nov 22 03:58:34 compute-0 systemd[1]: libpod-bd207283a5352007c72d6bc7a2e52702123b313ea82599e2551a5505da7c2b0b.scope: Deactivated successfully.
Nov 22 03:58:34 compute-0 conmon[276675]: conmon bd207283a5352007c72d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bd207283a5352007c72d6bc7a2e52702123b313ea82599e2551a5505da7c2b0b.scope/container/memory.events
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.578 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Updating instance_info_cache with network_info: [{"id": "9aa4306f-f805-476d-840c-1580581292f0", "address": "fa:16:3e:70:b9:45", "network": {"id": "dcb0f91f-b5dc-48b6-805a-0fe3231189f2", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2066429546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4426ce11629e407f98cae838e2e3e2cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aa4306f-f8", "ovs_interfaceid": "9aa4306f-f805-476d-840c-1580581292f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:58:34 compute-0 podman[276680]: 2025-11-22 03:58:34.593646856 +0000 UTC m=+0.036307995 container died bd207283a5352007c72d6bc7a2e52702123b313ea82599e2551a5505da7c2b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yonath, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.599 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Releasing lock "refresh_cache-18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.600 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.600 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.600 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.622 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.623 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.623 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.623 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.624 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-086a5b48030c579edd6da466824935d584136d71eb9d09a27ff8d5ab635e199d-merged.mount: Deactivated successfully.
Nov 22 03:58:34 compute-0 podman[276680]: 2025-11-22 03:58:34.663466194 +0000 UTC m=+0.106127343 container remove bd207283a5352007c72d6bc7a2e52702123b313ea82599e2551a5505da7c2b0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_yonath, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:58:34 compute-0 systemd[1]: libpod-conmon-bd207283a5352007c72d6bc7a2e52702123b313ea82599e2551a5505da7c2b0b.scope: Deactivated successfully.
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.797 253465 DEBUG nova.network.neutron [-] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.819 253465 INFO nova.compute.manager [-] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Took 1.09 seconds to deallocate network for instance.
Nov 22 03:58:34 compute-0 podman[276725]: 2025-11-22 03:58:34.92429319 +0000 UTC m=+0.054471422 container create 0e79ebf12fda956cffaab23ada4629d29c2394a93a4e9b05f4d32098451c47e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:58:34 compute-0 ceph-mon[75011]: osdmap e265: 3 total, 3 up, 3 in
Nov 22 03:58:34 compute-0 ceph-mon[75011]: pgmap v1274: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 26 KiB/s wr, 462 op/s
Nov 22 03:58:34 compute-0 systemd[1]: Started libpod-conmon-0e79ebf12fda956cffaab23ada4629d29c2394a93a4e9b05f4d32098451c47e3.scope.
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.986 253465 WARNING nova.volume.cinder [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Attachment 4f201155-cd59-4001-a2e2-9e748ec21598 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 4f201155-cd59-4001-a2e2-9e748ec21598. (HTTP 404) (Request-ID: req-3deffadd-4aa6-469b-b374-f32b6f6f4c2f)
Nov 22 03:58:34 compute-0 nova_compute[253461]: 2025-11-22 03:58:34.987 253465 INFO nova.compute.manager [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Took 0.17 seconds to detach 1 volumes for instance.
Nov 22 03:58:34 compute-0 podman[276725]: 2025-11-22 03:58:34.900451876 +0000 UTC m=+0.030630118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:58:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac48b399916fc67ddd6024e5e7555397b951aaa8f353e2d3cbfa20a66dc3a1f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac48b399916fc67ddd6024e5e7555397b951aaa8f353e2d3cbfa20a66dc3a1f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac48b399916fc67ddd6024e5e7555397b951aaa8f353e2d3cbfa20a66dc3a1f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac48b399916fc67ddd6024e5e7555397b951aaa8f353e2d3cbfa20a66dc3a1f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac48b399916fc67ddd6024e5e7555397b951aaa8f353e2d3cbfa20a66dc3a1f0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:35 compute-0 podman[276725]: 2025-11-22 03:58:35.020798405 +0000 UTC m=+0.150976647 container init 0e79ebf12fda956cffaab23ada4629d29c2394a93a4e9b05f4d32098451c47e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 03:58:35 compute-0 podman[276725]: 2025-11-22 03:58:35.030761212 +0000 UTC m=+0.160939414 container start 0e79ebf12fda956cffaab23ada4629d29c2394a93a4e9b05f4d32098451c47e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_satoshi, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 03:58:35 compute-0 podman[276725]: 2025-11-22 03:58:35.03466146 +0000 UTC m=+0.164839682 container attach 0e79ebf12fda956cffaab23ada4629d29c2394a93a4e9b05f4d32098451c47e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_satoshi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.059 253465 DEBUG oslo_concurrency.lockutils [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.059 253465 DEBUG oslo_concurrency.lockutils [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:58:35 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/967473858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.109 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.160 253465 DEBUG oslo_concurrency.processutils [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.208 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.209 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.304 253465 DEBUG nova.compute.manager [req-b68e0dcb-23be-40a4-83b2-630e92500857 req-8ccea7af-606c-43c7-8c93-3e655aa4810a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Received event network-vif-plugged-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.304 253465 DEBUG oslo_concurrency.lockutils [req-b68e0dcb-23be-40a4-83b2-630e92500857 req-8ccea7af-606c-43c7-8c93-3e655aa4810a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.304 253465 DEBUG oslo_concurrency.lockutils [req-b68e0dcb-23be-40a4-83b2-630e92500857 req-8ccea7af-606c-43c7-8c93-3e655aa4810a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.304 253465 DEBUG oslo_concurrency.lockutils [req-b68e0dcb-23be-40a4-83b2-630e92500857 req-8ccea7af-606c-43c7-8c93-3e655aa4810a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.305 253465 DEBUG nova.compute.manager [req-b68e0dcb-23be-40a4-83b2-630e92500857 req-8ccea7af-606c-43c7-8c93-3e655aa4810a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] No waiting events found dispatching network-vif-plugged-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.305 253465 WARNING nova.compute.manager [req-b68e0dcb-23be-40a4-83b2-630e92500857 req-8ccea7af-606c-43c7-8c93-3e655aa4810a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Received unexpected event network-vif-plugged-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 for instance with vm_state deleted and task_state None.
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.305 253465 DEBUG nova.compute.manager [req-b68e0dcb-23be-40a4-83b2-630e92500857 req-8ccea7af-606c-43c7-8c93-3e655aa4810a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Received event network-vif-deleted-efd1a6a8-37bb-4721-9db8-ab78b987ebb0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.374 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.376 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4379MB free_disk=59.94963455200195GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.376 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:58:35 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/595544190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.588 253465 DEBUG oslo_concurrency.processutils [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.595 253465 DEBUG nova.compute.provider_tree [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.622 253465 DEBUG nova.scheduler.client.report [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.648 253465 DEBUG oslo_concurrency.lockutils [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.655 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.685 253465 INFO nova.scheduler.client.report [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Deleted allocations for instance aaadb0bf-8f4d-4eb7-9688-60999c8129dd
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.731 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.731 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.732 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.751 253465 DEBUG oslo_concurrency.lockutils [None req-723e7776-12d8-45a1-8998-72ac8ada8df3 323c39d407144b438e9fbcdc7c67710e 5846275e26354bb095399d10c8b52285 - - default default] Lock "aaadb0bf-8f4d-4eb7-9688-60999c8129dd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:35 compute-0 nova_compute[253461]: 2025-11-22 03:58:35.763 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:58:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e265 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Nov 22 03:58:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Nov 22 03:58:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Nov 22 03:58:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/967473858' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:58:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/595544190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:58:35 compute-0 ceph-mon[75011]: osdmap e266: 3 total, 3 up, 3 in
Nov 22 03:58:36 compute-0 ovn_controller[152691]: 2025-11-22T03:58:36Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:b9:45 10.100.0.9
Nov 22 03:58:36 compute-0 ovn_controller[152691]: 2025-11-22T03:58:36Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:b9:45 10.100.0.9
Nov 22 03:58:36 compute-0 quizzical_satoshi[276742]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:58:36 compute-0 quizzical_satoshi[276742]: --> relative data size: 1.0
Nov 22 03:58:36 compute-0 quizzical_satoshi[276742]: --> All data devices are unavailable
Nov 22 03:58:36 compute-0 systemd[1]: libpod-0e79ebf12fda956cffaab23ada4629d29c2394a93a4e9b05f4d32098451c47e3.scope: Deactivated successfully.
Nov 22 03:58:36 compute-0 systemd[1]: libpod-0e79ebf12fda956cffaab23ada4629d29c2394a93a4e9b05f4d32098451c47e3.scope: Consumed 1.023s CPU time.
Nov 22 03:58:36 compute-0 podman[276725]: 2025-11-22 03:58:36.159291884 +0000 UTC m=+1.289470136 container died 0e79ebf12fda956cffaab23ada4629d29c2394a93a4e9b05f4d32098451c47e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_satoshi, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:58:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac48b399916fc67ddd6024e5e7555397b951aaa8f353e2d3cbfa20a66dc3a1f0-merged.mount: Deactivated successfully.
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:58:36
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['vms', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.meta', 'images']
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:58:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:58:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267963622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:58:36 compute-0 podman[276725]: 2025-11-22 03:58:36.228325484 +0000 UTC m=+1.358503686 container remove 0e79ebf12fda956cffaab23ada4629d29c2394a93a4e9b05f4d32098451c47e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:58:36 compute-0 nova_compute[253461]: 2025-11-22 03:58:36.238 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:58:36 compute-0 systemd[1]: libpod-conmon-0e79ebf12fda956cffaab23ada4629d29c2394a93a4e9b05f4d32098451c47e3.scope: Deactivated successfully.
Nov 22 03:58:36 compute-0 nova_compute[253461]: 2025-11-22 03:58:36.245 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 15 KiB/s wr, 234 op/s
Nov 22 03:58:36 compute-0 nova_compute[253461]: 2025-11-22 03:58:36.264 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:58:36 compute-0 sudo[276591]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:36 compute-0 nova_compute[253461]: 2025-11-22 03:58:36.285 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:58:36 compute-0 nova_compute[253461]: 2025-11-22 03:58:36.285 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:58:36 compute-0 sudo[276828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:58:36 compute-0 sudo[276828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:36 compute-0 sudo[276828]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:36 compute-0 sudo[276853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:58:36 compute-0 sudo[276853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:36 compute-0 sudo[276853]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:58:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:58:36 compute-0 sudo[276878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:58:36 compute-0 sudo[276878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:36 compute-0 sudo[276878]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:36 compute-0 sudo[276903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:58:36 compute-0 sudo[276903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:36 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1267963622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:58:36 compute-0 ceph-mon[75011]: pgmap v1276: 305 pgs: 305 active+clean; 167 MiB data, 346 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 15 KiB/s wr, 234 op/s
Nov 22 03:58:37 compute-0 podman[276968]: 2025-11-22 03:58:37.054306085 +0000 UTC m=+0.063074087 container create 5d7491299af95c1a239b6c31163f6d98669891a4744669eec84fcb9fb2cd38b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sammet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 03:58:37 compute-0 systemd[1]: Started libpod-conmon-5d7491299af95c1a239b6c31163f6d98669891a4744669eec84fcb9fb2cd38b1.scope.
Nov 22 03:58:37 compute-0 nova_compute[253461]: 2025-11-22 03:58:37.114 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:58:37 compute-0 nova_compute[253461]: 2025-11-22 03:58:37.115 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:58:37 compute-0 nova_compute[253461]: 2025-11-22 03:58:37.116 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:58:37 compute-0 nova_compute[253461]: 2025-11-22 03:58:37.116 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:58:37 compute-0 nova_compute[253461]: 2025-11-22 03:58:37.117 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 03:58:37 compute-0 podman[276968]: 2025-11-22 03:58:37.028100877 +0000 UTC m=+0.036868969 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:58:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:58:37 compute-0 podman[276968]: 2025-11-22 03:58:37.156549291 +0000 UTC m=+0.165317393 container init 5d7491299af95c1a239b6c31163f6d98669891a4744669eec84fcb9fb2cd38b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sammet, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:58:37 compute-0 podman[276968]: 2025-11-22 03:58:37.167099813 +0000 UTC m=+0.175867865 container start 5d7491299af95c1a239b6c31163f6d98669891a4744669eec84fcb9fb2cd38b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sammet, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:58:37 compute-0 podman[276968]: 2025-11-22 03:58:37.171499912 +0000 UTC m=+0.180267974 container attach 5d7491299af95c1a239b6c31163f6d98669891a4744669eec84fcb9fb2cd38b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sammet, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 03:58:37 compute-0 admiring_sammet[276984]: 167 167
Nov 22 03:58:37 compute-0 systemd[1]: libpod-5d7491299af95c1a239b6c31163f6d98669891a4744669eec84fcb9fb2cd38b1.scope: Deactivated successfully.
Nov 22 03:58:37 compute-0 podman[276968]: 2025-11-22 03:58:37.175484155 +0000 UTC m=+0.184252197 container died 5d7491299af95c1a239b6c31163f6d98669891a4744669eec84fcb9fb2cd38b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 03:58:37 compute-0 nova_compute[253461]: 2025-11-22 03:58:37.183 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1dd5dcb5c49bc8f07b158133498f6f50eb9da7b08b661590ee4b8ab3550c60d-merged.mount: Deactivated successfully.
Nov 22 03:58:37 compute-0 podman[276968]: 2025-11-22 03:58:37.225294333 +0000 UTC m=+0.234062375 container remove 5d7491299af95c1a239b6c31163f6d98669891a4744669eec84fcb9fb2cd38b1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_sammet, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 03:58:37 compute-0 systemd[1]: libpod-conmon-5d7491299af95c1a239b6c31163f6d98669891a4744669eec84fcb9fb2cd38b1.scope: Deactivated successfully.
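[annotation] The podman events above (create, init, start, attach, died, remove inside roughly 0.2 s, with a lone "167 167" on the container's stdout) are the footprint of a short-lived "podman run --rm" helper launched by cephadm; 167 is the uid/gid of the ceph user inside this image, so the output is consistent with a uid/gid probe. A sketch of an equivalent one-shot invocation (the stat probe is an assumption, not the exact command cephadm ran):

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # One-shot container: podman records the same create/start/attach/died/
    # remove sequence seen above, then systemd cleans up the conmon scope.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # expected "167 167" with this image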
Nov 22 03:58:37 compute-0 podman[277008]: 2025-11-22 03:58:37.438555498 +0000 UTC m=+0.055328365 container create bf3f8683d9ad2589645762dfa9112aabca0af6953b79c7c1b6d435dc68efcad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:58:37 compute-0 systemd[1]: Started libpod-conmon-bf3f8683d9ad2589645762dfa9112aabca0af6953b79c7c1b6d435dc68efcad9.scope.
Nov 22 03:58:37 compute-0 podman[277008]: 2025-11-22 03:58:37.410039899 +0000 UTC m=+0.026812806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:58:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d19dc0053637aa4b38e941667947b02c352463ac797b9df04e830abb270574/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d19dc0053637aa4b38e941667947b02c352463ac797b9df04e830abb270574/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d19dc0053637aa4b38e941667947b02c352463ac797b9df04e830abb270574/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d19dc0053637aa4b38e941667947b02c352463ac797b9df04e830abb270574/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:37 compute-0 podman[277008]: 2025-11-22 03:58:37.546668921 +0000 UTC m=+0.163441798 container init bf3f8683d9ad2589645762dfa9112aabca0af6953b79c7c1b6d435dc68efcad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Nov 22 03:58:37 compute-0 podman[277008]: 2025-11-22 03:58:37.565373092 +0000 UTC m=+0.182145949 container start bf3f8683d9ad2589645762dfa9112aabca0af6953b79c7c1b6d435dc68efcad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mccarthy, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 03:58:37 compute-0 podman[277008]: 2025-11-22 03:58:37.569789081 +0000 UTC m=+0.186561948 container attach bf3f8683d9ad2589645762dfa9112aabca0af6953b79c7c1b6d435dc68efcad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mccarthy, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:58:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Nov 22 03:58:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Nov 22 03:58:37 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Nov 22 03:58:38 compute-0 nova_compute[253461]: 2025-11-22 03:58:38.139 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 152 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 235 KiB/s rd, 1.3 MiB/s wr, 218 op/s
Nov 22 03:58:38 compute-0 great_mccarthy[277024]: {
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:     "0": [
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:         {
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "devices": [
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "/dev/loop3"
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             ],
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_name": "ceph_lv0",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_size": "21470642176",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "name": "ceph_lv0",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "tags": {
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.cluster_name": "ceph",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.crush_device_class": "",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.encrypted": "0",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.osd_id": "0",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.type": "block",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.vdo": "0"
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             },
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "type": "block",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "vg_name": "ceph_vg0"
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:         }
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:     ],
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:     "1": [
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:         {
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "devices": [
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "/dev/loop4"
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             ],
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_name": "ceph_lv1",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_size": "21470642176",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "name": "ceph_lv1",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "tags": {
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.cluster_name": "ceph",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.crush_device_class": "",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.encrypted": "0",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.osd_id": "1",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.type": "block",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.vdo": "0"
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             },
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "type": "block",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "vg_name": "ceph_vg1"
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:         }
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:     ],
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:     "2": [
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:         {
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "devices": [
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "/dev/loop5"
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             ],
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_name": "ceph_lv2",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_size": "21470642176",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "name": "ceph_lv2",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "tags": {
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.cluster_name": "ceph",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.crush_device_class": "",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.encrypted": "0",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.osd_id": "2",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.type": "block",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:                 "ceph.vdo": "0"
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             },
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "type": "block",
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:             "vg_name": "ceph_vg2"
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:         }
Nov 22 03:58:38 compute-0 great_mccarthy[277024]:     ]
Nov 22 03:58:38 compute-0 great_mccarthy[277024]: }
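[annotation] The JSON emitted by the great_mccarthy container is the payload of the "ceph-volume lvm list --format json" call requested on the sudo line above: a dict keyed by OSD id, each value a list of logical volumes carrying their ceph.* LV tags. A small sketch reducing it to an osd_id -> device map, assuming the JSON above was saved to lvm_list.json:

    import json

    # Payload of `ceph-volume lvm list --format json`, keyed by osd_id.
    with open("lvm_list.json") as f:
        lvm_list = json.load(f)

    for osd_id, lvs in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
    # 0 /dev/ceph_vg0/ceph_lv0 8bea6992-7a26-4e04-a61e-1d348ad79289
    # 1 /dev/ceph_vg1/ceph_lv1 104ff426-5a1d-4d63-8f77-501ee5d58b1f
    # 2 /dev/ceph_vg2/ceph_lv2 da204276-98db-4558-b1d5-f5821c78e391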
Nov 22 03:58:38 compute-0 podman[277008]: 2025-11-22 03:58:38.391212306 +0000 UTC m=+1.007985133 container died bf3f8683d9ad2589645762dfa9112aabca0af6953b79c7c1b6d435dc68efcad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mccarthy, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:58:38 compute-0 systemd[1]: libpod-bf3f8683d9ad2589645762dfa9112aabca0af6953b79c7c1b6d435dc68efcad9.scope: Deactivated successfully.
Nov 22 03:58:38 compute-0 nova_compute[253461]: 2025-11-22 03:58:38.426 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-98d19dc0053637aa4b38e941667947b02c352463ac797b9df04e830abb270574-merged.mount: Deactivated successfully.
Nov 22 03:58:38 compute-0 nova_compute[253461]: 2025-11-22 03:58:38.451 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:58:38 compute-0 podman[277008]: 2025-11-22 03:58:38.470697579 +0000 UTC m=+1.087470406 container remove bf3f8683d9ad2589645762dfa9112aabca0af6953b79c7c1b6d435dc68efcad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:58:38 compute-0 systemd[1]: libpod-conmon-bf3f8683d9ad2589645762dfa9112aabca0af6953b79c7c1b6d435dc68efcad9.scope: Deactivated successfully.
Nov 22 03:58:38 compute-0 sudo[276903]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:38 compute-0 sudo[277047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:58:38 compute-0 sudo[277047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:38 compute-0 sudo[277047]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:38 compute-0 sudo[277072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:58:38 compute-0 sudo[277072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:38 compute-0 sudo[277072]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:38 compute-0 sudo[277097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:58:38 compute-0 sudo[277097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:38 compute-0 sudo[277097]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:38 compute-0 sudo[277122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:58:38 compute-0 sudo[277122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:38 compute-0 ceph-mon[75011]: osdmap e267: 3 total, 3 up, 3 in
Nov 22 03:58:38 compute-0 ceph-mon[75011]: pgmap v1278: 305 pgs: 305 active+clean; 152 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 235 KiB/s rd, 1.3 MiB/s wr, 218 op/s
Nov 22 03:58:39 compute-0 podman[277187]: 2025-11-22 03:58:39.158906226 +0000 UTC m=+0.050337905 container create b109cfe95915b427405b1b5f014f8d6e014d01b816c4d48132b02b37e29e1486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:58:39 compute-0 systemd[1]: Started libpod-conmon-b109cfe95915b427405b1b5f014f8d6e014d01b816c4d48132b02b37e29e1486.scope.
Nov 22 03:58:39 compute-0 podman[277187]: 2025-11-22 03:58:39.138707152 +0000 UTC m=+0.030138921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:58:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:58:39 compute-0 podman[277187]: 2025-11-22 03:58:39.25538392 +0000 UTC m=+0.146815649 container init b109cfe95915b427405b1b5f014f8d6e014d01b816c4d48132b02b37e29e1486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 03:58:39 compute-0 podman[277187]: 2025-11-22 03:58:39.266620662 +0000 UTC m=+0.158052371 container start b109cfe95915b427405b1b5f014f8d6e014d01b816c4d48132b02b37e29e1486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bartik, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:58:39 compute-0 podman[277187]: 2025-11-22 03:58:39.270666978 +0000 UTC m=+0.162098707 container attach b109cfe95915b427405b1b5f014f8d6e014d01b816c4d48132b02b37e29e1486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bartik, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 03:58:39 compute-0 relaxed_bartik[277203]: 167 167
Nov 22 03:58:39 compute-0 systemd[1]: libpod-b109cfe95915b427405b1b5f014f8d6e014d01b816c4d48132b02b37e29e1486.scope: Deactivated successfully.
Nov 22 03:58:39 compute-0 podman[277187]: 2025-11-22 03:58:39.274778367 +0000 UTC m=+0.166210136 container died b109cfe95915b427405b1b5f014f8d6e014d01b816c4d48132b02b37e29e1486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bartik, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:58:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d104dd7cef65cdcfeb7233c28748faef93474106c55bc3347545bf6c3341d33-merged.mount: Deactivated successfully.
Nov 22 03:58:39 compute-0 podman[277187]: 2025-11-22 03:58:39.327946112 +0000 UTC m=+0.219377831 container remove b109cfe95915b427405b1b5f014f8d6e014d01b816c4d48132b02b37e29e1486 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:58:39 compute-0 systemd[1]: libpod-conmon-b109cfe95915b427405b1b5f014f8d6e014d01b816c4d48132b02b37e29e1486.scope: Deactivated successfully.
Nov 22 03:58:39 compute-0 nova_compute[253461]: 2025-11-22 03:58:39.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:58:39 compute-0 podman[277227]: 2025-11-22 03:58:39.536240388 +0000 UTC m=+0.056829358 container create c2fd73d43bef287f0d7553fbc1ffaa5e13e4578867a0751b417af0e9cf352a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kalam, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:58:39 compute-0 systemd[1]: Started libpod-conmon-c2fd73d43bef287f0d7553fbc1ffaa5e13e4578867a0751b417af0e9cf352a93.scope.
Nov 22 03:58:39 compute-0 podman[277227]: 2025-11-22 03:58:39.505329197 +0000 UTC m=+0.025918227 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:58:39 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb45b1466d639edc8a151e43358f832c09eb3c73d619357266b651915401e4f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb45b1466d639edc8a151e43358f832c09eb3c73d619357266b651915401e4f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb45b1466d639edc8a151e43358f832c09eb3c73d619357266b651915401e4f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb45b1466d639edc8a151e43358f832c09eb3c73d619357266b651915401e4f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:58:39 compute-0 podman[277227]: 2025-11-22 03:58:39.654870363 +0000 UTC m=+0.175459373 container init c2fd73d43bef287f0d7553fbc1ffaa5e13e4578867a0751b417af0e9cf352a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kalam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 03:58:39 compute-0 podman[277227]: 2025-11-22 03:58:39.665076595 +0000 UTC m=+0.185665555 container start c2fd73d43bef287f0d7553fbc1ffaa5e13e4578867a0751b417af0e9cf352a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:58:39 compute-0 podman[277227]: 2025-11-22 03:58:39.675902818 +0000 UTC m=+0.196491848 container attach c2fd73d43bef287f0d7553fbc1ffaa5e13e4578867a0751b417af0e9cf352a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:58:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 167 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 744 KiB/s rd, 4.1 MiB/s wr, 285 op/s
Nov 22 03:58:40 compute-0 condescending_kalam[277243]: {
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "osd_id": 1,
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "type": "bluestore"
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:     },
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "osd_id": 0,
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "type": "bluestore"
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:     },
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "osd_id": 2,
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:         "type": "bluestore"
Nov 22 03:58:40 compute-0 condescending_kalam[277243]:     }
Nov 22 03:58:40 compute-0 condescending_kalam[277243]: }
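[annotation] The condescending_kalam output is the matching "ceph-volume raw list --format json" view: keyed by osd_uuid instead of osd_id, and pointing at the device-mapper paths for the same three bluestore OSDs. Joining it with the lvm listing is a quick consistency check (again assuming the JSON was saved locally):

    import json

    with open("raw_list.json") as f:  # the JSON printed above
        raw = json.load(f)

    # raw list is keyed by osd_uuid; sort by osd_id to line up with lvm list.
    for osd_uuid, dev in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        print(dev["osd_id"], dev["device"], dev["type"], osd_uuid)
    # 0 /dev/mapper/ceph_vg0-ceph_lv0 bluestore 8bea6992-...
    # 1 /dev/mapper/ceph_vg1-ceph_lv1 bluestore 104ff426-...
    # 2 /dev/mapper/ceph_vg2-ceph_lv2 bluestore da204276-...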
Nov 22 03:58:40 compute-0 systemd[1]: libpod-c2fd73d43bef287f0d7553fbc1ffaa5e13e4578867a0751b417af0e9cf352a93.scope: Deactivated successfully.
Nov 22 03:58:40 compute-0 podman[277276]: 2025-11-22 03:58:40.667040684 +0000 UTC m=+0.031502700 container died c2fd73d43bef287f0d7553fbc1ffaa5e13e4578867a0751b417af0e9cf352a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:58:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb45b1466d639edc8a151e43358f832c09eb3c73d619357266b651915401e4f7-merged.mount: Deactivated successfully.
Nov 22 03:58:40 compute-0 podman[277276]: 2025-11-22 03:58:40.754630945 +0000 UTC m=+0.119092911 container remove c2fd73d43bef287f0d7553fbc1ffaa5e13e4578867a0751b417af0e9cf352a93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_kalam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:58:40 compute-0 systemd[1]: libpod-conmon-c2fd73d43bef287f0d7553fbc1ffaa5e13e4578867a0751b417af0e9cf352a93.scope: Deactivated successfully.
Nov 22 03:58:40 compute-0 sudo[277122]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:58:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:58:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:58:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
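[annotation] With the device scan finished, the mgr persists the refreshed host inventory in the mon config-key store (the two config-key set commands above, keys mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0). The cached blob can be read back with the CLI; a sketch:

    import subprocess

    # Read back the inventory cephadm just cached for this host.
    # Key name copied from the audit entry above.
    out = subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/host.compute-0.devices.0"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)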
Nov 22 03:58:40 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev e388b69b-ff1e-43e2-872f-3dfc5660c528 does not exist
Nov 22 03:58:40 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 540a505d-8a4c-4f26-b0d9-77713568656b does not exist
Nov 22 03:58:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:40 compute-0 sudo[277291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:58:40 compute-0 sudo[277291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:40 compute-0 sudo[277291]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:41 compute-0 sudo[277316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:58:41 compute-0 sudo[277316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:58:41 compute-0 sudo[277316]: pam_unix(sudo:session): session closed for user root
Nov 22 03:58:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Nov 22 03:58:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Nov 22 03:58:41 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Nov 22 03:58:41 compute-0 ceph-mon[75011]: pgmap v1279: 305 pgs: 305 active+clean; 167 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 744 KiB/s rd, 4.1 MiB/s wr, 285 op/s
Nov 22 03:58:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:58:41 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:58:42 compute-0 nova_compute[253461]: 2025-11-22 03:58:42.185 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 167 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 705 KiB/s rd, 4.0 MiB/s wr, 239 op/s
Nov 22 03:58:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Nov 22 03:58:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Nov 22 03:58:42 compute-0 ceph-mon[75011]: osdmap e268: 3 total, 3 up, 3 in
Nov 22 03:58:42 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Nov 22 03:58:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3182274762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3182274762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:43 compute-0 nova_compute[253461]: 2025-11-22 03:58:43.182 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:43 compute-0 ceph-mon[75011]: pgmap v1281: 305 pgs: 305 active+clean; 167 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 705 KiB/s rd, 4.0 MiB/s wr, 239 op/s
Nov 22 03:58:43 compute-0 ceph-mon[75011]: osdmap e269: 3 total, 3 up, 3 in
Nov 22 03:58:43 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3182274762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:43 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3182274762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:43 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3208936559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:43 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3208936559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1283: 305 pgs: 305 active+clean; 167 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 729 KiB/s rd, 2.8 MiB/s wr, 345 op/s
Nov 22 03:58:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Nov 22 03:58:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Nov 22 03:58:44 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Nov 22 03:58:44 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3208936559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:44 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3208936559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4100298465' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4100298465' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Nov 22 03:58:45 compute-0 ceph-mon[75011]: pgmap v1283: 305 pgs: 305 active+clean; 167 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 729 KiB/s rd, 2.8 MiB/s wr, 345 op/s
Nov 22 03:58:45 compute-0 ceph-mon[75011]: osdmap e270: 3 total, 3 up, 3 in
Nov 22 03:58:45 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4100298465' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:45 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4100298465' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Nov 22 03:58:45 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Nov 22 03:58:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2834831061' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2834831061' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 167 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 38 KiB/s wr, 243 op/s
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007595274142085043 of space, bias 1.0, pg target 0.22785822426255128 quantized to 32 (current 32)
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003516566142475744 of space, bias 1.0, pg target 0.10549698427427232 quantized to 32 (current 32)
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663034365435958 of space, bias 1.0, pg target 0.19989103096307873 quantized to 32 (current 32)
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:58:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
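Each pg_autoscaler line above follows the same arithmetic: the printed "pg target" is capacity ratio x bias x 300, where the 300 factor is plausibly mon_target_pg_per_osd (default 100) times the 3 OSDs in the osdmap — an inference, not something the log states. The result is then quantized to a power of two and left at the current pg_num when the change would be small. A quick check against the logged values:

    # Reproduce the "pg target" figures printed by the autoscaler above.
    pools = {
        '.mgr':               (7.185749983720779e-06, 1.0),
        'vms':                (0.0007595274142085043, 1.0),
        'volumes':            (0.0003516566142475744, 1.0),
        'cephfs.cephfs.meta': (5.087256625643029e-07, 4.0),  # bias 4.0 for metadata
    }
    for name, (ratio, bias) in pools.items():
        # capacity_ratio * bias * (100 PGs/OSD * 3 OSDs) matches the logged target
        print(name, ratio * bias * 300)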
Nov 22 03:58:46 compute-0 ceph-mon[75011]: osdmap e271: 3 total, 3 up, 3 in
Nov 22 03:58:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2834831061' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2834831061' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:46 compute-0 ceph-mon[75011]: pgmap v1286: 305 pgs: 305 active+clean; 167 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 189 KiB/s rd, 38 KiB/s wr, 243 op/s
Nov 22 03:58:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/513201091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/513201091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:47 compute-0 nova_compute[253461]: 2025-11-22 03:58:47.188 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:47 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/513201091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:47 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/513201091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:47 compute-0 podman[277341]: 2025-11-22 03:58:47.418176249 +0000 UTC m=+0.077443370 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 03:58:47 compute-0 podman[277342]: 2025-11-22 03:58:47.452142135 +0000 UTC m=+0.110292681 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
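The two podman health_status events above come from podman's healthcheck timers running the configured test ('/openstack/healthcheck', bind-mounted read-only into each container per the config_data). The same check can be triggered by hand; a sketch, with the container names taken from the events and nothing else assumed beyond podman's stock CLI:

    import subprocess

    # Fire the same check podman's timer runs; exit code 0 means 'healthy',
    # matching the health_status=healthy fields logged above.
    for name in ('ovn_metadata_agent', 'ovn_controller'):
        subprocess.run(['podman', 'healthcheck', 'run', name], check=True)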
Nov 22 03:58:48 compute-0 nova_compute[253461]: 2025-11-22 03:58:48.109 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763783913.1073475, aaadb0bf-8f4d-4eb7-9688-60999c8129dd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:58:48 compute-0 nova_compute[253461]: 2025-11-22 03:58:48.110 253465 INFO nova.compute.manager [-] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] VM Stopped (Lifecycle Event)
Nov 22 03:58:48 compute-0 nova_compute[253461]: 2025-11-22 03:58:48.216 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 167 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 194 KiB/s rd, 33 KiB/s wr, 254 op/s
Nov 22 03:58:48 compute-0 nova_compute[253461]: 2025-11-22 03:58:48.328 253465 DEBUG nova.compute.manager [None req-beaa4ff5-32e6-4ab5-8723-14e40891211a - - - - - -] [instance: aaadb0bf-8f4d-4eb7-9688-60999c8129dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:58:48 compute-0 ceph-mon[75011]: pgmap v1287: 305 pgs: 305 active+clean; 167 MiB data, 355 MiB used, 60 GiB / 60 GiB avail; 194 KiB/s rd, 33 KiB/s wr, 254 op/s
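The "VM Stopped (Lifecycle Event)" pair above is nova-compute relaying a libvirt lifecycle callback for instance aaadb0bf-8f4d-4eb7-9688-60999c8129dd and then re-checking its power state. A minimal standalone listener for the same event stream via libvirt-python, assuming the local qemu:///system URI:

    import libvirt

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.openReadOnly('qemu:///system')

    def on_lifecycle(conn, dom, event, detail, _opaque):
        # event is e.g. VIR_DOMAIN_EVENT_STOPPED; nova maps these to the
        # LifecycleEvent objects it logs, like the one emitted above.
        print(dom.UUIDString(), event, detail)

    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                on_lifecycle, None)
    while True:
        libvirt.virEventRunDefaultImpl()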
Nov 22 03:58:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Nov 22 03:58:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Nov 22 03:58:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Nov 22 03:58:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 167 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 8.2 KiB/s wr, 160 op/s
Nov 22 03:58:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Nov 22 03:58:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Nov 22 03:58:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Nov 22 03:58:51 compute-0 ceph-mon[75011]: osdmap e272: 3 total, 3 up, 3 in
Nov 22 03:58:51 compute-0 ceph-mon[75011]: pgmap v1289: 305 pgs: 305 active+clean; 167 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 8.2 KiB/s wr, 160 op/s
Nov 22 03:58:51 compute-0 ceph-mon[75011]: osdmap e273: 3 total, 3 up, 3 in
Nov 22 03:58:52 compute-0 nova_compute[253461]: 2025-11-22 03:58:52.190 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 167 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 8.3 KiB/s wr, 151 op/s
Nov 22 03:58:53 compute-0 nova_compute[253461]: 2025-11-22 03:58:53.219 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Nov 22 03:58:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Nov 22 03:58:53 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Nov 22 03:58:53 compute-0 ceph-mon[75011]: pgmap v1291: 305 pgs: 305 active+clean; 167 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 8.3 KiB/s wr, 151 op/s
Nov 22 03:58:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4273490956' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4273490956' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 167 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 8.2 KiB/s wr, 88 op/s
Nov 22 03:58:54 compute-0 ceph-mon[75011]: osdmap e274: 3 total, 3 up, 3 in
Nov 22 03:58:54 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4273490956' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:54 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4273490956' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:54 compute-0 ceph-mon[75011]: pgmap v1293: 305 pgs: 305 active+clean; 167 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 8.2 KiB/s wr, 88 op/s
Nov 22 03:58:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Nov 22 03:58:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Nov 22 03:58:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Nov 22 03:58:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:58:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Nov 22 03:58:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Nov 22 03:58:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Nov 22 03:58:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 167 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.9 KiB/s wr, 30 op/s
Nov 22 03:58:56 compute-0 ceph-mon[75011]: osdmap e275: 3 total, 3 up, 3 in
Nov 22 03:58:56 compute-0 ceph-mon[75011]: osdmap e276: 3 total, 3 up, 3 in
Nov 22 03:58:56 compute-0 ceph-mon[75011]: pgmap v1296: 305 pgs: 305 active+clean; 167 MiB data, 363 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 4.9 KiB/s wr, 30 op/s
Nov 22 03:58:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2690505288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2690505288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:57 compute-0 nova_compute[253461]: 2025-11-22 03:58:57.193 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2690505288' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2690505288' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:58:58 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3763658356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:58:58 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3763658356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:58 compute-0 nova_compute[253461]: 2025-11-22 03:58:58.222 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:58:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 7.7 KiB/s wr, 119 op/s
Nov 22 03:58:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Nov 22 03:58:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3763658356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:58:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3763658356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:58:58 compute-0 ceph-mon[75011]: pgmap v1297: 305 pgs: 305 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 7.7 KiB/s wr, 119 op/s
Nov 22 03:58:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Nov 22 03:58:58 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Nov 22 03:58:59 compute-0 nova_compute[253461]: 2025-11-22 03:58:59.189 253465 DEBUG oslo_concurrency.lockutils [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:58:59 compute-0 nova_compute[253461]: 2025-11-22 03:58:59.190 253465 DEBUG oslo_concurrency.lockutils [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:58:59 compute-0 nova_compute[253461]: 2025-11-22 03:58:59.228 253465 DEBUG nova.objects.instance [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lazy-loading 'flavor' on Instance uuid 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:58:59 compute-0 nova_compute[253461]: 2025-11-22 03:58:59.331 253465 DEBUG oslo_concurrency.lockutils [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.141s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
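The acquire/release pair above is oslo.concurrency serializing reserve_block_device_name per instance UUID (held 0.141s). The same pattern with the public lockutils API, sketched; the lock name is taken from the log and the body is a placeholder:

    from oslo_concurrency import lockutils

    # Nova wraps do_reserve in a per-instance lock much like this.
    with lockutils.lock('18ad6aa8-f2c4-484c-82c5-d369b6f5af5f'):
        pass  # placeholder: pick the next free device name, record the BDM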
Nov 22 03:58:59 compute-0 ceph-mon[75011]: osdmap e277: 3 total, 3 up, 3 in
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.056 253465 DEBUG oslo_concurrency.lockutils [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.057 253465 DEBUG oslo_concurrency.lockutils [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.058 253465 INFO nova.compute.manager [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Attaching volume 1d88cadc-266d-4e71-876a-6b2d03b662af to /dev/vdb
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.224 253465 DEBUG os_brick.utils [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.226 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.246 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.247 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[fbe5aea7-e139-42d9-bc68-68adb9a12f0f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.248 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:59:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 5.2 KiB/s wr, 122 op/s
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.263 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.264 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[817fc113-f72e-4ad1-b68a-599bb1304ab2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.266 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.283 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.283 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[1124b7af-ab9a-469d-ac4a-5570e193d3e8]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.286 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[e58c6322-48df-4d9e-92a0-936caede395d]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.287 253465 DEBUG oslo_concurrency.processutils [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.329 253465 DEBUG oslo_concurrency.processutils [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] CMD "nvme version" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.336 253465 DEBUG os_brick.initiator.connectors.lightos [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.337 253465 DEBUG os_brick.initiator.connectors.lightos [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.338 253465 DEBUG os_brick.initiator.connectors.lightos [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.339 253465 DEBUG os_brick.utils [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] <== get_connector_properties: return (113ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
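The ==>/<== trace pair above brackets a single os-brick call that probes multipathd, the iSCSI initiator name, the root filesystem source, the system UUID, and nvme-cli to assemble the connector properties dict. The equivalent direct call, using exactly the arguments shown in the '==>' line:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    # Returns the dict logged at '<== get_connector_properties' above
    # (initiator IQN, host NQN, multipath flags, system uuid, ...).
    print(props)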
Nov 22 03:59:00 compute-0 nova_compute[253461]: 2025-11-22 03:59:00.339 253465 DEBUG nova.virt.block_device [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Updating existing volume attachment record: 5d933ab2-79c3-41b0-9da1-8d2c1d901c6d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 03:59:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:59:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4258110706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:59:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4258110706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.468605) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783940468669, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1585, "num_deletes": 511, "total_data_size": 1784657, "memory_usage": 1818704, "flush_reason": "Manual Compaction"}
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Nov 22 03:59:00 compute-0 ceph-mon[75011]: pgmap v1299: 305 pgs: 305 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 5.2 KiB/s wr, 122 op/s
Nov 22 03:59:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4258110706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4258110706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783940487629, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1342745, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25093, "largest_seqno": 26677, "table_properties": {"data_size": 1336337, "index_size": 3034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18977, "raw_average_key_size": 20, "raw_value_size": 1320781, "raw_average_value_size": 1435, "num_data_blocks": 133, "num_entries": 920, "num_filter_entries": 920, "num_deletions": 511, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763783856, "oldest_key_time": 1763783856, "file_creation_time": 1763783940, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 19085 microseconds, and 7703 cpu microseconds.
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.487691) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1342745 bytes OK
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.487718) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.490878) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.490903) EVENT_LOG_v1 {"time_micros": 1763783940490895, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.490927) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1776399, prev total WAL file size 1776399, number of live WAL files 2.
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.492665) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1311KB)], [56(10MB)]
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783940492718, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12355222, "oldest_snapshot_seqno": -1}
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5197 keys, 7688797 bytes, temperature: kUnknown
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783940573174, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7688797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7652018, "index_size": 22653, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13061, "raw_key_size": 130777, "raw_average_key_size": 25, "raw_value_size": 7556373, "raw_average_value_size": 1453, "num_data_blocks": 920, "num_entries": 5197, "num_filter_entries": 5197, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763783940, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.573541) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7688797 bytes
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.577591) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.3 rd, 95.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 10.5 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(14.9) write-amplify(5.7) OK, records in: 6201, records dropped: 1004 output_compression: NoCompression
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.577621) EVENT_LOG_v1 {"time_micros": 1763783940577609, "job": 30, "event": "compaction_finished", "compaction_time_micros": 80572, "compaction_time_cpu_micros": 26562, "output_level": 6, "num_output_files": 1, "total_output_size": 7688797, "num_input_records": 6201, "num_output_records": 5197, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783940578207, "job": 30, "event": "table_file_deletion", "file_number": 58}
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763783940581367, "job": 30, "event": "table_file_deletion", "file_number": 56}
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.492496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.581508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.581515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.581518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.581521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 03:59:00 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-03:59:00.581525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
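The compaction summary above (read-write-amplify 14.9, write-amplify 5.7) follows directly from the byte counts in the surrounding EVENT_LOG entries: job 29 flushed a 1,342,745-byte L0 table, and job 30 rewrote 12,355,222 input bytes into a 7,688,797-byte L6 table. The arithmetic:

    # Figures taken from the flush/compaction EVENT_LOG lines above.
    flush_out      = 1_342_745    # job 29: L0 table #58
    compaction_in  = 12_355_222   # job 30: input_data_size (L0 #58 + L6 #56)
    compaction_out = 7_688_797    # job 30: L6 table #59

    print(round(compaction_out / flush_out, 1))                    # 5.7  write-amplify
    print(round((compaction_in + compaction_out) / flush_out, 1))  # 14.9 read-write-amplify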
Nov 22 03:59:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Nov 22 03:59:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Nov 22 03:59:00 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Nov 22 03:59:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:59:01 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/286462444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.244 253465 DEBUG os_brick.encryptors [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Using volume encryption metadata '{'encryption_key_id': 'd9c32e84-c7e8-48d0-89ef-b290952addc5', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1d88cadc-266d-4e71-876a-6b2d03b662af', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1d88cadc-266d-4e71-876a-6b2d03b662af', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '18ad6aa8-f2c4-484c-82c5-d369b6f5af5f', 'attached_at': '', 'detached_at': '', 'volume_id': '1d88cadc-266d-4e71-876a-6b2d03b662af', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.254 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.276 253465 DEBUG barbicanclient.v1.secrets [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.277 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.309 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.310 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.335 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.336 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.369 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.369 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.405 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.406 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.431 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.432 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.464 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.465 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.500 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.500 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.538 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.539 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.580 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.581 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.628 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.629 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.681 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.681 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.737 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.738 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 ovn_controller[152691]: 2025-11-22T03:59:01Z|00127|binding|INFO|Releasing lport b589b132-ca33-4c3a-b65c-a4a6033a69d2 from this chassis (sb_readonly=0)
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.772 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.773 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.804 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.805 253465 INFO barbicanclient.base [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Calculated Secrets uuid ref: secrets/d9c32e84-c7e8-48d0-89ef-b290952addc5
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.825 253465 DEBUG barbicanclient.client [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
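The run of Response status 200 / "Calculated Secrets uuid ref" pairs above is nova fetching the LUKS passphrase for volume 1d88cadc-266d-4e71-876a-6b2d03b662af from Barbican (secret d9c32e84-c7e8-48d0-89ef-b290952addc5). A sketch of the same retrieval with barbicanclient; the Keystone URL and credentials are placeholders, and only the secret href comes from the log:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from barbicanclient import client as barbican_client

    # Placeholder credentials; substitute real service credentials.
    auth = v3.Password(auth_url='https://keystone.example.com:5000/v3',
                       username='nova', password='...',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    barbican = barbican_client.Client(session=session.Session(auth=auth))

    secret = barbican.secrets.get(
        'https://barbican-internal.openstack.svc:9311/secrets/'
        'd9c32e84-c7e8-48d0-89ef-b290952addc5')
    passphrase = secret.payload  # lazy GET of the actual secret material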
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.826 253465 DEBUG nova.virt.libvirt.host [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 22 03:59:01 compute-0 nova_compute[253461]:   <usage type="volume">
Nov 22 03:59:01 compute-0 nova_compute[253461]:     <volume>1d88cadc-266d-4e71-876a-6b2d03b662af</volume>
Nov 22 03:59:01 compute-0 nova_compute[253461]:   </usage>
Nov 22 03:59:01 compute-0 nova_compute[253461]: </secret>
Nov 22 03:59:01 compute-0 nova_compute[253461]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.839 253465 DEBUG nova.objects.instance [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lazy-loading 'flavor' on Instance uuid 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.863 253465 DEBUG nova.virt.libvirt.driver [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Attempting to attach volume 1d88cadc-266d-4e71-876a-6b2d03b662af with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.864 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:01 compute-0 nova_compute[253461]: 2025-11-22 03:59:01.871 253465 DEBUG nova.virt.libvirt.guest [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 03:59:01 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:59:01 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-1d88cadc-266d-4e71-876a-6b2d03b662af">
Nov 22 03:59:01 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:59:01 compute-0 nova_compute[253461]:   </source>
Nov 22 03:59:01 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 03:59:01 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:59:01 compute-0 nova_compute[253461]:   </auth>
Nov 22 03:59:01 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:59:01 compute-0 nova_compute[253461]:   <serial>1d88cadc-266d-4e71-876a-6b2d03b662af</serial>
Nov 22 03:59:01 compute-0 nova_compute[253461]:   <encryption format="luks">
Nov 22 03:59:01 compute-0 nova_compute[253461]:     <secret type="passphrase" uuid="c71b9465-ef77-48c8-bb49-206dd82e86c3"/>
Nov 22 03:59:01 compute-0 nova_compute[253461]:   </encryption>
Nov 22 03:59:01 compute-0 nova_compute[253461]: </disk>
Nov 22 03:59:01 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 03:59:01 compute-0 ceph-mon[75011]: osdmap e278: 3 total, 3 up, 3 in
Nov 22 03:59:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/286462444' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:02 compute-0 nova_compute[253461]: 2025-11-22 03:59:02.196 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:59:02 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3830678097' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:59:02 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3830678097' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 6.9 KiB/s wr, 173 op/s
Nov 22 03:59:02 compute-0 podman[277412]: 2025-11-22 03:59:02.430501862 +0000 UTC m=+0.095983715 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 03:59:02 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3830678097' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:02 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3830678097' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:02 compute-0 ceph-mon[75011]: pgmap v1301: 305 pgs: 305 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 6.9 KiB/s wr, 173 op/s
Nov 22 03:59:03 compute-0 nova_compute[253461]: 2025-11-22 03:59:03.224 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Nov 22 03:59:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Nov 22 03:59:03 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Nov 22 03:59:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 2.8 KiB/s wr, 88 op/s
Nov 22 03:59:04 compute-0 nova_compute[253461]: 2025-11-22 03:59:04.307 253465 DEBUG nova.virt.libvirt.driver [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:59:04 compute-0 nova_compute[253461]: 2025-11-22 03:59:04.308 253465 DEBUG nova.virt.libvirt.driver [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:59:04 compute-0 nova_compute[253461]: 2025-11-22 03:59:04.308 253465 DEBUG nova.virt.libvirt.driver [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:59:04 compute-0 nova_compute[253461]: 2025-11-22 03:59:04.309 253465 DEBUG nova.virt.libvirt.driver [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] No VIF found with MAC fa:16:3e:70:b9:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:59:04 compute-0 nova_compute[253461]: 2025-11-22 03:59:04.549 253465 DEBUG oslo_concurrency.lockutils [None req-2dd810f9-1309-47a8-a014-7563e3ba8486 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.492s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:59:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/812566777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:59:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/812566777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:04 compute-0 ceph-mon[75011]: osdmap e279: 3 total, 3 up, 3 in
Nov 22 03:59:04 compute-0 ceph-mon[75011]: pgmap v1303: 305 pgs: 305 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 2.8 KiB/s wr, 88 op/s
Nov 22 03:59:04 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/812566777' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:04 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/812566777' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Nov 22 03:59:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Nov 22 03:59:05 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Nov 22 03:59:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:59:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:59:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:59:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:59:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:59:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:59:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.3 KiB/s wr, 71 op/s
Nov 22 03:59:06 compute-0 nova_compute[253461]: 2025-11-22 03:59:06.469 253465 DEBUG oslo_concurrency.lockutils [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:06 compute-0 nova_compute[253461]: 2025-11-22 03:59:06.469 253465 DEBUG oslo_concurrency.lockutils [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:06 compute-0 nova_compute[253461]: 2025-11-22 03:59:06.491 253465 INFO nova.compute.manager [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Detaching volume 1d88cadc-266d-4e71-876a-6b2d03b662af
Nov 22 03:59:06 compute-0 nova_compute[253461]: 2025-11-22 03:59:06.679 253465 INFO nova.virt.block_device [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Attempting to driver detach volume 1d88cadc-266d-4e71-876a-6b2d03b662af from mountpoint /dev/vdb
Nov 22 03:59:06 compute-0 nova_compute[253461]: 2025-11-22 03:59:06.856 253465 DEBUG os_brick.encryptors [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Using volume encryption metadata '{'encryption_key_id': 'd9c32e84-c7e8-48d0-89ef-b290952addc5', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-1d88cadc-266d-4e71-876a-6b2d03b662af', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '1d88cadc-266d-4e71-876a-6b2d03b662af', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '18ad6aa8-f2c4-484c-82c5-d369b6f5af5f', 'attached_at': '', 'detached_at': '', 'volume_id': '1d88cadc-266d-4e71-876a-6b2d03b662af', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 03:59:06 compute-0 nova_compute[253461]: 2025-11-22 03:59:06.866 253465 DEBUG nova.virt.libvirt.driver [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Attempting to detach device vdb from instance 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 03:59:06 compute-0 nova_compute[253461]: 2025-11-22 03:59:06.866 253465 DEBUG nova.virt.libvirt.guest [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:59:06 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-1d88cadc-266d-4e71-876a-6b2d03b662af">
Nov 22 03:59:06 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   </source>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   <serial>1d88cadc-266d-4e71-876a-6b2d03b662af</serial>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   <encryption format="luks">
Nov 22 03:59:06 compute-0 nova_compute[253461]:     <secret type="passphrase" uuid="c71b9465-ef77-48c8-bb49-206dd82e86c3"/>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   </encryption>
Nov 22 03:59:06 compute-0 nova_compute[253461]: </disk>
Nov 22 03:59:06 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:59:06 compute-0 nova_compute[253461]: 2025-11-22 03:59:06.879 253465 INFO nova.virt.libvirt.driver [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Successfully detached device vdb from instance 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f from the persistent domain config.
Nov 22 03:59:06 compute-0 nova_compute[253461]: 2025-11-22 03:59:06.879 253465 DEBUG nova.virt.libvirt.driver [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 03:59:06 compute-0 nova_compute[253461]: 2025-11-22 03:59:06.880 253465 DEBUG nova.virt.libvirt.guest [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 03:59:06 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-1d88cadc-266d-4e71-876a-6b2d03b662af">
Nov 22 03:59:06 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   </source>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   <serial>1d88cadc-266d-4e71-876a-6b2d03b662af</serial>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   <encryption format="luks">
Nov 22 03:59:06 compute-0 nova_compute[253461]:     <secret type="passphrase" uuid="c71b9465-ef77-48c8-bb49-206dd82e86c3"/>
Nov 22 03:59:06 compute-0 nova_compute[253461]:   </encryption>
Nov 22 03:59:06 compute-0 nova_compute[253461]: </disk>
Nov 22 03:59:06 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 03:59:06 compute-0 ceph-mon[75011]: osdmap e280: 3 total, 3 up, 3 in
Nov 22 03:59:06 compute-0 ceph-mon[75011]: pgmap v1305: 305 pgs: 305 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 2.3 KiB/s wr, 71 op/s
Nov 22 03:59:07 compute-0 nova_compute[253461]: 2025-11-22 03:59:07.012 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763783947.01194, 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 03:59:07 compute-0 nova_compute[253461]: 2025-11-22 03:59:07.015 253465 DEBUG nova.virt.libvirt.driver [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 03:59:07 compute-0 nova_compute[253461]: 2025-11-22 03:59:07.017 253465 INFO nova.virt.libvirt.driver [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Successfully detached device vdb from instance 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f from the live domain config.
Nov 22 03:59:07 compute-0 nova_compute[253461]: 2025-11-22 03:59:07.198 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:07 compute-0 nova_compute[253461]: 2025-11-22 03:59:07.311 253465 DEBUG nova.objects.instance [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lazy-loading 'flavor' on Instance uuid 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:59:07 compute-0 nova_compute[253461]: 2025-11-22 03:59:07.348 253465 DEBUG oslo_concurrency.lockutils [None req-382a5345-0f02-46c5-bad6-1a5a87e38e6a 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Nov 22 03:59:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Nov 22 03:59:07 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.227 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.3 KiB/s wr, 74 op/s
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.397 253465 DEBUG oslo_concurrency.lockutils [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.398 253465 DEBUG oslo_concurrency.lockutils [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.398 253465 DEBUG oslo_concurrency.lockutils [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.399 253465 DEBUG oslo_concurrency.lockutils [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.399 253465 DEBUG oslo_concurrency.lockutils [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.400 253465 INFO nova.compute.manager [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Terminating instance
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.401 253465 DEBUG nova.compute.manager [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 03:59:08 compute-0 kernel: tap9aa4306f-f8 (unregistering): left promiscuous mode
Nov 22 03:59:08 compute-0 NetworkManager[48916]: <info>  [1763783948.4674] device (tap9aa4306f-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.481 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:08 compute-0 ovn_controller[152691]: 2025-11-22T03:59:08Z|00128|binding|INFO|Releasing lport 9aa4306f-f805-476d-840c-1580581292f0 from this chassis (sb_readonly=0)
Nov 22 03:59:08 compute-0 ovn_controller[152691]: 2025-11-22T03:59:08Z|00129|binding|INFO|Setting lport 9aa4306f-f805-476d-840c-1580581292f0 down in Southbound
Nov 22 03:59:08 compute-0 ovn_controller[152691]: 2025-11-22T03:59:08Z|00130|binding|INFO|Removing iface tap9aa4306f-f8 ovn-installed in OVS
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.484 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.502 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:b9:45 10.100.0.9'], port_security=['fa:16:3e:70:b9:45 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '18ad6aa8-f2c4-484c-82c5-d369b6f5af5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dcb0f91f-b5dc-48b6-805a-0fe3231189f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4426ce11629e407f98cae838e2e3e2cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '61bae170-9bbe-46b6-9dcc-ab0c87d6de4f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b0dfdb4-bf10-486b-b28e-52cb821a7c4f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=9aa4306f-f805-476d-840c-1580581292f0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.503 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 9aa4306f-f805-476d-840c-1580581292f0 in datapath dcb0f91f-b5dc-48b6-805a-0fe3231189f2 unbound from our chassis
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.504 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dcb0f91f-b5dc-48b6-805a-0fe3231189f2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.506 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[846bd16f-5b50-4909-8e79-ebe395082b35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.506 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2 namespace which is not needed anymore
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.557 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:08 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 22 03:59:08 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 16.520s CPU time.
Nov 22 03:59:08 compute-0 systemd-machined[215728]: Machine qemu-11-instance-0000000b terminated.
Nov 22 03:59:08 compute-0 kernel: tap9aa4306f-f8: entered promiscuous mode
Nov 22 03:59:08 compute-0 kernel: tap9aa4306f-f8 (unregistering): left promiscuous mode
Nov 22 03:59:08 compute-0 ovn_controller[152691]: 2025-11-22T03:59:08Z|00131|binding|INFO|Claiming lport 9aa4306f-f805-476d-840c-1580581292f0 for this chassis.
Nov 22 03:59:08 compute-0 ovn_controller[152691]: 2025-11-22T03:59:08Z|00132|binding|INFO|9aa4306f-f805-476d-840c-1580581292f0: Claiming fa:16:3e:70:b9:45 10.100.0.9
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.622 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:08 compute-0 NetworkManager[48916]: <info>  [1763783948.6256] manager: (tap9aa4306f-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.630 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:b9:45 10.100.0.9'], port_security=['fa:16:3e:70:b9:45 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '18ad6aa8-f2c4-484c-82c5-d369b6f5af5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dcb0f91f-b5dc-48b6-805a-0fe3231189f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4426ce11629e407f98cae838e2e3e2cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '61bae170-9bbe-46b6-9dcc-ab0c87d6de4f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b0dfdb4-bf10-486b-b28e-52cb821a7c4f, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=9aa4306f-f805-476d-840c-1580581292f0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:59:08 compute-0 ovn_controller[152691]: 2025-11-22T03:59:08Z|00133|binding|INFO|Setting lport 9aa4306f-f805-476d-840c-1580581292f0 ovn-installed in OVS
Nov 22 03:59:08 compute-0 ovn_controller[152691]: 2025-11-22T03:59:08Z|00134|binding|INFO|Setting lport 9aa4306f-f805-476d-840c-1580581292f0 up in Southbound
Nov 22 03:59:08 compute-0 ovn_controller[152691]: 2025-11-22T03:59:08Z|00135|binding|INFO|Releasing lport 9aa4306f-f805-476d-840c-1580581292f0 from this chassis (sb_readonly=1)
Nov 22 03:59:08 compute-0 ovn_controller[152691]: 2025-11-22T03:59:08Z|00136|if_status|INFO|Dropped 1 log messages in last 324 seconds (most recently, 324 seconds ago) due to excessive rate
Nov 22 03:59:08 compute-0 ovn_controller[152691]: 2025-11-22T03:59:08Z|00137|if_status|INFO|Not setting lport 9aa4306f-f805-476d-840c-1580581292f0 down as sb is readonly
Nov 22 03:59:08 compute-0 ovn_controller[152691]: 2025-11-22T03:59:08Z|00138|binding|INFO|Removing iface tap9aa4306f-f8 ovn-installed in OVS
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.648 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.650 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.652 253465 INFO nova.virt.libvirt.driver [-] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Instance destroyed successfully.
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.653 253465 DEBUG nova.objects.instance [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lazy-loading 'resources' on Instance uuid 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:59:08 compute-0 ovn_controller[152691]: 2025-11-22T03:59:08Z|00139|binding|INFO|Releasing lport 9aa4306f-f805-476d-840c-1580581292f0 from this chassis (sb_readonly=0)
Nov 22 03:59:08 compute-0 ovn_controller[152691]: 2025-11-22T03:59:08Z|00140|binding|INFO|Setting lport 9aa4306f-f805-476d-840c-1580581292f0 down in Southbound
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.662 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:b9:45 10.100.0.9'], port_security=['fa:16:3e:70:b9:45 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '18ad6aa8-f2c4-484c-82c5-d369b6f5af5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-dcb0f91f-b5dc-48b6-805a-0fe3231189f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4426ce11629e407f98cae838e2e3e2cc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '61bae170-9bbe-46b6-9dcc-ab0c87d6de4f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0b0dfdb4-bf10-486b-b28e-52cb821a7c4f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=9aa4306f-f805-476d-840c-1580581292f0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.666 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.673 253465 DEBUG nova.virt.libvirt.vif [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T03:58:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-541169332',display_name='tempest-TestEncryptedCinderVolumes-server-541169332',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-541169332',id=11,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxjrGAjJ8fj1C8a62zTUzuU6GsxwTL3F2ybuXPp1Sl/wPUQoANOSNs3maBd4JEnOAKFKQej+9qnQoLgCjrw1UKq0wBLK7Lpk3EV+2Jz/99GGAoOyYRRBTCdklsTvE8+WA==',key_name='tempest-keypair-1526528794',keypairs=<?>,launch_index=0,launched_at=2025-11-22T03:58:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4426ce11629e407f98cae838e2e3e2cc',ramdisk_id='',reservation_id='r-sn7j743k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-268735114',owner_user_name='tempest-TestEncryptedCinderVolumes-268735114-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T03:58:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='591897fd5144401c810090ba1c0bf667',uuid=18ad6aa8-f2c4-484c-82c5-d369b6f5af5f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9aa4306f-f805-476d-840c-1580581292f0", "address": "fa:16:3e:70:b9:45", "network": {"id": "dcb0f91f-b5dc-48b6-805a-0fe3231189f2", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2066429546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4426ce11629e407f98cae838e2e3e2cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aa4306f-f8", "ovs_interfaceid": "9aa4306f-f805-476d-840c-1580581292f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.674 253465 DEBUG nova.network.os_vif_util [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Converting VIF {"id": "9aa4306f-f805-476d-840c-1580581292f0", "address": "fa:16:3e:70:b9:45", "network": {"id": "dcb0f91f-b5dc-48b6-805a-0fe3231189f2", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-2066429546-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4426ce11629e407f98cae838e2e3e2cc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aa4306f-f8", "ovs_interfaceid": "9aa4306f-f805-476d-840c-1580581292f0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.675 253465 DEBUG nova.network.os_vif_util [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:70:b9:45,bridge_name='br-int',has_traffic_filtering=True,id=9aa4306f-f805-476d-840c-1580581292f0,network=Network(dcb0f91f-b5dc-48b6-805a-0fe3231189f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aa4306f-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.676 253465 DEBUG os_vif [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:b9:45,bridge_name='br-int',has_traffic_filtering=True,id=9aa4306f-f805-476d-840c-1580581292f0,network=Network(dcb0f91f-b5dc-48b6-805a-0fe3231189f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aa4306f-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.680 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.680 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aa4306f-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.686 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.689 253465 INFO os_vif [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:70:b9:45,bridge_name='br-int',has_traffic_filtering=True,id=9aa4306f-f805-476d-840c-1580581292f0,network=Network(dcb0f91f-b5dc-48b6-805a-0fe3231189f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aa4306f-f8')
Nov 22 03:59:08 compute-0 neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2[275929]: [NOTICE]   (275933) : haproxy version is 2.8.14-c23fe91
Nov 22 03:59:08 compute-0 neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2[275929]: [NOTICE]   (275933) : path to executable is /usr/sbin/haproxy
Nov 22 03:59:08 compute-0 neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2[275929]: [WARNING]  (275933) : Exiting Master process...
Nov 22 03:59:08 compute-0 neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2[275929]: [WARNING]  (275933) : Exiting Master process...
Nov 22 03:59:08 compute-0 neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2[275929]: [ALERT]    (275933) : Current worker (275935) exited with code 143 (Terminated)
Nov 22 03:59:08 compute-0 neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2[275929]: [WARNING]  (275933) : All workers exited. Exiting... (0)
Nov 22 03:59:08 compute-0 systemd[1]: libpod-cfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c.scope: Deactivated successfully.
Nov 22 03:59:08 compute-0 conmon[275929]: conmon cfe593d583d77e9b4911 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c.scope/container/memory.events
Nov 22 03:59:08 compute-0 podman[277464]: 2025-11-22 03:59:08.732666233 +0000 UTC m=+0.065188213 container died cfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 03:59:08 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c-userdata-shm.mount: Deactivated successfully.
Nov 22 03:59:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-696df3737fcdaa3df0196d9cc309edbc341c5a3f2ba7e99df235a43dd0188f85-merged.mount: Deactivated successfully.
Nov 22 03:59:08 compute-0 podman[277464]: 2025-11-22 03:59:08.780512457 +0000 UTC m=+0.113034437 container cleanup cfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 03:59:08 compute-0 systemd[1]: libpod-conmon-cfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c.scope: Deactivated successfully.
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.843 253465 DEBUG nova.compute.manager [req-6216bc83-5eeb-4c42-bbec-d9ff2e1016a7 req-2453a2df-eac6-4835-86c3-21b7ee134172 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received event network-vif-unplugged-9aa4306f-f805-476d-840c-1580581292f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.844 253465 DEBUG oslo_concurrency.lockutils [req-6216bc83-5eeb-4c42-bbec-d9ff2e1016a7 req-2453a2df-eac6-4835-86c3-21b7ee134172 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.844 253465 DEBUG oslo_concurrency.lockutils [req-6216bc83-5eeb-4c42-bbec-d9ff2e1016a7 req-2453a2df-eac6-4835-86c3-21b7ee134172 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.845 253465 DEBUG oslo_concurrency.lockutils [req-6216bc83-5eeb-4c42-bbec-d9ff2e1016a7 req-2453a2df-eac6-4835-86c3-21b7ee134172 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.845 253465 DEBUG nova.compute.manager [req-6216bc83-5eeb-4c42-bbec-d9ff2e1016a7 req-2453a2df-eac6-4835-86c3-21b7ee134172 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] No waiting events found dispatching network-vif-unplugged-9aa4306f-f805-476d-840c-1580581292f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.845 253465 DEBUG nova.compute.manager [req-6216bc83-5eeb-4c42-bbec-d9ff2e1016a7 req-2453a2df-eac6-4835-86c3-21b7ee134172 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received event network-vif-unplugged-9aa4306f-f805-476d-840c-1580581292f0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 03:59:08 compute-0 podman[277516]: 2025-11-22 03:59:08.862458524 +0000 UTC m=+0.061858140 container remove cfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.873 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc5de82-3b4c-4f62-aa01-ebf9d8cc9648]: (4, ('Sat Nov 22 03:59:08 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2 (cfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c)\ncfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c\nSat Nov 22 03:59:08 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2 (cfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c)\ncfe593d583d77e9b49115b309974787eb457889794a6120b6c83bf665e1d898c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.876 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[31d8addb-a9e6-4a38-96ed-5ce03d538d50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.878 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdcb0f91f-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:59:08 compute-0 kernel: tapdcb0f91f-b0: left promiscuous mode
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.882 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:08 compute-0 nova_compute[253461]: 2025-11-22 03:59:08.909 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.913 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[048aa762-b0d0-4a1b-b4f3-99f7a0332ecc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.928 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[7b1625c6-a73b-440d-8715-bd6e93302ad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.929 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[df879945-3487-447e-a754-c738dbf4ffee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.944 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0ce386-314a-4e1d-961e-1485da01a437]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 415570, 'reachable_time': 20184, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277534, 'error': None, 'target': 'ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.948 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.948 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f4dcf0-85f8-46f4-a452-a86747459414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:08 compute-0 systemd[1]: run-netns-ovnmeta\x2ddcb0f91f\x2db5dc\x2d48b6\x2d805a\x2d0fe3231189f2.mount: Deactivated successfully.
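
The mount unit name in the systemd line above is escaped: each literal '-' in the namespace name ovnmeta-dcb0f91f-... appears as \x2d, because '-' is reserved as the path separator in mount unit names. A small sketch of undoing the escaping (equivalent to `systemd-escape --unescape`):

    import re

    def unescape_unit(name):
        # systemd encodes a literal byte as \xNN; '-' (0x2d) is escaped because
        # a plain '-' separates path components in mount unit names
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    unit = r'run-netns-ovnmeta\x2ddcb0f91f\x2db5dc\x2d48b6\x2d805a\x2d0fe3231189f2.mount'
    print(unescape_unit(unit))
    # -> run-netns-ovnmeta-dcb0f91f-b5dc-48b6-805a-0fe3231189f2.mount

That is the /run/netns bind mount backing the namespace that remove_netns just deleted.
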
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.950 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 9aa4306f-f805-476d-840c-1580581292f0 in datapath dcb0f91f-b5dc-48b6-805a-0fe3231189f2 unbound from our chassis
Nov 22 03:59:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.952 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dcb0f91f-b5dc-48b6-805a-0fe3231189f2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.953 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[bb783248-65a0-4512-8538-5f733c7e57c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.955 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 9aa4306f-f805-476d-840c-1580581292f0 in datapath dcb0f91f-b5dc-48b6-805a-0fe3231189f2 unbound from our chassis
Nov 22 03:59:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.957 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network dcb0f91f-b5dc-48b6-805a-0fe3231189f2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 03:59:08 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:08.958 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[bb12776c-4470-4225-915e-15db5f95774d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:08 compute-0 ceph-mon[75011]: osdmap e281: 3 total, 3 up, 3 in
Nov 22 03:59:08 compute-0 ceph-mon[75011]: pgmap v1307: 305 pgs: 305 active+clean; 167 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.3 KiB/s wr, 74 op/s
Nov 22 03:59:09 compute-0 nova_compute[253461]: 2025-11-22 03:59:09.159 253465 INFO nova.virt.libvirt.driver [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Deleting instance files /var/lib/nova/instances/18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_del
Nov 22 03:59:09 compute-0 nova_compute[253461]: 2025-11-22 03:59:09.160 253465 INFO nova.virt.libvirt.driver [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Deletion of /var/lib/nova/instances/18ad6aa8-f2c4-484c-82c5-d369b6f5af5f_del complete
Nov 22 03:59:09 compute-0 nova_compute[253461]: 2025-11-22 03:59:09.222 253465 INFO nova.compute.manager [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Took 0.82 seconds to destroy the instance on the hypervisor.
Nov 22 03:59:09 compute-0 nova_compute[253461]: 2025-11-22 03:59:09.223 253465 DEBUG oslo.service.loopingcall [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 03:59:09 compute-0 nova_compute[253461]: 2025-11-22 03:59:09.223 253465 DEBUG nova.compute.manager [-] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 03:59:09 compute-0 nova_compute[253461]: 2025-11-22 03:59:09.224 253465 DEBUG nova.network.neutron [-] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
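
The "Waiting for function ... _deallocate_network_with_retries to return" line is oslo.service's looping-call machinery: a retry body runs on a timer until it raises LoopingCallDone, and the caller blocks on the returned event. A minimal sketch with FixedIntervalLoopingCall (nova's actual helper may use a backoff variant; the body here is a hypothetical stand-in):

    from oslo_service import loopingcall

    attempts = {'n': 0}

    def _deallocate_with_retries():
        # hypothetical retry body standing in for nova's; stop after 3 tries
        attempts['n'] += 1
        if attempts['n'] >= 3:
            raise loopingcall.LoopingCallDone(retvalue=True)

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries)
    result = timer.start(interval=1).wait()  # blocks, like the "Waiting for function" line above
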
Nov 22 03:59:09 compute-0 ceph-mon[75011]: osdmap e282: 3 total, 3 up, 3 in
Nov 22 03:59:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 131 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 11 KiB/s wr, 77 op/s
Nov 22 03:59:10 compute-0 nova_compute[253461]: 2025-11-22 03:59:10.632 253465 DEBUG nova.network.neutron [-] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:59:10 compute-0 nova_compute[253461]: 2025-11-22 03:59:10.666 253465 INFO nova.compute.manager [-] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Took 1.44 seconds to deallocate network for instance.
Nov 22 03:59:10 compute-0 nova_compute[253461]: 2025-11-22 03:59:10.744 253465 DEBUG oslo_concurrency.lockutils [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:10 compute-0 nova_compute[253461]: 2025-11-22 03:59:10.745 253465 DEBUG oslo_concurrency.lockutils [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
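
The Acquiring/acquired/"released" trios around "compute_resources" (the inner frames at lockutils.py:404/409/423) come from oslo.concurrency's synchronized wrapper, which nova's resource tracker uses to serialize resource accounting. In sketch form:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_usage():
        # critical section; the decorator emits the Acquiring/acquired/released
        # DEBUG lines seen above, including the waited/held timings
        ...

    # an equivalent context-manager form also exists:
    with lockutils.lock('compute_resources'):
        ...
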
Nov 22 03:59:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Nov 22 03:59:10 compute-0 ceph-mon[75011]: pgmap v1309: 305 pgs: 305 active+clean; 131 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 11 KiB/s wr, 77 op/s
Nov 22 03:59:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Nov 22 03:59:10 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.015 253465 DEBUG oslo_concurrency.processutils [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
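
Nova shells out for Ceph capacity here rather than holding a librados connection; oslo.concurrency's processutils logs the "Running cmd" / "returned: 0" pair around the subprocess. A sketch of the same call (the client id and conf path are the ones in the log):

    import json
    from oslo_concurrency import processutils

    # execute() returns (stdout, stderr) and raises ProcessExecutionError
    # on a non-zero exit code
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']  # 'stats' carries total_bytes/total_avail_bytes in `ceph df --format=json`
    print(stats['total_bytes'], stats['total_avail_bytes'])
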
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.070 253465 DEBUG nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received event network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.072 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.073 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.073 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.073 253465 DEBUG nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] No waiting events found dispatching network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.073 253465 WARNING nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received unexpected event network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 for instance with vm_state deleted and task_state None.
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.074 253465 DEBUG nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received event network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.074 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.074 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.074 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.074 253465 DEBUG nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] No waiting events found dispatching network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.075 253465 WARNING nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received unexpected event network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 for instance with vm_state deleted and task_state None.
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.075 253465 DEBUG nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received event network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.075 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.075 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.075 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.076 253465 DEBUG nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] No waiting events found dispatching network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.076 253465 WARNING nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received unexpected event network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 for instance with vm_state deleted and task_state None.
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.076 253465 DEBUG nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received event network-vif-unplugged-9aa4306f-f805-476d-840c-1580581292f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.076 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.076 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.077 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.077 253465 DEBUG nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] No waiting events found dispatching network-vif-unplugged-9aa4306f-f805-476d-840c-1580581292f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.077 253465 WARNING nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received unexpected event network-vif-unplugged-9aa4306f-f805-476d-840c-1580581292f0 for instance with vm_state deleted and task_state None.
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.077 253465 DEBUG nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received event network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.077 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.078 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.078 253465 DEBUG oslo_concurrency.lockutils [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.078 253465 DEBUG nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] No waiting events found dispatching network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.078 253465 WARNING nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received unexpected event network-vif-plugged-9aa4306f-f805-476d-840c-1580581292f0 for instance with vm_state deleted and task_state None.
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.078 253465 DEBUG nova.compute.manager [req-7874e5be-1a0d-4364-8fa0-5a70e6958a49 req-b2099fda-2f9d-4e10-9638-0f69ed7e5204 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Received event network-vif-deleted-9aa4306f-f805-476d-840c-1580581292f0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:59:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:59:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/667927775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
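
On the monitor side the same request surfaces as a mon_command dispatch audited for client.openstack; librados forwards exactly that JSON to the leader mon. A sketch of the equivalent call through the python rados binding:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
    cluster.connect()
    # produces handle_command / audit "df" dispatch lines like the ones above
    ret, out, errs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    print(ret, json.loads(out)['stats']['total_avail_bytes'])
    cluster.shutdown()
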
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.468 253465 DEBUG oslo_concurrency.processutils [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.475 253465 DEBUG nova.compute.provider_tree [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.501 253465 DEBUG nova.scheduler.client.report [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
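
Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class, so the unchanged record above advertises:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, f in inventory.items():
        print(rc, (f['total'] - f['reserved']) * f['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
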
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.547 253465 DEBUG oslo_concurrency.lockutils [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.626 253465 INFO nova.scheduler.client.report [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Deleted allocations for instance 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f
Nov 22 03:59:11 compute-0 nova_compute[253461]: 2025-11-22 03:59:11.708 253465 DEBUG oslo_concurrency.lockutils [None req-f85674d3-4335-4ddc-98a1-6e7736b07375 591897fd5144401c810090ba1c0bf667 4426ce11629e407f98cae838e2e3e2cc - - default default] Lock "18ad6aa8-f2c4-484c-82c5-d369b6f5af5f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:11 compute-0 ceph-mon[75011]: osdmap e283: 3 total, 3 up, 3 in
Nov 22 03:59:11 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/667927775' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:59:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:59:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2313984831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:12 compute-0 nova_compute[253461]: 2025-11-22 03:59:12.200 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 107 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 12 KiB/s wr, 136 op/s
Nov 22 03:59:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Nov 22 03:59:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Nov 22 03:59:12 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Nov 22 03:59:13 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2313984831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:13 compute-0 ceph-mon[75011]: pgmap v1311: 305 pgs: 305 active+clean; 107 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 12 KiB/s wr, 136 op/s
Nov 22 03:59:13 compute-0 nova_compute[253461]: 2025-11-22 03:59:13.684 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:59:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1124659717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:59:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1124659717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Nov 22 03:59:14 compute-0 ceph-mon[75011]: osdmap e284: 3 total, 3 up, 3 in
Nov 22 03:59:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1124659717' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1124659717' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Nov 22 03:59:14 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Nov 22 03:59:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 14 KiB/s wr, 146 op/s
Nov 22 03:59:15 compute-0 ceph-mon[75011]: osdmap e285: 3 total, 3 up, 3 in
Nov 22 03:59:15 compute-0 ceph-mon[75011]: pgmap v1314: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 14 KiB/s wr, 146 op/s
Nov 22 03:59:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Nov 22 03:59:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Nov 22 03:59:15 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Nov 22 03:59:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 6.8 KiB/s wr, 138 op/s
Nov 22 03:59:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Nov 22 03:59:17 compute-0 ceph-mon[75011]: osdmap e286: 3 total, 3 up, 3 in
Nov 22 03:59:17 compute-0 ceph-mon[75011]: pgmap v1316: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 6.8 KiB/s wr, 138 op/s
Nov 22 03:59:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Nov 22 03:59:17 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Nov 22 03:59:17 compute-0 nova_compute[253461]: 2025-11-22 03:59:17.203 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:17 compute-0 nova_compute[253461]: 2025-11-22 03:59:17.783 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:17 compute-0 nova_compute[253461]: 2025-11-22 03:59:17.966 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Nov 22 03:59:18 compute-0 ceph-mon[75011]: osdmap e287: 3 total, 3 up, 3 in
Nov 22 03:59:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Nov 22 03:59:18 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Nov 22 03:59:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 6.6 KiB/s wr, 129 op/s
Nov 22 03:59:18 compute-0 podman[277559]: 2025-11-22 03:59:18.416577496 +0000 UTC m=+0.085522887 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 03:59:18 compute-0 podman[277560]: 2025-11-22 03:59:18.483502787 +0000 UTC m=+0.150496710 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
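
The two health_status=healthy events are podman's periodic healthcheck runs for ovn_metadata_agent and ovn_controller; per the config_data in each line, the test command is /openstack/healthcheck mounted from /var/lib/openstack/healthchecks. The same check can be triggered and inspected by hand; a sketch via subprocess:

    import json
    import subprocess

    # run the container's configured healthcheck once, then read its state
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'], check=True)
    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{json .State.Health}}', 'ovn_metadata_agent'],
        capture_output=True, text=True, check=True)
    print(json.loads(out.stdout)['Status'])  # "healthy", matching health_status above
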
Nov 22 03:59:18 compute-0 nova_compute[253461]: 2025-11-22 03:59:18.686 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:59:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3742696040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:59:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3742696040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:19 compute-0 ceph-mon[75011]: osdmap e288: 3 total, 3 up, 3 in
Nov 22 03:59:19 compute-0 ceph-mon[75011]: pgmap v1319: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 6.6 KiB/s wr, 129 op/s
Nov 22 03:59:19 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3742696040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:19 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3742696040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:59:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3683097460' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:59:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3683097460' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3683097460' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3683097460' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 6.3 KiB/s wr, 166 op/s
Nov 22 03:59:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:21 compute-0 ceph-mon[75011]: pgmap v1320: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 6.3 KiB/s wr, 166 op/s
Nov 22 03:59:22 compute-0 nova_compute[253461]: 2025-11-22 03:59:22.207 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 7.6 KiB/s wr, 182 op/s
Nov 22 03:59:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:23.011 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:23.011 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:23.012 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:23 compute-0 ceph-mon[75011]: pgmap v1321: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 7.6 KiB/s wr, 182 op/s
Nov 22 03:59:23 compute-0 nova_compute[253461]: 2025-11-22 03:59:23.648 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763783948.6471117, 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:59:23 compute-0 nova_compute[253461]: 2025-11-22 03:59:23.648 253465 INFO nova.compute.manager [-] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] VM Stopped (Lifecycle Event)
Nov 22 03:59:23 compute-0 nova_compute[253461]: 2025-11-22 03:59:23.666 253465 DEBUG nova.compute.manager [None req-31ca5785-822e-4436-b637-dd8beb5f0090 - - - - - -] [instance: 18ad6aa8-f2c4-484c-82c5-d369b6f5af5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:59:23 compute-0 nova_compute[253461]: 2025-11-22 03:59:23.688 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 6.0 KiB/s wr, 147 op/s
Nov 22 03:59:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:24.913 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:59:24 compute-0 nova_compute[253461]: 2025-11-22 03:59:24.914 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:24.914 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
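
SbGlobalUpdateEvent is an ovsdbapp row event: it matched an UPDATE on SB_Global where nb_cfg moved from 11 to 12, and the agent delays its chassis acknowledgement by 2 seconds to spread the write load. The matching machinery, in sketch form:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='SB_Global', no conditions,
            # exactly as printed in the "Matched UPDATE" line above
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            print('nb_cfg moved from', old.nb_cfg, 'to', row.nb_cfg)
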
Nov 22 03:59:25 compute-0 ceph-mon[75011]: pgmap v1322: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 6.0 KiB/s wr, 147 op/s
Nov 22 03:59:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Nov 22 03:59:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Nov 22 03:59:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Nov 22 03:59:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.7 KiB/s wr, 81 op/s
Nov 22 03:59:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:59:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2405667057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:26.917 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
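
Two seconds later the delayed write lands: a one-command transaction setting neutron:ovn-metadata-sb-cfg to '12' on the agent's Chassis_Private row, acknowledging the nb_cfg bump. Roughly, with the ovsdbapp API (sb_idl here is assumed to be the agent's connected southbound API object):

    # sketch; sb_idl is an assumed, already-connected ovsdbapp API instance
    with sb_idl.transaction(check_error=True) as txn:
        txn.add(sb_idl.db_set(
            'Chassis_Private', '7d76f7df-fc3b-449d-b505-65b8b0ef9c3a',
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),
            if_exists=True))  # the logged DbSetCommand carries if_exists=True
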
Nov 22 03:59:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Nov 22 03:59:26 compute-0 ceph-mon[75011]: osdmap e289: 3 total, 3 up, 3 in
Nov 22 03:59:26 compute-0 ceph-mon[75011]: pgmap v1324: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.7 KiB/s wr, 81 op/s
Nov 22 03:59:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2405667057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Nov 22 03:59:26 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Nov 22 03:59:27 compute-0 nova_compute[253461]: 2025-11-22 03:59:27.208 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Nov 22 03:59:27 compute-0 ceph-mon[75011]: osdmap e290: 3 total, 3 up, 3 in
Nov 22 03:59:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Nov 22 03:59:27 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Nov 22 03:59:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 170 B/s wr, 8 op/s
Nov 22 03:59:28 compute-0 nova_compute[253461]: 2025-11-22 03:59:28.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:59:28 compute-0 nova_compute[253461]: 2025-11-22 03:59:28.692 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:28 compute-0 ceph-mon[75011]: osdmap e291: 3 total, 3 up, 3 in
Nov 22 03:59:28 compute-0 ceph-mon[75011]: pgmap v1327: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 170 B/s wr, 8 op/s
Nov 22 03:59:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:59:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2907035327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Nov 22 03:59:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2907035327' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Nov 22 03:59:30 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Nov 22 03:59:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 2.1 KiB/s wr, 15 op/s
Nov 22 03:59:30 compute-0 nova_compute[253461]: 2025-11-22 03:59:30.447 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:59:30 compute-0 nova_compute[253461]: 2025-11-22 03:59:30.447 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 03:59:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Nov 22 03:59:31 compute-0 ceph-mon[75011]: osdmap e292: 3 total, 3 up, 3 in
Nov 22 03:59:31 compute-0 ceph-mon[75011]: pgmap v1329: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 9.7 KiB/s rd, 2.1 KiB/s wr, 15 op/s
Nov 22 03:59:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Nov 22 03:59:31 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Nov 22 03:59:32 compute-0 ceph-mon[75011]: osdmap e293: 3 total, 3 up, 3 in
Nov 22 03:59:32 compute-0 nova_compute[253461]: 2025-11-22 03:59:32.258 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1331: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.2 KiB/s wr, 51 op/s
Nov 22 03:59:32 compute-0 nova_compute[253461]: 2025-11-22 03:59:32.455 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:59:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Nov 22 03:59:33 compute-0 ceph-mon[75011]: pgmap v1331: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 3.2 KiB/s wr, 51 op/s
Nov 22 03:59:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Nov 22 03:59:33 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Nov 22 03:59:33 compute-0 nova_compute[253461]: 2025-11-22 03:59:33.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:59:33 compute-0 nova_compute[253461]: 2025-11-22 03:59:33.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 03:59:33 compute-0 nova_compute[253461]: 2025-11-22 03:59:33.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 03:59:33 compute-0 podman[277604]: 2025-11-22 03:59:33.431564713 +0000 UTC m=+0.094658246 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd)
Nov 22 03:59:33 compute-0 nova_compute[253461]: 2025-11-22 03:59:33.455 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
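
The _heal_instance_info_cache, _poll_rebooting_instances, and update_available_resource lines are oslo.service periodic tasks, all driven under one request id (req-016c3498-...). The pattern, in sketch form:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # nova rebuilds its list of instances to heal here; none were
            # found above, hence "Didn't find any instances..."
            pass

    # the service loop invokes mgr.run_periodic_tasks(context) on a timer,
    # emitting one "Running periodic task ..." line per task
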
Nov 22 03:59:33 compute-0 nova_compute[253461]: 2025-11-22 03:59:33.695 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:34 compute-0 ceph-mon[75011]: osdmap e294: 3 total, 3 up, 3 in
Nov 22 03:59:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 5.5 KiB/s wr, 96 op/s
Nov 22 03:59:34 compute-0 nova_compute[253461]: 2025-11-22 03:59:34.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:59:34 compute-0 nova_compute[253461]: 2025-11-22 03:59:34.459 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:34 compute-0 nova_compute[253461]: 2025-11-22 03:59:34.460 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:34 compute-0 nova_compute[253461]: 2025-11-22 03:59:34.460 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:34 compute-0 nova_compute[253461]: 2025-11-22 03:59:34.460 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 03:59:34 compute-0 nova_compute[253461]: 2025-11-22 03:59:34.461 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:59:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:59:34 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2979608612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:59:34 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4213704217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:59:34 compute-0 nova_compute[253461]: 2025-11-22 03:59:34.939 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
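
The two processutils lines show nova gathering Ceph capacity by shelling out to `ceph df --format=json` as client.openstack. A standalone sketch of the same probe, assuming the ceph CLI and the client.openstack keyring are present; the key names follow the `ceph df -f json` output schema:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    gib = 2 ** 30
    print(f"{stats['total_avail_bytes'] / gib:.1f} GiB free "
          f"of {stats['total_bytes'] / gib:.1f} GiB")
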
Nov 22 03:59:35 compute-0 nova_compute[253461]: 2025-11-22 03:59:35.133 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:59:35 compute-0 nova_compute[253461]: 2025-11-22 03:59:35.134 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4617MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
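
The pci_devices field in that resource view is plain JSON, so it is easy to audit offline. An illustrative pass over it (entries copied from the log, trimmed to the fields used) tallies devices per vendor; for the full list above this yields 6 virtio devices (vendor 1af4) and 5 Intel devices (vendor 8086):

    from collections import Counter

    pci_devices = [
        {"address": "0000:00:01.2", "vendor_id": "8086"},
        {"address": "0000:00:07.0", "vendor_id": "1af4"},
        # ... remaining entries from the resource view above
    ]
    print(Counter(d["vendor_id"] for d in pci_devices))
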
Nov 22 03:59:35 compute-0 nova_compute[253461]: 2025-11-22 03:59:35.134 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:35 compute-0 nova_compute[253461]: 2025-11-22 03:59:35.135 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Nov 22 03:59:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Nov 22 03:59:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Nov 22 03:59:35 compute-0 ceph-mon[75011]: pgmap v1333: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 5.5 KiB/s wr, 96 op/s
Nov 22 03:59:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2979608612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4213704217' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:59:35 compute-0 nova_compute[253461]: 2025-11-22 03:59:35.626 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 03:59:35 compute-0 nova_compute[253461]: 2025-11-22 03:59:35.626 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 03:59:35 compute-0 nova_compute[253461]: 2025-11-22 03:59:35.753 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:59:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:59:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/462017022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:59:36 compute-0 nova_compute[253461]: 2025-11-22 03:59:36.178 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:59:36 compute-0 nova_compute[253461]: 2025-11-22 03:59:36.187 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:59:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Nov 22 03:59:36 compute-0 ceph-mon[75011]: osdmap e295: 3 total, 3 up, 3 in
Nov 22 03:59:36 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/462017022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_03:59:36
Nov 22 03:59:36 compute-0 nova_compute[253461]: 2025-11-22 03:59:36.217 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
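
Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio. Working it through for the values logged above:

    # Effective capacity per resource class, from the inventory data above.
    vcpu = (8 - 0) * 4.0        # 32.0 schedulable vCPUs
    ram  = (7679 - 512) * 1.0   # 7167.0 MB schedulable RAM
    disk = (59 - 1) * 0.9       # 52.2 GB schedulable disk
    print(vcpu, ram, disk)
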
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups', '.mgr', 'volumes', 'vms']
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 03:59:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Nov 22 03:59:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Nov 22 03:59:36 compute-0 nova_compute[253461]: 2025-11-22 03:59:36.253 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 03:59:36 compute-0 nova_compute[253461]: 2025-11-22 03:59:36.254 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 4.9 KiB/s wr, 105 op/s
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 03:59:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 03:59:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Nov 22 03:59:37 compute-0 ceph-mon[75011]: osdmap e296: 3 total, 3 up, 3 in
Nov 22 03:59:37 compute-0 ceph-mon[75011]: pgmap v1336: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 4.9 KiB/s wr, 105 op/s
Nov 22 03:59:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Nov 22 03:59:37 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Nov 22 03:59:37 compute-0 nova_compute[253461]: 2025-11-22 03:59:37.260 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:59:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2347874788' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:59:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2347874788' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
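
The client at 192.168.122.10 pairing `df` with `osd pool get-quota` on the volumes pool is consistent with an RBD capacity check (for example Cinder's backend stats reporting). A hedged client-side equivalent, assuming the JSON field names used by current Ceph releases:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    quota = json.loads(out)
    # A value of 0 means "no quota set" for both fields.
    print(quota["quota_max_bytes"], quota["quota_max_objects"])
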
Nov 22 03:59:38 compute-0 ceph-mon[75011]: osdmap e297: 3 total, 3 up, 3 in
Nov 22 03:59:38 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2347874788' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:38 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2347874788' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:38 compute-0 nova_compute[253461]: 2025-11-22 03:59:38.252 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:59:38 compute-0 nova_compute[253461]: 2025-11-22 03:59:38.253 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:59:38 compute-0 nova_compute[253461]: 2025-11-22 03:59:38.254 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:59:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 4.9 KiB/s wr, 80 op/s
Nov 22 03:59:38 compute-0 nova_compute[253461]: 2025-11-22 03:59:38.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:59:38 compute-0 nova_compute[253461]: 2025-11-22 03:59:38.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
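
The _reclaim_queued_deletes task short-circuits because reclaim_instance_interval is at its default of 0, which disables soft delete. An illustrative nova.conf fragment that would make this periodic task do work (the interval value is an example, not from this deployment):

    [DEFAULT]
    # Seconds a soft-deleted instance is kept before the
    # _reclaim_queued_deletes periodic task reclaims it; 0 disables it.
    reclaim_instance_interval = 600
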
Nov 22 03:59:38 compute-0 nova_compute[253461]: 2025-11-22 03:59:38.699 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:39 compute-0 ceph-mon[75011]: pgmap v1338: 305 pgs: 305 active+clean; 88 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 4.9 KiB/s wr, 80 op/s
Nov 22 03:59:39 compute-0 nova_compute[253461]: 2025-11-22 03:59:39.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:59:39 compute-0 nova_compute[253461]: 2025-11-22 03:59:39.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:59:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:59:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2791683704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:40 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2791683704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 88 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.2 KiB/s wr, 77 op/s
Nov 22 03:59:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Nov 22 03:59:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Nov 22 03:59:40 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Nov 22 03:59:41 compute-0 sudo[277669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:59:41 compute-0 sudo[277669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:41 compute-0 sudo[277669]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:41 compute-0 sudo[277694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:59:41 compute-0 sudo[277694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:41 compute-0 sudo[277694]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:41 compute-0 ceph-mon[75011]: pgmap v1339: 305 pgs: 305 active+clean; 88 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 4.2 KiB/s wr, 77 op/s
Nov 22 03:59:41 compute-0 ceph-mon[75011]: osdmap e298: 3 total, 3 up, 3 in
Nov 22 03:59:41 compute-0 sudo[277719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:59:41 compute-0 sudo[277719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:41 compute-0 sudo[277719]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:41 compute-0 sudo[277744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 03:59:41 compute-0 sudo[277744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Nov 22 03:59:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Nov 22 03:59:41 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Nov 22 03:59:42 compute-0 sudo[277744]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:59:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:59:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 03:59:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:59:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 03:59:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:59:42 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 5e2297cc-7ee1-411c-8f05-6e20469c8f27 does not exist
Nov 22 03:59:42 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 8f427662-51df-49dd-9f21-e9ffa139aa66 does not exist
Nov 22 03:59:42 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev a1f4fb1d-24ff-4c4b-b283-5a83e1146591 does not exist
Nov 22 03:59:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 03:59:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:59:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 03:59:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:59:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 03:59:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:59:42 compute-0 sudo[277800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:59:42 compute-0 sudo[277800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:42 compute-0 sudo[277800]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:42 compute-0 nova_compute[253461]: 2025-11-22 03:59:42.262 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 88 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 6.0 KiB/s wr, 112 op/s
Nov 22 03:59:42 compute-0 sudo[277825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:59:42 compute-0 sudo[277825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:42 compute-0 sudo[277825]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:42 compute-0 sudo[277850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:59:42 compute-0 sudo[277850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:42 compute-0 sudo[277850]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:42 compute-0 sudo[277875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 03:59:42 compute-0 sudo[277875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:42 compute-0 podman[277942]: 2025-11-22 03:59:42.812401767 +0000 UTC m=+0.067995417 container create 02104107e5fda60ff04ce0def953d8335a642e965cd5f04e471a9ff5fb9dcddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 03:59:42 compute-0 systemd[1]: Started libpod-conmon-02104107e5fda60ff04ce0def953d8335a642e965cd5f04e471a9ff5fb9dcddf.scope.
Nov 22 03:59:42 compute-0 podman[277942]: 2025-11-22 03:59:42.785901024 +0000 UTC m=+0.041494724 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:59:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:59:42 compute-0 podman[277942]: 2025-11-22 03:59:42.919959162 +0000 UTC m=+0.175552812 container init 02104107e5fda60ff04ce0def953d8335a642e965cd5f04e471a9ff5fb9dcddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 03:59:42 compute-0 podman[277942]: 2025-11-22 03:59:42.930944158 +0000 UTC m=+0.186537818 container start 02104107e5fda60ff04ce0def953d8335a642e965cd5f04e471a9ff5fb9dcddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 03:59:42 compute-0 podman[277942]: 2025-11-22 03:59:42.935960897 +0000 UTC m=+0.191554597 container attach 02104107e5fda60ff04ce0def953d8335a642e965cd5f04e471a9ff5fb9dcddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:59:42 compute-0 zealous_morse[277959]: 167 167
Nov 22 03:59:42 compute-0 systemd[1]: libpod-02104107e5fda60ff04ce0def953d8335a642e965cd5f04e471a9ff5fb9dcddf.scope: Deactivated successfully.
Nov 22 03:59:42 compute-0 podman[277942]: 2025-11-22 03:59:42.939470735 +0000 UTC m=+0.195064385 container died 02104107e5fda60ff04ce0def953d8335a642e965cd5f04e471a9ff5fb9dcddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 03:59:42 compute-0 ceph-mon[75011]: osdmap e299: 3 total, 3 up, 3 in
Nov 22 03:59:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:59:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 03:59:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:59:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 03:59:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 03:59:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 03:59:42 compute-0 ceph-mon[75011]: pgmap v1342: 305 pgs: 305 active+clean; 88 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 6.0 KiB/s wr, 112 op/s
Nov 22 03:59:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d3fa2f4f099d471cc5d83f80b35ac4643983765e126f6623a1ac453b689b755-merged.mount: Deactivated successfully.
Nov 22 03:59:42 compute-0 podman[277942]: 2025-11-22 03:59:42.989888569 +0000 UTC m=+0.245482189 container remove 02104107e5fda60ff04ce0def953d8335a642e965cd5f04e471a9ff5fb9dcddf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Nov 22 03:59:43 compute-0 systemd[1]: libpod-conmon-02104107e5fda60ff04ce0def953d8335a642e965cd5f04e471a9ff5fb9dcddf.scope: Deactivated successfully.
Nov 22 03:59:43 compute-0 podman[277984]: 2025-11-22 03:59:43.241226512 +0000 UTC m=+0.066258714 container create b34a9ebefc7f483668e00cf050c43448f0900b419b95c53ba76789f7b6576114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hoover, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 03:59:43 compute-0 systemd[1]: Started libpod-conmon-b34a9ebefc7f483668e00cf050c43448f0900b419b95c53ba76789f7b6576114.scope.
Nov 22 03:59:43 compute-0 podman[277984]: 2025-11-22 03:59:43.215944709 +0000 UTC m=+0.040976982 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:59:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac75bea2f0cf23d2a2c3224f17806b6236aa15853d3ab237e9f2c05ad7aea40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac75bea2f0cf23d2a2c3224f17806b6236aa15853d3ab237e9f2c05ad7aea40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac75bea2f0cf23d2a2c3224f17806b6236aa15853d3ab237e9f2c05ad7aea40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac75bea2f0cf23d2a2c3224f17806b6236aa15853d3ab237e9f2c05ad7aea40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ac75bea2f0cf23d2a2c3224f17806b6236aa15853d3ab237e9f2c05ad7aea40/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:43 compute-0 podman[277984]: 2025-11-22 03:59:43.351203458 +0000 UTC m=+0.176235681 container init b34a9ebefc7f483668e00cf050c43448f0900b419b95c53ba76789f7b6576114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hoover, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 03:59:43 compute-0 podman[277984]: 2025-11-22 03:59:43.370098593 +0000 UTC m=+0.195130816 container start b34a9ebefc7f483668e00cf050c43448f0900b419b95c53ba76789f7b6576114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hoover, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:59:43 compute-0 podman[277984]: 2025-11-22 03:59:43.374461174 +0000 UTC m=+0.199493377 container attach b34a9ebefc7f483668e00cf050c43448f0900b419b95c53ba76789f7b6576114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 03:59:43 compute-0 nova_compute[253461]: 2025-11-22 03:59:43.701 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Nov 22 03:59:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Nov 22 03:59:44 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Nov 22 03:59:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 88 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 7.2 KiB/s wr, 129 op/s
Nov 22 03:59:44 compute-0 gallant_hoover[278001]: --> passed data devices: 0 physical, 3 LVM
Nov 22 03:59:44 compute-0 gallant_hoover[278001]: --> relative data size: 1.0
Nov 22 03:59:44 compute-0 gallant_hoover[278001]: --> All data devices are unavailable
Nov 22 03:59:44 compute-0 systemd[1]: libpod-b34a9ebefc7f483668e00cf050c43448f0900b419b95c53ba76789f7b6576114.scope: Deactivated successfully.
Nov 22 03:59:44 compute-0 systemd[1]: libpod-b34a9ebefc7f483668e00cf050c43448f0900b419b95c53ba76789f7b6576114.scope: Consumed 1.111s CPU time.
Nov 22 03:59:44 compute-0 podman[277984]: 2025-11-22 03:59:44.550113798 +0000 UTC m=+1.375146021 container died b34a9ebefc7f483668e00cf050c43448f0900b419b95c53ba76789f7b6576114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hoover, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 03:59:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ac75bea2f0cf23d2a2c3224f17806b6236aa15853d3ab237e9f2c05ad7aea40-merged.mount: Deactivated successfully.
Nov 22 03:59:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:59:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105863094' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:59:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/105863094' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:44 compute-0 podman[277984]: 2025-11-22 03:59:44.621922267 +0000 UTC m=+1.446954460 container remove b34a9ebefc7f483668e00cf050c43448f0900b419b95c53ba76789f7b6576114 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 03:59:44 compute-0 systemd[1]: libpod-conmon-b34a9ebefc7f483668e00cf050c43448f0900b419b95c53ba76789f7b6576114.scope: Deactivated successfully.
Nov 22 03:59:44 compute-0 sudo[277875]: pam_unix(sudo:session): session closed for user root
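
The `lvm batch` run above ended with "All data devices are unavailable", which ceph-volume typically reports when the given LVs are already in use, for example already prepared as OSDs. Consistent with that, cephadm's next step (the `ceph-volume ... lvm list --format json` call just below) reconciles what is actually deployed. A sketch of reading that inventory directly, assuming ceph-volume is available on the host and that its JSON output is keyed by OSD id as in current releases:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"])
    osds = json.loads(out)
    for osd_id, devices in osds.items():
        for dev in devices:
            # Each device entry carries the LV path and ceph.* tags,
            # including the OSD fsid it was prepared for.
            print(osd_id, dev["lv_path"], dev["tags"].get("ceph.osd_fsid"))
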
Nov 22 03:59:44 compute-0 nova_compute[253461]: 2025-11-22 03:59:44.696 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:44 compute-0 nova_compute[253461]: 2025-11-22 03:59:44.697 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:44 compute-0 nova_compute[253461]: 2025-11-22 03:59:44.713 253465 DEBUG nova.compute.manager [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 03:59:44 compute-0 sudo[278044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:59:44 compute-0 sudo[278044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:44 compute-0 sudo[278044]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:44 compute-0 sudo[278069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:59:44 compute-0 sudo[278069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:44 compute-0 sudo[278069]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:44 compute-0 nova_compute[253461]: 2025-11-22 03:59:44.882 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:44 compute-0 nova_compute[253461]: 2025-11-22 03:59:44.883 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:44 compute-0 nova_compute[253461]: 2025-11-22 03:59:44.890 253465 DEBUG nova.virt.hardware [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 03:59:44 compute-0 nova_compute[253461]: 2025-11-22 03:59:44.890 253465 INFO nova.compute.claims [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Claim successful on node compute-0.ctlplane.example.com
Nov 22 03:59:44 compute-0 sudo[278094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:59:44 compute-0 sudo[278094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:44 compute-0 sudo[278094]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:44 compute-0 sudo[278119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 03:59:45 compute-0 sudo[278119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.017 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:59:45 compute-0 ceph-mon[75011]: osdmap e300: 3 total, 3 up, 3 in
Nov 22 03:59:45 compute-0 ceph-mon[75011]: pgmap v1344: 305 pgs: 305 active+clean; 88 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 7.2 KiB/s wr, 129 op/s
Nov 22 03:59:45 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/105863094' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:45 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/105863094' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 03:59:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713707682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:59:45 compute-0 podman[278204]: 2025-11-22 03:59:45.428783667 +0000 UTC m=+0.071743924 container create 95428d48a46d9bbd839aff4b0f5959a77a9b9b22ec5f07ffbf52bfc1fcc08678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.444 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.454 253465 DEBUG nova.compute.provider_tree [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.471 253465 DEBUG nova.scheduler.client.report [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 03:59:45 compute-0 systemd[1]: Started libpod-conmon-95428d48a46d9bbd839aff4b0f5959a77a9b9b22ec5f07ffbf52bfc1fcc08678.scope.
Nov 22 03:59:45 compute-0 podman[278204]: 2025-11-22 03:59:45.399202083 +0000 UTC m=+0.042162410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.495 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.496 253465 DEBUG nova.compute.manager [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 03:59:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:59:45 compute-0 podman[278204]: 2025-11-22 03:59:45.517856177 +0000 UTC m=+0.160816454 container init 95428d48a46d9bbd839aff4b0f5959a77a9b9b22ec5f07ffbf52bfc1fcc08678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 03:59:45 compute-0 podman[278204]: 2025-11-22 03:59:45.523711992 +0000 UTC m=+0.166672240 container start 95428d48a46d9bbd839aff4b0f5959a77a9b9b22ec5f07ffbf52bfc1fcc08678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 03:59:45 compute-0 podman[278204]: 2025-11-22 03:59:45.528025401 +0000 UTC m=+0.170985748 container attach 95428d48a46d9bbd839aff4b0f5959a77a9b9b22ec5f07ffbf52bfc1fcc08678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:59:45 compute-0 zen_rhodes[278223]: 167 167
Nov 22 03:59:45 compute-0 systemd[1]: libpod-95428d48a46d9bbd839aff4b0f5959a77a9b9b22ec5f07ffbf52bfc1fcc08678.scope: Deactivated successfully.
Nov 22 03:59:45 compute-0 podman[278204]: 2025-11-22 03:59:45.531136052 +0000 UTC m=+0.174096289 container died 95428d48a46d9bbd839aff4b0f5959a77a9b9b22ec5f07ffbf52bfc1fcc08678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.556 253465 DEBUG nova.compute.manager [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.557 253465 DEBUG nova.network.neutron [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 03:59:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d2f3346c760fffca88eb6d4e8b82c6473cb834ed2649a44ba1d31b27dab49ec-merged.mount: Deactivated successfully.
Nov 22 03:59:45 compute-0 podman[278204]: 2025-11-22 03:59:45.573773018 +0000 UTC m=+0.216733266 container remove 95428d48a46d9bbd839aff4b0f5959a77a9b9b22ec5f07ffbf52bfc1fcc08678 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_rhodes, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.583 253465 INFO nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 03:59:45 compute-0 systemd[1]: libpod-conmon-95428d48a46d9bbd839aff4b0f5959a77a9b9b22ec5f07ffbf52bfc1fcc08678.scope: Deactivated successfully.
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.612 253465 DEBUG nova.compute.manager [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.733 253465 DEBUG nova.compute.manager [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.736 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.737 253465 INFO nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Creating image(s)
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.771 253465 DEBUG nova.storage.rbd_utils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] rbd image fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:59:45 compute-0 podman[278246]: 2025-11-22 03:59:45.80224834 +0000 UTC m=+0.066721848 container create c79ecdc00ac7dbb452fa9b21d83452038eca48c11ad48b976cb5de6d6a929e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.804 253465 DEBUG nova.storage.rbd_utils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] rbd image fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.842 253465 DEBUG nova.storage.rbd_utils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] rbd image fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.846 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:59:45 compute-0 systemd[1]: Started libpod-conmon-c79ecdc00ac7dbb452fa9b21d83452038eca48c11ad48b976cb5de6d6a929e7c.scope.
Nov 22 03:59:45 compute-0 podman[278246]: 2025-11-22 03:59:45.78144162 +0000 UTC m=+0.045915197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:59:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be562493574b2183a45a33da3acef5ff09a058583fb9e097180dd64905df1fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be562493574b2183a45a33da3acef5ff09a058583fb9e097180dd64905df1fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be562493574b2183a45a33da3acef5ff09a058583fb9e097180dd64905df1fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be562493574b2183a45a33da3acef5ff09a058583fb9e097180dd64905df1fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:45 compute-0 podman[278246]: 2025-11-22 03:59:45.906109143 +0000 UTC m=+0.170582731 container init c79ecdc00ac7dbb452fa9b21d83452038eca48c11ad48b976cb5de6d6a929e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_northcutt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:59:45 compute-0 podman[278246]: 2025-11-22 03:59:45.921009458 +0000 UTC m=+0.185482986 container start c79ecdc00ac7dbb452fa9b21d83452038eca48c11ad48b976cb5de6d6a929e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 03:59:45 compute-0 podman[278246]: 2025-11-22 03:59:45.924504821 +0000 UTC m=+0.188978359 container attach c79ecdc00ac7dbb452fa9b21d83452038eca48c11ad48b976cb5de6d6a929e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_northcutt, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 03:59:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Nov 22 03:59:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.945 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
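The probe above shows nova wrapping qemu-img info in oslo_concurrency's prlimit helper to cap the child's address space at 1 GiB and its CPU time at 30 s before trusting image metadata. A minimal sketch that re-runs the same guarded probe, with the command and path copied from the log lines above; it assumes oslo.concurrency and qemu-img are installed on this host and that the base file still exists.

    # Hypothetical sketch: re-run the guarded qemu-img probe nova logged above.
    # Assumes oslo.concurrency and qemu-img are installed on this host.
    import json
    import subprocess

    BASE = "/var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",   # cap address space at 1 GiB
        "--cpu=30",          # cap CPU time at 30 s
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", BASE, "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info.get("format"), info.get("virtual-size"))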
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.947 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.948 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.949 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:45 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.983 253465 DEBUG nova.storage.rbd_utils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] rbd image fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:59:45 compute-0 nova_compute[253461]: 2025-11-22 03:59:45.993 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:59:46 compute-0 nova_compute[253461]: 2025-11-22 03:59:46.027 253465 DEBUG nova.policy [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0b246fc3abe648cf93dbdc3bd03c5cbb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a62857fbf8cf446cac9c207ae6750597', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 03:59:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1713707682' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 03:59:46 compute-0 ceph-mon[75011]: osdmap e301: 3 total, 3 up, 3 in
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 88 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 5.1 KiB/s wr, 78 op/s
Nov 22 03:59:46 compute-0 nova_compute[253461]: 2025-11-22 03:59:46.327 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034733244611577784 of space, bias 1.0, pg target 0.10419973383473335 quantized to 32 (current 32)
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 5.723163703848408e-07 of space, bias 1.0, pg target 0.00017169491111545225 quantized to 32 (current 32)
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 03:59:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
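The pg_autoscaler figures above are internally consistent with a simple budget model: each pool's raw "pg target" equals its capacity ratio ("using ... of space") times its bias times a cluster PG budget of target-PGs-per-OSD multiplied by OSD count. With the default mon_target_pg_per_osd of 100 (an assumption; the log does not print it) and the 3 OSDs this osdmap reports, the budget is 300, and the raw targets are then quantized, which here leaves each pool at its current power-of-two pg_num. A back-of-the-envelope check against two of the logged pools:

    # Back-of-the-envelope check of the pg_autoscaler lines above.
    # Assumes the default mon_target_pg_per_osd = 100 and the 3 OSDs
    # reported in the osdmap ("3 total, 3 up, 3 in").
    pg_budget = 100 * 3  # target PGs per OSD x OSD count

    pools = {
        # name: (capacity ratio "using ... of space", bias, logged pg target)
        "volumes":            (0.00034733244611577784, 1.0, 0.10419973383473335),
        "cephfs.cephfs.meta": (5.087256625643029e-07,  4.0, 0.0006104707950771635),
    }
    for name, (ratio, bias, logged) in pools.items():
        target = ratio * bias * pg_budget
        print(f"{name}: computed {target:.10g} vs logged {logged:.10g}")

Both computed values reproduce the logged targets exactly, which supports the assumed 300-PG budget for this 3-OSD cluster.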
Nov 22 03:59:46 compute-0 nova_compute[253461]: 2025-11-22 03:59:46.417 253465 DEBUG nova.storage.rbd_utils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] resizing rbd image fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
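Taken together, the rbd_utils lines show nova's usual pattern: report the instance disk missing, import the cached base image with rbd import, then resize the new image to the flavor's 1 GiB root disk. A minimal sketch of the same check-and-resize via the python-rbd bindings, assuming /etc/ceph/ceph.conf and a readable client.openstack keyring; pool, image name, and size are copied from the log, and this is an illustration rather than nova's actual code path.

    # Minimal sketch of nova's check/resize pattern via python-rbd,
    # assuming /etc/ceph/ceph.conf and the client.openstack keyring.
    import rados
    import rbd

    POOL = "vms"
    IMAGE = "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk"

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx(POOL)
    try:
        try:
            with rbd.Image(ioctx, IMAGE) as img:
                if img.size() < 1073741824:
                    img.resize(1073741824)  # grow to the 1 GiB root disk
        except rbd.ImageNotFound:
            print("rbd image %s does not exist" % IMAGE)
    finally:
        ioctx.close()
        cluster.shutdown()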
Nov 22 03:59:46 compute-0 nova_compute[253461]: 2025-11-22 03:59:46.454 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 03:59:46 compute-0 nova_compute[253461]: 2025-11-22 03:59:46.455 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 03:59:46 compute-0 nova_compute[253461]: 2025-11-22 03:59:46.473 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 03:59:46 compute-0 nova_compute[253461]: 2025-11-22 03:59:46.539 253465 DEBUG nova.objects.instance [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lazy-loading 'migration_context' on Instance uuid fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:59:46 compute-0 nova_compute[253461]: 2025-11-22 03:59:46.557 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 03:59:46 compute-0 nova_compute[253461]: 2025-11-22 03:59:46.557 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Ensure instance console log exists: /var/lib/nova/instances/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 03:59:46 compute-0 nova_compute[253461]: 2025-11-22 03:59:46.558 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:46 compute-0 nova_compute[253461]: 2025-11-22 03:59:46.558 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:46 compute-0 nova_compute[253461]: 2025-11-22 03:59:46.559 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:46 compute-0 cool_northcutt[278317]: {
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:     "0": [
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:         {
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "devices": [
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "/dev/loop3"
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             ],
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_name": "ceph_lv0",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_size": "21470642176",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "name": "ceph_lv0",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "tags": {
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.cluster_name": "ceph",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.crush_device_class": "",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.encrypted": "0",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.osd_id": "0",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.type": "block",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.vdo": "0"
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             },
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "type": "block",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "vg_name": "ceph_vg0"
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:         }
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:     ],
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:     "1": [
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:         {
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "devices": [
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "/dev/loop4"
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             ],
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_name": "ceph_lv1",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_size": "21470642176",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "name": "ceph_lv1",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "tags": {
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.cluster_name": "ceph",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.crush_device_class": "",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.encrypted": "0",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.osd_id": "1",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.type": "block",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.vdo": "0"
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             },
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "type": "block",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "vg_name": "ceph_vg1"
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:         }
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:     ],
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:     "2": [
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:         {
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "devices": [
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "/dev/loop5"
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             ],
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_name": "ceph_lv2",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_size": "21470642176",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "name": "ceph_lv2",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "tags": {
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.cluster_name": "ceph",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.crush_device_class": "",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.encrypted": "0",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.osd_id": "2",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.type": "block",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:                 "ceph.vdo": "0"
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             },
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "type": "block",
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:             "vg_name": "ceph_vg2"
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:         }
Nov 22 03:59:46 compute-0 cool_northcutt[278317]:     ]
Nov 22 03:59:46 compute-0 cool_northcutt[278317]: }
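The JSON the cool_northcutt container just printed is a ceph-volume LVM inventory (it matches the shape of ceph-volume lvm list --format json, keyed by OSD id), with the LVM tags cephadm uses to reassemble each OSD at boot. A short sketch that summarizes it, assuming the payload was captured to lvm_list.json (a hypothetical path, since the log only shows it on stdout):

    # Sketch: summarize the ceph-volume LVM JSON shown above, assuming it
    # was captured to lvm_list.json (hypothetical path).
    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)

    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"fsid={tags['ceph.osd_fsid']}")

Against the log above this prints one line per OSD, e.g. osd.0 backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3 with fsid 8bea6992-7a26-4e04-a61e-1d348ad79289.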
Nov 22 03:59:46 compute-0 systemd[1]: libpod-c79ecdc00ac7dbb452fa9b21d83452038eca48c11ad48b976cb5de6d6a929e7c.scope: Deactivated successfully.
Nov 22 03:59:46 compute-0 podman[278246]: 2025-11-22 03:59:46.780485355 +0000 UTC m=+1.044958883 container died c79ecdc00ac7dbb452fa9b21d83452038eca48c11ad48b976cb5de6d6a929e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 03:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-6be562493574b2183a45a33da3acef5ff09a058583fb9e097180dd64905df1fb-merged.mount: Deactivated successfully.
Nov 22 03:59:46 compute-0 podman[278246]: 2025-11-22 03:59:46.841653375 +0000 UTC m=+1.106126893 container remove c79ecdc00ac7dbb452fa9b21d83452038eca48c11ad48b976cb5de6d6a929e7c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:59:46 compute-0 systemd[1]: libpod-conmon-c79ecdc00ac7dbb452fa9b21d83452038eca48c11ad48b976cb5de6d6a929e7c.scope: Deactivated successfully.
Nov 22 03:59:46 compute-0 sudo[278119]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:46 compute-0 sudo[278448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:59:46 compute-0 sudo[278448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:46 compute-0 sudo[278448]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:47 compute-0 sudo[278473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 03:59:47 compute-0 sudo[278473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:47 compute-0 sudo[278473]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:47 compute-0 sudo[278498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:59:47 compute-0 sudo[278498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:47 compute-0 sudo[278498]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:47 compute-0 ceph-mon[75011]: pgmap v1346: 305 pgs: 305 active+clean; 88 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 5.1 KiB/s wr, 78 op/s
Nov 22 03:59:47 compute-0 sudo[278523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 03:59:47 compute-0 sudo[278523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:47 compute-0 nova_compute[253461]: 2025-11-22 03:59:47.264 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:47 compute-0 nova_compute[253461]: 2025-11-22 03:59:47.885 253465 DEBUG nova.network.neutron [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Successfully created port: 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 03:59:47 compute-0 podman[278589]: 2025-11-22 03:59:47.664045116 +0000 UTC m=+0.034048826 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:59:48 compute-0 podman[278589]: 2025-11-22 03:59:48.001623013 +0000 UTC m=+0.371626714 container create 164aa80a2fd352e1182323265893049dcd6c5e2a3ab3f84f3b97fd639f7d2165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jackson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:59:48 compute-0 systemd[1]: Started libpod-conmon-164aa80a2fd352e1182323265893049dcd6c5e2a3ab3f84f3b97fd639f7d2165.scope.
Nov 22 03:59:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:59:48 compute-0 podman[278589]: 2025-11-22 03:59:48.095974006 +0000 UTC m=+0.465977747 container init 164aa80a2fd352e1182323265893049dcd6c5e2a3ab3f84f3b97fd639f7d2165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:59:48 compute-0 podman[278589]: 2025-11-22 03:59:48.102032458 +0000 UTC m=+0.472036149 container start 164aa80a2fd352e1182323265893049dcd6c5e2a3ab3f84f3b97fd639f7d2165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jackson, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:59:48 compute-0 podman[278589]: 2025-11-22 03:59:48.106152369 +0000 UTC m=+0.476156120 container attach 164aa80a2fd352e1182323265893049dcd6c5e2a3ab3f84f3b97fd639f7d2165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jackson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:59:48 compute-0 nervous_jackson[278605]: 167 167
Nov 22 03:59:48 compute-0 systemd[1]: libpod-164aa80a2fd352e1182323265893049dcd6c5e2a3ab3f84f3b97fd639f7d2165.scope: Deactivated successfully.
Nov 22 03:59:48 compute-0 podman[278589]: 2025-11-22 03:59:48.108613183 +0000 UTC m=+0.478616854 container died 164aa80a2fd352e1182323265893049dcd6c5e2a3ab3f84f3b97fd639f7d2165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:59:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-02d36ccf6aa8d27afb8df0bc855aa5dc276b5cd3e3c2665c5db7817736058acd-merged.mount: Deactivated successfully.
Nov 22 03:59:48 compute-0 podman[278589]: 2025-11-22 03:59:48.149975841 +0000 UTC m=+0.519979552 container remove 164aa80a2fd352e1182323265893049dcd6c5e2a3ab3f84f3b97fd639f7d2165 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jackson, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 03:59:48 compute-0 systemd[1]: libpod-conmon-164aa80a2fd352e1182323265893049dcd6c5e2a3ab3f84f3b97fd639f7d2165.scope: Deactivated successfully.
Nov 22 03:59:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 95 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 508 KiB/s wr, 59 op/s
Nov 22 03:59:48 compute-0 podman[278627]: 2025-11-22 03:59:48.395741422 +0000 UTC m=+0.074648487 container create be78a9495304e2afa67e2da7465927779a88a3413f98149e4d805b5be99b3118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_banach, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:59:48 compute-0 podman[278627]: 2025-11-22 03:59:48.367627413 +0000 UTC m=+0.046534538 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 03:59:48 compute-0 systemd[1]: Started libpod-conmon-be78a9495304e2afa67e2da7465927779a88a3413f98149e4d805b5be99b3118.scope.
Nov 22 03:59:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:59:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3283895711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6e3a79a38f0917255aa92f465e2e2e7a9f736f87a686207e623e183d7e0e8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:48 compute-0 nova_compute[253461]: 2025-11-22 03:59:48.510 253465 DEBUG nova.network.neutron [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Successfully updated port: 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 03:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6e3a79a38f0917255aa92f465e2e2e7a9f736f87a686207e623e183d7e0e8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6e3a79a38f0917255aa92f465e2e2e7a9f736f87a686207e623e183d7e0e8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6e3a79a38f0917255aa92f465e2e2e7a9f736f87a686207e623e183d7e0e8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:48 compute-0 nova_compute[253461]: 2025-11-22 03:59:48.530 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:59:48 compute-0 nova_compute[253461]: 2025-11-22 03:59:48.531 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquired lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:59:48 compute-0 nova_compute[253461]: 2025-11-22 03:59:48.531 253465 DEBUG nova.network.neutron [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 03:59:48 compute-0 podman[278627]: 2025-11-22 03:59:48.541303894 +0000 UTC m=+0.220211029 container init be78a9495304e2afa67e2da7465927779a88a3413f98149e4d805b5be99b3118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Nov 22 03:59:48 compute-0 podman[278627]: 2025-11-22 03:59:48.555025804 +0000 UTC m=+0.233932880 container start be78a9495304e2afa67e2da7465927779a88a3413f98149e4d805b5be99b3118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 03:59:48 compute-0 podman[278627]: 2025-11-22 03:59:48.558917211 +0000 UTC m=+0.237824346 container attach be78a9495304e2afa67e2da7465927779a88a3413f98149e4d805b5be99b3118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 03:59:48 compute-0 podman[278646]: 2025-11-22 03:59:48.575918718 +0000 UTC m=+0.092933231 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 03:59:48 compute-0 podman[278658]: 2025-11-22 03:59:48.666597312 +0000 UTC m=+0.133997383 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS)
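Note that the config_data field these health_status events carry is a Python literal (single quotes, bare True), not JSON, so json.loads would reject it; ast.literal_eval parses it safely. A tiny sketch, with the excerpt below abbreviated from the ovn_controller event above:

    # The config_data label above is a Python literal, not JSON
    # (single quotes, bare True), so parse it with ast.literal_eval.
    import ast

    config_data = "{'net': 'host', 'privileged': True, 'restart': 'always'}"  # excerpt
    cfg = ast.literal_eval(config_data)
    print(cfg["net"], cfg["privileged"])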
Nov 22 03:59:48 compute-0 nova_compute[253461]: 2025-11-22 03:59:48.668 253465 DEBUG nova.compute.manager [req-529c6475-477b-418b-b2e2-759fc2973a18 req-e710c4cf-489f-412b-bfb5-787eab8b3954 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Received event network-changed-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:59:48 compute-0 nova_compute[253461]: 2025-11-22 03:59:48.668 253465 DEBUG nova.compute.manager [req-529c6475-477b-418b-b2e2-759fc2973a18 req-e710c4cf-489f-412b-bfb5-787eab8b3954 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Refreshing instance network info cache due to event network-changed-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:59:48 compute-0 nova_compute[253461]: 2025-11-22 03:59:48.669 253465 DEBUG oslo_concurrency.lockutils [req-529c6475-477b-418b-b2e2-759fc2973a18 req-e710c4cf-489f-412b-bfb5-787eab8b3954 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:59:48 compute-0 nova_compute[253461]: 2025-11-22 03:59:48.706 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:48 compute-0 nova_compute[253461]: 2025-11-22 03:59:48.715 253465 DEBUG nova.network.neutron [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 03:59:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Nov 22 03:59:49 compute-0 ceph-mon[75011]: pgmap v1347: 305 pgs: 305 active+clean; 95 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 508 KiB/s wr, 59 op/s
Nov 22 03:59:49 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3283895711' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Nov 22 03:59:49 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Nov 22 03:59:49 compute-0 pensive_banach[278644]: {
Nov 22 03:59:49 compute-0 pensive_banach[278644]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "osd_id": 1,
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "type": "bluestore"
Nov 22 03:59:49 compute-0 pensive_banach[278644]:     },
Nov 22 03:59:49 compute-0 pensive_banach[278644]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "osd_id": 0,
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "type": "bluestore"
Nov 22 03:59:49 compute-0 pensive_banach[278644]:     },
Nov 22 03:59:49 compute-0 pensive_banach[278644]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "osd_id": 2,
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 03:59:49 compute-0 pensive_banach[278644]:         "type": "bluestore"
Nov 22 03:59:49 compute-0 pensive_banach[278644]:     }
Nov 22 03:59:49 compute-0 pensive_banach[278644]: }
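The JSON block above is a ceph-volume style inventory emitted by the short-lived pensive_banach container: each OSD UUID maps to its cluster fsid, backing logical volume, numeric id, and store type. A minimal sketch of consuming such output in Python, assuming the container's stdout has been captured to a string (the helper name and the trimmed literal are illustrative, not part of cephadm):

    import json

    # Trimmed stand-in for the container stdout captured above.
    inventory_json = """{
        "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
            "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
            "device": "/dev/mapper/ceph_vg1-ceph_lv1",
            "osd_id": 1,
            "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
            "type": "bluestore"
        }
    }"""

    def osd_devices(raw, cluster_fsid):
        # Map osd_id -> device for bluestore OSDs belonging to one cluster.
        return {
            meta["osd_id"]: meta["device"]
            for meta in json.loads(raw).values()
            if meta.get("type") == "bluestore" and meta.get("ceph_fsid") == cluster_fsid
        }

    print(osd_devices(inventory_json, "7adcc38b-6484-5de6-b879-33a0309153df"))
    # -> {1: '/dev/mapper/ceph_vg1-ceph_lv1'}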
Nov 22 03:59:49 compute-0 systemd[1]: libpod-be78a9495304e2afa67e2da7465927779a88a3413f98149e4d805b5be99b3118.scope: Deactivated successfully.
Nov 22 03:59:49 compute-0 systemd[1]: libpod-be78a9495304e2afa67e2da7465927779a88a3413f98149e4d805b5be99b3118.scope: Consumed 1.100s CPU time.
Nov 22 03:59:49 compute-0 podman[278720]: 2025-11-22 03:59:49.712387729 +0000 UTC m=+0.033318002 container died be78a9495304e2afa67e2da7465927779a88a3413f98149e4d805b5be99b3118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_banach, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 03:59:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd6e3a79a38f0917255aa92f465e2e2e7a9f736f87a686207e623e183d7e0e8b-merged.mount: Deactivated successfully.
Nov 22 03:59:49 compute-0 podman[278720]: 2025-11-22 03:59:49.785350032 +0000 UTC m=+0.106280315 container remove be78a9495304e2afa67e2da7465927779a88a3413f98149e4d805b5be99b3118 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 03:59:49 compute-0 systemd[1]: libpod-conmon-be78a9495304e2afa67e2da7465927779a88a3413f98149e4d805b5be99b3118.scope: Deactivated successfully.
Nov 22 03:59:49 compute-0 sudo[278523]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 03:59:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:59:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 03:59:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:59:49 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 60329a52-ca0f-4fd6-b28b-9c08ada2ed54 does not exist
Nov 22 03:59:49 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev d8b0ad98-4e3d-4d18-828a-c1e247205d3c does not exist
Nov 22 03:59:49 compute-0 sudo[278736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 03:59:49 compute-0 sudo[278736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:49 compute-0 sudo[278736]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:50 compute-0 sudo[278762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 03:59:50 compute-0 sudo[278762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 03:59:50 compute-0 sudo[278762]: pam_unix(sudo:session): session closed for user root
Nov 22 03:59:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 134 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.4 MiB/s wr, 108 op/s
Nov 22 03:59:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Nov 22 03:59:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Nov 22 03:59:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Nov 22 03:59:50 compute-0 ceph-mon[75011]: osdmap e302: 3 total, 3 up, 3 in
Nov 22 03:59:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:59:50 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 03:59:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Nov 22 03:59:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Nov 22 03:59:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Nov 22 03:59:50 compute-0 nova_compute[253461]: 2025-11-22 03:59:50.961 253465 DEBUG nova.network.neutron [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Updating instance_info_cache with network_info: [{"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "address": "fa:16:3e:44:e3:cf", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8a4fb-a5", "ovs_interfaceid": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:59:50 compute-0 nova_compute[253461]: 2025-11-22 03:59:50.985 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Releasing lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 03:59:50 compute-0 nova_compute[253461]: 2025-11-22 03:59:50.985 253465 DEBUG nova.compute.manager [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Instance network_info: |[{"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "address": "fa:16:3e:44:e3:cf", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8a4fb-a5", "ovs_interfaceid": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 03:59:50 compute-0 nova_compute[253461]: 2025-11-22 03:59:50.986 253465 DEBUG oslo_concurrency.lockutils [req-529c6475-477b-418b-b2e2-759fc2973a18 req-e710c4cf-489f-412b-bfb5-787eab8b3954 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:59:50 compute-0 nova_compute[253461]: 2025-11-22 03:59:50.986 253465 DEBUG nova.network.neutron [req-529c6475-477b-418b-b2e2-759fc2973a18 req-e710c4cf-489f-412b-bfb5-787eab8b3954 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Refreshing network info cache for port 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:59:50 compute-0 nova_compute[253461]: 2025-11-22 03:59:50.989 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Start _get_guest_xml network_info=[{"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "address": "fa:16:3e:44:e3:cf", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8a4fb-a5", "ovs_interfaceid": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 03:59:50 compute-0 nova_compute[253461]: 2025-11-22 03:59:50.995 253465 WARNING nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.000 253465 DEBUG nova.virt.libvirt.host [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.001 253465 DEBUG nova.virt.libvirt.host [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.117 253465 DEBUG nova.virt.libvirt.host [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.118 253465 DEBUG nova.virt.libvirt.host [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.119 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.119 253465 DEBUG nova.virt.hardware [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.119 253465 DEBUG nova.virt.hardware [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.119 253465 DEBUG nova.virt.hardware [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.120 253465 DEBUG nova.virt.hardware [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.120 253465 DEBUG nova.virt.hardware [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.120 253465 DEBUG nova.virt.hardware [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.120 253465 DEBUG nova.virt.hardware [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.120 253465 DEBUG nova.virt.hardware [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.121 253465 DEBUG nova.virt.hardware [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.121 253465 DEBUG nova.virt.hardware [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.121 253465 DEBUG nova.virt.hardware [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
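The nova.virt.hardware lines above trace CPU-topology selection: with flavor and image limits of 0:0:0 (unconstrained) and one vCPU, the only (sockets, cores, threads) triple whose product equals the vCPU count is 1:1:1. A toy re-enactment of that enumeration step, assuming the simple product rule; nova's real code also applies preference ordering and the 65536 per-axis caps shown above:

    import itertools

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Enumerate (sockets, cores, threads) triples whose product is the
        # vCPU count, capped per axis as in the logged limits.
        axes = (min(vcpus, max_sockets), min(vcpus, max_cores), min(vcpus, max_threads))
        return [
            (s, c, t)
            for s, c, t in itertools.product(*(range(1, n + 1) for n in axes))
            if s * c * t == vcpus
        ]

    print(possible_topologies(1))  # -> [(1, 1, 1)], matching "Got 1 possible topologies"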
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.123 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:59:51 compute-0 ceph-mon[75011]: pgmap v1349: 305 pgs: 305 active+clean; 134 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.4 MiB/s wr, 108 op/s
Nov 22 03:59:51 compute-0 ceph-mon[75011]: osdmap e303: 3 total, 3 up, 3 in
Nov 22 03:59:51 compute-0 ceph-mon[75011]: osdmap e304: 3 total, 3 up, 3 in
Nov 22 03:59:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:59:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3035722595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.615 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.634 253465 DEBUG nova.storage.rbd_utils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] rbd image fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:59:51 compute-0 nova_compute[253461]: 2025-11-22 03:59:51.638 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:59:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:59:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/860909936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.047 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
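Both subprocess round-trips above are nova's RBD backend shelling out to ceph to discover the monitor map before it touches any images. A hedged sketch of the same call with the stdlib, assuming a reachable cluster, a client.openstack keyring, and /etc/ceph/ceph.conf; the command line is exactly the one the log shows processutils running:

    import json
    import subprocess

    cmd = [
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ]
    raw = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout

    # The mon map lists each monitor with its public address; these feed the
    # <host name=... port=.../> elements in the guest XML further down.
    for mon in json.loads(raw).get("mons", []):
        print(mon["name"], mon.get("public_addr"))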
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.048 253465 DEBUG nova.virt.libvirt.vif [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:59:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-696385546',display_name='tempest-TestStampPattern-server-696385546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-696385546',id=12,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYgn9CTDvmfK+9lwizGtXeEZlSZuA1AJsMHGR/6t8oyy2KLeA+NyxTmeE6fCgDUhF1kETDxpPXjj8wfb8eB/z4sjIcgn3I98Rj3v+7eP88Wa0lihBTXU++d2vPdWMcG3w==',key_name='tempest-TestStampPattern-116986255',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a62857fbf8cf446cac9c207ae6750597',ramdisk_id='',reservation_id='r-xe4gsd50',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1055115370',owner_user_name='tempest-TestStampPattern-1055115370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:59:45Z,user_data=None,user_id='0b246fc3abe648cf93dbdc3bd03c5cbb',uuid=fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "address": "fa:16:3e:44:e3:cf", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8a4fb-a5", "ovs_interfaceid": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.049 253465 DEBUG nova.network.os_vif_util [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Converting VIF {"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "address": "fa:16:3e:44:e3:cf", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8a4fb-a5", "ovs_interfaceid": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.049 253465 DEBUG nova.network.os_vif_util [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:e3:cf,bridge_name='br-int',has_traffic_filtering=True,id=19b8a4fb-a5a7-4112-8511-2aa985a0ae5a,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8a4fb-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.051 253465 DEBUG nova.objects.instance [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.070 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] End _get_guest_xml xml=<domain type="kvm">
Nov 22 03:59:52 compute-0 nova_compute[253461]:   <uuid>fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618</uuid>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   <name>instance-0000000c</name>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   <metadata>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <nova:name>tempest-TestStampPattern-server-696385546</nova:name>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 03:59:50</nova:creationTime>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 03:59:52 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 03:59:52 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 03:59:52 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 03:59:52 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 03:59:52 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 03:59:52 compute-0 nova_compute[253461]:         <nova:user uuid="0b246fc3abe648cf93dbdc3bd03c5cbb">tempest-TestStampPattern-1055115370-project-member</nova:user>
Nov 22 03:59:52 compute-0 nova_compute[253461]:         <nova:project uuid="a62857fbf8cf446cac9c207ae6750597">tempest-TestStampPattern-1055115370</nova:project>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 03:59:52 compute-0 nova_compute[253461]:         <nova:port uuid="19b8a4fb-a5a7-4112-8511-2aa985a0ae5a">
Nov 22 03:59:52 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   </metadata>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <system>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <entry name="serial">fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618</entry>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <entry name="uuid">fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618</entry>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     </system>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   <os>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   </os>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   <features>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <apic/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   </features>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   </clock>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   </cpu>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   <devices>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk">
Nov 22 03:59:52 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       </source>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:59:52 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk.config">
Nov 22 03:59:52 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       </source>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 03:59:52 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       </auth>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     </disk>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:44:e3:cf"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <target dev="tap19b8a4fb-a5"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     </interface>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618/console.log" append="off"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     </serial>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <video>
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     </video>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     </rng>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 03:59:52 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 03:59:52 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 03:59:52 compute-0 nova_compute[253461]:   </devices>
Nov 22 03:59:52 compute-0 nova_compute[253461]: </domain>
Nov 22 03:59:52 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
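The domain XML dumped above is the artifact nova hands to libvirt: an rbd-backed vda, an rbd-backed config-drive cdrom authenticated as the openstack cephx user, a virtio tap for OVN, and a q35 machine padded with pcie-root-ports for hotplug. A small sketch of pulling the RBD sources back out of such a document with the stdlib parser, using a trimmed copy of the disk element above:

    import xml.etree.ElementTree as ET

    domain_xml = """<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk">
            <host name="192.168.122.100" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>"""

    # Print pool/image and monitor endpoint for every rbd-backed disk.
    for disk in ET.fromstring(domain_xml).iter("disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            host = src.find("host")
            print(src.get("name"), "via", host.get("name") + ":" + host.get("port"))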
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.072 253465 DEBUG nova.compute.manager [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Preparing to wait for external event network-vif-plugged-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.072 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.072 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.072 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
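The Acquiring/acquired/released trio above is oslo.concurrency's named-lock pattern serializing per-instance event bookkeeping under the "<uuid>-events" key. A minimal stdlib stand-in for that pattern; oslo's lockutils adds the waited/held timing the log prints:

    import threading
    from collections import defaultdict

    _locks = defaultdict(threading.Lock)  # one lock per instance-events key

    def with_instance_events_lock(instance_uuid, fn):
        # Serialize event registration per instance, like the
        # "fc2ed1e4-...-events" lock in the log lines above.
        with _locks[instance_uuid + "-events"]:
            return fn()

    with_instance_events_lock("fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618",
                              lambda: print("event slot created"))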
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.073 253465 DEBUG nova.virt.libvirt.vif [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T03:59:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-696385546',display_name='tempest-TestStampPattern-server-696385546',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-696385546',id=12,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYgn9CTDvmfK+9lwizGtXeEZlSZuA1AJsMHGR/6t8oyy2KLeA+NyxTmeE6fCgDUhF1kETDxpPXjj8wfb8eB/z4sjIcgn3I98Rj3v+7eP88Wa0lihBTXU++d2vPdWMcG3w==',key_name='tempest-TestStampPattern-116986255',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a62857fbf8cf446cac9c207ae6750597',ramdisk_id='',reservation_id='r-xe4gsd50',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-1055115370',owner_user_name='tempest-TestStampPattern-1055115370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T03:59:45Z,user_data=None,user_id='0b246fc3abe648cf93dbdc3bd03c5cbb',uuid=fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "address": "fa:16:3e:44:e3:cf", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8a4fb-a5", "ovs_interfaceid": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.073 253465 DEBUG nova.network.os_vif_util [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Converting VIF {"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "address": "fa:16:3e:44:e3:cf", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8a4fb-a5", "ovs_interfaceid": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.074 253465 DEBUG nova.network.os_vif_util [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:e3:cf,bridge_name='br-int',has_traffic_filtering=True,id=19b8a4fb-a5a7-4112-8511-2aa985a0ae5a,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8a4fb-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.074 253465 DEBUG os_vif [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:e3:cf,bridge_name='br-int',has_traffic_filtering=True,id=19b8a4fb-a5a7-4112-8511-2aa985a0ae5a,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8a4fb-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.074 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.075 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.075 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.077 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.078 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap19b8a4fb-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.078 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap19b8a4fb-a5, col_values=(('external_ids', {'iface-id': '19b8a4fb-a5a7-4112-8511-2aa985a0ae5a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:e3:cf', 'vm-uuid': 'fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.080 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:52 compute-0 NetworkManager[48916]: <info>  [1763783992.0816] manager: (tap19b8a4fb-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.082 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.087 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.087 253465 INFO os_vif [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:e3:cf,bridge_name='br-int',has_traffic_filtering=True,id=19b8a4fb-a5a7-4112-8511-2aa985a0ae5a,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8a4fb-a5')
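The transaction above (AddBridgeCommand, then AddPortCommand plus a DbSetCommand on the Interface row) is os-vif wiring the tap into br-int and stamping it with the Neutron port metadata that ovn-controller binds on. The same effect expressed through the ovs-vsctl CLI from Python; the values come from the logged DbSetCommand, and using the CLI instead of ovsdbapp is an illustrative substitution:

    import subprocess

    def plug_vif(bridge, port, iface_id, mac, vm_uuid):
        # Idempotent bridge creation, mirroring
        # AddBridgeCommand(may_exist=True, datapath_type=system).
        subprocess.run(["ovs-vsctl", "--may-exist", "add-br", bridge, "--",
                        "set", "Bridge", bridge, "datapath_type=system"], check=True)
        # Add the port and set external_ids in one transaction, mirroring
        # AddPortCommand + DbSetCommand.
        subprocess.run(["ovs-vsctl", "--may-exist", "add-port", bridge, port, "--",
                        "set", "Interface", port,
                        "external_ids:iface-id=" + iface_id,
                        "external_ids:iface-status=active",
                        "external_ids:attached-mac=" + mac,
                        "external_ids:vm-uuid=" + vm_uuid], check=True)

    plug_vif("br-int", "tap19b8a4fb-a5",
             "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a",
             "fa:16:3e:44:e3:cf",
             "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618")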
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.140 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.140 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.141 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No VIF found with MAC fa:16:3e:44:e3:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.141 253465 INFO nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Using config drive
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.162 253465 DEBUG nova.storage.rbd_utils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] rbd image fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:59:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:59:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/154676714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.266 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 147 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 134 op/s
Nov 22 03:59:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Nov 22 03:59:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3035722595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/860909936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/154676714' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:52 compute-0 ceph-mon[75011]: pgmap v1352: 305 pgs: 305 active+clean; 147 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 134 op/s
Nov 22 03:59:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Nov 22 03:59:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.569 253465 INFO nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Creating config drive at /var/lib/nova/instances/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618/disk.config
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.574 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8_eoklvo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.704 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8_eoklvo" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.730 253465 DEBUG nova.storage.rbd_utils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] rbd image fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.733 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618/disk.config fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.825 253465 DEBUG nova.network.neutron [req-529c6475-477b-418b-b2e2-759fc2973a18 req-e710c4cf-489f-412b-bfb5-787eab8b3954 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Updated VIF entry in instance network info cache for port 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.826 253465 DEBUG nova.network.neutron [req-529c6475-477b-418b-b2e2-759fc2973a18 req-e710c4cf-489f-412b-bfb5-787eab8b3954 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Updating instance_info_cache with network_info: [{"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "address": "fa:16:3e:44:e3:cf", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8a4fb-a5", "ovs_interfaceid": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.842 253465 DEBUG oslo_concurrency.lockutils [req-529c6475-477b-418b-b2e2-759fc2973a18 req-e710c4cf-489f-412b-bfb5-787eab8b3954 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
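The instance_info_cache payload logged at 03:59:52.826 is plain JSON. As a minimal sketch (not Nova code), the addressing details can be pulled out of such an entry like this, using a trimmed, hypothetical copy of the record above:

    import json

    # Trimmed, hypothetical copy of the network_info record logged above.
    cache_entry = json.loads("""
    [{"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a",
      "address": "fa:16:3e:44:e3:cf",
      "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05",
                  "bridge": "br-int",
                  "subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.5",
                                        "type": "fixed",
                                        "floating_ips": []}]}]},
      "devname": "tap19b8a4fb-a5",
      "active": false}]
    """)

    for vif in cache_entry:
        fixed = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]]
        print(vif["id"], vif["address"], fixed, "active=%s" % vif["active"])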
Nov 22 03:59:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:59:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2840874535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:59:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2840874535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.872 253465 DEBUG oslo_concurrency.processutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618/disk.config fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.873 253465 INFO nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Deleting local config drive /var/lib/nova/instances/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618/disk.config because it was imported into RBD.
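The two subprocess calls logged between 03:59:52.574 and 03:59:52.872 show the whole config-drive round trip: build an ISO locally, import it into the Ceph vms pool, then delete the local copy. A minimal standalone sketch of the same two commands (not Nova's actual code path; the staging directory holding the metadata files is hypothetical here):

    import subprocess

    instance = "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618"       # from the log above
    iso = f"/var/lib/nova/instances/{instance}/disk.config"  # local scratch copy
    metadata_dir = "/tmp/tmp8_eoklvo"                        # hypothetical staging dir

    # Build the config-drive ISO with the same flags mkisofs was invoked with above.
    subprocess.run(
        ["mkisofs", "-o", iso, "-ldots", "-allow-lowercase", "-allow-multidot",
         "-l", "-publisher",
         "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", metadata_dir],
        check=True)

    # Import it into the Ceph 'vms' pool; the local ISO can then be removed.
    subprocess.run(
        ["rbd", "import", "--pool", "vms", iso, f"{instance}_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True)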
Nov 22 03:59:52 compute-0 kernel: tap19b8a4fb-a5: entered promiscuous mode
Nov 22 03:59:52 compute-0 ovn_controller[152691]: 2025-11-22T03:59:52Z|00141|binding|INFO|Claiming lport 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a for this chassis.
Nov 22 03:59:52 compute-0 ovn_controller[152691]: 2025-11-22T03:59:52Z|00142|binding|INFO|19b8a4fb-a5a7-4112-8511-2aa985a0ae5a: Claiming fa:16:3e:44:e3:cf 10.100.0.5
Nov 22 03:59:52 compute-0 NetworkManager[48916]: <info>  [1763783992.9316] manager: (tap19b8a4fb-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.931 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:52 compute-0 nova_compute[253461]: 2025-11-22 03:59:52.940 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:52 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:52.947 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:e3:cf 10.100.0.5'], port_security=['fa:16:3e:44:e3:cf 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a62857fbf8cf446cac9c207ae6750597', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0832384-6d69-4b2e-a587-602048007135', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f9f8761-3ac6-4a72-804a-92d1a0df209a, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=19b8a4fb-a5a7-4112-8511-2aa985a0ae5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 03:59:52 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:52.949 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a in datapath 4692d97f-32c5-4a6f-a095-ba8dda0baf05 bound to our chassis
Nov 22 03:59:52 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:52.952 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4692d97f-32c5-4a6f-a095-ba8dda0baf05
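The UPDATE matched at 03:59:52.947 is the metadata agent's trigger: a Port_Binding row whose chassis column was just populated, with requested-chassis pointing at this host. A minimal sketch of such a watcher using ovsdbapp's RowEvent interface (registration with the agent's southbound OVSDB connection is omitted, and the match details are an assumption modeled on the event logged above, not the agent's exact code):

    from ovsdbapp.backend.ovs_idl import event as row_event

    CHASSIS_NAME = "compute-0.ctlplane.example.com"  # requested-chassis above

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fires when a Port_Binding row gains a chassis, i.e. a port is claimed."""

        def __init__(self):
            # Same shape as the matched event in the log:
            # events=('update',), table='Port_Binding', no extra conditions.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # React only when the chassis column was just populated and the
            # port was requested for this chassis.
            return (not getattr(old, 'chassis', None) and row.chassis and
                    row.options.get('requested-chassis') == CHASSIS_NAME)

        def run(self, event, row, old):
            print("port %s bound to our chassis" % row.logical_port)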
Nov 22 03:59:52 compute-0 systemd-udevd[278920]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 03:59:52 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:52.970 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1f920c-1b7c-4d86-9487-f996e2f0b096]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:52 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:52.971 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4692d97f-31 in ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 03:59:52 compute-0 systemd-machined[215728]: New machine qemu-12-instance-0000000c.
Nov 22 03:59:52 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:52.973 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4692d97f-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 03:59:52 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:52.973 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8dfcc6a5-4990-4aa6-b693-1d24899e4f36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:52 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:52.975 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[42223c57-d92c-4fb5-8c0f-ccdfc47061fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:52 compute-0 NetworkManager[48916]: <info>  [1763783992.9810] device (tap19b8a4fb-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 03:59:52 compute-0 NetworkManager[48916]: <info>  [1763783992.9824] device (tap19b8a4fb-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 03:59:52 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:52.991 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[eb710c0a-d4a7-4548-9b28-b560f5a783e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:53 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.020 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5ff593-18b6-467d-b1c0-a07c55fc51ff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:53 compute-0 ovn_controller[152691]: 2025-11-22T03:59:53Z|00143|binding|INFO|Setting lport 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a ovn-installed in OVS
Nov 22 03:59:53 compute-0 ovn_controller[152691]: 2025-11-22T03:59:53Z|00144|binding|INFO|Setting lport 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a up in Southbound
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.039 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.057 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[4eed2df7-c30a-40c3-824e-459f9148e4bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.062 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[89ed6a56-31dc-4daa-b18a-70e6707df434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:53 compute-0 NetworkManager[48916]: <info>  [1763783993.0643] manager: (tap4692d97f-30): new Veth device (/org/freedesktop/NetworkManager/Devices/76)
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.093 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[03ef3d25-d8e6-4a24-8c0d-269250338102]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.096 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[dcdd66a4-cdbf-4e88-92a3-e15f03831659]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:53 compute-0 NetworkManager[48916]: <info>  [1763783993.1184] device (tap4692d97f-30): carrier: link connected
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.122 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[99f83795-6327-4e00-b128-99b5170365ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.142 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ba69b7b7-5db9-48ab-80f4-d57ac04fa23a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4692d97f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:6c:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424571, 'reachable_time': 17992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278954, 'error': None, 'target': 'ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.157 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4d622049-8393-4011-a4df-5dfe0d89fd43]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:6c66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 424571, 'tstamp': 424571}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278955, 'error': None, 'target': 'ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.175 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d44fff-cd09-4199-9ad6-b7e93f7e88f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4692d97f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:6c:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424571, 'reachable_time': 17992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278956, 'error': None, 'target': 'ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.205 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[65af61a2-e228-4514-8417-9f90943f7b06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.294 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[7173303d-8520-435d-acff-523d6a10500f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.296 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4692d97f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.297 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.298 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4692d97f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:59:53 compute-0 kernel: tap4692d97f-30: entered promiscuous mode
Nov 22 03:59:53 compute-0 NetworkManager[48916]: <info>  [1763783993.3011] manager: (tap4692d97f-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.300 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.307 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4692d97f-30, col_values=(('external_ids', {'iface-id': '30338b02-a11d-4ec7-8237-9f070233f5bd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.308 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:53 compute-0 ovn_controller[152691]: 2025-11-22T03:59:53Z|00145|binding|INFO|Releasing lport 30338b02-a11d-4ec7-8237-9f070233f5bd from this chassis (sb_readonly=0)
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.311 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4692d97f-32c5-4a6f-a095-ba8dda0baf05.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4692d97f-32c5-4a6f-a095-ba8dda0baf05.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.313 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[24e26d96-359f-4b7e-b82b-e07fc9185547]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.314 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: global
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-4692d97f-32c5-4a6f-a095-ba8dda0baf05
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/4692d97f-32c5-4a6f-a095-ba8dda0baf05.pid.haproxy
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 4692d97f-32c5-4a6f-a095-ba8dda0baf05
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 03:59:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 03:59:53.315 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'env', 'PROCESS_TAG=haproxy-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4692d97f-32c5-4a6f-a095-ba8dda0baf05.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
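Before launching the proxy, the agent checks for an existing pidfile; the ENOENT at 03:59:53.311 is the normal first-run case, so a fresh haproxy is spawned inside the ovnmeta namespace. A minimal sketch of that decision, minus the rootwrap/privsep plumbing and the PROCESS_TAG env wrapper seen in the actual command above:

    NETWORK_ID = "4692d97f-32c5-4a6f-a095-ba8dda0baf05"  # from the log above
    PID_PATH = f"/var/lib/neutron/external/pids/{NETWORK_ID}.pid.haproxy"
    CFG_PATH = f"/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf"

    def existing_pid(path):
        """Return the recorded pid, or None when no proxy has run yet (ENOENT)."""
        try:
            with open(path) as f:
                return int(f.read().strip())
        except FileNotFoundError:
            return None

    if existing_pid(PID_PATH) is None:
        # Mirrors the command logged at 03:59:53.315, simplified.
        cmd = ["ip", "netns", "exec", f"ovnmeta-{NETWORK_ID}",
               "haproxy", "-f", CFG_PATH]
        print("would spawn:", " ".join(cmd))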
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.325 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Nov 22 03:59:53 compute-0 ceph-mon[75011]: osdmap e305: 3 total, 3 up, 3 in
Nov 22 03:59:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2840874535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2840874535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Nov 22 03:59:53 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.449 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783993.448649, fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.449 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] VM Started (Lifecycle Event)
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.478 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.484 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783993.4487686, fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.485 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] VM Paused (Lifecycle Event)
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.514 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.519 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.562 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] During sync_power_state the instance has a pending task (spawning). Skip.
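The integers in the two "Synchronizing instance power state" messages above come from nova's power-state enumeration: the DB still records 0 (NOSTATE) while the hypervisor briefly reports 3 (PAUSED) and then 1 (RUNNING), and the pending spawning task defers any correction. A minimal sketch of that skip logic (the constant values match nova/compute/power_state.py; the function itself is hypothetical):

    # Values as defined in nova/compute/power_state.py.
    NOSTATE, RUNNING, PAUSED = 0x00, 0x01, 0x03

    def sync_power_state(db_power_state, vm_power_state, task_state):
        """Mirror of the skip seen above: a pending task defers the sync."""
        if task_state is not None:
            return "Skip: instance has a pending task (%s)" % task_state
        if db_power_state != vm_power_state:
            return "would sync DB state %s to hypervisor state %s" % (
                db_power_state, vm_power_state)
        return "in sync"

    # The 'Paused' lifecycle event handled at 03:59:53.519:
    print(sync_power_state(NOSTATE, PAUSED, "spawning"))  # -> Skip: ...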
Nov 22 03:59:53 compute-0 podman[279029]: 2025-11-22 03:59:53.771266066 +0000 UTC m=+0.086640848 container create af2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.783 253465 DEBUG nova.compute.manager [req-8cfe6377-5ed3-4e8e-9e88-2536a51fa6cd req-05c29e5e-a59a-49cf-82cc-aef7ec22b664 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Received event network-vif-plugged-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.784 253465 DEBUG oslo_concurrency.lockutils [req-8cfe6377-5ed3-4e8e-9e88-2536a51fa6cd req-05c29e5e-a59a-49cf-82cc-aef7ec22b664 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.784 253465 DEBUG oslo_concurrency.lockutils [req-8cfe6377-5ed3-4e8e-9e88-2536a51fa6cd req-05c29e5e-a59a-49cf-82cc-aef7ec22b664 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.784 253465 DEBUG oslo_concurrency.lockutils [req-8cfe6377-5ed3-4e8e-9e88-2536a51fa6cd req-05c29e5e-a59a-49cf-82cc-aef7ec22b664 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.785 253465 DEBUG nova.compute.manager [req-8cfe6377-5ed3-4e8e-9e88-2536a51fa6cd req-05c29e5e-a59a-49cf-82cc-aef7ec22b664 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Processing event network-vif-plugged-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.785 253465 DEBUG nova.compute.manager [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.790 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763783993.7888207, fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.791 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] VM Resumed (Lifecycle Event)
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.795 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.799 253465 INFO nova.virt.libvirt.driver [-] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Instance spawned successfully.
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.800 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 03:59:53 compute-0 systemd[1]: Started libpod-conmon-af2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8.scope.
Nov 22 03:59:53 compute-0 podman[279029]: 2025-11-22 03:59:53.714227334 +0000 UTC m=+0.029602106 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.822 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:59:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 03:59:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9be3d2b946117ffc4bc9d01158ab12cf31ed33fb5cee30dc990788e710e1ccc/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.830 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.836 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.837 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.838 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.838 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.839 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:59:53 compute-0 podman[279029]: 2025-11-22 03:59:53.839918193 +0000 UTC m=+0.155293015 container init af2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.840 253465 DEBUG nova.virt.libvirt.driver [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 03:59:53 compute-0 podman[279029]: 2025-11-22 03:59:53.844758503 +0000 UTC m=+0.160133286 container start af2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.852 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 03:59:53 compute-0 neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05[279044]: [NOTICE]   (279048) : New worker (279050) forked
Nov 22 03:59:53 compute-0 neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05[279044]: [NOTICE]   (279048) : Loading success.
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.929 253465 INFO nova.compute.manager [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Took 8.20 seconds to spawn the instance on the hypervisor.
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.930 253465 DEBUG nova.compute.manager [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 03:59:53 compute-0 nova_compute[253461]: 2025-11-22 03:59:53.996 253465 INFO nova.compute.manager [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Took 9.24 seconds to build instance.
Nov 22 03:59:54 compute-0 nova_compute[253461]: 2025-11-22 03:59:54.017 253465 DEBUG oslo_concurrency.lockutils [None req-cb2b4443-178e-4e98-a47b-f39a9e6ea915 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 207 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 7.8 MiB/s wr, 273 op/s
Nov 22 03:59:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Nov 22 03:59:54 compute-0 ceph-mon[75011]: osdmap e306: 3 total, 3 up, 3 in
Nov 22 03:59:54 compute-0 ceph-mon[75011]: pgmap v1355: 305 pgs: 305 active+clean; 207 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 7.8 MiB/s wr, 273 op/s
Nov 22 03:59:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Nov 22 03:59:54 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Nov 22 03:59:55 compute-0 ceph-mon[75011]: osdmap e307: 3 total, 3 up, 3 in
Nov 22 03:59:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:59:55 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346097851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:55 compute-0 nova_compute[253461]: 2025-11-22 03:59:55.846 253465 DEBUG nova.compute.manager [req-f2c0d767-9533-4a11-b934-f8a9b1e9bae1 req-4e3223e4-4c53-441c-b4b5-2f650ec457cf f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Received event network-vif-plugged-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:59:55 compute-0 nova_compute[253461]: 2025-11-22 03:59:55.846 253465 DEBUG oslo_concurrency.lockutils [req-f2c0d767-9533-4a11-b934-f8a9b1e9bae1 req-4e3223e4-4c53-441c-b4b5-2f650ec457cf f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 03:59:55 compute-0 nova_compute[253461]: 2025-11-22 03:59:55.846 253465 DEBUG oslo_concurrency.lockutils [req-f2c0d767-9533-4a11-b934-f8a9b1e9bae1 req-4e3223e4-4c53-441c-b4b5-2f650ec457cf f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 03:59:55 compute-0 nova_compute[253461]: 2025-11-22 03:59:55.847 253465 DEBUG oslo_concurrency.lockutils [req-f2c0d767-9533-4a11-b934-f8a9b1e9bae1 req-4e3223e4-4c53-441c-b4b5-2f650ec457cf f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 03:59:55 compute-0 nova_compute[253461]: 2025-11-22 03:59:55.847 253465 DEBUG nova.compute.manager [req-f2c0d767-9533-4a11-b934-f8a9b1e9bae1 req-4e3223e4-4c53-441c-b4b5-2f650ec457cf f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] No waiting events found dispatching network-vif-plugged-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 03:59:55 compute-0 nova_compute[253461]: 2025-11-22 03:59:55.847 253465 WARNING nova.compute.manager [req-f2c0d767-9533-4a11-b934-f8a9b1e9bae1 req-4e3223e4-4c53-441c-b4b5-2f650ec457cf f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Received unexpected event network-vif-plugged-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a for instance with vm_state active and task_state None.
Nov 22 03:59:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 03:59:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 207 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.9 MiB/s wr, 205 op/s
Nov 22 03:59:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Nov 22 03:59:56 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/346097851' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:56 compute-0 ceph-mon[75011]: pgmap v1357: 305 pgs: 305 active+clean; 207 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.9 MiB/s wr, 205 op/s
Nov 22 03:59:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Nov 22 03:59:56 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Nov 22 03:59:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 03:59:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1470444445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:57 compute-0 nova_compute[253461]: 2025-11-22 03:59:57.082 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:57 compute-0 nova_compute[253461]: 2025-11-22 03:59:57.268 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:57 compute-0 ceph-mon[75011]: osdmap e308: 3 total, 3 up, 3 in
Nov 22 03:59:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1470444445' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 03:59:57 compute-0 NetworkManager[48916]: <info>  [1763783997.8375] manager: (patch-br-int-to-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Nov 22 03:59:57 compute-0 NetworkManager[48916]: <info>  [1763783997.8384] manager: (patch-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Nov 22 03:59:57 compute-0 nova_compute[253461]: 2025-11-22 03:59:57.837 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:58 compute-0 nova_compute[253461]: 2025-11-22 03:59:58.057 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:58 compute-0 ovn_controller[152691]: 2025-11-22T03:59:58Z|00146|binding|INFO|Releasing lport 30338b02-a11d-4ec7-8237-9f070233f5bd from this chassis (sb_readonly=0)
Nov 22 03:59:58 compute-0 nova_compute[253461]: 2025-11-22 03:59:58.064 253465 DEBUG nova.compute.manager [req-a2f3ab66-2550-4cdc-9605-6c215508204a req-fd5dd5c4-6d9f-4203-8d28-4b3840025d4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Received event network-changed-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 03:59:58 compute-0 nova_compute[253461]: 2025-11-22 03:59:58.065 253465 DEBUG nova.compute.manager [req-a2f3ab66-2550-4cdc-9605-6c215508204a req-fd5dd5c4-6d9f-4203-8d28-4b3840025d4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Refreshing instance network info cache due to event network-changed-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 03:59:58 compute-0 nova_compute[253461]: 2025-11-22 03:59:58.066 253465 DEBUG oslo_concurrency.lockutils [req-a2f3ab66-2550-4cdc-9605-6c215508204a req-fd5dd5c4-6d9f-4203-8d28-4b3840025d4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 03:59:58 compute-0 nova_compute[253461]: 2025-11-22 03:59:58.066 253465 DEBUG oslo_concurrency.lockutils [req-a2f3ab66-2550-4cdc-9605-6c215508204a req-fd5dd5c4-6d9f-4203-8d28-4b3840025d4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 03:59:58 compute-0 nova_compute[253461]: 2025-11-22 03:59:58.067 253465 DEBUG nova.network.neutron [req-a2f3ab66-2550-4cdc-9605-6c215508204a req-fd5dd5c4-6d9f-4203-8d28-4b3840025d4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Refreshing network info cache for port 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 03:59:58 compute-0 nova_compute[253461]: 2025-11-22 03:59:58.088 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 03:59:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 227 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.9 MiB/s wr, 208 op/s
Nov 22 03:59:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Nov 22 03:59:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Nov 22 03:59:58 compute-0 ceph-mon[75011]: pgmap v1359: 305 pgs: 305 active+clean; 227 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 6.9 MiB/s wr, 208 op/s
Nov 22 03:59:58 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Nov 22 03:59:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 03:59:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2742628523' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 03:59:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2742628523' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:59 compute-0 ceph-mon[75011]: osdmap e309: 3 total, 3 up, 3 in
Nov 22 03:59:59 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2742628523' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 03:59:59 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2742628523' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 03:59:59 compute-0 nova_compute[253461]: 2025-11-22 03:59:59.585 253465 DEBUG nova.network.neutron [req-a2f3ab66-2550-4cdc-9605-6c215508204a req-fd5dd5c4-6d9f-4203-8d28-4b3840025d4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Updated VIF entry in instance network info cache for port 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 03:59:59 compute-0 nova_compute[253461]: 2025-11-22 03:59:59.586 253465 DEBUG nova.network.neutron [req-a2f3ab66-2550-4cdc-9605-6c215508204a req-fd5dd5c4-6d9f-4203-8d28-4b3840025d4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Updating instance_info_cache with network_info: [{"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "address": "fa:16:3e:44:e3:cf", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8a4fb-a5", "ovs_interfaceid": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 03:59:59 compute-0 nova_compute[253461]: 2025-11-22 03:59:59.618 253465 DEBUG oslo_concurrency.lockutils [req-a2f3ab66-2550-4cdc-9605-6c215508204a req-fd5dd5c4-6d9f-4203-8d28-4b3840025d4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:00:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 259 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.9 MiB/s wr, 279 op/s
Nov 22 04:00:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:00:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2854452458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:00:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2854452458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Nov 22 04:00:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Nov 22 04:00:00 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Nov 22 04:00:00 compute-0 ceph-mon[75011]: pgmap v1361: 305 pgs: 305 active+clean; 259 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.9 MiB/s wr, 279 op/s
Nov 22 04:00:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2854452458' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2854452458' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Nov 22 04:00:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Nov 22 04:00:00 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Nov 22 04:00:01 compute-0 ceph-mon[75011]: osdmap e310: 3 total, 3 up, 3 in
Nov 22 04:00:01 compute-0 ceph-mon[75011]: osdmap e311: 3 total, 3 up, 3 in
Nov 22 04:00:02 compute-0 nova_compute[253461]: 2025-11-22 04:00:02.085 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:02 compute-0 nova_compute[253461]: 2025-11-22 04:00:02.271 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 273 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 8.1 MiB/s rd, 5.6 MiB/s wr, 318 op/s
Nov 22 04:00:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Nov 22 04:00:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Nov 22 04:00:02 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Nov 22 04:00:02 compute-0 ceph-mon[75011]: pgmap v1364: 305 pgs: 305 active+clean; 273 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 8.1 MiB/s rd, 5.6 MiB/s wr, 318 op/s
Nov 22 04:00:03 compute-0 ceph-mon[75011]: osdmap e312: 3 total, 3 up, 3 in
Nov 22 04:00:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 273 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 7.2 MiB/s rd, 3.6 MiB/s wr, 316 op/s
Nov 22 04:00:04 compute-0 podman[279060]: 2025-11-22 04:00:04.421022762 +0000 UTC m=+0.084551198 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:00:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Nov 22 04:00:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Nov 22 04:00:04 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Nov 22 04:00:04 compute-0 ceph-mon[75011]: pgmap v1366: 305 pgs: 305 active+clean; 273 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 7.2 MiB/s rd, 3.6 MiB/s wr, 316 op/s
Nov 22 04:00:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Nov 22 04:00:05 compute-0 ceph-mon[75011]: osdmap e313: 3 total, 3 up, 3 in
Nov 22 04:00:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Nov 22 04:00:05 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Nov 22 04:00:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Nov 22 04:00:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Nov 22 04:00:05 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Nov 22 04:00:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:00:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:00:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:00:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:00:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:00:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:00:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 273 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 1022 B/s wr, 73 op/s
Nov 22 04:00:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:00:06 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1271245206' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:00:06 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1271245206' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:06 compute-0 ceph-mon[75011]: osdmap e314: 3 total, 3 up, 3 in
Nov 22 04:00:06 compute-0 ceph-mon[75011]: osdmap e315: 3 total, 3 up, 3 in
Nov 22 04:00:06 compute-0 ceph-mon[75011]: pgmap v1370: 305 pgs: 305 active+clean; 273 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 1022 B/s wr, 73 op/s
Nov 22 04:00:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1271245206' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1271245206' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:06 compute-0 nova_compute[253461]: 2025-11-22 04:00:06.701 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Acquiring lock "c003c606-bec0-4664-9493-bbac2142d827" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:06 compute-0 nova_compute[253461]: 2025-11-22 04:00:06.702 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:06 compute-0 nova_compute[253461]: 2025-11-22 04:00:06.721 253465 DEBUG nova.compute.manager [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:00:06 compute-0 nova_compute[253461]: 2025-11-22 04:00:06.796 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:06 compute-0 nova_compute[253461]: 2025-11-22 04:00:06.797 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:06 compute-0 nova_compute[253461]: 2025-11-22 04:00:06.806 253465 DEBUG nova.virt.hardware [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:00:06 compute-0 nova_compute[253461]: 2025-11-22 04:00:06.806 253465 INFO nova.compute.claims [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:00:06 compute-0 nova_compute[253461]: 2025-11-22 04:00:06.918 253465 DEBUG oslo_concurrency.processutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.089 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.274 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:00:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/976371780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.371 253465 DEBUG oslo_concurrency.processutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.380 253465 DEBUG nova.compute.provider_tree [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.403 253465 DEBUG nova.scheduler.client.report [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.438 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.440 253465 DEBUG nova.compute.manager [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:00:07 compute-0 ovn_controller[152691]: 2025-11-22T04:00:07Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:44:e3:cf 10.100.0.5
Nov 22 04:00:07 compute-0 ovn_controller[152691]: 2025-11-22T04:00:07Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:44:e3:cf 10.100.0.5
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.527 253465 DEBUG nova.compute.manager [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.528 253465 DEBUG nova.network.neutron [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.564 253465 INFO nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.588 253465 DEBUG nova.compute.manager [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.639 253465 INFO nova.virt.block_device [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Booting with volume ab3b18b3-953b-4676-b875-dd714dee82f1 at /dev/vda
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.795 253465 DEBUG nova.policy [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5a6e905db660471e9190f5745dec10b2', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c7aa9a08e9ab49c898386171f066f40e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.900 253465 DEBUG os_brick.utils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.901 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.914 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.914 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[20386bac-b31c-4ffe-b200-de5b19d81019]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.916 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.929 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.929 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[944fa42a-92d7-4942-86b1-b6d885ba8728]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.932 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.941 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.941 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[4d63309d-c551-44e8-81dc-f9c5ed69948b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.943 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a063b1-7808-411f-9dbd-ff81dd911c1c]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.944 253465 DEBUG oslo_concurrency.processutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:07 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/976371780' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.967 253465 DEBUG oslo_concurrency.processutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.970 253465 DEBUG os_brick.initiator.connectors.lightos [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.970 253465 DEBUG os_brick.initiator.connectors.lightos [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.970 253465 DEBUG os_brick.initiator.connectors.lightos [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.971 253465 DEBUG os_brick.utils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 04:00:07 compute-0 nova_compute[253461]: 2025-11-22 04:00:07.971 253465 DEBUG nova.virt.block_device [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Updating existing volume attachment record: 6ab7d7c2-d172-4b48-88b3-6bd35c3c2a7a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:00:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 285 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 1.3 MiB/s wr, 156 op/s
Nov 22 04:00:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:00:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/911000376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:00:08 compute-0 nova_compute[253461]: 2025-11-22 04:00:08.662 253465 DEBUG nova.network.neutron [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Successfully created port: b369549e-8f2c-4d18-a73d-818e42cab65d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:00:08 compute-0 ceph-mon[75011]: pgmap v1371: 305 pgs: 305 active+clean; 285 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 116 KiB/s rd, 1.3 MiB/s wr, 156 op/s
Nov 22 04:00:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/911000376' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.103 253465 DEBUG nova.compute.manager [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.105 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.106 253465 INFO nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Creating image(s)
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.107 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.107 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Ensure instance console log exists: /var/lib/nova/instances/c003c606-bec0-4664-9493-bbac2142d827/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.108 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.109 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.109 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.615 253465 DEBUG nova.network.neutron [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Successfully updated port: b369549e-8f2c-4d18-a73d-818e42cab65d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.639 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Acquiring lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.640 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Acquired lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.640 253465 DEBUG nova.network.neutron [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.873 253465 DEBUG nova.compute.manager [req-0ca3e191-556e-4e17-b2c7-c6fae413aa8b req-d342f01f-07c7-4699-9071-ad22f42ff23f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Received event network-changed-b369549e-8f2c-4d18-a73d-818e42cab65d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.874 253465 DEBUG nova.compute.manager [req-0ca3e191-556e-4e17-b2c7-c6fae413aa8b req-d342f01f-07c7-4699-9071-ad22f42ff23f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Refreshing instance network info cache due to event network-changed-b369549e-8f2c-4d18-a73d-818e42cab65d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.874 253465 DEBUG oslo_concurrency.lockutils [req-0ca3e191-556e-4e17-b2c7-c6fae413aa8b req-d342f01f-07c7-4699-9071-ad22f42ff23f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:00:09 compute-0 nova_compute[253461]: 2025-11-22 04:00:09.916 253465 DEBUG nova.network.neutron [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:00:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 728 KiB/s rd, 4.3 MiB/s wr, 227 op/s
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.935 253465 DEBUG nova.network.neutron [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Updating instance_info_cache with network_info: [{"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:00:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Nov 22 04:00:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.959 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Releasing lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.960 253465 DEBUG nova.compute.manager [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Instance network_info: |[{"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.960 253465 DEBUG oslo_concurrency.lockutils [req-0ca3e191-556e-4e17-b2c7-c6fae413aa8b req-d342f01f-07c7-4699-9071-ad22f42ff23f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.961 253465 DEBUG nova.network.neutron [req-0ca3e191-556e-4e17-b2c7-c6fae413aa8b req-d342f01f-07c7-4699-9071-ad22f42ff23f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Refreshing network info cache for port b369549e-8f2c-4d18-a73d-818e42cab65d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:00:10 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.967 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Start _get_guest_xml network_info=[{"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': '6ab7d7c2-d172-4b48-88b3-6bd35c3c2a7a', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ab3b18b3-953b-4676-b875-dd714dee82f1', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ab3b18b3-953b-4676-b875-dd714dee82f1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'c003c606-bec0-4664-9493-bbac2142d827', 'attached_at': '', 'detached_at': '', 'volume_id': 'ab3b18b3-953b-4676-b875-dd714dee82f1', 'serial': 'ab3b18b3-953b-4676-b875-dd714dee82f1'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.973 253465 WARNING nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.986 253465 DEBUG nova.virt.libvirt.host [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.986 253465 DEBUG nova.virt.libvirt.host [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.991 253465 DEBUG nova.virt.libvirt.host [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.991 253465 DEBUG nova.virt.libvirt.host [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.992 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.993 253465 DEBUG nova.virt.hardware [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.993 253465 DEBUG nova.virt.hardware [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.994 253465 DEBUG nova.virt.hardware [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.994 253465 DEBUG nova.virt.hardware [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.995 253465 DEBUG nova.virt.hardware [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.995 253465 DEBUG nova.virt.hardware [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.995 253465 DEBUG nova.virt.hardware [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.996 253465 DEBUG nova.virt.hardware [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.996 253465 DEBUG nova.virt.hardware [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.997 253465 DEBUG nova.virt.hardware [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:00:10 compute-0 nova_compute[253461]: 2025-11-22 04:00:10.997 253465 DEBUG nova.virt.hardware [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.049 253465 DEBUG nova.storage.rbd_utils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] rbd image c003c606-bec0-4664-9493-bbac2142d827_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.054 253465 DEBUG oslo_concurrency.processutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:11 compute-0 ceph-mon[75011]: pgmap v1372: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 728 KiB/s rd, 4.3 MiB/s wr, 227 op/s
Nov 22 04:00:11 compute-0 ceph-mon[75011]: osdmap e316: 3 total, 3 up, 3 in
Nov 22 04:00:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:00:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/644181417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.527 253465 DEBUG oslo_concurrency.processutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.561 253465 DEBUG nova.virt.libvirt.vif [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:00:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1890259389',display_name='tempest-TestVolumeBackupRestore-server-1890259389',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1890259389',id=13,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3yp8PHjZNGxHIN49Bfx05O7cy5ohfH4jwWdedDKM0Ty5Fh44zWvCRr7B+x0pJKE+uA4rDYohra35NWqYR1IDyRfGmb6U6v6fBZ+bSCfyE4FrBM3a5ioizkhQNkNvq2uQ==',key_name='tempest-TestVolumeBackupRestore-445028672',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c7aa9a08e9ab49c898386171f066f40e',ramdisk_id='',reservation_id='r-6babejzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-936651426',owner_user_name='tempest-TestVolumeBackupRestore-936651426-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:00:07Z,user_data=None,user_id='5a6e905db660471e9190f5745dec10b2',uuid=c003c606-bec0-4664-9493-bbac2142d827,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.562 253465 DEBUG nova.network.os_vif_util [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Converting VIF {"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.563 253465 DEBUG nova.network.os_vif_util [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:06:dd,bridge_name='br-int',has_traffic_filtering=True,id=b369549e-8f2c-4d18-a73d-818e42cab65d,network=Network(d2b5d417-be92-4509-961a-e3d3cc2055a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb369549e-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.565 253465 DEBUG nova.objects.instance [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lazy-loading 'pci_devices' on Instance uuid c003c606-bec0-4664-9493-bbac2142d827 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.587 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:00:11 compute-0 nova_compute[253461]:   <uuid>c003c606-bec0-4664-9493-bbac2142d827</uuid>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   <name>instance-0000000d</name>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <nova:name>tempest-TestVolumeBackupRestore-server-1890259389</nova:name>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:00:10</nova:creationTime>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:00:11 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:00:11 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:00:11 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:00:11 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:00:11 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:00:11 compute-0 nova_compute[253461]:         <nova:user uuid="5a6e905db660471e9190f5745dec10b2">tempest-TestVolumeBackupRestore-936651426-project-member</nova:user>
Nov 22 04:00:11 compute-0 nova_compute[253461]:         <nova:project uuid="c7aa9a08e9ab49c898386171f066f40e">tempest-TestVolumeBackupRestore-936651426</nova:project>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:00:11 compute-0 nova_compute[253461]:         <nova:port uuid="b369549e-8f2c-4d18-a73d-818e42cab65d">
Nov 22 04:00:11 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <system>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <entry name="serial">c003c606-bec0-4664-9493-bbac2142d827</entry>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <entry name="uuid">c003c606-bec0-4664-9493-bbac2142d827</entry>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     </system>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   <os>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   </os>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   <features>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   </features>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/c003c606-bec0-4664-9493-bbac2142d827_disk.config">
Nov 22 04:00:11 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       </source>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:00:11 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-ab3b18b3-953b-4676-b875-dd714dee82f1">
Nov 22 04:00:11 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       </source>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:00:11 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <serial>ab3b18b3-953b-4676-b875-dd714dee82f1</serial>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:ba:06:dd"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <target dev="tapb369549e-8f"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/c003c606-bec0-4664-9493-bbac2142d827/console.log" append="off"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <video>
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     </video>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:00:11 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:00:11 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:00:11 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:00:11 compute-0 nova_compute[253461]: </domain>
Nov 22 04:00:11 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.588 253465 DEBUG nova.compute.manager [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Preparing to wait for external event network-vif-plugged-b369549e-8f2c-4d18-a73d-818e42cab65d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.588 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Acquiring lock "c003c606-bec0-4664-9493-bbac2142d827-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.589 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.589 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.589 253465 DEBUG nova.virt.libvirt.vif [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:00:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1890259389',display_name='tempest-TestVolumeBackupRestore-server-1890259389',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1890259389',id=13,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3yp8PHjZNGxHIN49Bfx05O7cy5ohfH4jwWdedDKM0Ty5Fh44zWvCRr7B+x0pJKE+uA4rDYohra35NWqYR1IDyRfGmb6U6v6fBZ+bSCfyE4FrBM3a5ioizkhQNkNvq2uQ==',key_name='tempest-TestVolumeBackupRestore-445028672',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c7aa9a08e9ab49c898386171f066f40e',ramdisk_id='',reservation_id='r-6babejzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-936651426',owner_user_name='tempest-TestVolumeBackupRestore-936651426-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:00:07Z,user_data=None,user_id='5a6e905db660471e9190f5745dec10b2',uuid=c003c606-bec0-4664-9493-bbac2142d827,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.590 253465 DEBUG nova.network.os_vif_util [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Converting VIF {"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.590 253465 DEBUG nova.network.os_vif_util [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:06:dd,bridge_name='br-int',has_traffic_filtering=True,id=b369549e-8f2c-4d18-a73d-818e42cab65d,network=Network(d2b5d417-be92-4509-961a-e3d3cc2055a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb369549e-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.591 253465 DEBUG os_vif [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:06:dd,bridge_name='br-int',has_traffic_filtering=True,id=b369549e-8f2c-4d18-a73d-818e42cab65d,network=Network(d2b5d417-be92-4509-961a-e3d3cc2055a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb369549e-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.591 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.592 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.592 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.594 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.595 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb369549e-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.595 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb369549e-8f, col_values=(('external_ids', {'iface-id': 'b369549e-8f2c-4d18-a73d-818e42cab65d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:06:dd', 'vm-uuid': 'c003c606-bec0-4664-9493-bbac2142d827'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:11 compute-0 NetworkManager[48916]: <info>  [1763784011.5979] manager: (tapb369549e-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.598 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.606 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.607 253465 INFO os_vif [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:06:dd,bridge_name='br-int',has_traffic_filtering=True,id=b369549e-8f2c-4d18-a73d-818e42cab65d,network=Network(d2b5d417-be92-4509-961a-e3d3cc2055a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb369549e-8f')
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.684 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.685 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.686 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] No VIF found with MAC fa:16:3e:ba:06:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.687 253465 INFO nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Using config drive
Nov 22 04:00:11 compute-0 nova_compute[253461]: 2025-11-22 04:00:11.720 253465 DEBUG nova.storage.rbd_utils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] rbd image c003c606-bec0-4664-9493-bbac2142d827_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.089 253465 DEBUG nova.network.neutron [req-0ca3e191-556e-4e17-b2c7-c6fae413aa8b req-d342f01f-07c7-4699-9071-ad22f42ff23f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Updated VIF entry in instance network info cache for port b369549e-8f2c-4d18-a73d-818e42cab65d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.090 253465 DEBUG nova.network.neutron [req-0ca3e191-556e-4e17-b2c7-c6fae413aa8b req-d342f01f-07c7-4699-9071-ad22f42ff23f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Updating instance_info_cache with network_info: [{"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.103 253465 INFO nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Creating config drive at /var/lib/nova/instances/c003c606-bec0-4664-9493-bbac2142d827/disk.config
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.111 253465 DEBUG oslo_concurrency.processutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c003c606-bec0-4664-9493-bbac2142d827/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7x1caega execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.144 253465 DEBUG oslo_concurrency.lockutils [req-0ca3e191-556e-4e17-b2c7-c6fae413aa8b req-d342f01f-07c7-4699-9071-ad22f42ff23f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.255 253465 DEBUG oslo_concurrency.processutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c003c606-bec0-4664-9493-bbac2142d827/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7x1caega" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.288 253465 DEBUG nova.storage.rbd_utils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] rbd image c003c606-bec0-4664-9493-bbac2142d827_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.292 253465 DEBUG oslo_concurrency.processutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c003c606-bec0-4664-9493-bbac2142d827/disk.config c003c606-bec0-4664-9493-bbac2142d827_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 647 KiB/s rd, 3.8 MiB/s wr, 202 op/s
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.319 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/644181417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.479 253465 DEBUG oslo_concurrency.processutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c003c606-bec0-4664-9493-bbac2142d827/disk.config c003c606-bec0-4664-9493-bbac2142d827_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.480 253465 INFO nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Deleting local config drive /var/lib/nova/instances/c003c606-bec0-4664-9493-bbac2142d827/disk.config because it was imported into RBD.
Nov 22 04:00:12 compute-0 kernel: tapb369549e-8f: entered promiscuous mode
Nov 22 04:00:12 compute-0 NetworkManager[48916]: <info>  [1763784012.5462] manager: (tapb369549e-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/81)
Nov 22 04:00:12 compute-0 ovn_controller[152691]: 2025-11-22T04:00:12Z|00147|binding|INFO|Claiming lport b369549e-8f2c-4d18-a73d-818e42cab65d for this chassis.
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.548 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:12 compute-0 ovn_controller[152691]: 2025-11-22T04:00:12Z|00148|binding|INFO|b369549e-8f2c-4d18-a73d-818e42cab65d: Claiming fa:16:3e:ba:06:dd 10.100.0.4
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.559 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:06:dd 10.100.0.4'], port_security=['fa:16:3e:ba:06:dd 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c003c606-bec0-4664-9493-bbac2142d827', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2b5d417-be92-4509-961a-e3d3cc2055a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7aa9a08e9ab49c898386171f066f40e', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bacad159-6eb3-42d5-9393-7fa5c31fe12a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10c4262f-940a-4ebd-9163-43e228216ff2, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=b369549e-8f2c-4d18-a73d-818e42cab65d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.561 162689 INFO neutron.agent.ovn.metadata.agent [-] Port b369549e-8f2c-4d18-a73d-818e42cab65d in datapath d2b5d417-be92-4509-961a-e3d3cc2055a5 bound to our chassis
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.564 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2b5d417-be92-4509-961a-e3d3cc2055a5
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.583 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef9b123-0a82-4b32-9714-e5ca33ef257d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.584 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2b5d417-b1 in ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.586 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2b5d417-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:00:12 compute-0 ovn_controller[152691]: 2025-11-22T04:00:12Z|00149|binding|INFO|Setting lport b369549e-8f2c-4d18-a73d-818e42cab65d ovn-installed in OVS
Nov 22 04:00:12 compute-0 ovn_controller[152691]: 2025-11-22T04:00:12Z|00150|binding|INFO|Setting lport b369549e-8f2c-4d18-a73d-818e42cab65d up in Southbound
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.587 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[51986b97-2e6c-450e-b6f1-42b9431b3324]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.589 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.588 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f706a720-d708-474b-96f4-5db4c3c316df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.596 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:12 compute-0 systemd-machined[215728]: New machine qemu-13-instance-0000000d.
Nov 22 04:00:12 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.609 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[c39858c6-90eb-4802-8844-f5e2c8015251]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 systemd-udevd[279224]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.636 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[57d99c51-2a1b-4b19-8801-586744292067]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 NetworkManager[48916]: <info>  [1763784012.6539] device (tapb369549e-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:00:12 compute-0 NetworkManager[48916]: <info>  [1763784012.6559] device (tapb369549e-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.683 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[49d86a17-852a-4375-af69-6d77edb551e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.690 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[586caf7d-76b6-4387-9750-8984f92eed8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 systemd-udevd[279227]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:00:12 compute-0 NetworkManager[48916]: <info>  [1763784012.6917] manager: (tapd2b5d417-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/82)
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.736 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[e75c8558-3b92-4b96-b93e-f6485ee859fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.739 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[4dfa6872-e08b-4d01-bd09-812780b00ffa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 NetworkManager[48916]: <info>  [1763784012.7723] device (tapd2b5d417-b0): carrier: link connected
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.780 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[7305e748-a792-41f6-a481-0de8a19a5f9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.806 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e8116c-37d8-4f77-866c-bfd1b4502fa3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2b5d417-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:81:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426537, 'reachable_time': 25343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279254, 'error': None, 'target': 'ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.830 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6163d28e-1009-416e-be2e-def6d68120bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febe:81cc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 426537, 'tstamp': 426537}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279255, 'error': None, 'target': 'ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.846 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[61aa14b3-09f7-40cb-9350-b7e02d46cd42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2b5d417-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:be:81:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426537, 'reachable_time': 25343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279256, 'error': None, 'target': 'ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.860 253465 DEBUG nova.compute.manager [req-3aa8195c-8e12-4656-a01b-a82aee4c0dda req-9fe0b232-06ba-4e2d-861f-8330468206d5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Received event network-vif-plugged-b369549e-8f2c-4d18-a73d-818e42cab65d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.861 253465 DEBUG oslo_concurrency.lockutils [req-3aa8195c-8e12-4656-a01b-a82aee4c0dda req-9fe0b232-06ba-4e2d-861f-8330468206d5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "c003c606-bec0-4664-9493-bbac2142d827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.861 253465 DEBUG oslo_concurrency.lockutils [req-3aa8195c-8e12-4656-a01b-a82aee4c0dda req-9fe0b232-06ba-4e2d-861f-8330468206d5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.862 253465 DEBUG oslo_concurrency.lockutils [req-3aa8195c-8e12-4656-a01b-a82aee4c0dda req-9fe0b232-06ba-4e2d-861f-8330468206d5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.862 253465 DEBUG nova.compute.manager [req-3aa8195c-8e12-4656-a01b-a82aee4c0dda req-9fe0b232-06ba-4e2d-861f-8330468206d5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Processing event network-vif-plugged-b369549e-8f2c-4d18-a73d-818e42cab65d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.882 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[41bae4f8-ea90-4237-b7f9-c66978c41ab2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.956 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f0efafb9-c007-4f36-864c-d04910fe3b4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.958 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2b5d417-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.958 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.958 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2b5d417-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.960 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:12 compute-0 NetworkManager[48916]: <info>  [1763784012.9608] manager: (tapd2b5d417-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Nov 22 04:00:12 compute-0 kernel: tapd2b5d417-b0: entered promiscuous mode
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.962 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.969 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2b5d417-b0, col_values=(('external_ids', {'iface-id': 'c19b7b76-183b-4325-aae0-c86f4a927a3d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.970 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:12 compute-0 ovn_controller[152691]: 2025-11-22T04:00:12Z|00151|binding|INFO|Releasing lport c19b7b76-183b-4325-aae0-c86f4a927a3d from this chassis (sb_readonly=0)
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.971 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.972 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2b5d417-be92-4509-961a-e3d3cc2055a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2b5d417-be92-4509-961a-e3d3cc2055a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.974 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b4fae2-bd84-427f-abfc-dc117cacc5dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.976 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-d2b5d417-be92-4509-961a-e3d3cc2055a5
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/d2b5d417-be92-4509-961a-e3d3cc2055a5.pid.haproxy
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID d2b5d417-be92-4509-961a-e3d3cc2055a5
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:00:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:12.977 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5', 'env', 'PROCESS_TAG=haproxy-d2b5d417-be92-4509-961a-e3d3cc2055a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2b5d417-be92-4509-961a-e3d3cc2055a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 04:00:12 compute-0 nova_compute[253461]: 2025-11-22 04:00:12.987 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.103 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784013.1028728, c003c606-bec0-4664-9493-bbac2142d827 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.104 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c003c606-bec0-4664-9493-bbac2142d827] VM Started (Lifecycle Event)
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.107 253465 DEBUG nova.compute.manager [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.114 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.118 253465 INFO nova.virt.libvirt.driver [-] [instance: c003c606-bec0-4664-9493-bbac2142d827] Instance spawned successfully.
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.119 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.131 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c003c606-bec0-4664-9493-bbac2142d827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.136 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c003c606-bec0-4664-9493-bbac2142d827] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.147 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.148 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.148 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.148 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.149 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.149 253465 DEBUG nova.virt.libvirt.driver [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.157 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c003c606-bec0-4664-9493-bbac2142d827] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.157 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784013.1070642, c003c606-bec0-4664-9493-bbac2142d827 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.157 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c003c606-bec0-4664-9493-bbac2142d827] VM Paused (Lifecycle Event)
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.194 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c003c606-bec0-4664-9493-bbac2142d827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.198 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784013.11141, c003c606-bec0-4664-9493-bbac2142d827 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.199 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c003c606-bec0-4664-9493-bbac2142d827] VM Resumed (Lifecycle Event)
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.230 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c003c606-bec0-4664-9493-bbac2142d827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.235 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c003c606-bec0-4664-9493-bbac2142d827] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.240 253465 INFO nova.compute.manager [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Took 4.14 seconds to spawn the instance on the hypervisor.
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.240 253465 DEBUG nova.compute.manager [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.366 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: c003c606-bec0-4664-9493-bbac2142d827] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:00:13 compute-0 ceph-mon[75011]: pgmap v1374: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 647 KiB/s rd, 3.8 MiB/s wr, 202 op/s
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.431 253465 INFO nova.compute.manager [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Took 6.66 seconds to build instance.
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.449 253465 DEBUG oslo_concurrency.lockutils [None req-5257ecf1-7cef-41e3-b025-999b03161ec1 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:13 compute-0 podman[279330]: 2025-11-22 04:00:13.496346458 +0000 UTC m=+0.087516522 container create be849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:00:13 compute-0 systemd[1]: Started libpod-conmon-be849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901.scope.
Nov 22 04:00:13 compute-0 podman[279330]: 2025-11-22 04:00:13.457329759 +0000 UTC m=+0.048499844 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:00:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:00:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43e7922e68bc475cf39daa7bc45f422e47047b9a016b0ef765b9d150a2993516/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:13 compute-0 podman[279330]: 2025-11-22 04:00:13.579387536 +0000 UTC m=+0.170557590 container init be849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:00:13 compute-0 podman[279330]: 2025-11-22 04:00:13.590016728 +0000 UTC m=+0.181186782 container start be849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118)
Nov 22 04:00:13 compute-0 neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5[279345]: [NOTICE]   (279349) : New worker (279351) forked
Nov 22 04:00:13 compute-0 neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5[279345]: [NOTICE]   (279349) : Loading success.
Nov 22 04:00:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:00:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/20582252' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.795 253465 DEBUG oslo_concurrency.lockutils [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.799 253465 DEBUG oslo_concurrency.lockutils [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.815 253465 DEBUG nova.objects.instance [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lazy-loading 'flavor' on Instance uuid fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:00:13 compute-0 nova_compute[253461]: 2025-11-22 04:00:13.859 253465 DEBUG oslo_concurrency.lockutils [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.097 253465 DEBUG oslo_concurrency.lockutils [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.098 253465 DEBUG oslo_concurrency.lockutils [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.098 253465 INFO nova.compute.manager [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Attaching volume 0c37e3dc-130f-4ad0-a4a2-6e95a34c99c5 to /dev/vdb
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.247 253465 DEBUG os_brick.utils [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.249 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.275 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.275 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[04e266f1-3f74-420f-a9eb-b964a3183f2f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.277 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.291 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.292 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[f11401c8-ab6c-4b05-862e-a5fedd61df39]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.294 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 198 op/s
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.308 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.308 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b0c7e6-538f-4176-910e-aca445be21f8]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.310 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[49075d26-ef74-4d9e-a568-bc715797b3b8]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.311 253465 DEBUG oslo_concurrency.processutils [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.348 253465 DEBUG oslo_concurrency.processutils [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.352 253465 DEBUG os_brick.initiator.connectors.lightos [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.353 253465 DEBUG os_brick.initiator.connectors.lightos [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.353 253465 DEBUG os_brick.initiator.connectors.lightos [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.354 253465 DEBUG os_brick.utils [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] <== get_connector_properties: return (105ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.355 253465 DEBUG nova.virt.block_device [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Updating existing volume attachment record: 51448437-67c8-486b-abbc-4452ec2a4eb5 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:00:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Nov 22 04:00:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Nov 22 04:00:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/20582252' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:00:14 compute-0 ceph-mon[75011]: pgmap v1375: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.1 MiB/s wr, 198 op/s
Nov 22 04:00:14 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Nov 22 04:00:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:00:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3603926141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.995 253465 DEBUG nova.compute.manager [req-27354398-635a-4f8e-9a8f-ae683ab6e1a9 req-76e3eca3-65ea-4c65-a58d-709bff457a5b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Received event network-vif-plugged-b369549e-8f2c-4d18-a73d-818e42cab65d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.996 253465 DEBUG oslo_concurrency.lockutils [req-27354398-635a-4f8e-9a8f-ae683ab6e1a9 req-76e3eca3-65ea-4c65-a58d-709bff457a5b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "c003c606-bec0-4664-9493-bbac2142d827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.996 253465 DEBUG oslo_concurrency.lockutils [req-27354398-635a-4f8e-9a8f-ae683ab6e1a9 req-76e3eca3-65ea-4c65-a58d-709bff457a5b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.997 253465 DEBUG oslo_concurrency.lockutils [req-27354398-635a-4f8e-9a8f-ae683ab6e1a9 req-76e3eca3-65ea-4c65-a58d-709bff457a5b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.997 253465 DEBUG nova.compute.manager [req-27354398-635a-4f8e-9a8f-ae683ab6e1a9 req-76e3eca3-65ea-4c65-a58d-709bff457a5b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] No waiting events found dispatching network-vif-plugged-b369549e-8f2c-4d18-a73d-818e42cab65d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:00:14 compute-0 nova_compute[253461]: 2025-11-22 04:00:14.998 253465 WARNING nova.compute.manager [req-27354398-635a-4f8e-9a8f-ae683ab6e1a9 req-76e3eca3-65ea-4c65-a58d-709bff457a5b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Received unexpected event network-vif-plugged-b369549e-8f2c-4d18-a73d-818e42cab65d for instance with vm_state active and task_state None.
Nov 22 04:00:15 compute-0 nova_compute[253461]: 2025-11-22 04:00:15.070 253465 DEBUG nova.objects.instance [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lazy-loading 'flavor' on Instance uuid fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:00:15 compute-0 nova_compute[253461]: 2025-11-22 04:00:15.097 253465 DEBUG nova.virt.libvirt.driver [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Attempting to attach volume 0c37e3dc-130f-4ad0-a4a2-6e95a34c99c5 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 22 04:00:15 compute-0 nova_compute[253461]: 2025-11-22 04:00:15.103 253465 DEBUG nova.virt.libvirt.guest [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 04:00:15 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:00:15 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-0c37e3dc-130f-4ad0-a4a2-6e95a34c99c5">
Nov 22 04:00:15 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:00:15 compute-0 nova_compute[253461]:   </source>
Nov 22 04:00:15 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 04:00:15 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:00:15 compute-0 nova_compute[253461]:   </auth>
Nov 22 04:00:15 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:00:15 compute-0 nova_compute[253461]:   <serial>0c37e3dc-130f-4ad0-a4a2-6e95a34c99c5</serial>
Nov 22 04:00:15 compute-0 nova_compute[253461]: </disk>
Nov 22 04:00:15 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 04:00:15 compute-0 nova_compute[253461]: 2025-11-22 04:00:15.267 253465 DEBUG nova.virt.libvirt.driver [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:00:15 compute-0 nova_compute[253461]: 2025-11-22 04:00:15.267 253465 DEBUG nova.virt.libvirt.driver [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:00:15 compute-0 nova_compute[253461]: 2025-11-22 04:00:15.267 253465 DEBUG nova.virt.libvirt.driver [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:00:15 compute-0 nova_compute[253461]: 2025-11-22 04:00:15.267 253465 DEBUG nova.virt.libvirt.driver [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No VIF found with MAC fa:16:3e:44:e3:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:00:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Nov 22 04:00:15 compute-0 ceph-mon[75011]: osdmap e317: 3 total, 3 up, 3 in
Nov 22 04:00:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3603926141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:00:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Nov 22 04:00:15 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Nov 22 04:00:15 compute-0 nova_compute[253461]: 2025-11-22 04:00:15.561 253465 DEBUG oslo_concurrency.lockutils [None req-a8ba6f0e-d779-47b7-a87b-dc406b8eee85 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 46 KiB/s wr, 47 op/s
Nov 22 04:00:16 compute-0 ceph-mon[75011]: osdmap e318: 3 total, 3 up, 3 in
Nov 22 04:00:16 compute-0 ceph-mon[75011]: pgmap v1378: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 46 KiB/s wr, 47 op/s
Nov 22 04:00:16 compute-0 nova_compute[253461]: 2025-11-22 04:00:16.598 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:17 compute-0 nova_compute[253461]: 2025-11-22 04:00:17.035 253465 DEBUG nova.compute.manager [req-0c031464-e501-4279-b9e5-5b65ed609ca7 req-bf156def-3a7c-4577-87dc-08fcb4b27a63 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Received event network-changed-b369549e-8f2c-4d18-a73d-818e42cab65d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:17 compute-0 nova_compute[253461]: 2025-11-22 04:00:17.035 253465 DEBUG nova.compute.manager [req-0c031464-e501-4279-b9e5-5b65ed609ca7 req-bf156def-3a7c-4577-87dc-08fcb4b27a63 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Refreshing instance network info cache due to event network-changed-b369549e-8f2c-4d18-a73d-818e42cab65d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:00:17 compute-0 nova_compute[253461]: 2025-11-22 04:00:17.036 253465 DEBUG oslo_concurrency.lockutils [req-0c031464-e501-4279-b9e5-5b65ed609ca7 req-bf156def-3a7c-4577-87dc-08fcb4b27a63 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:00:17 compute-0 nova_compute[253461]: 2025-11-22 04:00:17.036 253465 DEBUG oslo_concurrency.lockutils [req-0c031464-e501-4279-b9e5-5b65ed609ca7 req-bf156def-3a7c-4577-87dc-08fcb4b27a63 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:00:17 compute-0 nova_compute[253461]: 2025-11-22 04:00:17.037 253465 DEBUG nova.network.neutron [req-0c031464-e501-4279-b9e5-5b65ed609ca7 req-bf156def-3a7c-4577-87dc-08fcb4b27a63 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Refreshing network info cache for port b369549e-8f2c-4d18-a73d-818e42cab65d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:00:17 compute-0 nova_compute[253461]: 2025-11-22 04:00:17.280 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Nov 22 04:00:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Nov 22 04:00:17 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Nov 22 04:00:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 49 KiB/s wr, 92 op/s
Nov 22 04:00:18 compute-0 ceph-mon[75011]: osdmap e319: 3 total, 3 up, 3 in
Nov 22 04:00:18 compute-0 ceph-mon[75011]: pgmap v1380: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 49 KiB/s wr, 92 op/s
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.526 253465 DEBUG oslo_concurrency.lockutils [None req-d7d9c478-6dde-4f84-89b3-135d04fdb203 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.527 253465 DEBUG oslo_concurrency.lockutils [None req-d7d9c478-6dde-4f84-89b3-135d04fdb203 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.545 253465 INFO nova.compute.manager [None req-d7d9c478-6dde-4f84-89b3-135d04fdb203 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Detaching volume 0c37e3dc-130f-4ad0-a4a2-6e95a34c99c5
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.685 253465 INFO nova.virt.block_device [None req-d7d9c478-6dde-4f84-89b3-135d04fdb203 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Attempting to driver detach volume 0c37e3dc-130f-4ad0-a4a2-6e95a34c99c5 from mountpoint /dev/vdb
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.697 253465 DEBUG nova.virt.libvirt.driver [None req-d7d9c478-6dde-4f84-89b3-135d04fdb203 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Attempting to detach device vdb from instance fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.698 253465 DEBUG nova.virt.libvirt.guest [None req-d7d9c478-6dde-4f84-89b3-135d04fdb203 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:00:18 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:00:18 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-0c37e3dc-130f-4ad0-a4a2-6e95a34c99c5">
Nov 22 04:00:18 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:00:18 compute-0 nova_compute[253461]:   </source>
Nov 22 04:00:18 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:00:18 compute-0 nova_compute[253461]:   <serial>0c37e3dc-130f-4ad0-a4a2-6e95a34c99c5</serial>
Nov 22 04:00:18 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:00:18 compute-0 nova_compute[253461]: </disk>
Nov 22 04:00:18 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.705 253465 INFO nova.virt.libvirt.driver [None req-d7d9c478-6dde-4f84-89b3-135d04fdb203 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Successfully detached device vdb from instance fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 from the persistent domain config.
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.705 253465 DEBUG nova.virt.libvirt.driver [None req-d7d9c478-6dde-4f84-89b3-135d04fdb203 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.706 253465 DEBUG nova.virt.libvirt.guest [None req-d7d9c478-6dde-4f84-89b3-135d04fdb203 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:00:18 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:00:18 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-0c37e3dc-130f-4ad0-a4a2-6e95a34c99c5">
Nov 22 04:00:18 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:00:18 compute-0 nova_compute[253461]:   </source>
Nov 22 04:00:18 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:00:18 compute-0 nova_compute[253461]:   <serial>0c37e3dc-130f-4ad0-a4a2-6e95a34c99c5</serial>
Nov 22 04:00:18 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:00:18 compute-0 nova_compute[253461]: </disk>
Nov 22 04:00:18 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.757 253465 DEBUG nova.network.neutron [req-0c031464-e501-4279-b9e5-5b65ed609ca7 req-bf156def-3a7c-4577-87dc-08fcb4b27a63 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Updated VIF entry in instance network info cache for port b369549e-8f2c-4d18-a73d-818e42cab65d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.757 253465 DEBUG nova.network.neutron [req-0c031464-e501-4279-b9e5-5b65ed609ca7 req-bf156def-3a7c-4577-87dc-08fcb4b27a63 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Updating instance_info_cache with network_info: [{"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.766 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763784018.766678, fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.769 253465 DEBUG nova.virt.libvirt.driver [None req-d7d9c478-6dde-4f84-89b3-135d04fdb203 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.770 253465 INFO nova.virt.libvirt.driver [None req-d7d9c478-6dde-4f84-89b3-135d04fdb203 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Successfully detached device vdb from instance fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 from the live domain config.
Nov 22 04:00:18 compute-0 nova_compute[253461]: 2025-11-22 04:00:18.775 253465 DEBUG oslo_concurrency.lockutils [req-0c031464-e501-4279-b9e5-5b65ed609ca7 req-bf156def-3a7c-4577-87dc-08fcb4b27a63 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:00:19 compute-0 nova_compute[253461]: 2025-11-22 04:00:19.019 253465 DEBUG nova.objects.instance [None req-d7d9c478-6dde-4f84-89b3-135d04fdb203 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lazy-loading 'flavor' on Instance uuid fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:00:19 compute-0 nova_compute[253461]: 2025-11-22 04:00:19.059 253465 DEBUG oslo_concurrency.lockutils [None req-d7d9c478-6dde-4f84-89b3-135d04fdb203 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:19 compute-0 nova_compute[253461]: 2025-11-22 04:00:19.136 253465 DEBUG nova.compute.manager [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Received event network-changed-b369549e-8f2c-4d18-a73d-818e42cab65d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:19 compute-0 nova_compute[253461]: 2025-11-22 04:00:19.137 253465 DEBUG nova.compute.manager [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Refreshing instance network info cache due to event network-changed-b369549e-8f2c-4d18-a73d-818e42cab65d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:00:19 compute-0 nova_compute[253461]: 2025-11-22 04:00:19.137 253465 DEBUG oslo_concurrency.lockutils [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:00:19 compute-0 nova_compute[253461]: 2025-11-22 04:00:19.137 253465 DEBUG oslo_concurrency.lockutils [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:00:19 compute-0 nova_compute[253461]: 2025-11-22 04:00:19.138 253465 DEBUG nova.network.neutron [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Refreshing network info cache for port b369549e-8f2c-4d18-a73d-818e42cab65d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:00:19 compute-0 podman[279389]: 2025-11-22 04:00:19.393076346 +0000 UTC m=+0.075903421 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:00:19 compute-0 podman[279390]: 2025-11-22 04:00:19.432212257 +0000 UTC m=+0.110552396 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:00:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Nov 22 04:00:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Nov 22 04:00:19 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Nov 22 04:00:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:00:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1159165753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:00:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1159165753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 49 KiB/s wr, 167 op/s
Nov 22 04:00:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Nov 22 04:00:20 compute-0 ceph-mon[75011]: osdmap e320: 3 total, 3 up, 3 in
Nov 22 04:00:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1159165753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1159165753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:20 compute-0 ceph-mon[75011]: pgmap v1382: 305 pgs: 305 active+clean; 306 MiB data, 434 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 49 KiB/s wr, 167 op/s
Nov 22 04:00:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Nov 22 04:00:20 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Nov 22 04:00:20 compute-0 nova_compute[253461]: 2025-11-22 04:00:20.592 253465 DEBUG nova.network.neutron [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Updated VIF entry in instance network info cache for port b369549e-8f2c-4d18-a73d-818e42cab65d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:00:20 compute-0 nova_compute[253461]: 2025-11-22 04:00:20.593 253465 DEBUG nova.network.neutron [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Updating instance_info_cache with network_info: [{"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:00:20 compute-0 nova_compute[253461]: 2025-11-22 04:00:20.609 253465 DEBUG oslo_concurrency.lockutils [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:00:20 compute-0 nova_compute[253461]: 2025-11-22 04:00:20.609 253465 DEBUG nova.compute.manager [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Received event network-changed-b369549e-8f2c-4d18-a73d-818e42cab65d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:20 compute-0 nova_compute[253461]: 2025-11-22 04:00:20.610 253465 DEBUG nova.compute.manager [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Refreshing instance network info cache due to event network-changed-b369549e-8f2c-4d18-a73d-818e42cab65d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:00:20 compute-0 nova_compute[253461]: 2025-11-22 04:00:20.610 253465 DEBUG oslo_concurrency.lockutils [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:00:20 compute-0 nova_compute[253461]: 2025-11-22 04:00:20.611 253465 DEBUG oslo_concurrency.lockutils [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:00:20 compute-0 nova_compute[253461]: 2025-11-22 04:00:20.611 253465 DEBUG nova.network.neutron [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Refreshing network info cache for port b369549e-8f2c-4d18-a73d-818e42cab65d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:00:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:21 compute-0 ceph-mon[75011]: osdmap e321: 3 total, 3 up, 3 in
Nov 22 04:00:21 compute-0 nova_compute[253461]: 2025-11-22 04:00:21.602 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:22 compute-0 nova_compute[253461]: 2025-11-22 04:00:22.046 253465 DEBUG nova.compute.manager [None req-de2ab28c-4dad-495d-aa3e-d0baf4b42b4d 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:00:22 compute-0 nova_compute[253461]: 2025-11-22 04:00:22.104 253465 INFO nova.compute.manager [None req-de2ab28c-4dad-495d-aa3e-d0baf4b42b4d 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] instance snapshotting
Nov 22 04:00:22 compute-0 nova_compute[253461]: 2025-11-22 04:00:22.282 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 308 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 395 KiB/s wr, 192 op/s
Nov 22 04:00:22 compute-0 nova_compute[253461]: 2025-11-22 04:00:22.395 253465 DEBUG nova.network.neutron [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Updated VIF entry in instance network info cache for port b369549e-8f2c-4d18-a73d-818e42cab65d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:00:22 compute-0 nova_compute[253461]: 2025-11-22 04:00:22.396 253465 DEBUG nova.network.neutron [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Updating instance_info_cache with network_info: [{"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:00:22 compute-0 nova_compute[253461]: 2025-11-22 04:00:22.415 253465 DEBUG oslo_concurrency.lockutils [req-a3be65de-1cfd-4075-89ae-6fa5493d7da7 req-7e03be68-cb8e-4431-b5e5-f1ab1a68f654 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:00:22 compute-0 ceph-mon[75011]: pgmap v1384: 305 pgs: 305 active+clean; 308 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 395 KiB/s wr, 192 op/s
Nov 22 04:00:22 compute-0 nova_compute[253461]: 2025-11-22 04:00:22.475 253465 INFO nova.virt.libvirt.driver [None req-de2ab28c-4dad-495d-aa3e-d0baf4b42b4d 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Beginning live snapshot process
Nov 22 04:00:22 compute-0 nova_compute[253461]: 2025-11-22 04:00:22.677 253465 DEBUG nova.virt.libvirt.imagebackend [None req-de2ab28c-4dad-495d-aa3e-d0baf4b42b4d 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No parent info for feac2ecd-89f4-4e45-b9fb-68cb0cf353c3; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Nov 22 04:00:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:23.012 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:23.013 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:23.015 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
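
The three lockutils lines above show oslo.concurrency's standard protocol around ProcessMonitor._check_child_processes: acquire the named lock, run the critical section, release, with the waited/held durations logged at DEBUG. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the lock name is taken from the log, the guarded body is hypothetical.

    # Sketch of the acquire/run/release pattern logged by oslo_concurrency.lockutils.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Critical section; in the real agent this walks the monitored child
        # processes and respawns any that died (body is hypothetical here).
        pass

    # Equivalent context-manager form, matching the acquired/released lines:
    with lockutils.lock('_check_child_processes'):
        pass  # held for the duration of the block, then released
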
Nov 22 04:00:23 compute-0 nova_compute[253461]: 2025-11-22 04:00:23.042 253465 DEBUG nova.storage.rbd_utils [None req-de2ab28c-4dad-495d-aa3e-d0baf4b42b4d 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] creating snapshot(5433865b868348dca89ed8740e099e9b) on rbd image(fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Nov 22 04:00:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Nov 22 04:00:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Nov 22 04:00:23 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Nov 22 04:00:23 compute-0 nova_compute[253461]: 2025-11-22 04:00:23.559 253465 DEBUG nova.storage.rbd_utils [None req-de2ab28c-4dad-495d-aa3e-d0baf4b42b4d 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] cloning vms/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk@5433865b868348dca89ed8740e099e9b to images/d0ee314f-72f8-4728-88e7-429472591834 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 04:00:23 compute-0 nova_compute[253461]: 2025-11-22 04:00:23.704 253465 DEBUG nova.storage.rbd_utils [None req-de2ab28c-4dad-495d-aa3e-d0baf4b42b4d 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] flattening images/d0ee314f-72f8-4728-88e7-429472591834 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Nov 22 04:00:24 compute-0 nova_compute[253461]: 2025-11-22 04:00:24.237 253465 DEBUG nova.storage.rbd_utils [None req-de2ab28c-4dad-495d-aa3e-d0baf4b42b4d 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] removing snapshot(5433865b868348dca89ed8740e099e9b) on rbd image(fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Nov 22 04:00:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 342 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 226 op/s
Nov 22 04:00:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Nov 22 04:00:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Nov 22 04:00:24 compute-0 ceph-mon[75011]: osdmap e322: 3 total, 3 up, 3 in
Nov 22 04:00:24 compute-0 ceph-mon[75011]: pgmap v1386: 305 pgs: 305 active+clean; 342 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 226 op/s
Nov 22 04:00:24 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Nov 22 04:00:24 compute-0 nova_compute[253461]: 2025-11-22 04:00:24.534 253465 DEBUG nova.storage.rbd_utils [None req-de2ab28c-4dad-495d-aa3e-d0baf4b42b4d 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] creating snapshot(snap) on rbd image(d0ee314f-72f8-4728-88e7-429472591834) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
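
The nova.storage.rbd_utils lines between 04:00:23 and 04:00:24 trace the RBD-backed live snapshot end to end: create a temporary snap on the instance disk in the vms pool, clone it into the images pool, flatten the clone so it no longer depends on its parent, remove the temporary snap, then snapshot the clone as "snap" for Glance to serve. A minimal sketch of that sequence with the python-rbd bindings, assuming a reachable cluster and the client.openstack credentials from the log; error handling and the Glance registration step are omitted.

    # Sketch of the snapshot -> clone -> flatten -> cleanup sequence logged by
    # nova.storage.rbd_utils. Assumes python3-rados/python3-rbd and a ceph.conf
    # readable by client.openstack; pool, image, and snap names are copied from
    # the log lines above.
    import rados
    import rbd

    SRC_POOL, DST_POOL = 'vms', 'images'
    SRC_IMG = 'fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_disk'
    TMP_SNAP = '5433865b868348dca89ed8740e099e9b'
    DST_IMG = 'd0ee314f-72f8-4728-88e7-429472591834'

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        with cluster.open_ioctx(SRC_POOL) as src, \
                cluster.open_ioctx(DST_POOL) as dst:
            with rbd.Image(src, SRC_IMG) as img:
                img.create_snap(TMP_SNAP)    # point-in-time snap of the disk
                img.protect_snap(TMP_SNAP)   # clones require a protected snap
            rbd.RBD().clone(src, SRC_IMG, TMP_SNAP, dst, DST_IMG)
            with rbd.Image(dst, DST_IMG) as clone:
                clone.flatten()              # detach the clone from its parent
                clone.create_snap('snap')    # the snapshot Glance will serve
            with rbd.Image(src, SRC_IMG) as img:
                img.unprotect_snap(TMP_SNAP)
                img.remove_snap(TMP_SNAP)    # temporary snap no longer needed
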
Nov 22 04:00:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Nov 22 04:00:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Nov 22 04:00:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Nov 22 04:00:25 compute-0 ceph-mon[75011]: osdmap e323: 3 total, 3 up, 3 in
Nov 22 04:00:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e324 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Nov 22 04:00:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Nov 22 04:00:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Nov 22 04:00:26 compute-0 ovn_controller[152691]: 2025-11-22T04:00:26Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:06:dd 10.100.0.4
Nov 22 04:00:26 compute-0 ovn_controller[152691]: 2025-11-22T04:00:26Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:06:dd 10.100.0.4
Nov 22 04:00:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 342 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 119 op/s
Nov 22 04:00:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:26.399 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:00:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:26.400 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:00:26 compute-0 nova_compute[253461]: 2025-11-22 04:00:26.450 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:26 compute-0 ceph-mon[75011]: osdmap e324: 3 total, 3 up, 3 in
Nov 22 04:00:26 compute-0 ceph-mon[75011]: osdmap e325: 3 total, 3 up, 3 in
Nov 22 04:00:26 compute-0 ceph-mon[75011]: pgmap v1390: 305 pgs: 305 active+clean; 342 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 119 op/s
Nov 22 04:00:26 compute-0 nova_compute[253461]: 2025-11-22 04:00:26.603 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:27 compute-0 nova_compute[253461]: 2025-11-22 04:00:27.131 253465 INFO nova.virt.libvirt.driver [None req-de2ab28c-4dad-495d-aa3e-d0baf4b42b4d 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Snapshot image upload complete
Nov 22 04:00:27 compute-0 nova_compute[253461]: 2025-11-22 04:00:27.131 253465 INFO nova.compute.manager [None req-de2ab28c-4dad-495d-aa3e-d0baf4b42b4d 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Took 5.03 seconds to snapshot the instance on the hypervisor.
Nov 22 04:00:27 compute-0 nova_compute[253461]: 2025-11-22 04:00:27.286 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 374 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 7.6 MiB/s wr, 228 op/s
Nov 22 04:00:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:28.403 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:29 compute-0 ceph-mon[75011]: pgmap v1391: 305 pgs: 305 active+clean; 374 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 7.6 MiB/s wr, 228 op/s
Nov 22 04:00:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 426 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 9.4 MiB/s wr, 303 op/s
Nov 22 04:00:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Nov 22 04:00:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Nov 22 04:00:30 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Nov 22 04:00:31 compute-0 ceph-mon[75011]: pgmap v1392: 305 pgs: 305 active+clean; 426 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 9.4 MiB/s wr, 303 op/s
Nov 22 04:00:31 compute-0 ceph-mon[75011]: osdmap e326: 3 total, 3 up, 3 in
Nov 22 04:00:31 compute-0 nova_compute[253461]: 2025-11-22 04:00:31.606 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.192 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "45b06598-5fca-47e2-962e-824755f52a2b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.193 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.217 253465 DEBUG nova.compute.manager [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.290 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 429 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.3 MiB/s wr, 270 op/s
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.336 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.337 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.347 253465 DEBUG nova.virt.hardware [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.348 253465 INFO nova.compute.claims [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:00:32 compute-0 ceph-mon[75011]: pgmap v1394: 305 pgs: 305 active+clean; 429 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 8.3 MiB/s wr, 270 op/s
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.425 253465 DEBUG nova.scheduler.client.report [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Refreshing inventories for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.444 253465 DEBUG nova.scheduler.client.report [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Updating ProviderTree inventory for provider 62e18608-eaef-4f09-8e92-06d41e51d580 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.445 253465 DEBUG nova.compute.provider_tree [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Updating inventory in ProviderTree for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.475 253465 DEBUG nova.scheduler.client.report [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Refreshing aggregate associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.501 253465 DEBUG nova.scheduler.client.report [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Refreshing trait associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 04:00:32 compute-0 nova_compute[253461]: 2025-11-22 04:00:32.583 253465 DEBUG oslo_concurrency.processutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:00:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232616079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.057 253465 DEBUG oslo_concurrency.processutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
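
The subprocess run logged just above (returned 0 in 0.474s) is how the resource tracker samples cluster capacity during an instance claim. A sketch re-running the identical command and reading its JSON output; the command line is verbatim from the log, while the field names are the standard `ceph df --format=json` keys, which the log itself does not show.

    # Re-run the capacity probe exactly as logged by oslo_concurrency.processutils.
    # Assumes the ceph CLI and the client.openstack keyring are available locally.
    import json
    import subprocess

    cmd = ['ceph', 'df', '--format=json', '--id', 'openstack',
           '--conf', '/etc/ceph/ceph.conf']
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    df = json.loads(out)

    # 'stats' carries cluster-wide totals; 'pools' carries per-pool usage.
    total = df['stats']['total_bytes']
    avail = df['stats']['total_avail_bytes']
    print(f'cluster: {avail / total:.1%} free')
    for pool in df['pools']:
        print(pool['name'], pool['stats'])
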
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.064 253465 DEBUG nova.compute.provider_tree [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.084 253465 DEBUG nova.scheduler.client.report [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.121 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.121 253465 DEBUG nova.compute.manager [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.176 253465 DEBUG nova.compute.manager [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.176 253465 DEBUG nova.network.neutron [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.198 253465 INFO nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.216 253465 DEBUG nova.compute.manager [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.299 253465 DEBUG nova.compute.manager [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.300 253465 DEBUG nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.301 253465 INFO nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Creating image(s)
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.322 253465 DEBUG nova.storage.rbd_utils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] rbd image 45b06598-5fca-47e2-962e-824755f52a2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.344 253465 DEBUG nova.storage.rbd_utils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] rbd image 45b06598-5fca-47e2-962e-824755f52a2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.374 253465 DEBUG nova.storage.rbd_utils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] rbd image 45b06598-5fca-47e2-962e-824755f52a2b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.378 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "b5159c7fc25ae6a231e9255be15fae5015e99080" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.379 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "b5159c7fc25ae6a231e9255be15fae5015e99080" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2232616079' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.546 253465 DEBUG nova.policy [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0b246fc3abe648cf93dbdc3bd03c5cbb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a62857fbf8cf446cac9c207ae6750597', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.701 253465 DEBUG nova.compute.manager [req-4b695e8b-cc65-43d6-8d95-9e49d914f3a0 req-123693f7-c3a9-45c9-81ae-1c56d59eb663 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Received event network-changed-b369549e-8f2c-4d18-a73d-818e42cab65d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.702 253465 DEBUG nova.compute.manager [req-4b695e8b-cc65-43d6-8d95-9e49d914f3a0 req-123693f7-c3a9-45c9-81ae-1c56d59eb663 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Refreshing instance network info cache due to event network-changed-b369549e-8f2c-4d18-a73d-818e42cab65d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.703 253465 DEBUG oslo_concurrency.lockutils [req-4b695e8b-cc65-43d6-8d95-9e49d914f3a0 req-123693f7-c3a9-45c9-81ae-1c56d59eb663 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.703 253465 DEBUG oslo_concurrency.lockutils [req-4b695e8b-cc65-43d6-8d95-9e49d914f3a0 req-123693f7-c3a9-45c9-81ae-1c56d59eb663 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.704 253465 DEBUG nova.network.neutron [req-4b695e8b-cc65-43d6-8d95-9e49d914f3a0 req-123693f7-c3a9-45c9-81ae-1c56d59eb663 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Refreshing network info cache for port b369549e-8f2c-4d18-a73d-818e42cab65d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.766 253465 DEBUG nova.virt.libvirt.imagebackend [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Image locations are: [{'url': 'rbd://7adcc38b-6484-5de6-b879-33a0309153df/images/d0ee314f-72f8-4728-88e7-429472591834/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://7adcc38b-6484-5de6-b879-33a0309153df/images/d0ee314f-72f8-4728-88e7-429472591834/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.820 253465 DEBUG oslo_concurrency.lockutils [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Acquiring lock "c003c606-bec0-4664-9493-bbac2142d827" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.820 253465 DEBUG oslo_concurrency.lockutils [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.820 253465 DEBUG oslo_concurrency.lockutils [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Acquiring lock "c003c606-bec0-4664-9493-bbac2142d827-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.821 253465 DEBUG oslo_concurrency.lockutils [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.821 253465 DEBUG oslo_concurrency.lockutils [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.822 253465 INFO nova.compute.manager [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Terminating instance
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.823 253465 DEBUG nova.compute.manager [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.827 253465 DEBUG nova.virt.libvirt.imagebackend [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Selected location: {'url': 'rbd://7adcc38b-6484-5de6-b879-33a0309153df/images/d0ee314f-72f8-4728-88e7-429472591834/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.827 253465 DEBUG nova.storage.rbd_utils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] cloning images/d0ee314f-72f8-4728-88e7-429472591834@snap to None/45b06598-5fca-47e2-962e-824755f52a2b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Nov 22 04:00:33 compute-0 kernel: tapb369549e-8f (unregistering): left promiscuous mode
Nov 22 04:00:33 compute-0 NetworkManager[48916]: <info>  [1763784033.9298] device (tapb369549e-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:00:33 compute-0 ovn_controller[152691]: 2025-11-22T04:00:33Z|00152|binding|INFO|Releasing lport b369549e-8f2c-4d18-a73d-818e42cab65d from this chassis (sb_readonly=0)
Nov 22 04:00:33 compute-0 ovn_controller[152691]: 2025-11-22T04:00:33Z|00153|binding|INFO|Setting lport b369549e-8f2c-4d18-a73d-818e42cab65d down in Southbound
Nov 22 04:00:33 compute-0 ovn_controller[152691]: 2025-11-22T04:00:33Z|00154|binding|INFO|Removing iface tapb369549e-8f ovn-installed in OVS
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.944 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.947 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:33.955 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:06:dd 10.100.0.4'], port_security=['fa:16:3e:ba:06:dd 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c003c606-bec0-4664-9493-bbac2142d827', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2b5d417-be92-4509-961a-e3d3cc2055a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c7aa9a08e9ab49c898386171f066f40e', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bacad159-6eb3-42d5-9393-7fa5c31fe12a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10c4262f-940a-4ebd-9163-43e228216ff2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=b369549e-8f2c-4d18-a73d-818e42cab65d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:00:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:33.956 162689 INFO neutron.agent.ovn.metadata.agent [-] Port b369549e-8f2c-4d18-a73d-818e42cab65d in datapath d2b5d417-be92-4509-961a-e3d3cc2055a5 unbound from our chassis
Nov 22 04:00:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:33.958 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2b5d417-be92-4509-961a-e3d3cc2055a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:00:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:33.959 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[73bf65fa-b15f-4e3a-9450-41eaee57760e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:33.960 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5 namespace which is not needed anymore
Nov 22 04:00:33 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 22 04:00:33 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 14.162s CPU time.
Nov 22 04:00:33 compute-0 systemd-machined[215728]: Machine qemu-13-instance-0000000d terminated.
Nov 22 04:00:33 compute-0 nova_compute[253461]: 2025-11-22 04:00:33.984 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "b5159c7fc25ae6a231e9255be15fae5015e99080" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.605s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.010 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.091 253465 DEBUG nova.objects.instance [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lazy-loading 'migration_context' on Instance uuid 45b06598-5fca-47e2-962e-824755f52a2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.094 253465 INFO nova.virt.libvirt.driver [-] [instance: c003c606-bec0-4664-9493-bbac2142d827] Instance destroyed successfully.
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.094 253465 DEBUG nova.objects.instance [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lazy-loading 'resources' on Instance uuid c003c606-bec0-4664-9493-bbac2142d827 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:00:34 compute-0 neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5[279345]: [NOTICE]   (279349) : haproxy version is 2.8.14-c23fe91
Nov 22 04:00:34 compute-0 neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5[279345]: [NOTICE]   (279349) : path to executable is /usr/sbin/haproxy
Nov 22 04:00:34 compute-0 neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5[279345]: [WARNING]  (279349) : Exiting Master process...
Nov 22 04:00:34 compute-0 neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5[279345]: [ALERT]    (279349) : Current worker (279351) exited with code 143 (Terminated)
Nov 22 04:00:34 compute-0 neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5[279345]: [WARNING]  (279349) : All workers exited. Exiting... (0)
Nov 22 04:00:34 compute-0 systemd[1]: libpod-be849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901.scope: Deactivated successfully.
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.105 253465 DEBUG nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.106 253465 DEBUG nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Ensure instance console log exists: /var/lib/nova/instances/45b06598-5fca-47e2-962e-824755f52a2b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.107 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.107 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.107 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:34 compute-0 podman[279778]: 2025-11-22 04:00:34.110407547 +0000 UTC m=+0.053223373 container died be849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.113 253465 DEBUG nova.virt.libvirt.vif [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:00:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1890259389',display_name='tempest-TestVolumeBackupRestore-server-1890259389',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1890259389',id=13,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA3yp8PHjZNGxHIN49Bfx05O7cy5ohfH4jwWdedDKM0Ty5Fh44zWvCRr7B+x0pJKE+uA4rDYohra35NWqYR1IDyRfGmb6U6v6fBZ+bSCfyE4FrBM3a5ioizkhQNkNvq2uQ==',key_name='tempest-TestVolumeBackupRestore-445028672',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:00:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c7aa9a08e9ab49c898386171f066f40e',ramdisk_id='',reservation_id='r-6babejzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-936651426',owner_user_name='tempest-TestVolumeBackupRestore-936651426-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:00:13Z,user_data=None,user_id='5a6e905db660471e9190f5745dec10b2',uuid=c003c606-bec0-4664-9493-bbac2142d827,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.114 253465 DEBUG nova.network.os_vif_util [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Converting VIF {"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.115 253465 DEBUG nova.network.os_vif_util [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:06:dd,bridge_name='br-int',has_traffic_filtering=True,id=b369549e-8f2c-4d18-a73d-818e42cab65d,network=Network(d2b5d417-be92-4509-961a-e3d3cc2055a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb369549e-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.115 253465 DEBUG os_vif [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:06:dd,bridge_name='br-int',has_traffic_filtering=True,id=b369549e-8f2c-4d18-a73d-818e42cab65d,network=Network(d2b5d417-be92-4509-961a-e3d3cc2055a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb369549e-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
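
[annotation] The DEBUG lines above show nova converting its internal VIF dict into an os-vif VIFOpenVSwitch object and handing it to os_vif.unplug(). A minimal standalone sketch of the same call, with the IDs and names copied from the log; the InstanceInfo values and plugin wiring are assumptions about a host with os-vif and its ovs plugin installed:

    # Sketch: rebuild the logged VIFOpenVSwitch and unplug it via os-vif.
    # IDs/names copied from the log; InstanceInfo fields are assumptions.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the installed plugins, including 'ovs'

    net = network.Network(id='d2b5d417-be92-4509-961a-e3d3cc2055a5',
                          bridge='br-int')
    ovs_vif = vif.VIFOpenVSwitch(
        id='b369549e-8f2c-4d18-a73d-818e42cab65d',
        address='fa:16:3e:ba:06:dd',
        bridge_name='br-int',
        vif_name='tapb369549e-8f',
        plugin='ovs',
        network=net)
    inst = instance_info.InstanceInfo(
        uuid='c003c606-bec0-4664-9493-bbac2142d827',
        name='tempest-TestVolumeBackupRestore-server-1890259389')

    # Dispatches to the 'ovs' plugin, which removes the port from br-int.
    os_vif.unplug(ovs_vif, inst)
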
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.117 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.118 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb369549e-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.120 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.121 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.125 253465 INFO os_vif [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:06:dd,bridge_name='br-int',has_traffic_filtering=True,id=b369549e-8f2c-4d18-a73d-818e42cab65d,network=Network(d2b5d417-be92-4509-961a-e3d3cc2055a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb369549e-8f')
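
[annotation] The DelPortCommand transaction above is what the os-vif ovs plugin issues through ovsdbapp. A rough standalone equivalent, assuming a local OVSDB socket at the conventional path (the endpoint varies per deployment):

    # Rough standalone equivalent of the logged DelPortCommand, using
    # ovsdbapp directly. The OVSDB endpoint below is an assumption.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # if_exists=True makes the delete a no-op when the port is already
    # gone, matching the if_exists=True in the logged command.
    api.del_port('tapb369549e-8f', bridge='br-int',
                 if_exists=True).execute(check_error=True)
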
Nov 22 04:00:34 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901-userdata-shm.mount: Deactivated successfully.
Nov 22 04:00:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-43e7922e68bc475cf39daa7bc45f422e47047b9a016b0ef765b9d150a2993516-merged.mount: Deactivated successfully.
Nov 22 04:00:34 compute-0 podman[279778]: 2025-11-22 04:00:34.16891896 +0000 UTC m=+0.111734796 container cleanup be849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 04:00:34 compute-0 systemd[1]: libpod-conmon-be849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901.scope: Deactivated successfully.
Nov 22 04:00:34 compute-0 podman[279855]: 2025-11-22 04:00:34.267379903 +0000 UTC m=+0.074893281 container remove be849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:00:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:34.278 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ed9010af-f554-4847-a9e2-dd056f862f94]: (4, ('Sat Nov 22 04:00:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5 (be849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901)\nbe849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901\nSat Nov 22 04:00:34 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5 (be849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901)\nbe849190cf4dd39d62e264a335d310d05855993cee2aff47c84460b4015d7901\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
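
[annotation] The privsep reply above carries the stdout of the privileged helper that stops and then deletes the per-network haproxy container. A minimal sketch of those two podman invocations from Python, with the container name copied from the log and error handling reduced to check=True:

    # Sketch of the stop + delete sequence reported in the privsep reply.
    # Container name copied from the log; uses the podman CLI directly.
    import subprocess

    name = 'neutron-haproxy-ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5'
    subprocess.run(['podman', 'stop', name], check=True)  # SIGTERM, then kill
    subprocess.run(['podman', 'rm', name], check=True)    # remove the container
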
Nov 22 04:00:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:34.280 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3d5eec46-baf4-4f1f-a61c-23a0f15c5a79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:34.281 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2b5d417-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.283 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:34 compute-0 kernel: tapd2b5d417-b0: left promiscuous mode
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.305 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.307 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 430 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.8 MiB/s wr, 245 op/s
Nov 22 04:00:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:34.309 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[006db529-8fa5-43e6-b499-c68df7c4ff12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:34.322 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8b343cf3-0575-471f-9910-cda9d4cac529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:34.323 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[311fa9a0-93f8-4327-a07b-18fa7301db14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:34.338 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[44a69b89-061e-41c4-a523-691c733d6a76]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 426527, 'reachable_time': 41071, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279870, 'error': None, 'target': 'ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:34.341 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:00:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:34.342 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[00f72485-acf8-4f3a-b944-5ab903337ef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:34 compute-0 systemd[1]: run-netns-ovnmeta\x2dd2b5d417\x2dbe92\x2d4509\x2d961a\x2de3d3cc2055a5.mount: Deactivated successfully.
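
[annotation] remove_netns above deletes the ovnmeta-* namespace, and the systemd line is the matching /run/netns bind mount going away. Roughly what the privileged helper does, assuming pyroute2 (which neutron's ip_lib is built on):

    # Roughly what neutron's privileged remove_netns helper does,
    # assuming pyroute2; namespace name copied from the log.
    from pyroute2 import netns

    ns = 'ovnmeta-d2b5d417-be92-4509-961a-e3d3cc2055a5'
    if ns in netns.listnetns():   # skip quietly if it is already gone
        netns.remove(ns)          # unmounts and unlinks /run/netns/<ns>
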
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.421 253465 DEBUG nova.compute.manager [req-f36de14a-e3b6-4e42-910f-ddec83d682ad req-7d33c603-e61d-48a5-bc18-884934557135 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Received event network-vif-unplugged-b369549e-8f2c-4d18-a73d-818e42cab65d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.422 253465 DEBUG oslo_concurrency.lockutils [req-f36de14a-e3b6-4e42-910f-ddec83d682ad req-7d33c603-e61d-48a5-bc18-884934557135 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "c003c606-bec0-4664-9493-bbac2142d827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.422 253465 DEBUG oslo_concurrency.lockutils [req-f36de14a-e3b6-4e42-910f-ddec83d682ad req-7d33c603-e61d-48a5-bc18-884934557135 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.423 253465 DEBUG oslo_concurrency.lockutils [req-f36de14a-e3b6-4e42-910f-ddec83d682ad req-7d33c603-e61d-48a5-bc18-884934557135 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.423 253465 DEBUG nova.compute.manager [req-f36de14a-e3b6-4e42-910f-ddec83d682ad req-7d33c603-e61d-48a5-bc18-884934557135 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] No waiting events found dispatching network-vif-unplugged-b369549e-8f2c-4d18-a73d-818e42cab65d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.424 253465 DEBUG nova.compute.manager [req-f36de14a-e3b6-4e42-910f-ddec83d682ad req-7d33c603-e61d-48a5-bc18-884934557135 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Received event network-vif-unplugged-b369549e-8f2c-4d18-a73d-818e42cab65d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
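
[annotation] The lockutils lines above show the per-instance "<uuid>-events" lock taken and released around pop_instance_event. A minimal sketch of the same oslo.concurrency pattern, with nova's event map reduced to a plain dict:

    # Minimal sketch of the oslo.concurrency lock pattern logged above.
    from oslo_concurrency import lockutils

    events = {}  # stand-in for nova's per-instance pending-event map

    def pop_instance_event(instance_uuid, event_name):
        # Same lock naming scheme as the log: "<instance-uuid>-events".
        with lockutils.lock('%s-events' % instance_uuid):
            return events.get(instance_uuid, {}).pop(event_name, None)

    pop_instance_event('c003c606-bec0-4664-9493-bbac2142d827',
                       'network-vif-unplugged-b369549e-8f2c-4d18-a73d-818e42cab65d')
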
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.447 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:34 compute-0 ceph-mon[75011]: pgmap v1395: 305 pgs: 305 active+clean; 430 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.8 MiB/s wr, 245 op/s
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.616 253465 INFO nova.virt.libvirt.driver [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Deleting instance files /var/lib/nova/instances/c003c606-bec0-4664-9493-bbac2142d827_del
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.618 253465 INFO nova.virt.libvirt.driver [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Deletion of /var/lib/nova/instances/c003c606-bec0-4664-9493-bbac2142d827_del complete
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.681 253465 INFO nova.compute.manager [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Took 0.86 seconds to destroy the instance on the hypervisor.
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.682 253465 DEBUG oslo.service.loopingcall [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.683 253465 DEBUG nova.compute.manager [-] [instance: c003c606-bec0-4664-9493-bbac2142d827] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.684 253465 DEBUG nova.network.neutron [-] [instance: c003c606-bec0-4664-9493-bbac2142d827] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:00:34 compute-0 nova_compute[253461]: 2025-11-22 04:00:34.907 253465 DEBUG nova.network.neutron [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Successfully created port: 2f6b03ee-33c1-4a13-813c-b794d61056dd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:00:35 compute-0 podman[279872]: 2025-11-22 04:00:35.407191314 +0000 UTC m=+0.087344120 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.621 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: c003c606-bec0-4664-9493-bbac2142d827] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.621 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
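
[annotation] _heal_instance_info_cache is an oslo.service periodic task; instances mid-delete or mid-build are skipped, as the two lines above show. A toy task registered the same way (the 60-second spacing is illustrative, not nova's setting):

    # Toy oslo.service periodic task, registered the way ComputeManager's
    # _heal_instance_info_cache is. The spacing value is illustrative.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_info_cache(self, context):
            # Real code skips instances that are building or deleting,
            # exactly as the two log lines above report.
            print('healing instance info cache')

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # normally driven by a timer loop
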
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.696 253465 DEBUG nova.network.neutron [req-4b695e8b-cc65-43d6-8d95-9e49d914f3a0 req-123693f7-c3a9-45c9-81ae-1c56d59eb663 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Updated VIF entry in instance network info cache for port b369549e-8f2c-4d18-a73d-818e42cab65d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.696 253465 DEBUG nova.network.neutron [req-4b695e8b-cc65-43d6-8d95-9e49d914f3a0 req-123693f7-c3a9-45c9-81ae-1c56d59eb663 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Updating instance_info_cache with network_info: [{"id": "b369549e-8f2c-4d18-a73d-818e42cab65d", "address": "fa:16:3e:ba:06:dd", "network": {"id": "d2b5d417-be92-4509-961a-e3d3cc2055a5", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-1052380862-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c7aa9a08e9ab49c898386171f066f40e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb369549e-8f", "ovs_interfaceid": "b369549e-8f2c-4d18-a73d-818e42cab65d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
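
[annotation] The cached entry above is the same port b369549e-... as before, but its floating_ips list is now empty, reflecting the disassociation that happens during delete. Extracting fixed and floating addresses from such a network_info blob is plain JSON walking; a small sketch over a cut-down copy of the entry:

    # Walk a nova network_info entry (as logged above) and list addresses.
    # The blob is cut down to just the fields the loop touches.
    import json

    network_info = json.loads('''
    [{"id": "b369549e-8f2c-4d18-a73d-818e42cab65d",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.4",
        "floating_ips": []}]}]}}]
    ''')

    for entry in network_info:
        for subnet in entry['network']['subnets']:
            for ip in subnet['ips']:
                fips = [f['address'] for f in ip.get('floating_ips', [])]
                print(entry['id'], ip['address'], fips or 'no floating IP')
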
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.751 253465 DEBUG oslo_concurrency.lockutils [req-4b695e8b-cc65-43d6-8d95-9e49d914f3a0 req-123693f7-c3a9-45c9-81ae-1c56d59eb663 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-c003c606-bec0-4664-9493-bbac2142d827" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.803 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.804 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquired lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.804 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.804 253465 DEBUG nova.objects.instance [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lazy-loading 'info_cache' on Instance uuid fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.830 253465 DEBUG nova.network.neutron [-] [instance: c003c606-bec0-4664-9493-bbac2142d827] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.853 253465 INFO nova.compute.manager [-] [instance: c003c606-bec0-4664-9493-bbac2142d827] Took 1.17 seconds to deallocate network for instance.
Nov 22 04:00:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:35 compute-0 nova_compute[253461]: 2025-11-22 04:00:35.996 253465 DEBUG nova.network.neutron [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Successfully updated port: 2f6b03ee-33c1-4a13-813c-b794d61056dd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.019 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "refresh_cache-45b06598-5fca-47e2-962e-824755f52a2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.020 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquired lock "refresh_cache-45b06598-5fca-47e2-962e-824755f52a2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.020 253465 DEBUG nova.network.neutron [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.022 253465 INFO nova.compute.manager [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Took 0.17 seconds to detach 1 volumes for instance.
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.055 253465 DEBUG nova.compute.manager [req-cb75fb2d-2b44-4865-8cc5-ced4b43b99e7 req-299adacb-a41b-4a94-b06f-d8ea65e8a2d2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Received event network-changed-2f6b03ee-33c1-4a13-813c-b794d61056dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.055 253465 DEBUG nova.compute.manager [req-cb75fb2d-2b44-4865-8cc5-ced4b43b99e7 req-299adacb-a41b-4a94-b06f-d8ea65e8a2d2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Refreshing instance network info cache due to event network-changed-2f6b03ee-33c1-4a13-813c-b794d61056dd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.056 253465 DEBUG oslo_concurrency.lockutils [req-cb75fb2d-2b44-4865-8cc5-ced4b43b99e7 req-299adacb-a41b-4a94-b06f-d8ea65e8a2d2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-45b06598-5fca-47e2-962e-824755f52a2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.088 253465 DEBUG oslo_concurrency.lockutils [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.089 253465 DEBUG oslo_concurrency.lockutils [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.175 253465 DEBUG oslo_concurrency.processutils [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:00:36
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['vms', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', '.mgr', 'default.rgw.control']
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 430 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 204 op/s
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:00:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.515 253465 DEBUG nova.compute.manager [req-b35fe840-9978-41e1-ab61-5fb7633e29a1 req-bd54584c-4d3e-4130-b633-0f7380b5707f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Received event network-vif-plugged-b369549e-8f2c-4d18-a73d-818e42cab65d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.516 253465 DEBUG oslo_concurrency.lockutils [req-b35fe840-9978-41e1-ab61-5fb7633e29a1 req-bd54584c-4d3e-4130-b633-0f7380b5707f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "c003c606-bec0-4664-9493-bbac2142d827-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.517 253465 DEBUG oslo_concurrency.lockutils [req-b35fe840-9978-41e1-ab61-5fb7633e29a1 req-bd54584c-4d3e-4130-b633-0f7380b5707f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.517 253465 DEBUG oslo_concurrency.lockutils [req-b35fe840-9978-41e1-ab61-5fb7633e29a1 req-bd54584c-4d3e-4130-b633-0f7380b5707f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.518 253465 DEBUG nova.compute.manager [req-b35fe840-9978-41e1-ab61-5fb7633e29a1 req-bd54584c-4d3e-4130-b633-0f7380b5707f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] No waiting events found dispatching network-vif-plugged-b369549e-8f2c-4d18-a73d-818e42cab65d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.518 253465 WARNING nova.compute.manager [req-b35fe840-9978-41e1-ab61-5fb7633e29a1 req-bd54584c-4d3e-4130-b633-0f7380b5707f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Received unexpected event network-vif-plugged-b369549e-8f2c-4d18-a73d-818e42cab65d for instance with vm_state deleted and task_state None.
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.519 253465 DEBUG nova.compute.manager [req-b35fe840-9978-41e1-ab61-5fb7633e29a1 req-bd54584c-4d3e-4130-b633-0f7380b5707f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: c003c606-bec0-4664-9493-bbac2142d827] Received event network-vif-deleted-b369549e-8f2c-4d18-a73d-818e42cab65d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.548 253465 DEBUG nova.network.neutron [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:00:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:00:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1212874508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.614 253465 DEBUG oslo_concurrency.processutils [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
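
[annotation] The ceph df probe above is how the RBD image backend sizes its DISK_GB inventory; here it returned in 0.439s. A hedged re-run of the same command through oslo.concurrency, parsing the standard JSON fields:

    # Hedged re-run of the logged "ceph df" probe via oslo.concurrency.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')

    stats = json.loads(out)['stats']
    # total_bytes / total_avail_bytes are standard "ceph df" JSON fields.
    print(stats['total_bytes'], stats['total_avail_bytes'])
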
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.621 253465 DEBUG nova.compute.provider_tree [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.645 253465 DEBUG nova.scheduler.client.report [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
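
[annotation] The inventory record above determines placement capacity as (total - reserved) * allocation_ratio per resource class. A quick check against the logged values:

    # Effective capacity implied by the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio, per resource class.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
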
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.672 253465 DEBUG oslo_concurrency.lockutils [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.693 253465 INFO nova.scheduler.client.report [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Deleted allocations for instance c003c606-bec0-4664-9493-bbac2142d827
Nov 22 04:00:36 compute-0 nova_compute[253461]: 2025-11-22 04:00:36.754 253465 DEBUG oslo_concurrency.lockutils [None req-f8bdb5ec-3f84-4a76-a0ad-cbbaf8692222 5a6e905db660471e9190f5745dec10b2 c7aa9a08e9ab49c898386171f066f40e - - default default] Lock "c003c606-bec0-4664-9493-bbac2142d827" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.290 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Nov 22 04:00:37 compute-0 ceph-mon[75011]: pgmap v1396: 305 pgs: 305 active+clean; 430 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 5.7 MiB/s wr, 204 op/s
Nov 22 04:00:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1212874508' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:00:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Nov 22 04:00:37 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.575 253465 DEBUG nova.network.neutron [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Updating instance_info_cache with network_info: [{"id": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "address": "fa:16:3e:c1:e4:3f", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6b03ee-33", "ovs_interfaceid": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.860 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Updating instance_info_cache with network_info: [{"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "address": "fa:16:3e:44:e3:cf", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8a4fb-a5", "ovs_interfaceid": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.887 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Releasing lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.888 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.889 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.890 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.890 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Releasing lock "refresh_cache-45b06598-5fca-47e2-962e-824755f52a2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.890 253465 DEBUG nova.compute.manager [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Instance network_info: |[{"id": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "address": "fa:16:3e:c1:e4:3f", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6b03ee-33", "ovs_interfaceid": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.891 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.891 253465 DEBUG oslo_concurrency.lockutils [req-cb75fb2d-2b44-4865-8cc5-ced4b43b99e7 req-299adacb-a41b-4a94-b06f-d8ea65e8a2d2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-45b06598-5fca-47e2-962e-824755f52a2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.891 253465 DEBUG nova.network.neutron [req-cb75fb2d-2b44-4865-8cc5-ced4b43b99e7 req-299adacb-a41b-4a94-b06f-d8ea65e8a2d2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Refreshing network info cache for port 2f6b03ee-33c1-4a13-813c-b794d61056dd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.895 253465 DEBUG nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Start _get_guest_xml network_info=[{"id": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "address": "fa:16:3e:c1:e4:3f", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6b03ee-33", "ovs_interfaceid": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T04:00:21Z,direct_url=<?>,disk_format='raw',id=d0ee314f-72f8-4728-88e7-429472591834,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-271021257',owner='a62857fbf8cf446cac9c207ae6750597',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T04:00:26Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'd0ee314f-72f8-4728-88e7-429472591834'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.900 253465 WARNING nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.905 253465 DEBUG nova.virt.libvirt.host [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.905 253465 DEBUG nova.virt.libvirt.host [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.911 253465 DEBUG nova.virt.libvirt.host [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.912 253465 DEBUG nova.virt.libvirt.host [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
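
[annotation] The two probes above explain the host's situation: no CPU controller in the cgroups v1 hierarchy, but one in the v2 unified hierarchy. The v2 check amounts to reading one file; a sketch (the path is the standard unified-hierarchy mount):

    # The cgroups-v2 half of the probe above reduces to checking whether
    # 'cpu' appears in the unified hierarchy's controller list.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller():
        path = Path('/sys/fs/cgroup/cgroup.controllers')
        try:
            return 'cpu' in path.read_text().split()
        except OSError:       # file absent: host is not on cgroups v2
            return False

    print(has_cgroupsv2_cpu_controller())
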
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.912 253465 DEBUG nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.912 253465 DEBUG nova.virt.hardware [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-11-22T04:00:21Z,direct_url=<?>,disk_format='raw',id=d0ee314f-72f8-4728-88e7-429472591834,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-271021257',owner='a62857fbf8cf446cac9c207ae6750597',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-11-22T04:00:26Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.913 253465 DEBUG nova.virt.hardware [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.913 253465 DEBUG nova.virt.hardware [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.914 253465 DEBUG nova.virt.hardware [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.914 253465 DEBUG nova.virt.hardware [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.914 253465 DEBUG nova.virt.hardware [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.914 253465 DEBUG nova.virt.hardware [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.915 253465 DEBUG nova.virt.hardware [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.915 253465 DEBUG nova.virt.hardware [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.915 253465 DEBUG nova.virt.hardware [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.916 253465 DEBUG nova.virt.hardware [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
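
The ten nova.virt.hardware DEBUG lines above trace the CPU-topology negotiation for this guest: neither the m1.nano flavor nor the snapshot image sets limits or preferences (all 0:0:0), so the per-dimension cap falls back to 65536, and the only topology whose product matches 1 vCPU is sockets=1, cores=1, threads=1. A minimal Python sketch of that enumeration step, with illustrative names rather than nova's real helpers in nova/virt/hardware.py:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product equals the
        # vCPU count and which respect the per-dimension limits, mirroring
        # the "Build topologies ... Got 1 possible topologies" lines above.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the logged result
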
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.919 253465 DEBUG oslo_concurrency.processutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.952 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.953 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.953 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
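
The "compute_resources" acquire/release pairs here (and again further down) are oslo.concurrency's named-semaphore pattern: the resource tracker wraps its critical sections so periodic audits and in-flight spawns cannot interleave. A sketch of the decorator that produces exactly this acquired/waited/held logging, assuming oslo.concurrency is installed; the function body is elided:

    from oslo_concurrency import lockutils

    # All callers decorated with the same name serialize on one semaphore;
    # the "waited 0.001s" / "held 0.000s" figures in the log come from the
    # timing hooks inside this decorator.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # nova's version prunes stale compute-node cache entries
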
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.953 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:00:37 compute-0 nova_compute[253461]: 2025-11-22 04:00:37.954 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 430 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 28 KiB/s wr, 59 op/s
Nov 22 04:00:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:00:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2818960288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:00:38 compute-0 ceph-mon[75011]: osdmap e327: 3 total, 3 up, 3 in
Nov 22 04:00:38 compute-0 ceph-mon[75011]: pgmap v1398: 305 pgs: 305 active+clean; 430 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 28 KiB/s wr, 59 op/s
Nov 22 04:00:38 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2818960288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:00:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.396 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1237413640' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.420 253465 DEBUG oslo_concurrency.processutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
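
Both worker threads here shell out to the ceph CLI: the resource audit runs "ceph df" to size the RBD pool, while the spawn path runs "ceph mon dump" to learn monitor endpoints, which reappear below as the <host name="192.168.122.100" port="6789"/> elements of the guest XML. A hedged sketch of the same probe; the "mons"/"public_addr" JSON layout is an assumption about ceph's mon dump output, not something shown in this log:

    import json
    import subprocess

    # Same command and credentials as the logged invocation.
    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    monmap = json.loads(out)
    for mon in monmap.get("mons", []):
        # e.g. "192.168.122.100:6789/0" -> host/port for the libvirt disk XML
        print(mon["name"], mon.get("public_addr"))
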
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.456 253465 DEBUG nova.storage.rbd_utils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] rbd image 45b06598-5fca-47e2-962e-824755f52a2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.461 253465 DEBUG oslo_concurrency.processutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.569 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.570 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.750 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.751 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4276MB free_disk=59.94248962402344GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.751 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.752 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.820 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.820 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 45b06598-5fca-47e2-962e-824755f52a2b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.821 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.821 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.885 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:00:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2260690091' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.934 253465 DEBUG oslo_concurrency.processutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.937 253465 DEBUG nova.virt.libvirt.vif [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:00:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1926980836',display_name='tempest-TestStampPattern-server-1926980836',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1926980836',id=14,image_ref='d0ee314f-72f8-4728-88e7-429472591834',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYgn9CTDvmfK+9lwizGtXeEZlSZuA1AJsMHGR/6t8oyy2KLeA+NyxTmeE6fCgDUhF1kETDxpPXjj8wfb8eB/z4sjIcgn3I98Rj3v+7eP88Wa0lihBTXU++d2vPdWMcG3w==',key_name='tempest-TestStampPattern-116986255',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a62857fbf8cf446cac9c207ae6750597',ramdisk_id='',reservation_id='r-ocxnc1nt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618',image_min_disk='1',image_min_ram='0',image_owner_id='a62857fbf8cf446cac9c207ae6750597',image_owner_project_name='tempest-TestStampPattern-1055115370',image_owner_user_name='tempest-TestStampPattern-1055115370-project-member',image_user_id='0b246fc3abe648cf93dbdc3bd03c5cbb',network_allocated='True',owner_project_name='tempest-TestStampPattern-1055115370',owner_user_name='tempest-TestStampPattern-1055115370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:00:33Z,user_data=None,user_id='0b246fc3abe648cf93dbdc3bd03c5cbb',uuid=45b06598-5fca-47e2-962e-824755f52a2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "address": "fa:16:3e:c1:e4:3f", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6b03ee-33", "ovs_interfaceid": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.937 253465 DEBUG nova.network.os_vif_util [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Converting VIF {"id": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "address": "fa:16:3e:c1:e4:3f", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6b03ee-33", "ovs_interfaceid": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.939 253465 DEBUG nova.network.os_vif_util [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:e4:3f,bridge_name='br-int',has_traffic_filtering=True,id=2f6b03ee-33c1-4a13-813c-b794d61056dd,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6b03ee-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.940 253465 DEBUG nova.objects.instance [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lazy-loading 'pci_devices' on Instance uuid 45b06598-5fca-47e2-962e-824755f52a2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.969 253465 DEBUG nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:00:38 compute-0 nova_compute[253461]:   <uuid>45b06598-5fca-47e2-962e-824755f52a2b</uuid>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   <name>instance-0000000e</name>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <nova:name>tempest-TestStampPattern-server-1926980836</nova:name>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:00:37</nova:creationTime>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:00:38 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:00:38 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:00:38 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:00:38 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:00:38 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:00:38 compute-0 nova_compute[253461]:         <nova:user uuid="0b246fc3abe648cf93dbdc3bd03c5cbb">tempest-TestStampPattern-1055115370-project-member</nova:user>
Nov 22 04:00:38 compute-0 nova_compute[253461]:         <nova:project uuid="a62857fbf8cf446cac9c207ae6750597">tempest-TestStampPattern-1055115370</nova:project>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="d0ee314f-72f8-4728-88e7-429472591834"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:00:38 compute-0 nova_compute[253461]:         <nova:port uuid="2f6b03ee-33c1-4a13-813c-b794d61056dd">
Nov 22 04:00:38 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <system>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <entry name="serial">45b06598-5fca-47e2-962e-824755f52a2b</entry>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <entry name="uuid">45b06598-5fca-47e2-962e-824755f52a2b</entry>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     </system>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   <os>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   </os>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   <features>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   </features>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/45b06598-5fca-47e2-962e-824755f52a2b_disk">
Nov 22 04:00:38 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       </source>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:00:38 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/45b06598-5fca-47e2-962e-824755f52a2b_disk.config">
Nov 22 04:00:38 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       </source>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:00:38 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:c1:e4:3f"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <target dev="tap2f6b03ee-33"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/45b06598-5fca-47e2-962e-824755f52a2b/console.log" append="off"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <video>
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     </video>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <input type="keyboard" bus="usb"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:00:38 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:00:38 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:00:38 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:00:38 compute-0 nova_compute[253461]: </domain>
Nov 22 04:00:38 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
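
The XML above is what nova hands to libvirt next, and earlier decisions all surface in it: the host-model CPU with the 1:1:1 topology chosen at 04:00:37, the two rbd-backed disks using the monitor address obtained from "ceph mon dump", and the tap2f6b03ee-33 interface. A minimal sketch of defining and booting a domain from such XML with the libvirt Python bindings; nova's real spawn path goes through its own Guest wrapper and interleaves VIF plugging and event waits:

    import libvirt

    conn = libvirt.open("qemu:///system")   # local system hypervisor
    with open("domain.xml") as f:           # the XML logged above, saved to a file
        xml = f.read()
    dom = conn.defineXML(xml)               # persist the domain definition
    dom.create()                            # boot the guest
    print(dom.name(), "running with id", dom.ID())
    conn.close()
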
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.969 253465 DEBUG nova.compute.manager [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Preparing to wait for external event network-vif-plugged-2f6b03ee-33c1-4a13-813c-b794d61056dd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.970 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "45b06598-5fca-47e2-962e-824755f52a2b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.970 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.970 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
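
Note the ordering: nova registers the network-vif-plugged waiter before it actually plugs the VIF, so neutron's callback cannot race past an unprepared listener. A generic prepare-then-wait sketch of that ordering (plain threading, not nova's eventlet-based implementation; the 300 s figure is nova's default vif_plugging_timeout):

    import threading

    events = {}

    def prepare_for_instance_event(name):
        # Register the waiter first -- mirrors the "Preparing to wait for
        # external event" line above, which precedes the plug below.
        return events.setdefault(name, threading.Event())

    def deliver_event(name):
        events[name].set()  # what the neutron->nova notification does

    waiter = prepare_for_instance_event(
        "network-vif-plugged-2f6b03ee-33c1-4a13-813c-b794d61056dd")
    # ... plug the VIF and launch the domain here ...
    waiter.wait(timeout=300)
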
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.971 253465 DEBUG nova.virt.libvirt.vif [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:00:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-1926980836',display_name='tempest-TestStampPattern-server-1926980836',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1926980836',id=14,image_ref='d0ee314f-72f8-4728-88e7-429472591834',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYgn9CTDvmfK+9lwizGtXeEZlSZuA1AJsMHGR/6t8oyy2KLeA+NyxTmeE6fCgDUhF1kETDxpPXjj8wfb8eB/z4sjIcgn3I98Rj3v+7eP88Wa0lihBTXU++d2vPdWMcG3w==',key_name='tempest-TestStampPattern-116986255',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a62857fbf8cf446cac9c207ae6750597',ramdisk_id='',reservation_id='r-ocxnc1nt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618',image_min_disk='1',image_min_ram='0',image_owner_id='a62857fbf8cf446cac9c207ae6750597',image_owner_project_name='tempest-TestStampPattern-1055115370',image_owner_user_name='tempest-TestStampPattern-1055115370-project-member',image_user_id='0b246fc3abe648cf93dbdc3bd03c5cbb',network_allocated='True',owner_project_name='tempest-TestStampPattern-1055115370',owner_user_name='tempest-TestStampPattern-1055115370-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:00:33Z,user_data=None,user_id='0b246fc3abe648cf93dbdc3bd03c5cbb',uuid=45b06598-5fca-47e2-962e-824755f52a2b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "address": "fa:16:3e:c1:e4:3f", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6b03ee-33", "ovs_interfaceid": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.971 253465 DEBUG nova.network.os_vif_util [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Converting VIF {"id": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "address": "fa:16:3e:c1:e4:3f", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6b03ee-33", "ovs_interfaceid": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.972 253465 DEBUG nova.network.os_vif_util [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:e4:3f,bridge_name='br-int',has_traffic_filtering=True,id=2f6b03ee-33c1-4a13-813c-b794d61056dd,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6b03ee-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.972 253465 DEBUG os_vif [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:e4:3f,bridge_name='br-int',has_traffic_filtering=True,id=2f6b03ee-33c1-4a13-813c-b794d61056dd,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6b03ee-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.973 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.973 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.974 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.976 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.976 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f6b03ee-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.977 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2f6b03ee-33, col_values=(('external_ids', {'iface-id': '2f6b03ee-33c1-4a13-813c-b794d61056dd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:e4:3f', 'vm-uuid': '45b06598-5fca-47e2-962e-824755f52a2b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.978 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:38 compute-0 NetworkManager[48916]: <info>  [1763784038.9797] manager: (tap2f6b03ee-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.982 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.985 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:38 compute-0 nova_compute[253461]: 2025-11-22 04:00:38.986 253465 INFO os_vif [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:e4:3f,bridge_name='br-int',has_traffic_filtering=True,id=2f6b03ee-33c1-4a13-813c-b794d61056dd,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6b03ee-33')
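
The ovsdbapp transaction above (AddBridgeCommand, AddPortCommand, DbSetCommand) is how os-vif attaches the tap device to br-int and stamps the Interface row with the iface-id that ovn-controller matches when it claims the port at 04:00:40. The ovs-vsctl equivalent, wrapped in Python and using the exact values from the log (a sketch of the effect, not os-vif's code path):

    import subprocess

    subprocess.check_call(["ovs-vsctl", "--may-exist", "add-br", "br-int"])
    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap2f6b03ee-33",
        "--", "set", "Interface", "tap2f6b03ee-33",
        # The external_ids below are what OVN uses to bind the logical port.
        "external_ids:iface-id=2f6b03ee-33c1-4a13-813c-b794d61056dd",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:c1:e4:3f",
        "external_ids:vm-uuid=45b06598-5fca-47e2-962e-824755f52a2b",
    ])
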
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.037 253465 DEBUG nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.037 253465 DEBUG nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.037 253465 DEBUG nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No VIF found with MAC fa:16:3e:c1:e4:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.038 253465 INFO nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Using config drive
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.057 253465 DEBUG nova.storage.rbd_utils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] rbd image 45b06598-5fca-47e2-962e-824755f52a2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:00:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:00:39 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2887308632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.348 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.356 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.380 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
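
The inventory reported to placement becomes schedulable capacity as (total - reserved) * allocation_ratio, so this 8-vCPU, 7679 MB, 59 GB host advertises 32 VCPU, 7167 MB of RAM, and about 52.2 GB of allocatable disk. A quick check with the logged numbers:

    # Capacity formula applied by placement: (total - reserved) * allocation_ratio.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        print(rc, round((v["total"] - v["reserved"]) * v["allocation_ratio"], 1))
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
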
Nov 22 04:00:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.431 253465 DEBUG nova.network.neutron [req-cb75fb2d-2b44-4865-8cc5-ced4b43b99e7 req-299adacb-a41b-4a94-b06f-d8ea65e8a2d2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Updated VIF entry in instance network info cache for port 2f6b03ee-33c1-4a13-813c-b794d61056dd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:00:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1237413640' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.431 253465 DEBUG nova.network.neutron [req-cb75fb2d-2b44-4865-8cc5-ced4b43b99e7 req-299adacb-a41b-4a94-b06f-d8ea65e8a2d2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Updating instance_info_cache with network_info: [{"id": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "address": "fa:16:3e:c1:e4:3f", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6b03ee-33", "ovs_interfaceid": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:00:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2260690091' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:00:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2887308632' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.468 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.468 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.483 253465 DEBUG oslo_concurrency.lockutils [req-cb75fb2d-2b44-4865-8cc5-ced4b43b99e7 req-299adacb-a41b-4a94-b06f-d8ea65e8a2d2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-45b06598-5fca-47e2-962e-824755f52a2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:00:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Nov 22 04:00:39 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Nov 22 04:00:39 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.565 253465 INFO nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Creating config drive at /var/lib/nova/instances/45b06598-5fca-47e2-962e-824755f52a2b/disk.config
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.571 253465 DEBUG oslo_concurrency.processutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45b06598-5fca-47e2-962e-824755f52a2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqeo35l6b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.698 253465 DEBUG oslo_concurrency.processutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45b06598-5fca-47e2-962e-824755f52a2b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqeo35l6b" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.725 253465 DEBUG nova.storage.rbd_utils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] rbd image 45b06598-5fca-47e2-962e-824755f52a2b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:00:39 compute-0 nova_compute[253461]: 2025-11-22 04:00:39.728 253465 DEBUG oslo_concurrency.processutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/45b06598-5fca-47e2-962e-824755f52a2b/disk.config 45b06598-5fca-47e2-962e-824755f52a2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.057 253465 DEBUG oslo_concurrency.processutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/45b06598-5fca-47e2-962e-824755f52a2b/disk.config 45b06598-5fca-47e2-962e-824755f52a2b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.329s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.058 253465 INFO nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Deleting local config drive /var/lib/nova/instances/45b06598-5fca-47e2-962e-824755f52a2b/disk.config because it was imported into RBD.
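
Config-drive handling on this RBD-backed deployment is the three-step dance visible above: mkisofs packs the metadata directory into an ISO9660 image labeled config-2 (the label cloud-init probes for), rbd import pushes it into the vms pool as a format-2 image, and the local file is deleted; the guest then reads it over the rbd protocol via the disk.config <disk> element in the XML. A sketch of the two commands with the logged arguments; the content-directory name is a hypothetical stand-in for nova's private tmp dir:

    import subprocess

    # 1) Build the ISO: flags and publisher string copied from the log.
    subprocess.check_call([
        "/usr/bin/mkisofs", "-o", "disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "configdrive_contents/",  # illustrative; the log used /tmp/tmpqeo35l6b
    ])

    # 2) Import it into Ceph, after which the local copy can be removed.
    subprocess.check_call([
        "rbd", "import", "--pool", "vms", "disk.config",
        "45b06598-5fca-47e2-962e-824755f52a2b_disk.config",
        "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
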
Nov 22 04:00:40 compute-0 kernel: tap2f6b03ee-33: entered promiscuous mode
Nov 22 04:00:40 compute-0 NetworkManager[48916]: <info>  [1763784040.1156] manager: (tap2f6b03ee-33): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Nov 22 04:00:40 compute-0 ovn_controller[152691]: 2025-11-22T04:00:40Z|00155|binding|INFO|Claiming lport 2f6b03ee-33c1-4a13-813c-b794d61056dd for this chassis.
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.116 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:40 compute-0 ovn_controller[152691]: 2025-11-22T04:00:40Z|00156|binding|INFO|2f6b03ee-33c1-4a13-813c-b794d61056dd: Claiming fa:16:3e:c1:e4:3f 10.100.0.10
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.128 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:e4:3f 10.100.0.10'], port_security=['fa:16:3e:c1:e4:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '45b06598-5fca-47e2-962e-824755f52a2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a62857fbf8cf446cac9c207ae6750597', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b0832384-6d69-4b2e-a587-602048007135', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f9f8761-3ac6-4a72-804a-92d1a0df209a, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=2f6b03ee-33c1-4a13-813c-b794d61056dd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.130 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 2f6b03ee-33c1-4a13-813c-b794d61056dd in datapath 4692d97f-32c5-4a6f-a095-ba8dda0baf05 bound to our chassis
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.132 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4692d97f-32c5-4a6f-a095-ba8dda0baf05
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.145 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9916ec-7768-4538-ba4b-e51e60ad591a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:40 compute-0 systemd-udevd[280095]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:00:40 compute-0 ovn_controller[152691]: 2025-11-22T04:00:40Z|00157|binding|INFO|Setting lport 2f6b03ee-33c1-4a13-813c-b794d61056dd ovn-installed in OVS
Nov 22 04:00:40 compute-0 ovn_controller[152691]: 2025-11-22T04:00:40Z|00158|binding|INFO|Setting lport 2f6b03ee-33c1-4a13-813c-b794d61056dd up in Southbound
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.154 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.159 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:40 compute-0 systemd-machined[215728]: New machine qemu-14-instance-0000000e.
Nov 22 04:00:40 compute-0 NetworkManager[48916]: <info>  [1763784040.1715] device (tap2f6b03ee-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:00:40 compute-0 NetworkManager[48916]: <info>  [1763784040.1733] device (tap2f6b03ee-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:00:40 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.180 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[2aa8b193-cdfd-46bf-97c6-745fe492a352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.183 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[a257cd9b-9443-4769-b716-2dc418094170]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.211 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[a6595b2a-4ddc-45a6-b1b8-b7360741d5bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.230 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[02d2d2fd-4902-444e-8d45-5e844e17bd98]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4692d97f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:6c:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424571, 'reachable_time': 17992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280106, 'error': None, 'target': 'ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:00:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:00:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3546435636' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:00:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3546435636' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.250 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[93a29e7f-017f-4f91-b6a2-04f5ea17d785]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4692d97f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 424583, 'tstamp': 424583}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280109, 'error': None, 'target': 'ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4692d97f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 424588, 'tstamp': 424588}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280109, 'error': None, 'target': 'ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
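
The two privsep replies above reveal what "Provisioning metadata for network ..." amounts to on the wire: a namespace named ovnmeta-<network-uuid> holding one end of a veth pair (tap4692d97f-31, IFLA_INFO_KIND veth) carrying 169.254.169.254/32 plus an address from the tenant subnet (10.100.0.2/28). The agent does this through pyroute2 behind privsep, not a shell, but a rough iproute2 reconstruction of the same plumbing (names and addresses from the log; the command sequence is a sketch, not the agent's code path) would be:

    import subprocess

    ns = "ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05"  # namespace name from the log
    for cmd in (
        ["ip", "netns", "add", ns],
        ["ip", "link", "add", "tap4692d97f-30", "type", "veth",
         "peer", "name", "tap4692d97f-31", "netns", ns],
        ["ip", "-n", ns, "addr", "add", "169.254.169.254/32", "dev", "tap4692d97f-31"],
        ["ip", "-n", ns, "addr", "add", "10.100.0.2/28", "dev", "tap4692d97f-31"],
        ["ip", "-n", ns, "link", "set", "tap4692d97f-31", "up"],
    ):
        subprocess.run(cmd, check=True)
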
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.252 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4692d97f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.253 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.255 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.256 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4692d97f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.257 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.258 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4692d97f-30, col_values=(('external_ids', {'iface-id': '30338b02-a11d-4ec7-8237-9f070233f5bd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:00:40 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:00:40.258 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
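
The three ovsdbapp transactions above (all "caused no change" because an earlier run already converged the state) map one-to-one onto plain ovs-vsctl operations. Assuming the stock CLI, the equivalents would be:

    import subprocess

    port, iface_id = "tap4692d97f-30", "30338b02-a11d-4ec7-8237-9f070233f5bd"
    for cmd in (
        # DelPortCommand(if_exists=True) against br-ex
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", port],
        # AddPortCommand(may_exist=True) against br-int
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port],
        # DbSetCommand on the Interface row, binding it to the OVN logical port
        ["ovs-vsctl", "set", "Interface", port, f"external_ids:iface-id={iface_id}"],
    ):
        subprocess.run(cmd, check=True)
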
Nov 22 04:00:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 429 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 22 KiB/s wr, 82 op/s
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.324 253465 DEBUG nova.compute.manager [req-95ec8738-cfd5-4fce-b793-354db02d9721 req-3613ebfc-0313-4cad-9534-f71f06948d75 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Received event network-vif-plugged-2f6b03ee-33c1-4a13-813c-b794d61056dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.325 253465 DEBUG oslo_concurrency.lockutils [req-95ec8738-cfd5-4fce-b793-354db02d9721 req-3613ebfc-0313-4cad-9534-f71f06948d75 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "45b06598-5fca-47e2-962e-824755f52a2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.325 253465 DEBUG oslo_concurrency.lockutils [req-95ec8738-cfd5-4fce-b793-354db02d9721 req-3613ebfc-0313-4cad-9534-f71f06948d75 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.326 253465 DEBUG oslo_concurrency.lockutils [req-95ec8738-cfd5-4fce-b793-354db02d9721 req-3613ebfc-0313-4cad-9534-f71f06948d75 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.326 253465 DEBUG nova.compute.manager [req-95ec8738-cfd5-4fce-b793-354db02d9721 req-3613ebfc-0313-4cad-9534-f71f06948d75 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Processing event network-vif-plugged-2f6b03ee-33c1-4a13-813c-b794d61056dd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:00:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:00:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1101588525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:00:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1101588525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
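
These recurring df / osd pool get-quota pairs are capacity polls from an OpenStack RBD client on 192.168.122.10 (given the pool name "volumes" and entity client.openstack, most likely the Cinder RBD driver's stats refresh). They correspond to librados mon_command calls; a minimal reproduction with python-rados (a sketch, assuming the client.openstack keyring is readable from wherever this runs) is:

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack") as cluster:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes", "format": "json"}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            print(cmd["prefix"], "->", ret, out[:60])
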
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.465 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.469 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.470 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
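
The skip above means soft delete is disabled on this node: _reclaim_queued_deletes only purges SOFT_DELETED instances when reclaim_instance_interval is positive. The relevant nova.conf knob, shown with its default (an assumption; the actual value is not in this log):

    [DEFAULT]
    # > 0: deleted instances linger in SOFT_DELETED for this many seconds and
    # are reclaimed by the periodic task. 0 (default) makes deletes immediate.
    reclaim_instance_interval = 0
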
Nov 22 04:00:40 compute-0 ceph-mon[75011]: osdmap e328: 3 total, 3 up, 3 in
Nov 22 04:00:40 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3546435636' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:40 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3546435636' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:40 compute-0 ceph-mon[75011]: pgmap v1400: 305 pgs: 305 active+clean; 429 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 22 KiB/s wr, 82 op/s
Nov 22 04:00:40 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1101588525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:40 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1101588525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.575 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784040.5746944, 45b06598-5fca-47e2-962e-824755f52a2b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.575 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] VM Started (Lifecycle Event)
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.579 253465 DEBUG nova.compute.manager [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.584 253465 DEBUG nova.virt.libvirt.driver [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.588 253465 INFO nova.virt.libvirt.driver [-] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Instance spawned successfully.
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.589 253465 INFO nova.compute.manager [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Took 7.29 seconds to spawn the instance on the hypervisor.
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.589 253465 DEBUG nova.compute.manager [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.599 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.603 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
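
The numeric states in the line above come from nova.compute.power_state: the database still holds 0 because the build has not finished, while libvirt already reports the domain as 1. For reference, the constants as Nova defines them:

    # Values from nova/compute/power_state.py
    NOSTATE = 0x00    # DB value while the instance is still building
    RUNNING = 0x01    # what libvirt reports once the domain is up
    PAUSED = 0x03
    SHUTDOWN = 0x04   # guest powered off
    CRASHED = 0x06
    SUSPENDED = 0x07
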
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.627 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.628 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784040.5748317, 45b06598-5fca-47e2-962e-824755f52a2b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.628 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] VM Paused (Lifecycle Event)
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.651 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.656 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784040.5833192, 45b06598-5fca-47e2-962e-824755f52a2b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.656 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] VM Resumed (Lifecycle Event)
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.672 253465 INFO nova.compute.manager [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Took 8.39 seconds to build instance.
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.676 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.680 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:00:40 compute-0 nova_compute[253461]: 2025-11-22 04:00:40.699 253465 DEBUG oslo_concurrency.lockutils [None req-cea1c6a2-0634-49ae-9973-ea08147a766e 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:00:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1545392156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:00:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1545392156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:41 compute-0 nova_compute[253461]: 2025-11-22 04:00:41.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:41 compute-0 nova_compute[253461]: 2025-11-22 04:00:41.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:41 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1545392156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:41 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1545392156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 369 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 704 KiB/s rd, 7.2 KiB/s wr, 142 op/s
Nov 22 04:00:42 compute-0 nova_compute[253461]: 2025-11-22 04:00:42.332 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:42 compute-0 nova_compute[253461]: 2025-11-22 04:00:42.609 253465 DEBUG nova.compute.manager [req-87436c14-61db-4852-89f0-6ac5d3ce7b17 req-d12078c0-f52c-4435-8082-ff357e455236 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Received event network-vif-plugged-2f6b03ee-33c1-4a13-813c-b794d61056dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:42 compute-0 nova_compute[253461]: 2025-11-22 04:00:42.610 253465 DEBUG oslo_concurrency.lockutils [req-87436c14-61db-4852-89f0-6ac5d3ce7b17 req-d12078c0-f52c-4435-8082-ff357e455236 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "45b06598-5fca-47e2-962e-824755f52a2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:00:42 compute-0 nova_compute[253461]: 2025-11-22 04:00:42.611 253465 DEBUG oslo_concurrency.lockutils [req-87436c14-61db-4852-89f0-6ac5d3ce7b17 req-d12078c0-f52c-4435-8082-ff357e455236 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:00:42 compute-0 nova_compute[253461]: 2025-11-22 04:00:42.612 253465 DEBUG oslo_concurrency.lockutils [req-87436c14-61db-4852-89f0-6ac5d3ce7b17 req-d12078c0-f52c-4435-8082-ff357e455236 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:00:42 compute-0 nova_compute[253461]: 2025-11-22 04:00:42.612 253465 DEBUG nova.compute.manager [req-87436c14-61db-4852-89f0-6ac5d3ce7b17 req-d12078c0-f52c-4435-8082-ff357e455236 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] No waiting events found dispatching network-vif-plugged-2f6b03ee-33c1-4a13-813c-b794d61056dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:00:42 compute-0 nova_compute[253461]: 2025-11-22 04:00:42.613 253465 WARNING nova.compute.manager [req-87436c14-61db-4852-89f0-6ac5d3ce7b17 req-d12078c0-f52c-4435-8082-ff357e455236 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Received unexpected event network-vif-plugged-2f6b03ee-33c1-4a13-813c-b794d61056dd for instance with vm_state active and task_state None.
Nov 22 04:00:42 compute-0 ceph-mon[75011]: pgmap v1401: 305 pgs: 305 active+clean; 369 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 704 KiB/s rd, 7.2 KiB/s wr, 142 op/s
Nov 22 04:00:43 compute-0 nova_compute[253461]: 2025-11-22 04:00:43.425 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:00:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Nov 22 04:00:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Nov 22 04:00:43 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Nov 22 04:00:43 compute-0 nova_compute[253461]: 2025-11-22 04:00:43.978 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 248 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 30 KiB/s wr, 299 op/s
Nov 22 04:00:44 compute-0 ovn_controller[152691]: 2025-11-22T04:00:44Z|00159|binding|INFO|Releasing lport 30338b02-a11d-4ec7-8237-9f070233f5bd from this chassis (sb_readonly=0)
Nov 22 04:00:44 compute-0 nova_compute[253461]: 2025-11-22 04:00:44.464 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:44 compute-0 ceph-mon[75011]: osdmap e329: 3 total, 3 up, 3 in
Nov 22 04:00:44 compute-0 ceph-mon[75011]: pgmap v1403: 305 pgs: 305 active+clean; 248 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 30 KiB/s wr, 299 op/s
Nov 22 04:00:45 compute-0 nova_compute[253461]: 2025-11-22 04:00:45.375 253465 DEBUG nova.compute.manager [req-142e3c11-0a18-4bcf-b701-d3453ec4771f req-edc0b501-5e76-45ee-b4cc-d36ea5779491 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Received event network-changed-2f6b03ee-33c1-4a13-813c-b794d61056dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:00:45 compute-0 nova_compute[253461]: 2025-11-22 04:00:45.376 253465 DEBUG nova.compute.manager [req-142e3c11-0a18-4bcf-b701-d3453ec4771f req-edc0b501-5e76-45ee-b4cc-d36ea5779491 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Refreshing instance network info cache due to event network-changed-2f6b03ee-33c1-4a13-813c-b794d61056dd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:00:45 compute-0 nova_compute[253461]: 2025-11-22 04:00:45.376 253465 DEBUG oslo_concurrency.lockutils [req-142e3c11-0a18-4bcf-b701-d3453ec4771f req-edc0b501-5e76-45ee-b4cc-d36ea5779491 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-45b06598-5fca-47e2-962e-824755f52a2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:00:45 compute-0 nova_compute[253461]: 2025-11-22 04:00:45.377 253465 DEBUG oslo_concurrency.lockutils [req-142e3c11-0a18-4bcf-b701-d3453ec4771f req-edc0b501-5e76-45ee-b4cc-d36ea5779491 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-45b06598-5fca-47e2-962e-824755f52a2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:00:45 compute-0 nova_compute[253461]: 2025-11-22 04:00:45.377 253465 DEBUG nova.network.neutron [req-142e3c11-0a18-4bcf-b701-d3453ec4771f req-edc0b501-5e76-45ee-b4cc-d36ea5779491 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Refreshing network info cache for port 2f6b03ee-33c1-4a13-813c-b794d61056dd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:00:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Nov 22 04:00:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Nov 22 04:00:45 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Nov 22 04:00:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Nov 22 04:00:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Nov 22 04:00:45 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 248 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 32 KiB/s wr, 312 op/s
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007634700380933777 of space, bias 1.0, pg target 0.2290410114280133 quantized to 32 (current 32)
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003798908885198932 of space, bias 1.0, pg target 0.11396726655596796 quantized to 32 (current 32)
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014246862180113304 of space, bias 1.0, pg target 0.42740586540339914 quantized to 32 (current 32)
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:00:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
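
Each pg_autoscaler line above follows the same arithmetic: the pool's share of raw capacity, times its bias, times the cluster's PG budget, then quantized to a power of two subject to per-pool minimums. The budget here works out to 300, assuming the default mon_target_pg_per_osd=100 and the 3 in-OSDs reported by the osdmap lines (an inference; neither number appears directly in this log). The logged targets reproduce:

    budget = 100 * 3  # mon_target_pg_per_osd (assumed default) * OSD count

    for pool, ratio, bias in (
        ("vms", 0.0007634700380933777, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ):
        print(f"{pool}: pg target {ratio * bias * budget:.6f}")
    # vms: pg target 0.229041               (log: 0.2290410114280133, quantized to 32)
    # cephfs.cephfs.meta: pg target 0.000610 (log: 0.0006104707950771635, quantized to 16)
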
Nov 22 04:00:46 compute-0 nova_compute[253461]: 2025-11-22 04:00:46.437 253465 DEBUG nova.network.neutron [req-142e3c11-0a18-4bcf-b701-d3453ec4771f req-edc0b501-5e76-45ee-b4cc-d36ea5779491 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Updated VIF entry in instance network info cache for port 2f6b03ee-33c1-4a13-813c-b794d61056dd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:00:46 compute-0 nova_compute[253461]: 2025-11-22 04:00:46.438 253465 DEBUG nova.network.neutron [req-142e3c11-0a18-4bcf-b701-d3453ec4771f req-edc0b501-5e76-45ee-b4cc-d36ea5779491 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Updating instance_info_cache with network_info: [{"id": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "address": "fa:16:3e:c1:e4:3f", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6b03ee-33", "ovs_interfaceid": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:00:46 compute-0 nova_compute[253461]: 2025-11-22 04:00:46.593 253465 DEBUG oslo_concurrency.lockutils [req-142e3c11-0a18-4bcf-b701-d3453ec4771f req-edc0b501-5e76-45ee-b4cc-d36ea5779491 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-45b06598-5fca-47e2-962e-824755f52a2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:00:46 compute-0 ceph-mon[75011]: osdmap e330: 3 total, 3 up, 3 in
Nov 22 04:00:46 compute-0 ceph-mon[75011]: osdmap e331: 3 total, 3 up, 3 in
Nov 22 04:00:46 compute-0 ceph-mon[75011]: pgmap v1406: 305 pgs: 305 active+clean; 248 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 32 KiB/s wr, 312 op/s
Nov 22 04:00:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:00:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/946102261' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:00:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/946102261' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:47 compute-0 nova_compute[253461]: 2025-11-22 04:00:47.333 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:47 compute-0 ovn_controller[152691]: 2025-11-22T04:00:47Z|00160|binding|INFO|Releasing lport 30338b02-a11d-4ec7-8237-9f070233f5bd from this chassis (sb_readonly=0)
Nov 22 04:00:47 compute-0 nova_compute[253461]: 2025-11-22 04:00:47.669 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:47 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/946102261' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:47 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/946102261' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 248 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 29 KiB/s wr, 247 op/s
Nov 22 04:00:48 compute-0 nova_compute[253461]: 2025-11-22 04:00:48.981 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Nov 22 04:00:48 compute-0 ceph-mon[75011]: pgmap v1407: 305 pgs: 305 active+clean; 248 MiB data, 437 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 29 KiB/s wr, 247 op/s
Nov 22 04:00:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Nov 22 04:00:49 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Nov 22 04:00:49 compute-0 nova_compute[253461]: 2025-11-22 04:00:49.084 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763784034.0822842, c003c606-bec0-4664-9493-bbac2142d827 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:00:49 compute-0 nova_compute[253461]: 2025-11-22 04:00:49.085 253465 INFO nova.compute.manager [-] [instance: c003c606-bec0-4664-9493-bbac2142d827] VM Stopped (Lifecycle Event)
Nov 22 04:00:49 compute-0 nova_compute[253461]: 2025-11-22 04:00:49.109 253465 DEBUG nova.compute.manager [None req-c2fc1d53-bf6f-4aa8-9c33-8febc9365525 - - - - - -] [instance: c003c606-bec0-4664-9493-bbac2142d827] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:00:50 compute-0 ceph-mon[75011]: osdmap e332: 3 total, 3 up, 3 in
Nov 22 04:00:50 compute-0 sudo[280154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:00:50 compute-0 sudo[280154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:50 compute-0 sudo[280154]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:50 compute-0 sudo[280191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:00:50 compute-0 sudo[280191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:50 compute-0 sudo[280191]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:50 compute-0 podman[280178]: 2025-11-22 04:00:50.265235581 +0000 UTC m=+0.087583641 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 04:00:50 compute-0 podman[280179]: 2025-11-22 04:00:50.271117031 +0000 UTC m=+0.090358155 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
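
The two health_status=healthy events above are podman's healthcheck timer executing the test configured for each container (the /openstack/healthcheck script mounted per the config_data). The same check can be triggered by hand; a sketch:

    import subprocess

    # Runs each container's configured healthcheck once; exit status 0 = healthy.
    for name in ("ovn_metadata_agent", "ovn_controller"):
        subprocess.run(["podman", "healthcheck", "run", name], check=True)
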
Nov 22 04:00:50 compute-0 sudo[280243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:00:50 compute-0 sudo[280243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:50 compute-0 sudo[280243]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 248 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Nov 22 04:00:50 compute-0 sudo[280273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:00:50 compute-0 sudo[280273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:50 compute-0 sudo[280273]: pam_unix(sudo:session): session closed for user root
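
The sudo/cephadm pattern here (and again below for ceph-volume) is cephadm's remote-execution model: the mgr ships a digest-named copy of the cephadm binary to /var/lib/ceph/<fsid>/ on each managed host and invokes it over SSH as ceph-admin with passwordless sudo. A sketch of the gather-facts call exactly as the sudo COMMAND= line records it (output is a JSON blob of host facts):

    import subprocess

    fsid = "7adcc38b-6484-5de6-b879-33a0309153df"
    cephadm = (f"/var/lib/ceph/{fsid}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")

    facts = subprocess.run(
        ["sudo", "/bin/python3", cephadm, "--timeout", "895", "gather-facts"],
        capture_output=True, text=True, check=True,
    ).stdout
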
Nov 22 04:00:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Nov 22 04:00:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:00:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:00:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:00:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:00:51 compute-0 ceph-mon[75011]: pgmap v1409: 305 pgs: 305 active+clean; 248 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 3.3 KiB/s wr, 55 op/s
Nov 22 04:00:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:00:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Nov 22 04:00:51 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Nov 22 04:00:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:00:51 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev eceb6232-0f7a-4731-8568-b4ade55efa62 does not exist
Nov 22 04:00:51 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 7a0686d8-8d0a-4ff0-af48-5ef644132342 does not exist
Nov 22 04:00:51 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 7b8ca352-8cb1-4f11-a24a-9a7b90fe6998 does not exist
Nov 22 04:00:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:00:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:00:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:00:51 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:00:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:00:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:00:51 compute-0 sudo[280329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:00:51 compute-0 sudo[280329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:51 compute-0 sudo[280329]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:51 compute-0 sudo[280354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:00:51 compute-0 sudo[280354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:51 compute-0 sudo[280354]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:51 compute-0 sudo[280379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:00:51 compute-0 sudo[280379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:51 compute-0 sudo[280379]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:51 compute-0 sudo[280404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:00:51 compute-0 sudo[280404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:51 compute-0 podman[280467]: 2025-11-22 04:00:51.719269282 +0000 UTC m=+0.043420396 container create c53cb638e116d9f536775523a5f80be1e96a1d56e056b9cf8d7c2ddd24023476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:00:51 compute-0 systemd[1]: Started libpod-conmon-c53cb638e116d9f536775523a5f80be1e96a1d56e056b9cf8d7c2ddd24023476.scope.
Nov 22 04:00:51 compute-0 podman[280467]: 2025-11-22 04:00:51.695800891 +0000 UTC m=+0.019952026 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:00:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:00:51 compute-0 podman[280467]: 2025-11-22 04:00:51.831413365 +0000 UTC m=+0.155564510 container init c53cb638e116d9f536775523a5f80be1e96a1d56e056b9cf8d7c2ddd24023476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:00:51 compute-0 podman[280467]: 2025-11-22 04:00:51.839991345 +0000 UTC m=+0.164142460 container start c53cb638e116d9f536775523a5f80be1e96a1d56e056b9cf8d7c2ddd24023476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:00:51 compute-0 podman[280467]: 2025-11-22 04:00:51.845157379 +0000 UTC m=+0.169308514 container attach c53cb638e116d9f536775523a5f80be1e96a1d56e056b9cf8d7c2ddd24023476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:00:51 compute-0 confident_panini[280483]: 167 167
Nov 22 04:00:51 compute-0 systemd[1]: libpod-c53cb638e116d9f536775523a5f80be1e96a1d56e056b9cf8d7c2ddd24023476.scope: Deactivated successfully.
Nov 22 04:00:51 compute-0 podman[280467]: 2025-11-22 04:00:51.847095513 +0000 UTC m=+0.171246668 container died c53cb638e116d9f536775523a5f80be1e96a1d56e056b9cf8d7c2ddd24023476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:00:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-335fbfb20a8032f9e08ff20d318d5da6b52deec3925507f75b6209956f5cbaee-merged.mount: Deactivated successfully.
Nov 22 04:00:51 compute-0 podman[280467]: 2025-11-22 04:00:51.908632983 +0000 UTC m=+0.232784098 container remove c53cb638e116d9f536775523a5f80be1e96a1d56e056b9cf8d7c2ddd24023476 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_panini, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:00:51 compute-0 systemd[1]: libpod-conmon-c53cb638e116d9f536775523a5f80be1e96a1d56e056b9cf8d7c2ddd24023476.scope: Deactivated successfully.
Nov 22 04:00:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:00:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:00:52 compute-0 ceph-mon[75011]: osdmap e333: 3 total, 3 up, 3 in
Nov 22 04:00:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:00:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:00:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:00:52 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:00:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:00:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/378544224' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:00:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/378544224' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:52 compute-0 podman[280506]: 2025-11-22 04:00:52.139910851 +0000 UTC m=+0.070943609 container create 3d0348a5831895056ba0779ca0a20d64ed9cf891a6e086d3acb462e0caf036cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:00:52 compute-0 podman[280506]: 2025-11-22 04:00:52.09870002 +0000 UTC m=+0.029732778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:00:52 compute-0 systemd[1]: Started libpod-conmon-3d0348a5831895056ba0779ca0a20d64ed9cf891a6e086d3acb462e0caf036cc.scope.
Nov 22 04:00:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a232b91198ea55d1647eb836c762677a159b1928d7713811e792e9239c0c9554/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a232b91198ea55d1647eb836c762677a159b1928d7713811e792e9239c0c9554/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a232b91198ea55d1647eb836c762677a159b1928d7713811e792e9239c0c9554/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a232b91198ea55d1647eb836c762677a159b1928d7713811e792e9239c0c9554/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a232b91198ea55d1647eb836c762677a159b1928d7713811e792e9239c0c9554/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:52 compute-0 podman[280506]: 2025-11-22 04:00:52.239450629 +0000 UTC m=+0.170483377 container init 3d0348a5831895056ba0779ca0a20d64ed9cf891a6e086d3acb462e0caf036cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:00:52 compute-0 podman[280506]: 2025-11-22 04:00:52.24935078 +0000 UTC m=+0.180383508 container start 3d0348a5831895056ba0779ca0a20d64ed9cf891a6e086d3acb462e0caf036cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 04:00:52 compute-0 podman[280506]: 2025-11-22 04:00:52.25440482 +0000 UTC m=+0.185437558 container attach 3d0348a5831895056ba0779ca0a20d64ed9cf891a6e086d3acb462e0caf036cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:00:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 248 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 4.4 KiB/s wr, 63 op/s
Nov 22 04:00:52 compute-0 nova_compute[253461]: 2025-11-22 04:00:52.336 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/378544224' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/378544224' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:53 compute-0 ceph-mon[75011]: pgmap v1411: 305 pgs: 305 active+clean; 248 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 4.4 KiB/s wr, 63 op/s
Nov 22 04:00:53 compute-0 suspicious_pare[280523]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:00:53 compute-0 suspicious_pare[280523]: --> relative data size: 1.0
Nov 22 04:00:53 compute-0 suspicious_pare[280523]: --> All data devices are unavailable
Nov 22 04:00:53 compute-0 systemd[1]: libpod-3d0348a5831895056ba0779ca0a20d64ed9cf891a6e086d3acb462e0caf036cc.scope: Deactivated successfully.
Nov 22 04:00:53 compute-0 podman[280552]: 2025-11-22 04:00:53.311002333 +0000 UTC m=+0.024208369 container died 3d0348a5831895056ba0779ca0a20d64ed9cf891a6e086d3acb462e0caf036cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:00:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-a232b91198ea55d1647eb836c762677a159b1928d7713811e792e9239c0c9554-merged.mount: Deactivated successfully.
Nov 22 04:00:53 compute-0 podman[280552]: 2025-11-22 04:00:53.558300771 +0000 UTC m=+0.271506847 container remove 3d0348a5831895056ba0779ca0a20d64ed9cf891a6e086d3acb462e0caf036cc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pare, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 04:00:53 compute-0 systemd[1]: libpod-conmon-3d0348a5831895056ba0779ca0a20d64ed9cf891a6e086d3acb462e0caf036cc.scope: Deactivated successfully.
Nov 22 04:00:53 compute-0 sudo[280404]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:53 compute-0 sudo[280567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:00:53 compute-0 sudo[280567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:53 compute-0 sudo[280567]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:53 compute-0 sudo[280592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:00:53 compute-0 sudo[280592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:53 compute-0 sudo[280592]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:53 compute-0 sudo[280617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:00:53 compute-0 sudo[280617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:53 compute-0 sudo[280617]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:53 compute-0 ovn_controller[152691]: 2025-11-22T04:00:53Z|00022|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.10
Nov 22 04:00:53 compute-0 ovn_controller[152691]: 2025-11-22T04:00:53Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c1:e4:3f 10.100.0.10
Nov 22 04:00:53 compute-0 sudo[280642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:00:53 compute-0 sudo[280642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:53 compute-0 nova_compute[253461]: 2025-11-22 04:00:53.985 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 258 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 364 KiB/s wr, 147 op/s
Nov 22 04:00:54 compute-0 podman[280705]: 2025-11-22 04:00:54.285890201 +0000 UTC m=+0.032853429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:00:54 compute-0 podman[280705]: 2025-11-22 04:00:54.420621232 +0000 UTC m=+0.167584370 container create 7f056a59f900a87429ca0383684ecd1e1eeab11808192ba1be00efdf58c079e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 04:00:54 compute-0 ceph-mon[75011]: pgmap v1412: 305 pgs: 305 active+clean; 258 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 364 KiB/s wr, 147 op/s
Nov 22 04:00:54 compute-0 systemd[1]: Started libpod-conmon-7f056a59f900a87429ca0383684ecd1e1eeab11808192ba1be00efdf58c079e7.scope.
Nov 22 04:00:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:00:54 compute-0 podman[280705]: 2025-11-22 04:00:54.546851529 +0000 UTC m=+0.293814687 container init 7f056a59f900a87429ca0383684ecd1e1eeab11808192ba1be00efdf58c079e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:00:54 compute-0 podman[280705]: 2025-11-22 04:00:54.555128366 +0000 UTC m=+0.302091514 container start 7f056a59f900a87429ca0383684ecd1e1eeab11808192ba1be00efdf58c079e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 04:00:54 compute-0 great_grothendieck[280721]: 167 167
Nov 22 04:00:54 compute-0 systemd[1]: libpod-7f056a59f900a87429ca0383684ecd1e1eeab11808192ba1be00efdf58c079e7.scope: Deactivated successfully.
Nov 22 04:00:54 compute-0 conmon[280721]: conmon 7f056a59f900a87429ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f056a59f900a87429ca0383684ecd1e1eeab11808192ba1be00efdf58c079e7.scope/container/memory.events
Nov 22 04:00:54 compute-0 podman[280705]: 2025-11-22 04:00:54.574117196 +0000 UTC m=+0.321080343 container attach 7f056a59f900a87429ca0383684ecd1e1eeab11808192ba1be00efdf58c079e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 22 04:00:54 compute-0 podman[280705]: 2025-11-22 04:00:54.57730081 +0000 UTC m=+0.324263958 container died 7f056a59f900a87429ca0383684ecd1e1eeab11808192ba1be00efdf58c079e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 04:00:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a7e985a097bdf7456bde63a6bbce78c0bbadc30128be375c8b4d815f15907e6-merged.mount: Deactivated successfully.
Nov 22 04:00:54 compute-0 podman[280705]: 2025-11-22 04:00:54.658820336 +0000 UTC m=+0.405783494 container remove 7f056a59f900a87429ca0383684ecd1e1eeab11808192ba1be00efdf58c079e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 22 04:00:54 compute-0 systemd[1]: libpod-conmon-7f056a59f900a87429ca0383684ecd1e1eeab11808192ba1be00efdf58c079e7.scope: Deactivated successfully.
Nov 22 04:00:54 compute-0 podman[280748]: 2025-11-22 04:00:54.863542674 +0000 UTC m=+0.068305159 container create b8af094d6b78bda5fbf8bc35eb418d4becbd318d0a664c72414fa75cc68be9a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:00:54 compute-0 podman[280748]: 2025-11-22 04:00:54.819736432 +0000 UTC m=+0.024498937 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:00:54 compute-0 systemd[1]: Started libpod-conmon-b8af094d6b78bda5fbf8bc35eb418d4becbd318d0a664c72414fa75cc68be9a3.scope.
Nov 22 04:00:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab74659c835ed83d27419ac65ddd3b2acb1ee21cdeeeda6bda886ffc314c8527/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab74659c835ed83d27419ac65ddd3b2acb1ee21cdeeeda6bda886ffc314c8527/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab74659c835ed83d27419ac65ddd3b2acb1ee21cdeeeda6bda886ffc314c8527/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab74659c835ed83d27419ac65ddd3b2acb1ee21cdeeeda6bda886ffc314c8527/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:55 compute-0 podman[280748]: 2025-11-22 04:00:55.028104651 +0000 UTC m=+0.232867226 container init b8af094d6b78bda5fbf8bc35eb418d4becbd318d0a664c72414fa75cc68be9a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bohr, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:00:55 compute-0 podman[280748]: 2025-11-22 04:00:55.036364962 +0000 UTC m=+0.241127477 container start b8af094d6b78bda5fbf8bc35eb418d4becbd318d0a664c72414fa75cc68be9a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 22 04:00:55 compute-0 podman[280748]: 2025-11-22 04:00:55.043903187 +0000 UTC m=+0.248665771 container attach b8af094d6b78bda5fbf8bc35eb418d4becbd318d0a664c72414fa75cc68be9a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 04:00:55 compute-0 nova_compute[253461]: 2025-11-22 04:00:55.395 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Nov 22 04:00:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Nov 22 04:00:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Nov 22 04:00:55 compute-0 exciting_bohr[280764]: {
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:     "0": [
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:         {
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "devices": [
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "/dev/loop3"
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             ],
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_name": "ceph_lv0",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_size": "21470642176",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "name": "ceph_lv0",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "tags": {
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.cluster_name": "ceph",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.crush_device_class": "",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.encrypted": "0",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.osd_id": "0",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.type": "block",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.vdo": "0"
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             },
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "type": "block",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "vg_name": "ceph_vg0"
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:         }
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:     ],
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:     "1": [
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:         {
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "devices": [
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "/dev/loop4"
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             ],
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_name": "ceph_lv1",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_size": "21470642176",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "name": "ceph_lv1",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "tags": {
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.cluster_name": "ceph",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.crush_device_class": "",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.encrypted": "0",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.osd_id": "1",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.type": "block",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.vdo": "0"
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             },
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "type": "block",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "vg_name": "ceph_vg1"
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:         }
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:     ],
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:     "2": [
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:         {
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "devices": [
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "/dev/loop5"
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             ],
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_name": "ceph_lv2",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_size": "21470642176",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "name": "ceph_lv2",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "tags": {
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.cluster_name": "ceph",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.crush_device_class": "",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.encrypted": "0",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.osd_id": "2",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.type": "block",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:                 "ceph.vdo": "0"
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             },
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "type": "block",
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:             "vg_name": "ceph_vg2"
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:         }
Nov 22 04:00:55 compute-0 exciting_bohr[280764]:     ]
Nov 22 04:00:55 compute-0 exciting_bohr[280764]: }
Nov 22 04:00:55 compute-0 systemd[1]: libpod-b8af094d6b78bda5fbf8bc35eb418d4becbd318d0a664c72414fa75cc68be9a3.scope: Deactivated successfully.
Nov 22 04:00:55 compute-0 podman[280748]: 2025-11-22 04:00:55.870247774 +0000 UTC m=+1.075010269 container died b8af094d6b78bda5fbf8bc35eb418d4becbd318d0a664c72414fa75cc68be9a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bohr, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Nov 22 04:00:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab74659c835ed83d27419ac65ddd3b2acb1ee21cdeeeda6bda886ffc314c8527-merged.mount: Deactivated successfully.
Nov 22 04:00:55 compute-0 podman[280748]: 2025-11-22 04:00:55.955162025 +0000 UTC m=+1.159924500 container remove b8af094d6b78bda5fbf8bc35eb418d4becbd318d0a664c72414fa75cc68be9a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_bohr, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:00:55 compute-0 systemd[1]: libpod-conmon-b8af094d6b78bda5fbf8bc35eb418d4becbd318d0a664c72414fa75cc68be9a3.scope: Deactivated successfully.
Nov 22 04:00:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:00:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Nov 22 04:00:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Nov 22 04:00:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Nov 22 04:00:55 compute-0 sudo[280642]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:56 compute-0 sudo[280787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:00:56 compute-0 sudo[280787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:56 compute-0 sudo[280787]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:56 compute-0 sudo[280812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:00:56 compute-0 sudo[280812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:56 compute-0 sudo[280812]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:56 compute-0 sudo[280837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:00:56 compute-0 sudo[280837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:56 compute-0 sudo[280837]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 258 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 482 KiB/s wr, 141 op/s
Nov 22 04:00:56 compute-0 sudo[280862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:00:56 compute-0 sudo[280862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:56 compute-0 ceph-mon[75011]: osdmap e334: 3 total, 3 up, 3 in
Nov 22 04:00:56 compute-0 ceph-mon[75011]: osdmap e335: 3 total, 3 up, 3 in
Nov 22 04:00:56 compute-0 ceph-mon[75011]: pgmap v1415: 305 pgs: 305 active+clean; 258 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 482 KiB/s wr, 141 op/s
Nov 22 04:00:56 compute-0 podman[280927]: 2025-11-22 04:00:56.707089757 +0000 UTC m=+0.027261689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:00:56 compute-0 podman[280927]: 2025-11-22 04:00:56.948161467 +0000 UTC m=+0.268333338 container create 58ed87335dcc51a9aea08bd016948e2189025add67ab095198f0cb545e585545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:00:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Nov 22 04:00:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Nov 22 04:00:57 compute-0 systemd[1]: Started libpod-conmon-58ed87335dcc51a9aea08bd016948e2189025add67ab095198f0cb545e585545.scope.
Nov 22 04:00:57 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Nov 22 04:00:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:00:57 compute-0 podman[280927]: 2025-11-22 04:00:57.184184427 +0000 UTC m=+0.504356359 container init 58ed87335dcc51a9aea08bd016948e2189025add67ab095198f0cb545e585545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ishizaka, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:00:57 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 04:00:57 compute-0 podman[280927]: 2025-11-22 04:00:57.192655618 +0000 UTC m=+0.512827499 container start 58ed87335dcc51a9aea08bd016948e2189025add67ab095198f0cb545e585545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:00:57 compute-0 charming_ishizaka[280943]: 167 167
Nov 22 04:00:57 compute-0 systemd[1]: libpod-58ed87335dcc51a9aea08bd016948e2189025add67ab095198f0cb545e585545.scope: Deactivated successfully.
Nov 22 04:00:57 compute-0 podman[280927]: 2025-11-22 04:00:57.242486781 +0000 UTC m=+0.562658713 container attach 58ed87335dcc51a9aea08bd016948e2189025add67ab095198f0cb545e585545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ishizaka, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:00:57 compute-0 podman[280927]: 2025-11-22 04:00:57.242980803 +0000 UTC m=+0.563152685 container died 58ed87335dcc51a9aea08bd016948e2189025add67ab095198f0cb545e585545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ishizaka, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 04:00:57 compute-0 nova_compute[253461]: 2025-11-22 04:00:57.338 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-1415785a793eb014e3421989ab85a2684565ff8cf0812fb3ad8f871c03b04848-merged.mount: Deactivated successfully.
Nov 22 04:00:57 compute-0 podman[280927]: 2025-11-22 04:00:57.643193948 +0000 UTC m=+0.963365830 container remove 58ed87335dcc51a9aea08bd016948e2189025add67ab095198f0cb545e585545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_ishizaka, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 04:00:57 compute-0 systemd[1]: libpod-conmon-58ed87335dcc51a9aea08bd016948e2189025add67ab095198f0cb545e585545.scope: Deactivated successfully.
Nov 22 04:00:57 compute-0 podman[280967]: 2025-11-22 04:00:57.886442797 +0000 UTC m=+0.036457009 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:00:57 compute-0 podman[280967]: 2025-11-22 04:00:57.9891572 +0000 UTC m=+0.139171401 container create e25c268f2acf85f6ca5828c74db5c9e68abbf42ebcc303c6bac48d01777711c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:00:58 compute-0 systemd[1]: Started libpod-conmon-e25c268f2acf85f6ca5828c74db5c9e68abbf42ebcc303c6bac48d01777711c1.scope.
Nov 22 04:00:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028c852a4c7e0622d27f0d092b128aba7eb7a175b94e0fab784f4eabca2620b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028c852a4c7e0622d27f0d092b128aba7eb7a175b94e0fab784f4eabca2620b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028c852a4c7e0622d27f0d092b128aba7eb7a175b94e0fab784f4eabca2620b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/028c852a4c7e0622d27f0d092b128aba7eb7a175b94e0fab784f4eabca2620b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:00:58 compute-0 ceph-mon[75011]: osdmap e336: 3 total, 3 up, 3 in
Nov 22 04:00:58 compute-0 podman[280967]: 2025-11-22 04:00:58.225004755 +0000 UTC m=+0.375018966 container init e25c268f2acf85f6ca5828c74db5c9e68abbf42ebcc303c6bac48d01777711c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:00:58 compute-0 podman[280967]: 2025-11-22 04:00:58.239072887 +0000 UTC m=+0.389087089 container start e25c268f2acf85f6ca5828c74db5c9e68abbf42ebcc303c6bac48d01777711c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:00:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 262 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1007 KiB/s wr, 170 op/s
Nov 22 04:00:58 compute-0 podman[280967]: 2025-11-22 04:00:58.358667743 +0000 UTC m=+0.508681965 container attach e25c268f2acf85f6ca5828c74db5c9e68abbf42ebcc303c6bac48d01777711c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 04:00:58 compute-0 ovn_controller[152691]: 2025-11-22T04:00:58Z|00024|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.5 does not match offer 10.100.0.10
Nov 22 04:00:58 compute-0 ovn_controller[152691]: 2025-11-22T04:00:58Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:c1:e4:3f 10.100.0.10
Nov 22 04:00:58 compute-0 ovn_controller[152691]: 2025-11-22T04:00:58Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:e4:3f 10.100.0.10
Nov 22 04:00:58 compute-0 ovn_controller[152691]: 2025-11-22T04:00:58Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:e4:3f 10.100.0.10
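The four pinctrl entries above are OVN's built-in DHCP server at work: the guest tried to renew a stale lease (10.100.0.5) that no longer matches the address bound to its logical port, ovn-controller answered with a DHCPNAK, and the client then completed a fresh OFFER/ACK exchange for 10.100.0.10. A minimal Python sketch for pulling these events out of a journal dump; the regex is derived from the line layout above, and the input file name is an assumption:

    import re

    # Matches the OFFER/ACK/NAK lines ovn_controller logs (the REQUEST line
    # above has a different layout and is skipped here).
    EVENT = re.compile(
        r'\|pinctrl\([^)]*\)\|\w+\|'
        r'(?P<kind>DHCPOFFER|DHCPACK|DHCPNAK)\s+'
        r'(?P<mac>(?:[0-9a-f]{2}:){5}[0-9a-f]{2})\s+'
        r'(?P<ip>\d+\.\d+\.\d+\.\d+)')

    def dhcp_events(lines):
        for line in lines:
            m = EVENT.search(line)
            if m:
                yield m.group('kind'), m.group('mac'), m.group('ip')

    # Assumed input: a plain-text export of this journal.
    with open('compute-0.log') as f:
        for kind, mac, ip in dhcp_events(f):
            print(kind, mac, ip)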
Nov 22 04:00:58 compute-0 nova_compute[253461]: 2025-11-22 04:00:58.987 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]: {
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "osd_id": 1,
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "type": "bluestore"
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:     },
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "osd_id": 0,
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "type": "bluestore"
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:     },
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "osd_id": 2,
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:         "type": "bluestore"
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]:     }
Nov 22 04:00:59 compute-0 flamboyant_wiles[280984]: }
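The JSON printed by the flamboyant_wiles container is the per-host OSD listing cephadm gathers with ceph-volume (the mgr stores it a few lines below via config-key set mgr/cephadm/host.compute-0.devices.0): one object per OSD, keyed by osd_uuid, carrying the cluster fsid, the backing logical volume, and the bluestore type. A short sketch, assuming the payload has been saved to osd_list.json, that flattens it into an osd_id-to-device table:

    import json

    # Parse the inventory shown above; the file name is an assumption.
    with open('osd_list.json') as f:
        osds = json.load(f)

    for e in sorted(osds.values(), key=lambda e: e['osd_id']):
        print(f"osd.{e['osd_id']}  {e['device']}  ({e['type']})")

    # Output for the inventory above:
    # osd.0  /dev/mapper/ceph_vg0-ceph_lv0  (bluestore)
    # osd.1  /dev/mapper/ceph_vg1-ceph_lv1  (bluestore)
    # osd.2  /dev/mapper/ceph_vg2-ceph_lv2  (bluestore)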
Nov 22 04:00:59 compute-0 systemd[1]: libpod-e25c268f2acf85f6ca5828c74db5c9e68abbf42ebcc303c6bac48d01777711c1.scope: Deactivated successfully.
Nov 22 04:00:59 compute-0 podman[280967]: 2025-11-22 04:00:59.331284497 +0000 UTC m=+1.481298699 container died e25c268f2acf85f6ca5828c74db5c9e68abbf42ebcc303c6bac48d01777711c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:00:59 compute-0 ceph-mon[75011]: pgmap v1417: 305 pgs: 305 active+clean; 262 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1007 KiB/s wr, 170 op/s
Nov 22 04:00:59 compute-0 systemd[1]: libpod-e25c268f2acf85f6ca5828c74db5c9e68abbf42ebcc303c6bac48d01777711c1.scope: Consumed 1.096s CPU time.
Nov 22 04:00:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-028c852a4c7e0622d27f0d092b128aba7eb7a175b94e0fab784f4eabca2620b5-merged.mount: Deactivated successfully.
Nov 22 04:00:59 compute-0 podman[280967]: 2025-11-22 04:00:59.447780531 +0000 UTC m=+1.597794713 container remove e25c268f2acf85f6ca5828c74db5c9e68abbf42ebcc303c6bac48d01777711c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wiles, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 04:00:59 compute-0 systemd[1]: libpod-conmon-e25c268f2acf85f6ca5828c74db5c9e68abbf42ebcc303c6bac48d01777711c1.scope: Deactivated successfully.
Nov 22 04:00:59 compute-0 sudo[280862]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:00:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:00:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:00:59 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:00:59 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev e773e8a4-9857-402e-937a-406dbb3111f5 does not exist
Nov 22 04:00:59 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 3a56d850-2da9-409f-a1af-af832a960f25 does not exist
Nov 22 04:00:59 compute-0 sudo[281031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:00:59 compute-0 sudo[281031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:59 compute-0 sudo[281031]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:59 compute-0 sudo[281056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:00:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:00:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1783811917' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:00:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:00:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1783811917' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:00:59 compute-0 sudo[281056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:00:59 compute-0 sudo[281056]: pam_unix(sudo:session): session closed for user root
Nov 22 04:00:59 compute-0 nova_compute[253461]: 2025-11-22 04:00:59.810 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 262 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 832 KiB/s rd, 548 KiB/s wr, 80 op/s
Nov 22 04:01:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:01:00 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:01:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1783811917' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1783811917' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:00 compute-0 ceph-mon[75011]: pgmap v1418: 305 pgs: 305 active+clean; 262 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 832 KiB/s rd, 548 KiB/s wr, 80 op/s
Nov 22 04:01:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e336 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:01 compute-0 CROND[281082]: (root) CMD (run-parts /etc/cron.hourly)
Nov 22 04:01:01 compute-0 run-parts[281085]: (/etc/cron.hourly) starting 0anacron
Nov 22 04:01:01 compute-0 run-parts[281091]: (/etc/cron.hourly) finished 0anacron
Nov 22 04:01:01 compute-0 CROND[281081]: (root) CMDEND (run-parts /etc/cron.hourly)
Nov 22 04:01:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 266 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 736 KiB/s rd, 548 KiB/s wr, 78 op/s
Nov 22 04:01:02 compute-0 nova_compute[253461]: 2025-11-22 04:01:02.341 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Nov 22 04:01:03 compute-0 ceph-mon[75011]: pgmap v1419: 305 pgs: 305 active+clean; 266 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 736 KiB/s rd, 548 KiB/s wr, 78 op/s
Nov 22 04:01:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Nov 22 04:01:03 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Nov 22 04:01:03 compute-0 nova_compute[253461]: 2025-11-22 04:01:03.990 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 266 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 642 KiB/s rd, 467 KiB/s wr, 87 op/s
Nov 22 04:01:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Nov 22 04:01:04 compute-0 ceph-mon[75011]: osdmap e337: 3 total, 3 up, 3 in
Nov 22 04:01:04 compute-0 ceph-mon[75011]: pgmap v1421: 305 pgs: 305 active+clean; 266 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 642 KiB/s rd, 467 KiB/s wr, 87 op/s
Nov 22 04:01:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Nov 22 04:01:04 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Nov 22 04:01:04 compute-0 nova_compute[253461]: 2025-11-22 04:01:04.492 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:05 compute-0 ceph-mon[75011]: osdmap e338: 3 total, 3 up, 3 in
Nov 22 04:01:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:01:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655817767' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:01:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655817767' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Nov 22 04:01:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Nov 22 04:01:05 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Nov 22 04:01:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:01:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:01:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:01:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:01:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:01:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:01:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 266 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 75 KiB/s wr, 36 op/s
Nov 22 04:01:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3655817767' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3655817767' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:06 compute-0 ceph-mon[75011]: osdmap e339: 3 total, 3 up, 3 in
Nov 22 04:01:06 compute-0 ceph-mon[75011]: pgmap v1424: 305 pgs: 305 active+clean; 266 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 75 KiB/s wr, 36 op/s
Nov 22 04:01:06 compute-0 podman[281092]: 2025-11-22 04:01:06.436443919 +0000 UTC m=+0.093803413 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible)
Nov 22 04:01:07 compute-0 nova_compute[253461]: 2025-11-22 04:01:07.324 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:07 compute-0 nova_compute[253461]: 2025-11-22 04:01:07.343 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Nov 22 04:01:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Nov 22 04:01:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Nov 22 04:01:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 266 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.8 KiB/s wr, 38 op/s
Nov 22 04:01:09 compute-0 nova_compute[253461]: 2025-11-22 04:01:09.040 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:09 compute-0 ceph-mon[75011]: osdmap e340: 3 total, 3 up, 3 in
Nov 22 04:01:09 compute-0 ceph-mon[75011]: pgmap v1426: 305 pgs: 305 active+clean; 266 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.8 KiB/s wr, 38 op/s
Nov 22 04:01:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Nov 22 04:01:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Nov 22 04:01:10 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Nov 22 04:01:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 269 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 412 KiB/s rd, 395 KiB/s wr, 56 op/s
Nov 22 04:01:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:11 compute-0 ceph-mon[75011]: osdmap e341: 3 total, 3 up, 3 in
Nov 22 04:01:11 compute-0 ceph-mon[75011]: pgmap v1428: 305 pgs: 305 active+clean; 269 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 412 KiB/s rd, 395 KiB/s wr, 56 op/s
Nov 22 04:01:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:01:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1356354133' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:01:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1356354133' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1356354133' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1356354133' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 269 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 421 KiB/s rd, 379 KiB/s wr, 101 op/s
Nov 22 04:01:12 compute-0 nova_compute[253461]: 2025-11-22 04:01:12.345 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:13 compute-0 ceph-mon[75011]: pgmap v1429: 305 pgs: 305 active+clean; 269 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 421 KiB/s rd, 379 KiB/s wr, 101 op/s
Nov 22 04:01:14 compute-0 nova_compute[253461]: 2025-11-22 04:01:14.092 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Nov 22 04:01:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1430: 305 pgs: 305 active+clean; 269 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 338 KiB/s rd, 302 KiB/s wr, 88 op/s
Nov 22 04:01:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Nov 22 04:01:14 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Nov 22 04:01:14 compute-0 ceph-mon[75011]: pgmap v1430: 305 pgs: 305 active+clean; 269 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 338 KiB/s rd, 302 KiB/s wr, 88 op/s
Nov 22 04:01:15 compute-0 ceph-mon[75011]: osdmap e342: 3 total, 3 up, 3 in
Nov 22 04:01:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e342 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Nov 22 04:01:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Nov 22 04:01:16 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Nov 22 04:01:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 269 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 14 KiB/s wr, 61 op/s
Nov 22 04:01:17 compute-0 ceph-mon[75011]: osdmap e343: 3 total, 3 up, 3 in
Nov 22 04:01:17 compute-0 ceph-mon[75011]: pgmap v1433: 305 pgs: 305 active+clean; 269 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 14 KiB/s wr, 61 op/s
Nov 22 04:01:17 compute-0 nova_compute[253461]: 2025-11-22 04:01:17.348 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:17 compute-0 nova_compute[253461]: 2025-11-22 04:01:17.958 253465 DEBUG oslo_concurrency.lockutils [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "45b06598-5fca-47e2-962e-824755f52a2b" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:17 compute-0 nova_compute[253461]: 2025-11-22 04:01:17.958 253465 DEBUG oslo_concurrency.lockutils [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:18 compute-0 nova_compute[253461]: 2025-11-22 04:01:18.063 253465 DEBUG nova.objects.instance [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lazy-loading 'flavor' on Instance uuid 45b06598-5fca-47e2-962e-824755f52a2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:01:18 compute-0 nova_compute[253461]: 2025-11-22 04:01:18.288 253465 DEBUG oslo_concurrency.lockutils [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
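The three lockutils entries bracket nova's per-instance critical section: a lock named after the instance UUID is acquired, do_reserve picks the next free device name, and the lock is released 0.330s later. The same pattern with oslo.concurrency, as a sketch (the UUID is the one from the log; the body is a placeholder):

    from oslo_concurrency import lockutils

    def do_reserve():
        # Placeholder for the work nova serializes per instance.
        pass

    # One lock per instance UUID keeps concurrent attach requests ordered.
    with lockutils.lock('45b06598-5fca-47e2-962e-824755f52a2b'):
        do_reserve()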
Nov 22 04:01:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 269 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 12 KiB/s wr, 68 op/s
Nov 22 04:01:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Nov 22 04:01:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Nov 22 04:01:18 compute-0 ceph-mon[75011]: pgmap v1434: 305 pgs: 305 active+clean; 269 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 12 KiB/s wr, 68 op/s
Nov 22 04:01:18 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.095 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.493 253465 DEBUG oslo_concurrency.lockutils [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "45b06598-5fca-47e2-962e-824755f52a2b" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.493 253465 DEBUG oslo_concurrency.lockutils [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.494 253465 INFO nova.compute.manager [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Attaching volume 19d0e76e-a3d0-421f-a3ac-433d4e318c8e to /dev/vdb
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.641 253465 DEBUG os_brick.utils [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.642 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.658 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.659 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac9b3cf-e287-4415-aa79-2efda00e9b39]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.660 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.670 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.670 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[69a6d604-9df0-4857-a397-c0365b807baf]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.671 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.681 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.681 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[5261ad47-5605-49d1-92a8-2e4cd9795393]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.682 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a03c4f-2b4b-47c9-8d9b-8a3ce9e34422]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.682 253465 DEBUG oslo_concurrency.processutils [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.705 253465 DEBUG oslo_concurrency.processutils [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.708 253465 DEBUG os_brick.initiator.connectors.lightos [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.708 253465 DEBUG os_brick.initiator.connectors.lightos [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.708 253465 DEBUG os_brick.initiator.connectors.lightos [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.709 253465 DEBUG os_brick.utils [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
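The <== trace above shows what the preceding probes were for: get_connector_properties bundles the host's identities (iSCSI IQN from initiatorname.iscsi, NVMe hostnqn/hostid, system uuid) and the multipath flags into one dict that nova hands to Cinder so the backend can build connection_info. Calling it directly looks roughly like this sketch, with the arguments copied from the ==> trace (requires os-brick and the logged rootwrap setup):

    from os_brick.initiator import connector

    # Arguments mirror the traced call above.
    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    print(props['initiator'], props.get('nqn'))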
Nov 22 04:01:19 compute-0 nova_compute[253461]: 2025-11-22 04:01:19.709 253465 DEBUG nova.virt.block_device [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Updating existing volume attachment record: 23ab82a9-0463-4e0c-9852-b7126688d239 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:01:19 compute-0 ceph-mon[75011]: osdmap e344: 3 total, 3 up, 3 in
Nov 22 04:01:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 270 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 45 KiB/s wr, 37 op/s
Nov 22 04:01:20 compute-0 podman[281120]: 2025-11-22 04:01:20.429368156 +0000 UTC m=+0.092652497 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 04:01:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:01:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4113898265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:01:20 compute-0 podman[281121]: 2025-11-22 04:01:20.465394967 +0000 UTC m=+0.123239686 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
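These health_status events carry the whole edpm_ansible container definition as labels; config_data in particular is a Python-literal dict (environment, healthcheck, image, volumes). A sketch for pulling it back out of a running container, assuming podman inspect is available and the label survives exactly as shown:

    import ast
    import json
    import subprocess

    # Read the labels of the ovn_controller container and decode the
    # Python-literal 'config_data' label seen in the events above.
    meta = json.loads(subprocess.run(
        ['podman', 'inspect', 'ovn_controller'],
        check=True, capture_output=True, text=True).stdout)
    config = ast.literal_eval(meta[0]['Config']['Labels']['config_data'])
    print(config['image'])
    print(config['healthcheck']['test'])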
Nov 22 04:01:20 compute-0 nova_compute[253461]: 2025-11-22 04:01:20.843 253465 DEBUG nova.objects.instance [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lazy-loading 'flavor' on Instance uuid 45b06598-5fca-47e2-962e-824755f52a2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:01:20 compute-0 ceph-mon[75011]: pgmap v1436: 305 pgs: 305 active+clean; 270 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 45 KiB/s wr, 37 op/s
Nov 22 04:01:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4113898265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:01:20 compute-0 nova_compute[253461]: 2025-11-22 04:01:20.934 253465 DEBUG nova.virt.libvirt.driver [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Attempting to attach volume 19d0e76e-a3d0-421f-a3ac-433d4e318c8e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 22 04:01:20 compute-0 nova_compute[253461]: 2025-11-22 04:01:20.940 253465 DEBUG nova.virt.libvirt.guest [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 04:01:20 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:01:20 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-19d0e76e-a3d0-421f-a3ac-433d4e318c8e">
Nov 22 04:01:20 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:01:20 compute-0 nova_compute[253461]:   </source>
Nov 22 04:01:20 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 04:01:20 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:01:20 compute-0 nova_compute[253461]:   </auth>
Nov 22 04:01:20 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:01:20 compute-0 nova_compute[253461]:   <serial>19d0e76e-a3d0-421f-a3ac-433d4e318c8e</serial>
Nov 22 04:01:20 compute-0 nova_compute[253461]: </disk>
Nov 22 04:01:20 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
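The attach XML above is a complete rbd-backed <disk> element: qemu raw driver with cache=none and discard=unmap, the volume image served by the monitor at 192.168.122.100:6789, cephx auth through the libvirt secret whose uuid matches the cluster fsid, and target vdb on the virtio bus (which is why the driver warned two lines earlier that discard will not be honoured). Rebuilding the same element programmatically, purely as an illustration with every value taken from the log:

    import xml.etree.ElementTree as ET

    disk = ET.Element('disk', type='network', device='disk')
    ET.SubElement(disk, 'driver', name='qemu', type='raw',
                  cache='none', discard='unmap')
    src = ET.SubElement(
        disk, 'source', protocol='rbd',
        name='volumes/volume-19d0e76e-a3d0-421f-a3ac-433d4e318c8e')
    ET.SubElement(src, 'host', name='192.168.122.100', port='6789')
    auth = ET.SubElement(disk, 'auth', username='openstack')
    ET.SubElement(auth, 'secret', type='ceph',
                  uuid='7adcc38b-6484-5de6-b879-33a0309153df')
    ET.SubElement(disk, 'target', dev='vdb', bus='virtio')
    ET.SubElement(disk, 'serial').text = '19d0e76e-a3d0-421f-a3ac-433d4e318c8e'
    print(ET.tostring(disk, encoding='unicode'))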
Nov 22 04:01:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:21 compute-0 nova_compute[253461]: 2025-11-22 04:01:21.351 253465 DEBUG nova.virt.libvirt.driver [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:01:21 compute-0 nova_compute[253461]: 2025-11-22 04:01:21.353 253465 DEBUG nova.virt.libvirt.driver [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:01:21 compute-0 nova_compute[253461]: 2025-11-22 04:01:21.354 253465 DEBUG nova.virt.libvirt.driver [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:01:21 compute-0 nova_compute[253461]: 2025-11-22 04:01:21.354 253465 DEBUG nova.virt.libvirt.driver [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] No VIF found with MAC fa:16:3e:c1:e4:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:01:22 compute-0 nova_compute[253461]: 2025-11-22 04:01:22.101 253465 DEBUG oslo_concurrency.lockutils [None req-42101bf4-28a6-451c-a54f-499e24f8273a 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 270 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 35 KiB/s wr, 38 op/s
Nov 22 04:01:22 compute-0 nova_compute[253461]: 2025-11-22 04:01:22.351 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:22 compute-0 nova_compute[253461]: 2025-11-22 04:01:22.417 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:22 compute-0 nova_compute[253461]: 2025-11-22 04:01:22.418 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:22 compute-0 nova_compute[253461]: 2025-11-22 04:01:22.470 253465 DEBUG nova.compute.manager [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:01:22 compute-0 ceph-mon[75011]: pgmap v1437: 305 pgs: 305 active+clean; 270 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 35 KiB/s wr, 38 op/s
Nov 22 04:01:22 compute-0 nova_compute[253461]: 2025-11-22 04:01:22.585 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:22 compute-0 nova_compute[253461]: 2025-11-22 04:01:22.586 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:22 compute-0 nova_compute[253461]: 2025-11-22 04:01:22.594 253465 DEBUG nova.virt.hardware [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:01:22 compute-0 nova_compute[253461]: 2025-11-22 04:01:22.594 253465 INFO nova.compute.claims [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:01:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:01:22 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1360294168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:01:22 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1360294168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:22 compute-0 nova_compute[253461]: 2025-11-22 04:01:22.751 253465 DEBUG oslo_concurrency.processutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:23.012 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:23.013 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:23.014 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:01:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557853751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.191 253465 DEBUG oslo_concurrency.processutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
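For pool capacity the resource tracker shells out rather than binding librados; the exact invocation and its 0.440s runtime are logged above. The equivalent call from the standard library, as a sketch (needs a reachable cluster and the client.openstack keyring):

    import json
    import subprocess

    # Same command nova logged above.
    out = subprocess.run(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'],
        check=True, capture_output=True, text=True).stdout
    stats = json.loads(out)
    # 'stats' and 'pools' are the top-level keys of ceph df's JSON output.
    print(stats['stats']['total_bytes'], 'bytes total')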
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.197 253465 DEBUG nova.compute.provider_tree [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.226 253465 DEBUG nova.scheduler.client.report [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
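The inventory dict compared above carries, per resource class, total, reserved, unit bounds, and allocation_ratio; placement treats (total - reserved) * allocation_ratio as the usable capacity, so this host advertises 32 VCPU, 7167 MB of RAM, and about 52 GB of disk. Checking the arithmetic over the logged values:

    # Usable capacity per resource class: (total - reserved) * allocation_ratio.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2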
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.254 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.255 253465 DEBUG nova.compute.manager [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.393 253465 DEBUG nova.compute.manager [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.393 253465 DEBUG nova.network.neutron [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.422 253465 INFO nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.457 253465 DEBUG nova.compute.manager [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:01:23 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1360294168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:23 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1360294168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:23 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1557853751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.588 253465 DEBUG nova.policy [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '45ccef35c0c843a59c9dfd0eb67190a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '83cc5de7368b40b984b51f781e85343c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.592 253465 INFO nova.virt.block_device [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Booting with volume 202b4f7c-2f66-480b-8c4a-883e86f01bb9 at /dev/vda
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.738 253465 DEBUG os_brick.utils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.741 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.759 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.759 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[b971c945-83bc-44f0-a40f-0dfae5788e38]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.762 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.771 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.772 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[cded886a-af12-4648-b8d6-36c16ce1a2ba]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.773 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.785 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.785 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[3d476d22-6809-4aa8-b62f-dea2ed08e22f]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.787 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[67660e11-9205-4b5b-b21f-4925b7455a3f]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.788 253465 DEBUG oslo_concurrency.processutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.817 253465 DEBUG oslo_concurrency.processutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.819 253465 DEBUG os_brick.initiator.connectors.lightos [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.819 253465 DEBUG os_brick.initiator.connectors.lightos [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.819 253465 DEBUG os_brick.initiator.connectors.lightos [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.820 253465 DEBUG os_brick.utils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] <== get_connector_properties: return (80ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
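The ==>/<== pair above is os_brick's trace decorator around get_connector_properties(), which gathers the host-side initiator identity (the iSCSI IQN read from /etc/iscsi/initiatorname.iscsi, the NVMe host NQN and ID, multipath support probed via multipathd) that nova hands to cinder when updating the volume attachment. A sketch of the same call as a library consumer, using the arguments traced in the log:

    # Sketch: gather host connector properties the way nova does before
    # creating/updating a volume attachment; values mirror the call above.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    # props includes 'initiator' (iSCSI IQN), 'nqn'/'nvme_hostid'
    # (NVMe-oF identity) and 'multipath', as in the <== return above.
    print(props['initiator'])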
Nov 22 04:01:23 compute-0 nova_compute[253461]: 2025-11-22 04:01:23.820 253465 DEBUG nova.virt.block_device [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Updating existing volume attachment record: 6e0b141f-680f-4669-9618-df9b99fc1101 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.099 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.303 253465 DEBUG oslo_concurrency.lockutils [None req-d30611a1-3f1f-481c-ac06-c3fcec8051ec 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "45b06598-5fca-47e2-962e-824755f52a2b" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.303 253465 DEBUG oslo_concurrency.lockutils [None req-d30611a1-3f1f-481c-ac06-c3fcec8051ec 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.317 253465 INFO nova.compute.manager [None req-d30611a1-3f1f-481c-ac06-c3fcec8051ec 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Detaching volume 19d0e76e-a3d0-421f-a3ac-433d4e318c8e
Nov 22 04:01:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 270 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 35 KiB/s wr, 54 op/s
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.432 253465 DEBUG nova.network.neutron [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Successfully created port: 03db0410-4a38-4cb4-ac6c-c8112237d93f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.459 253465 INFO nova.virt.block_device [None req-d30611a1-3f1f-481c-ac06-c3fcec8051ec 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Attempting to driver detach volume 19d0e76e-a3d0-421f-a3ac-433d4e318c8e from mountpoint /dev/vdb
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.472 253465 DEBUG nova.virt.libvirt.driver [None req-d30611a1-3f1f-481c-ac06-c3fcec8051ec 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Attempting to detach device vdb from instance 45b06598-5fca-47e2-962e-824755f52a2b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.473 253465 DEBUG nova.virt.libvirt.guest [None req-d30611a1-3f1f-481c-ac06-c3fcec8051ec 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:01:24 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:01:24 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-19d0e76e-a3d0-421f-a3ac-433d4e318c8e">
Nov 22 04:01:24 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:01:24 compute-0 nova_compute[253461]:   </source>
Nov 22 04:01:24 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:01:24 compute-0 nova_compute[253461]:   <serial>19d0e76e-a3d0-421f-a3ac-433d4e318c8e</serial>
Nov 22 04:01:24 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:01:24 compute-0 nova_compute[253461]: </disk>
Nov 22 04:01:24 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.492 253465 INFO nova.virt.libvirt.driver [None req-d30611a1-3f1f-481c-ac06-c3fcec8051ec 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Successfully detached device vdb from instance 45b06598-5fca-47e2-962e-824755f52a2b from the persistent domain config.
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.493 253465 DEBUG nova.virt.libvirt.driver [None req-d30611a1-3f1f-481c-ac06-c3fcec8051ec 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 45b06598-5fca-47e2-962e-824755f52a2b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.494 253465 DEBUG nova.virt.libvirt.guest [None req-d30611a1-3f1f-481c-ac06-c3fcec8051ec 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:01:24 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:01:24 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-19d0e76e-a3d0-421f-a3ac-433d4e318c8e">
Nov 22 04:01:24 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:01:24 compute-0 nova_compute[253461]:   </source>
Nov 22 04:01:24 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:01:24 compute-0 nova_compute[253461]:   <serial>19d0e76e-a3d0-421f-a3ac-433d4e318c8e</serial>
Nov 22 04:01:24 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:01:24 compute-0 nova_compute[253461]: </disk>
Nov 22 04:01:24 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:01:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:01:24 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3761826388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:01:24 compute-0 ceph-mon[75011]: pgmap v1438: 305 pgs: 305 active+clean; 270 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 35 KiB/s wr, 54 op/s
Nov 22 04:01:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3761826388' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.610 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763784084.6094785, 45b06598-5fca-47e2-962e-824755f52a2b => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.611 253465 DEBUG nova.virt.libvirt.driver [None req-d30611a1-3f1f-481c-ac06-c3fcec8051ec 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 45b06598-5fca-47e2-962e-824755f52a2b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.614 253465 INFO nova.virt.libvirt.driver [None req-d30611a1-3f1f-481c-ac06-c3fcec8051ec 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Successfully detached device vdb from instance 45b06598-5fca-47e2-962e-824755f52a2b from the live domain config.
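The detach sequence above is nova's two-phase device removal: the disk is first dropped from the persistent domain definition, then from the live domain (with up to 8 retries), and nova then waits for libvirt's DEVICE_REMOVED event (the DeviceRemovedEvent for virtio-disk1) before telling cinder the volume is free. A minimal libvirt-python sketch of the two underlying calls, reusing the disk XML logged above; the domain name comes from the systemd scope for this instance (id 14 => instance-0000000e):

    # Sketch of nova's two-phase detach with libvirt-python.
    # 'disk_xml' stands for the full <disk type="network"...> element
    # from the log; it is abbreviated here.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000000e')

    disk_xml = '<disk type="network" device="disk">...</disk>'

    # Persistent config first, then the running guest. Live detach is
    # asynchronous, hence nova's wait for DEVICE_REMOVED afterwards.
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)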
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.762 253465 DEBUG nova.objects.instance [None req-d30611a1-3f1f-481c-ac06-c3fcec8051ec 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lazy-loading 'flavor' on Instance uuid 45b06598-5fca-47e2-962e-824755f52a2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.814 253465 DEBUG oslo_concurrency.lockutils [None req-d30611a1-3f1f-481c-ac06-c3fcec8051ec 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
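The Acquiring/acquired/released triple around do_detach_volume is oslo.concurrency's per-instance-UUID lock, which serializes compute operations on one instance; note that the terminate request at 04:01:26 takes the same "45b06598-..." lock only after this release. Nova reaches this through its own synchronized helper; the underlying lockutils idiom looks like the sketch below (function name hypothetical):

    # Sketch of the locking idiom seen above: serialize work on a single
    # instance by locking on its UUID.
    from oslo_concurrency import lockutils

    instance_uuid = '45b06598-5fca-47e2-962e-824755f52a2b'

    @lockutils.synchronized(instance_uuid)
    def do_detach_volume():
        # detach work happens here; a concurrent terminate_instance on
        # the same UUID blocks until this returns.
        pass

    do_detach_volume()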
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.836 253465 DEBUG nova.compute.manager [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.837 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.838 253465 INFO nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Creating image(s)
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.838 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.839 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Ensure instance console log exists: /var/lib/nova/instances/e6504c9f-4e62-4cc8-9bb0-de2af483fe9e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.839 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.839 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:24 compute-0 nova_compute[253461]: 2025-11-22 04:01:24.839 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:25 compute-0 nova_compute[253461]: 2025-11-22 04:01:25.331 253465 DEBUG nova.network.neutron [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Successfully updated port: 03db0410-4a38-4cb4-ac6c-c8112237d93f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:01:25 compute-0 nova_compute[253461]: 2025-11-22 04:01:25.355 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "refresh_cache-e6504c9f-4e62-4cc8-9bb0-de2af483fe9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:01:25 compute-0 nova_compute[253461]: 2025-11-22 04:01:25.355 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquired lock "refresh_cache-e6504c9f-4e62-4cc8-9bb0-de2af483fe9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:01:25 compute-0 nova_compute[253461]: 2025-11-22 04:01:25.355 253465 DEBUG nova.network.neutron [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:01:25 compute-0 nova_compute[253461]: 2025-11-22 04:01:25.447 253465 DEBUG nova.compute.manager [req-11b2f31c-5391-4ba3-bf17-ae445b1e92bd req-21a81c67-118e-435e-8b50-aed75df69e4f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Received event network-changed-03db0410-4a38-4cb4-ac6c-c8112237d93f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:25 compute-0 nova_compute[253461]: 2025-11-22 04:01:25.447 253465 DEBUG nova.compute.manager [req-11b2f31c-5391-4ba3-bf17-ae445b1e92bd req-21a81c67-118e-435e-8b50-aed75df69e4f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Refreshing instance network info cache due to event network-changed-03db0410-4a38-4cb4-ac6c-c8112237d93f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:01:25 compute-0 nova_compute[253461]: 2025-11-22 04:01:25.447 253465 DEBUG oslo_concurrency.lockutils [req-11b2f31c-5391-4ba3-bf17-ae445b1e92bd req-21a81c67-118e-435e-8b50-aed75df69e4f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-e6504c9f-4e62-4cc8-9bb0-de2af483fe9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:01:25 compute-0 nova_compute[253461]: 2025-11-22 04:01:25.497 253465 DEBUG nova.network.neutron [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:01:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Nov 22 04:01:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Nov 22 04:01:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Nov 22 04:01:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e345 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Nov 22 04:01:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Nov 22 04:01:26 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.212 253465 DEBUG nova.compute.manager [req-eca42a3f-f45b-40d9-bc01-2fb382395253 req-d689b750-6f70-4ea5-989d-09d364797919 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Received event network-changed-2f6b03ee-33c1-4a13-813c-b794d61056dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.213 253465 DEBUG nova.compute.manager [req-eca42a3f-f45b-40d9-bc01-2fb382395253 req-d689b750-6f70-4ea5-989d-09d364797919 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Refreshing instance network info cache due to event network-changed-2f6b03ee-33c1-4a13-813c-b794d61056dd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.213 253465 DEBUG oslo_concurrency.lockutils [req-eca42a3f-f45b-40d9-bc01-2fb382395253 req-d689b750-6f70-4ea5-989d-09d364797919 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-45b06598-5fca-47e2-962e-824755f52a2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.213 253465 DEBUG oslo_concurrency.lockutils [req-eca42a3f-f45b-40d9-bc01-2fb382395253 req-d689b750-6f70-4ea5-989d-09d364797919 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-45b06598-5fca-47e2-962e-824755f52a2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.213 253465 DEBUG nova.network.neutron [req-eca42a3f-f45b-40d9-bc01-2fb382395253 req-d689b750-6f70-4ea5-989d-09d364797919 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Refreshing network info cache for port 2f6b03ee-33c1-4a13-813c-b794d61056dd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.318 253465 DEBUG oslo_concurrency.lockutils [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "45b06598-5fca-47e2-962e-824755f52a2b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.318 253465 DEBUG oslo_concurrency.lockutils [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.319 253465 DEBUG oslo_concurrency.lockutils [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "45b06598-5fca-47e2-962e-824755f52a2b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.319 253465 DEBUG oslo_concurrency.lockutils [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.319 253465 DEBUG oslo_concurrency.lockutils [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.321 253465 INFO nova.compute.manager [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Terminating instance
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.323 253465 DEBUG nova.compute.manager [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:01:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 270 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 223 KiB/s rd, 37 KiB/s wr, 37 op/s
Nov 22 04:01:26 compute-0 kernel: tap2f6b03ee-33 (unregistering): left promiscuous mode
Nov 22 04:01:26 compute-0 NetworkManager[48916]: <info>  [1763784086.3924] device (tap2f6b03ee-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.405 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:26 compute-0 ovn_controller[152691]: 2025-11-22T04:01:26Z|00161|binding|INFO|Releasing lport 2f6b03ee-33c1-4a13-813c-b794d61056dd from this chassis (sb_readonly=0)
Nov 22 04:01:26 compute-0 ovn_controller[152691]: 2025-11-22T04:01:26Z|00162|binding|INFO|Setting lport 2f6b03ee-33c1-4a13-813c-b794d61056dd down in Southbound
Nov 22 04:01:26 compute-0 ovn_controller[152691]: 2025-11-22T04:01:26Z|00163|binding|INFO|Removing iface tap2f6b03ee-33 ovn-installed in OVS
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.410 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.416 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:e4:3f 10.100.0.10'], port_security=['fa:16:3e:c1:e4:3f 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '45b06598-5fca-47e2-962e-824755f52a2b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a62857fbf8cf446cac9c207ae6750597', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0832384-6d69-4b2e-a587-602048007135', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f9f8761-3ac6-4a72-804a-92d1a0df209a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=2f6b03ee-33c1-4a13-813c-b794d61056dd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.418 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 2f6b03ee-33c1-4a13-813c-b794d61056dd in datapath 4692d97f-32c5-4a6f-a095-ba8dda0baf05 unbound from our chassis
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.419 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4692d97f-32c5-4a6f-a095-ba8dda0baf05
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.428 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.443 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b14310b7-f69a-414a-a41d-f02db744c4ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.456 253465 DEBUG nova.network.neutron [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Updating instance_info_cache with network_info: [{"id": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "address": "fa:16:3e:3b:bb:e2", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03db0410-4a", "ovs_interfaceid": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:01:26 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 22 04:01:26 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 15.753s CPU time.
Nov 22 04:01:26 compute-0 systemd-machined[215728]: Machine qemu-14-instance-0000000e terminated.
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.477 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[8cd8f5d5-235d-4b19-8bd8-d114bf20f190]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.479 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[62095951-5cc4-4bed-8605-ebbd5f5fd7ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.490 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Releasing lock "refresh_cache-e6504c9f-4e62-4cc8-9bb0-de2af483fe9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.490 253465 DEBUG nova.compute.manager [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Instance network_info: |[{"id": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "address": "fa:16:3e:3b:bb:e2", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03db0410-4a", "ovs_interfaceid": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.491 253465 DEBUG oslo_concurrency.lockutils [req-11b2f31c-5391-4ba3-bf17-ae445b1e92bd req-21a81c67-118e-435e-8b50-aed75df69e4f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-e6504c9f-4e62-4cc8-9bb0-de2af483fe9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.491 253465 DEBUG nova.network.neutron [req-11b2f31c-5391-4ba3-bf17-ae445b1e92bd req-21a81c67-118e-435e-8b50-aed75df69e4f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Refreshing network info cache for port 03db0410-4a38-4cb4-ac6c-c8112237d93f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.494 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Start _get_guest_xml network_info=[{"id": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "address": "fa:16:3e:3b:bb:e2", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03db0410-4a", "ovs_interfaceid": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': '6e0b141f-680f-4669-9618-df9b99fc1101', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-202b4f7c-2f66-480b-8c4a-883e86f01bb9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '202b4f7c-2f66-480b-8c4a-883e86f01bb9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e6504c9f-4e62-4cc8-9bb0-de2af483fe9e', 'attached_at': '', 'detached_at': '', 'volume_id': '202b4f7c-2f66-480b-8c4a-883e86f01bb9', 'serial': '202b4f7c-2f66-480b-8c4a-883e86f01bb9'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.498 253465 WARNING nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.504 253465 DEBUG nova.virt.libvirt.host [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.505 253465 DEBUG nova.virt.libvirt.host [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.510 253465 DEBUG nova.virt.libvirt.host [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.510 253465 DEBUG nova.virt.libvirt.host [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.511 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.511 253465 DEBUG nova.virt.hardware [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.511 253465 DEBUG nova.virt.hardware [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.511 253465 DEBUG nova.virt.hardware [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.512 253465 DEBUG nova.virt.hardware [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.512 253465 DEBUG nova.virt.hardware [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.512 253465 DEBUG nova.virt.hardware [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.512 253465 DEBUG nova.virt.hardware [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.512 253465 DEBUG nova.virt.hardware [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.512 253465 DEBUG nova.virt.hardware [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.513 253465 DEBUG nova.virt.hardware [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.513 253465 DEBUG nova.virt.hardware [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.513 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[c7620ea0-16d1-400a-b8ab-f33b3144e05e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.537 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[40d92df7-20b5-42af-81c8-267616158287]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4692d97f-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:6c:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424571, 'reachable_time': 17992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281231, 'error': None, 'target': 'ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.545 253465 DEBUG nova.storage.rbd_utils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image e6504c9f-4e62-4cc8-9bb0-de2af483fe9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
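The "rbd image ... does not exist" DEBUG line above comes from nova's rbd_utils probing for the instance's disk.config image via the python rbd binding, where a missing image raises rbd.ImageNotFound. A sketch of that probe; the pool name 'vms' is an assumption (this deployment's images_rbd_pool setting is not shown in the log):

    # Sketch of the existence check behind nova.storage.rbd_utils:
    # open the image and treat ImageNotFound as "does not exist".
    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          rados_id='openstack')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('vms')  # assumed pool name
        try:
            image = rbd.Image(
                ioctx, 'e6504c9f-4e62-4cc8-9bb0-de2af483fe9e_disk.config')
            image.close()
        except rbd.ImageNotFound:
            pass  # nova logs the DEBUG line above and carries on
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()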
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.554 253465 DEBUG oslo_concurrency.processutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
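The Running cmd/CMD "..." returned pairs throughout this log are emitted by oslo.concurrency's processutils.execute(), which nova uses here to shell out to the ceph CLI for the mon dump rather than going through librados. A sketch of the same invocation:

    # Sketch: run the ceph CLI the way oslo.concurrency logs it above.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    # execute() raises ProcessExecutionError on a non-zero exit status,
    # which is why successful calls are logged as 'returned: 0'.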
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.559 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[db0d973d-9840-429a-a681-be2444a9c131]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4692d97f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 424583, 'tstamp': 424583}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281251, 'error': None, 'target': 'ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4692d97f-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 424588, 'tstamp': 424588}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281251, 'error': None, 'target': 'ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.561 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4692d97f-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.567 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4692d97f-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.568 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.568 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4692d97f-30, col_values=(('external_ids', {'iface-id': '30338b02-a11d-4ec7-8237-9f070233f5bd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.569 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
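The DelPortCommand/AddPortCommand/DbSetCommand transactions above are the OVN metadata agent re-plugging its tap4692d97f-30 metadata port into br-int and stamping the Interface row's external_ids with the iface-id that OVN binds on; the "Transaction caused no change" replies show the rows were already in the desired state. A sketch of the same operations through ovsdbapp's API, assuming a TCP OVSDB endpoint (the agent actually uses the local unix socket):

    # Sketch of the ovsdbapp transaction pattern in the log above.
    # The 'tcp:127.0.0.1:6640' endpoint is an assumption.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640',
                                          'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tap4692d97f-30', bridge='br-ex',
                             if_exists=True))
        txn.add(ovs.add_port('br-int', 'tap4692d97f-30', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tap4692d97f-30',
            ('external_ids',
             {'iface-id': '30338b02-a11d-4ec7-8237-9f070233f5bd'})))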
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.583 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:26 compute-0 ceph-mon[75011]: osdmap e345: 3 total, 3 up, 3 in
Nov 22 04:01:26 compute-0 ceph-mon[75011]: osdmap e346: 3 total, 3 up, 3 in
Nov 22 04:01:26 compute-0 ceph-mon[75011]: pgmap v1441: 305 pgs: 305 active+clean; 270 MiB data, 468 MiB used, 60 GiB / 60 GiB avail; 223 KiB/s rd, 37 KiB/s wr, 37 op/s
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.594 253465 INFO nova.virt.libvirt.driver [-] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Instance destroyed successfully.
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.595 253465 DEBUG nova.objects.instance [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lazy-loading 'resources' on Instance uuid 45b06598-5fca-47e2-962e-824755f52a2b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.614 253465 DEBUG nova.virt.libvirt.vif [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:00:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-1926980836',display_name='tempest-TestStampPattern-server-1926980836',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-1926980836',id=14,image_ref='d0ee314f-72f8-4728-88e7-429472591834',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYgn9CTDvmfK+9lwizGtXeEZlSZuA1AJsMHGR/6t8oyy2KLeA+NyxTmeE6fCgDUhF1kETDxpPXjj8wfb8eB/z4sjIcgn3I98Rj3v+7eP88Wa0lihBTXU++d2vPdWMcG3w==',key_name='tempest-TestStampPattern-116986255',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:00:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a62857fbf8cf446cac9c207ae6750597',ramdisk_id='',reservation_id='r-ocxnc1nt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618',image_min_disk='1',image_min_ram='0',image_owner_id='a62857fbf8cf446cac9c207ae6750597',image_owner_project_name='tempest-TestStampPattern-1055115370',image_owner_user_name='tempest-TestStampPattern-1055115370-project-member',image_user_id='0b246fc3abe648cf93dbdc3bd03c5cbb',owner_project_name='tempest-TestStampPattern-1055115370',owner_user_name='tempest-TestStampPattern-1055115370-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:00:40Z,user_data=None,user_id='0b246fc3abe648cf93dbdc3bd03c5cbb',uuid=45b06598-5fca-47e2-962e-824755f52a2b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "address": "fa:16:3e:c1:e4:3f", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6b03ee-33", "ovs_interfaceid": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.615 253465 DEBUG nova.network.os_vif_util [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Converting VIF {"id": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "address": "fa:16:3e:c1:e4:3f", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6b03ee-33", "ovs_interfaceid": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.616 253465 DEBUG nova.network.os_vif_util [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:e4:3f,bridge_name='br-int',has_traffic_filtering=True,id=2f6b03ee-33c1-4a13-813c-b794d61056dd,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6b03ee-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.617 253465 DEBUG os_vif [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:e4:3f,bridge_name='br-int',has_traffic_filtering=True,id=2f6b03ee-33c1-4a13-813c-b794d61056dd,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6b03ee-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.620 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.621 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f6b03ee-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.623 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.626 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.629 253465 INFO os_vif [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:e4:3f,bridge_name='br-int',has_traffic_filtering=True,id=2f6b03ee-33c1-4a13-813c-b794d61056dd,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2f6b03ee-33')
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.867 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:01:26 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:26.869 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:01:26 compute-0 nova_compute[253461]: 2025-11-22 04:01:26.917 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:01:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3822758969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:01:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.008 253465 DEBUG oslo_concurrency.processutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Nov 22 04:01:27 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.152 253465 DEBUG os_brick.encryptors [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Using volume encryption metadata '{'encryption_key_id': 'c2384aee-fad6-46b2-8fc0-76e2fe59a6e5', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-202b4f7c-2f66-480b-8c4a-883e86f01bb9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '202b4f7c-2f66-480b-8c4a-883e86f01bb9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'e6504c9f-4e62-4cc8-9bb0-de2af483fe9e', 'attached_at': '', 'detached_at': '', 'volume_id': '202b4f7c-2f66-480b-8c4a-883e86f01bb9', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.157 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.181 253465 DEBUG barbicanclient.v1.secrets [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.183 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.208 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.209 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.230 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.231 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.252 253465 INFO nova.virt.libvirt.driver [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Deleting instance files /var/lib/nova/instances/45b06598-5fca-47e2-962e-824755f52a2b_del
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.253 253465 INFO nova.virt.libvirt.driver [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Deletion of /var/lib/nova/instances/45b06598-5fca-47e2-962e-824755f52a2b_del complete
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.258 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.259 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.281 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.282 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.304 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.305 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.322 253465 INFO nova.compute.manager [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Took 1.00 seconds to destroy the instance on the hypervisor.
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.323 253465 DEBUG oslo.service.loopingcall [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.323 253465 DEBUG nova.compute.manager [-] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.324 253465 DEBUG nova.network.neutron [-] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.330 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.331 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.352 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.358 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.359 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.379 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.380 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.407 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.408 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.434 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.434 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.459 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.460 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.487 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.487 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.521 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.522 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.547 253465 DEBUG nova.compute.manager [req-c2b0e6ae-c523-4916-8e44-452fb2d4417e req-7c1361b4-e96d-4429-85fb-a928b0559203 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Received event network-vif-unplugged-2f6b03ee-33c1-4a13-813c-b794d61056dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.547 253465 DEBUG oslo_concurrency.lockutils [req-c2b0e6ae-c523-4916-8e44-452fb2d4417e req-7c1361b4-e96d-4429-85fb-a928b0559203 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "45b06598-5fca-47e2-962e-824755f52a2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.548 253465 DEBUG oslo_concurrency.lockutils [req-c2b0e6ae-c523-4916-8e44-452fb2d4417e req-7c1361b4-e96d-4429-85fb-a928b0559203 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.548 253465 DEBUG oslo_concurrency.lockutils [req-c2b0e6ae-c523-4916-8e44-452fb2d4417e req-7c1361b4-e96d-4429-85fb-a928b0559203 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.548 253465 DEBUG nova.compute.manager [req-c2b0e6ae-c523-4916-8e44-452fb2d4417e req-7c1361b4-e96d-4429-85fb-a928b0559203 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] No waiting events found dispatching network-vif-unplugged-2f6b03ee-33c1-4a13-813c-b794d61056dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.549 253465 DEBUG nova.compute.manager [req-c2b0e6ae-c523-4916-8e44-452fb2d4417e req-7c1361b4-e96d-4429-85fb-a928b0559203 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Received event network-vif-unplugged-2f6b03ee-33c1-4a13-813c-b794d61056dd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.549 253465 DEBUG nova.compute.manager [req-c2b0e6ae-c523-4916-8e44-452fb2d4417e req-7c1361b4-e96d-4429-85fb-a928b0559203 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Received event network-vif-plugged-2f6b03ee-33c1-4a13-813c-b794d61056dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.549 253465 DEBUG oslo_concurrency.lockutils [req-c2b0e6ae-c523-4916-8e44-452fb2d4417e req-7c1361b4-e96d-4429-85fb-a928b0559203 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "45b06598-5fca-47e2-962e-824755f52a2b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.549 253465 DEBUG oslo_concurrency.lockutils [req-c2b0e6ae-c523-4916-8e44-452fb2d4417e req-7c1361b4-e96d-4429-85fb-a928b0559203 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.550 253465 DEBUG oslo_concurrency.lockutils [req-c2b0e6ae-c523-4916-8e44-452fb2d4417e req-7c1361b4-e96d-4429-85fb-a928b0559203 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.550 253465 DEBUG nova.compute.manager [req-c2b0e6ae-c523-4916-8e44-452fb2d4417e req-7c1361b4-e96d-4429-85fb-a928b0559203 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] No waiting events found dispatching network-vif-plugged-2f6b03ee-33c1-4a13-813c-b794d61056dd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.550 253465 WARNING nova.compute.manager [req-c2b0e6ae-c523-4916-8e44-452fb2d4417e req-7c1361b4-e96d-4429-85fb-a928b0559203 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Received unexpected event network-vif-plugged-2f6b03ee-33c1-4a13-813c-b794d61056dd for instance with vm_state active and task_state deleting.
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.552 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.552 253465 INFO barbicanclient.base [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Calculated Secrets uuid ref: secrets/c2384aee-fad6-46b2-8fc0-76e2fe59a6e5
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.578 253465 DEBUG barbicanclient.client [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.579 253465 DEBUG nova.virt.libvirt.host [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 22 04:01:27 compute-0 nova_compute[253461]:   <usage type="volume">
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <volume>202b4f7c-2f66-480b-8c4a-883e86f01bb9</volume>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   </usage>
Nov 22 04:01:27 compute-0 nova_compute[253461]: </secret>
Nov 22 04:01:27 compute-0 nova_compute[253461]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.587 253465 DEBUG nova.network.neutron [req-eca42a3f-f45b-40d9-bc01-2fb382395253 req-d689b750-6f70-4ea5-989d-09d364797919 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Updated VIF entry in instance network info cache for port 2f6b03ee-33c1-4a13-813c-b794d61056dd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.587 253465 DEBUG nova.network.neutron [req-eca42a3f-f45b-40d9-bc01-2fb382395253 req-d689b750-6f70-4ea5-989d-09d364797919 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Updating instance_info_cache with network_info: [{"id": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "address": "fa:16:3e:c1:e4:3f", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2f6b03ee-33", "ovs_interfaceid": "2f6b03ee-33c1-4a13-813c-b794d61056dd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:01:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3822758969' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:01:27 compute-0 ceph-mon[75011]: osdmap e347: 3 total, 3 up, 3 in
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.613 253465 DEBUG oslo_concurrency.lockutils [req-eca42a3f-f45b-40d9-bc01-2fb382395253 req-d689b750-6f70-4ea5-989d-09d364797919 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-45b06598-5fca-47e2-962e-824755f52a2b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.626 253465 DEBUG nova.virt.libvirt.vif [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:01:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1411667694',display_name='tempest-TestVolumeBootPattern-server-1411667694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1411667694',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-0cxdq0xv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:01:23Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=e6504c9f-4e62-4cc8-9bb0-de2af483fe9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "address": "fa:16:3e:3b:bb:e2", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03db0410-4a", "ovs_interfaceid": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.626 253465 DEBUG nova.network.os_vif_util [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "address": "fa:16:3e:3b:bb:e2", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03db0410-4a", "ovs_interfaceid": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.628 253465 DEBUG nova.network.os_vif_util [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:bb:e2,bridge_name='br-int',has_traffic_filtering=True,id=03db0410-4a38-4cb4-ac6c-c8112237d93f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03db0410-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.631 253465 DEBUG nova.objects.instance [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'pci_devices' on Instance uuid e6504c9f-4e62-4cc8-9bb0-de2af483fe9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.646 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:01:27 compute-0 nova_compute[253461]:   <uuid>e6504c9f-4e62-4cc8-9bb0-de2af483fe9e</uuid>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   <name>instance-0000000f</name>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <nova:name>tempest-TestVolumeBootPattern-server-1411667694</nova:name>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:01:26</nova:creationTime>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:01:27 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:01:27 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:01:27 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:01:27 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:01:27 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:01:27 compute-0 nova_compute[253461]:         <nova:user uuid="45ccef35c0c843a59c9dfd0eb67190a6">tempest-TestVolumeBootPattern-1584219565-project-member</nova:user>
Nov 22 04:01:27 compute-0 nova_compute[253461]:         <nova:project uuid="83cc5de7368b40b984b51f781e85343c">tempest-TestVolumeBootPattern-1584219565</nova:project>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:01:27 compute-0 nova_compute[253461]:         <nova:port uuid="03db0410-4a38-4cb4-ac6c-c8112237d93f">
Nov 22 04:01:27 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <system>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <entry name="serial">e6504c9f-4e62-4cc8-9bb0-de2af483fe9e</entry>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <entry name="uuid">e6504c9f-4e62-4cc8-9bb0-de2af483fe9e</entry>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     </system>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   <os>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   </os>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   <features>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   </features>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/e6504c9f-4e62-4cc8-9bb0-de2af483fe9e_disk.config">
Nov 22 04:01:27 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       </source>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:01:27 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-202b4f7c-2f66-480b-8c4a-883e86f01bb9">
Nov 22 04:01:27 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       </source>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:01:27 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <serial>202b4f7c-2f66-480b-8c4a-883e86f01bb9</serial>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <encryption format="luks">
Nov 22 04:01:27 compute-0 nova_compute[253461]:         <secret type="passphrase" uuid="6613e191-b10f-4d47-ac2b-dbd37b21bdc5"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       </encryption>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:3b:bb:e2"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <target dev="tap03db0410-4a"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/e6504c9f-4e62-4cc8-9bb0-de2af483fe9e/console.log" append="off"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <video>
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     </video>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:01:27 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:01:27 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:01:27 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:01:27 compute-0 nova_compute[253461]: </domain>
Nov 22 04:01:27 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.646 253465 DEBUG nova.compute.manager [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Preparing to wait for external event network-vif-plugged-03db0410-4a38-4cb4-ac6c-c8112237d93f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.648 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.648 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.649 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.650 253465 DEBUG nova.virt.libvirt.vif [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:01:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1411667694',display_name='tempest-TestVolumeBootPattern-server-1411667694',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1411667694',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-0cxdq0xv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:01:23Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=e6504c9f-4e62-4cc8-9bb0-de2af483fe9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "address": "fa:16:3e:3b:bb:e2", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03db0410-4a", "ovs_interfaceid": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.651 253465 DEBUG nova.network.os_vif_util [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "address": "fa:16:3e:3b:bb:e2", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03db0410-4a", "ovs_interfaceid": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.652 253465 DEBUG nova.network.os_vif_util [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:bb:e2,bridge_name='br-int',has_traffic_filtering=True,id=03db0410-4a38-4cb4-ac6c-c8112237d93f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03db0410-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.653 253465 DEBUG os_vif [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:bb:e2,bridge_name='br-int',has_traffic_filtering=True,id=03db0410-4a38-4cb4-ac6c-c8112237d93f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03db0410-4a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.654 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.655 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.656 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.660 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.661 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03db0410-4a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.661 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap03db0410-4a, col_values=(('external_ids', {'iface-id': '03db0410-4a38-4cb4-ac6c-c8112237d93f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3b:bb:e2', 'vm-uuid': 'e6504c9f-4e62-4cc8-9bb0-de2af483fe9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:27 compute-0 NetworkManager[48916]: <info>  [1763784087.6643] manager: (tap03db0410-4a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.663 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.666 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.672 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.673 253465 INFO os_vif [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:bb:e2,bridge_name='br-int',has_traffic_filtering=True,id=03db0410-4a38-4cb4-ac6c-c8112237d93f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03db0410-4a')
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.741 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.741 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.742 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No VIF found with MAC fa:16:3e:3b:bb:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.742 253465 INFO nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Using config drive
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.774 253465 DEBUG nova.storage.rbd_utils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image e6504c9f-4e62-4cc8-9bb0-de2af483fe9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.827 253465 DEBUG nova.network.neutron [-] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.849 253465 INFO nova.compute.manager [-] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Took 0.52 seconds to deallocate network for instance.
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.856 253465 DEBUG nova.network.neutron [req-11b2f31c-5391-4ba3-bf17-ae445b1e92bd req-21a81c67-118e-435e-8b50-aed75df69e4f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Updated VIF entry in instance network info cache for port 03db0410-4a38-4cb4-ac6c-c8112237d93f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.856 253465 DEBUG nova.network.neutron [req-11b2f31c-5391-4ba3-bf17-ae445b1e92bd req-21a81c67-118e-435e-8b50-aed75df69e4f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Updating instance_info_cache with network_info: [{"id": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "address": "fa:16:3e:3b:bb:e2", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03db0410-4a", "ovs_interfaceid": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.878 253465 DEBUG oslo_concurrency.lockutils [req-11b2f31c-5391-4ba3-bf17-ae445b1e92bd req-21a81c67-118e-435e-8b50-aed75df69e4f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-e6504c9f-4e62-4cc8-9bb0-de2af483fe9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.904 253465 DEBUG oslo_concurrency.lockutils [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:27 compute-0 nova_compute[253461]: 2025-11-22 04:01:27.904 253465 DEBUG oslo_concurrency.lockutils [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:01:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4079852940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:01:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4079852940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.004 253465 DEBUG oslo_concurrency.processutils [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.177 253465 INFO nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Creating config drive at /var/lib/nova/instances/e6504c9f-4e62-4cc8-9bb0-de2af483fe9e/disk.config
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.182 253465 DEBUG oslo_concurrency.processutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e6504c9f-4e62-4cc8-9bb0-de2af483fe9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp351p9l27 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.293 253465 DEBUG nova.compute.manager [req-f7c3ad7b-bf89-43ea-a755-0b4409ab9bd5 req-a156ab1b-a4d6-4a78-b3e6-d7524876baa3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Received event network-vif-deleted-2f6b03ee-33c1-4a13-813c-b794d61056dd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.309 253465 DEBUG oslo_concurrency.processutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e6504c9f-4e62-4cc8-9bb0-de2af483fe9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp351p9l27" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.331 253465 DEBUG nova.storage.rbd_utils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image e6504c9f-4e62-4cc8-9bb0-de2af483fe9e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:01:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 272 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 792 KiB/s rd, 345 KiB/s wr, 55 op/s
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.336 253465 DEBUG oslo_concurrency.processutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e6504c9f-4e62-4cc8-9bb0-de2af483fe9e/disk.config e6504c9f-4e62-4cc8-9bb0-de2af483fe9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:01:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1266421573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.465 253465 DEBUG oslo_concurrency.processutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e6504c9f-4e62-4cc8-9bb0-de2af483fe9e/disk.config e6504c9f-4e62-4cc8-9bb0-de2af483fe9e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.466 253465 INFO nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Deleting local config drive /var/lib/nova/instances/e6504c9f-4e62-4cc8-9bb0-de2af483fe9e/disk.config because it was imported into RBD.
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.471 253465 DEBUG oslo_concurrency.processutils [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.479 253465 DEBUG nova.compute.provider_tree [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.500 253465 DEBUG nova.scheduler.client.report [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:01:28 compute-0 kernel: tap03db0410-4a: entered promiscuous mode
Nov 22 04:01:28 compute-0 systemd-udevd[281218]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:01:28 compute-0 ovn_controller[152691]: 2025-11-22T04:01:28Z|00164|binding|INFO|Claiming lport 03db0410-4a38-4cb4-ac6c-c8112237d93f for this chassis.
Nov 22 04:01:28 compute-0 ovn_controller[152691]: 2025-11-22T04:01:28Z|00165|binding|INFO|03db0410-4a38-4cb4-ac6c-c8112237d93f: Claiming fa:16:3e:3b:bb:e2 10.100.0.10
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.516 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:28 compute-0 NetworkManager[48916]: <info>  [1763784088.5172] manager: (tap03db0410-4a): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.519 253465 DEBUG oslo_concurrency.lockutils [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.615s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.522 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:bb:e2 10.100.0.10'], port_security=['fa:16:3e:3b:bb:e2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e6504c9f-4e62-4cc8-9bb0-de2af483fe9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a457fd1a-7e56-4665-9c38-fd65feb93293', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=03db0410-4a38-4cb4-ac6c-c8112237d93f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.524 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 03db0410-4a38-4cb4-ac6c-c8112237d93f in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 bound to our chassis
Nov 22 04:01:28 compute-0 NetworkManager[48916]: <info>  [1763784088.5269] device (tap03db0410-4a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.526 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:01:28 compute-0 NetworkManager[48916]: <info>  [1763784088.5276] device (tap03db0410-4a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:01:28 compute-0 ovn_controller[152691]: 2025-11-22T04:01:28Z|00166|binding|INFO|Setting lport 03db0410-4a38-4cb4-ac6c-c8112237d93f ovn-installed in OVS
Nov 22 04:01:28 compute-0 ovn_controller[152691]: 2025-11-22T04:01:28Z|00167|binding|INFO|Setting lport 03db0410-4a38-4cb4-ac6c-c8112237d93f up in Southbound
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.534 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.535 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.538 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6096f34f-ef39-4bcd-9261-1216b1a94e98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.540 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4670b112-91 in ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.541 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4670b112-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.541 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[850da4aa-59cf-4ea3-9be8-20466c3b5e8a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.542 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed1221e-0417-432b-bab6-29bd96acb0f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 systemd-machined[215728]: New machine qemu-15-instance-0000000f.
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.543 253465 INFO nova.scheduler.client.report [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Deleted allocations for instance 45b06598-5fca-47e2-962e-824755f52a2b
Nov 22 04:01:28 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.554 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c59591-5538-4338-909e-3e0950292560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.578 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e5f6c35e-414c-4b26-97c0-6bfc9c6e4531]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.603 253465 DEBUG oslo_concurrency.lockutils [None req-08fa4578-f7cc-457d-8c37-28d3aeb3d723 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "45b06598-5fca-47e2-962e-824755f52a2b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:28 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4079852940' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:28 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4079852940' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:28 compute-0 ceph-mon[75011]: pgmap v1443: 305 pgs: 305 active+clean; 272 MiB data, 470 MiB used, 60 GiB / 60 GiB avail; 792 KiB/s rd, 345 KiB/s wr, 55 op/s
Nov 22 04:01:28 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1266421573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.609 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[a459c164-7612-4e0c-8058-a99b8e7f6a8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 NetworkManager[48916]: <info>  [1763784088.6144] manager: (tap4670b112-90): new Veth device (/org/freedesktop/NetworkManager/Devices/88)
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.614 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1890ba90-111b-4f03-aec9-5d2bc41115bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.647 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[abd8bd0a-a9e6-43ab-a411-487c0062fd39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.650 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[2725d757-3376-412b-95b3-0b1d55edb0a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 NetworkManager[48916]: <info>  [1763784088.6696] device (tap4670b112-90): carrier: link connected
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.674 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ee7c7b-e9c5-4fb3-ae7f-557017ef501f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.692 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3e9195-adae-475a-b26e-dad05060e2c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434126, 'reachable_time': 19733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281427, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.708 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[0aabaee1-b428-44eb-9d94-ec8c7f4a340f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:43a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434126, 'tstamp': 434126}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281428, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.722 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[831169b8-9d53-44c6-808e-1ad9d568a44b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 50], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434126, 'reachable_time': 19733, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281429, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.760 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7572d8-ac76-4cf0-963a-decf00f416b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.814 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[18fe467e-a461-49e5-b976-fd89d52635e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.816 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.816 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.816 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4670b112-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:28 compute-0 NetworkManager[48916]: <info>  [1763784088.8187] manager: (tap4670b112-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.817 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:28 compute-0 kernel: tap4670b112-90: entered promiscuous mode
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.826 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4670b112-90, col_values=(('external_ids', {'iface-id': 'e72a94a7-9aac-4cfd-886c-1e1e93834214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:28 compute-0 ovn_controller[152691]: 2025-11-22T04:01:28Z|00168|binding|INFO|Releasing lport e72a94a7-9aac-4cfd-886c-1e1e93834214 from this chassis (sb_readonly=0)
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.836 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:28 compute-0 nova_compute[253461]: 2025-11-22 04:01:28.862 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.863 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.864 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[59684bb5-2276-441f-a5ce-525c6050e523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.865 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.865 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'env', 'PROCESS_TAG=haproxy-4670b112-9f63-4a03-8d79-91f581c69c03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4670b112-9f63-4a03-8d79-91f581c69c03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 04:01:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:28.876 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:29 compute-0 podman[281497]: 2025-11-22 04:01:29.276293508 +0000 UTC m=+0.067453549 container create 48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:01:29 compute-0 podman[281497]: 2025-11-22 04:01:29.234416604 +0000 UTC m=+0.025576695 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:01:29 compute-0 systemd[1]: Started libpod-conmon-48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb.scope.
Nov 22 04:01:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:01:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c02a7a3a03802f2eafc7a535d48a6b19d68c67fab205c774a32392a2396c7b6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:01:29 compute-0 podman[281497]: 2025-11-22 04:01:29.400336527 +0000 UTC m=+0.191496548 container init 48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:01:29 compute-0 podman[281497]: 2025-11-22 04:01:29.413447427 +0000 UTC m=+0.204607438 container start 48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:01:29 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[281512]: [NOTICE]   (281516) : New worker (281518) forked
Nov 22 04:01:29 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[281512]: [NOTICE]   (281516) : Loading success.
Nov 22 04:01:29 compute-0 nova_compute[253461]: 2025-11-22 04:01:29.643 253465 DEBUG nova.compute.manager [req-0fc9183d-c119-43f3-8d73-3f31fca90e60 req-ff60883c-9c49-40a5-a166-cfa6e4fd3dc7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Received event network-vif-plugged-03db0410-4a38-4cb4-ac6c-c8112237d93f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:29 compute-0 nova_compute[253461]: 2025-11-22 04:01:29.644 253465 DEBUG oslo_concurrency.lockutils [req-0fc9183d-c119-43f3-8d73-3f31fca90e60 req-ff60883c-9c49-40a5-a166-cfa6e4fd3dc7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:29 compute-0 nova_compute[253461]: 2025-11-22 04:01:29.644 253465 DEBUG oslo_concurrency.lockutils [req-0fc9183d-c119-43f3-8d73-3f31fca90e60 req-ff60883c-9c49-40a5-a166-cfa6e4fd3dc7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:29 compute-0 nova_compute[253461]: 2025-11-22 04:01:29.644 253465 DEBUG oslo_concurrency.lockutils [req-0fc9183d-c119-43f3-8d73-3f31fca90e60 req-ff60883c-9c49-40a5-a166-cfa6e4fd3dc7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:29 compute-0 nova_compute[253461]: 2025-11-22 04:01:29.645 253465 DEBUG nova.compute.manager [req-0fc9183d-c119-43f3-8d73-3f31fca90e60 req-ff60883c-9c49-40a5-a166-cfa6e4fd3dc7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Processing event network-vif-plugged-03db0410-4a38-4cb4-ac6c-c8112237d93f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:01:29 compute-0 nova_compute[253461]: 2025-11-22 04:01:29.645 253465 DEBUG nova.compute.manager [req-0fc9183d-c119-43f3-8d73-3f31fca90e60 req-ff60883c-9c49-40a5-a166-cfa6e4fd3dc7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Received event network-vif-plugged-03db0410-4a38-4cb4-ac6c-c8112237d93f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:29 compute-0 nova_compute[253461]: 2025-11-22 04:01:29.645 253465 DEBUG oslo_concurrency.lockutils [req-0fc9183d-c119-43f3-8d73-3f31fca90e60 req-ff60883c-9c49-40a5-a166-cfa6e4fd3dc7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:29 compute-0 nova_compute[253461]: 2025-11-22 04:01:29.645 253465 DEBUG oslo_concurrency.lockutils [req-0fc9183d-c119-43f3-8d73-3f31fca90e60 req-ff60883c-9c49-40a5-a166-cfa6e4fd3dc7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:29 compute-0 nova_compute[253461]: 2025-11-22 04:01:29.645 253465 DEBUG oslo_concurrency.lockutils [req-0fc9183d-c119-43f3-8d73-3f31fca90e60 req-ff60883c-9c49-40a5-a166-cfa6e4fd3dc7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:29 compute-0 nova_compute[253461]: 2025-11-22 04:01:29.646 253465 DEBUG nova.compute.manager [req-0fc9183d-c119-43f3-8d73-3f31fca90e60 req-ff60883c-9c49-40a5-a166-cfa6e4fd3dc7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] No waiting events found dispatching network-vif-plugged-03db0410-4a38-4cb4-ac6c-c8112237d93f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:01:29 compute-0 nova_compute[253461]: 2025-11-22 04:01:29.646 253465 WARNING nova.compute.manager [req-0fc9183d-c119-43f3-8d73-3f31fca90e60 req-ff60883c-9c49-40a5-a166-cfa6e4fd3dc7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Received unexpected event network-vif-plugged-03db0410-4a38-4cb4-ac6c-c8112237d93f for instance with vm_state building and task_state spawning.
Nov 22 04:01:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:01:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1374440105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:01:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1374440105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1374440105' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1374440105' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 259 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 598 KiB/s rd, 372 KiB/s wr, 129 op/s
Nov 22 04:01:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Nov 22 04:01:30 compute-0 ceph-mon[75011]: pgmap v1444: 305 pgs: 305 active+clean; 259 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 598 KiB/s rd, 372 KiB/s wr, 129 op/s
Nov 22 04:01:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Nov 22 04:01:30 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Nov 22 04:01:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.562 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784091.561465, e6504c9f-4e62-4cc8-9bb0-de2af483fe9e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.562 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] VM Started (Lifecycle Event)
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.565 253465 DEBUG nova.compute.manager [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.569 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.573 253465 INFO nova.virt.libvirt.driver [-] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Instance spawned successfully.
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.574 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.590 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.596 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.600 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.601 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.601 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.602 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.602 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.603 253465 DEBUG nova.virt.libvirt.driver [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.630 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.631 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784091.5616689, e6504c9f-4e62-4cc8-9bb0-de2af483fe9e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.631 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] VM Paused (Lifecycle Event)
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.670 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.675 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784091.568765, e6504c9f-4e62-4cc8-9bb0-de2af483fe9e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.675 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] VM Resumed (Lifecycle Event)
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.683 253465 INFO nova.compute.manager [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Took 6.85 seconds to spawn the instance on the hypervisor.
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.684 253465 DEBUG nova.compute.manager [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.699 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.704 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.742 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.762 253465 INFO nova.compute.manager [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Took 9.21 seconds to build instance.
Nov 22 04:01:31 compute-0 nova_compute[253461]: 2025-11-22 04:01:31.783 253465 DEBUG oslo_concurrency.lockutils [None req-d86bcf01-1944-4d5c-96b4-eeb8e6937dbb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.365s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Nov 22 04:01:31 compute-0 ceph-mon[75011]: osdmap e348: 3 total, 3 up, 3 in
Nov 22 04:01:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Nov 22 04:01:31 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Nov 22 04:01:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 248 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 626 KiB/s rd, 375 KiB/s wr, 169 op/s
Nov 22 04:01:32 compute-0 nova_compute[253461]: 2025-11-22 04:01:32.384 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:32 compute-0 nova_compute[253461]: 2025-11-22 04:01:32.663 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Nov 22 04:01:32 compute-0 ceph-mon[75011]: osdmap e349: 3 total, 3 up, 3 in
Nov 22 04:01:32 compute-0 ceph-mon[75011]: pgmap v1447: 305 pgs: 305 active+clean; 248 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 626 KiB/s rd, 375 KiB/s wr, 169 op/s
Nov 22 04:01:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Nov 22 04:01:32 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.399 253465 DEBUG oslo_concurrency.lockutils [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.399 253465 DEBUG oslo_concurrency.lockutils [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.400 253465 DEBUG oslo_concurrency.lockutils [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.400 253465 DEBUG oslo_concurrency.lockutils [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.401 253465 DEBUG oslo_concurrency.lockutils [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.402 253465 INFO nova.compute.manager [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Terminating instance
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.403 253465 DEBUG nova.compute.manager [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:01:33 compute-0 kernel: tap03db0410-4a (unregistering): left promiscuous mode
Nov 22 04:01:33 compute-0 NetworkManager[48916]: <info>  [1763784093.4472] device (tap03db0410-4a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:01:33 compute-0 ovn_controller[152691]: 2025-11-22T04:01:33Z|00169|binding|INFO|Releasing lport 03db0410-4a38-4cb4-ac6c-c8112237d93f from this chassis (sb_readonly=0)
Nov 22 04:01:33 compute-0 ovn_controller[152691]: 2025-11-22T04:01:33Z|00170|binding|INFO|Setting lport 03db0410-4a38-4cb4-ac6c-c8112237d93f down in Southbound
Nov 22 04:01:33 compute-0 ovn_controller[152691]: 2025-11-22T04:01:33Z|00171|binding|INFO|Removing iface tap03db0410-4a ovn-installed in OVS
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.461 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.463 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.470 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3b:bb:e2 10.100.0.10'], port_security=['fa:16:3e:3b:bb:e2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e6504c9f-4e62-4cc8-9bb0-de2af483fe9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a457fd1a-7e56-4665-9c38-fd65feb93293', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=03db0410-4a38-4cb4-ac6c-c8112237d93f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.472 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 03db0410-4a38-4cb4-ac6c-c8112237d93f in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 unbound from our chassis
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.474 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4670b112-9f63-4a03-8d79-91f581c69c03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.474 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[52653a63-6361-4ee1-aaee-06835b79afe6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.475 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 namespace which is not needed anymore
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.500 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:33 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 22 04:01:33 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 3.458s CPU time.
Nov 22 04:01:33 compute-0 systemd-machined[215728]: Machine qemu-15-instance-0000000f terminated.
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.625 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.634 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:33 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[281512]: [NOTICE]   (281516) : haproxy version is 2.8.14-c23fe91
Nov 22 04:01:33 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[281512]: [NOTICE]   (281516) : path to executable is /usr/sbin/haproxy
Nov 22 04:01:33 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[281512]: [WARNING]  (281516) : Exiting Master process...
Nov 22 04:01:33 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[281512]: [WARNING]  (281516) : Exiting Master process...
Nov 22 04:01:33 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[281512]: [ALERT]    (281516) : Current worker (281518) exited with code 143 (Terminated)
Nov 22 04:01:33 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[281512]: [WARNING]  (281516) : All workers exited. Exiting... (0)
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.643 253465 INFO nova.virt.libvirt.driver [-] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Instance destroyed successfully.
Nov 22 04:01:33 compute-0 systemd[1]: libpod-48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb.scope: Deactivated successfully.
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.644 253465 DEBUG nova.objects.instance [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'resources' on Instance uuid e6504c9f-4e62-4cc8-9bb0-de2af483fe9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:01:33 compute-0 podman[281557]: 2025-11-22 04:01:33.651782668 +0000 UTC m=+0.051443764 container died 48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.659 253465 DEBUG nova.virt.libvirt.vif [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:01:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1411667694',display_name='tempest-TestVolumeBootPattern-server-1411667694',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1411667694',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:01:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-0cxdq0xv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:01:31Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=e6504c9f-4e62-4cc8-9bb0-de2af483fe9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "address": "fa:16:3e:3b:bb:e2", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03db0410-4a", "ovs_interfaceid": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.660 253465 DEBUG nova.network.os_vif_util [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "address": "fa:16:3e:3b:bb:e2", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03db0410-4a", "ovs_interfaceid": "03db0410-4a38-4cb4-ac6c-c8112237d93f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.662 253465 DEBUG nova.network.os_vif_util [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3b:bb:e2,bridge_name='br-int',has_traffic_filtering=True,id=03db0410-4a38-4cb4-ac6c-c8112237d93f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03db0410-4a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.663 253465 DEBUG os_vif [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:bb:e2,bridge_name='br-int',has_traffic_filtering=True,id=03db0410-4a38-4cb4-ac6c-c8112237d93f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03db0410-4a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.666 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.669 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03db0410-4a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.670 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.672 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.674 253465 INFO os_vif [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3b:bb:e2,bridge_name='br-int',has_traffic_filtering=True,id=03db0410-4a38-4cb4-ac6c-c8112237d93f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03db0410-4a')
Nov 22 04:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb-userdata-shm.mount: Deactivated successfully.
Nov 22 04:01:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c02a7a3a03802f2eafc7a535d48a6b19d68c67fab205c774a32392a2396c7b6-merged.mount: Deactivated successfully.
Nov 22 04:01:33 compute-0 podman[281557]: 2025-11-22 04:01:33.707879215 +0000 UTC m=+0.107540311 container cleanup 48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:01:33 compute-0 systemd[1]: libpod-conmon-48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb.scope: Deactivated successfully.
Nov 22 04:01:33 compute-0 podman[281613]: 2025-11-22 04:01:33.794858194 +0000 UTC m=+0.055735189 container remove 48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.805 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1ea186d5-2c91-4095-ad6e-0c73187e1cdd]: (4, ('Sat Nov 22 04:01:33 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 (48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb)\n48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb\nSat Nov 22 04:01:33 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 (48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb)\n48cd5d8b8524a8589dd7add30b8e6189842f20a0003b3312855ba5142b23dddb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.807 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e24505-5ee6-4f2a-9f10-75d521ec3874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.809 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.811 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:33 compute-0 kernel: tap4670b112-90: left promiscuous mode
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.814 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.817 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[db5b5e94-511a-4a0d-bb91-aaf98f10d1cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.834 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4c3286b5-8a1b-4a5d-8d9d-133c31ca7f9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.836 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f8793f34-52ce-4605-a67a-f3ba0ce73b9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.839 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:33 compute-0 ceph-mon[75011]: osdmap e350: 3 total, 3 up, 3 in
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.852 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d425a7-3f26-4652-bb0f-c90d696fbad4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434120, 'reachable_time': 27507, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281628, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.855 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:01:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:33.855 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[c091ddf8-7001-4908-bd85-8839701d4405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:33 compute-0 systemd[1]: run-netns-ovnmeta\x2d4670b112\x2d9f63\x2d4a03\x2d8d79\x2d91f581c69c03.mount: Deactivated successfully.
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.904 253465 INFO nova.virt.libvirt.driver [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Deleting instance files /var/lib/nova/instances/e6504c9f-4e62-4cc8-9bb0-de2af483fe9e_del
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.905 253465 INFO nova.virt.libvirt.driver [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Deletion of /var/lib/nova/instances/e6504c9f-4e62-4cc8-9bb0-de2af483fe9e_del complete
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.973 253465 INFO nova.compute.manager [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Took 0.57 seconds to destroy the instance on the hypervisor.
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.974 253465 DEBUG oslo.service.loopingcall [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.974 253465 DEBUG nova.compute.manager [-] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:01:33 compute-0 nova_compute[253461]: 2025-11-22 04:01:33.974 253465 DEBUG nova.network.neutron [-] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.246 253465 DEBUG nova.compute.manager [req-0bcc0ad5-2421-4e40-bad8-396935effb0a req-9644696a-0ec1-4555-81a4-3d45bbf604ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Received event network-vif-unplugged-03db0410-4a38-4cb4-ac6c-c8112237d93f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.246 253465 DEBUG oslo_concurrency.lockutils [req-0bcc0ad5-2421-4e40-bad8-396935effb0a req-9644696a-0ec1-4555-81a4-3d45bbf604ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.246 253465 DEBUG oslo_concurrency.lockutils [req-0bcc0ad5-2421-4e40-bad8-396935effb0a req-9644696a-0ec1-4555-81a4-3d45bbf604ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.247 253465 DEBUG oslo_concurrency.lockutils [req-0bcc0ad5-2421-4e40-bad8-396935effb0a req-9644696a-0ec1-4555-81a4-3d45bbf604ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.247 253465 DEBUG nova.compute.manager [req-0bcc0ad5-2421-4e40-bad8-396935effb0a req-9644696a-0ec1-4555-81a4-3d45bbf604ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] No waiting events found dispatching network-vif-unplugged-03db0410-4a38-4cb4-ac6c-c8112237d93f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.247 253465 DEBUG nova.compute.manager [req-0bcc0ad5-2421-4e40-bad8-396935effb0a req-9644696a-0ec1-4555-81a4-3d45bbf604ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Received event network-vif-unplugged-03db0410-4a38-4cb4-ac6c-c8112237d93f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:01:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 190 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 231 KiB/s rd, 36 KiB/s wr, 259 op/s
Nov 22 04:01:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:01:34 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3627738189' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:01:34 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3627738189' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.673 253465 DEBUG nova.network.neutron [-] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.693 253465 INFO nova.compute.manager [-] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Took 0.72 seconds to deallocate network for instance.
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.753 253465 DEBUG oslo_concurrency.lockutils [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.754 253465 DEBUG oslo_concurrency.lockutils [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.754 253465 DEBUG oslo_concurrency.lockutils [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.754 253465 DEBUG oslo_concurrency.lockutils [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.754 253465 DEBUG oslo_concurrency.lockutils [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.755 253465 INFO nova.compute.manager [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Terminating instance
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.756 253465 DEBUG nova.compute.manager [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.804 253465 DEBUG nova.compute.manager [req-67a6fb2e-b517-4521-998d-59ea4d102546 req-55c2ebcf-8117-48d8-8ec4-4c75b4666203 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Received event network-vif-deleted-03db0410-4a38-4cb4-ac6c-c8112237d93f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:34 compute-0 kernel: tap19b8a4fb-a5 (unregistering): left promiscuous mode
Nov 22 04:01:34 compute-0 NetworkManager[48916]: <info>  [1763784094.8121] device (tap19b8a4fb-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.816 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:34 compute-0 ovn_controller[152691]: 2025-11-22T04:01:34Z|00172|binding|INFO|Releasing lport 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a from this chassis (sb_readonly=0)
Nov 22 04:01:34 compute-0 ovn_controller[152691]: 2025-11-22T04:01:34Z|00173|binding|INFO|Setting lport 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a down in Southbound
Nov 22 04:01:34 compute-0 ovn_controller[152691]: 2025-11-22T04:01:34Z|00174|binding|INFO|Removing iface tap19b8a4fb-a5 ovn-installed in OVS
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.818 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:34.823 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:e3:cf 10.100.0.5'], port_security=['fa:16:3e:44:e3:cf 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a62857fbf8cf446cac9c207ae6750597', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b0832384-6d69-4b2e-a587-602048007135', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4f9f8761-3ac6-4a72-804a-92d1a0df209a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=19b8a4fb-a5a7-4112-8511-2aa985a0ae5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:01:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:34.824 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a in datapath 4692d97f-32c5-4a6f-a095-ba8dda0baf05 unbound from our chassis
Nov 22 04:01:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:34.825 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4692d97f-32c5-4a6f-a095-ba8dda0baf05, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:01:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:34.826 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[56956835-8420-4f1c-8748-f7c25d8767be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:34 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:34.827 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05 namespace which is not needed anymore
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.839 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:34 compute-0 ceph-mon[75011]: pgmap v1449: 305 pgs: 305 active+clean; 190 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 231 KiB/s rd, 36 KiB/s wr, 259 op/s
Nov 22 04:01:34 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3627738189' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:34 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3627738189' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.861332) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784094861351, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2547, "num_deletes": 283, "total_data_size": 3538895, "memory_usage": 3594840, "flush_reason": "Manual Compaction"}
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Nov 22 04:01:34 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 22 04:01:34 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 18.488s CPU time.
Nov 22 04:01:34 compute-0 systemd-machined[215728]: Machine qemu-12-instance-0000000c terminated.
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784094893240, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3476506, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26678, "largest_seqno": 29224, "table_properties": {"data_size": 3464580, "index_size": 7908, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 25820, "raw_average_key_size": 21, "raw_value_size": 3440506, "raw_average_value_size": 2928, "num_data_blocks": 340, "num_entries": 1175, "num_filter_entries": 1175, "num_deletions": 283, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763783940, "oldest_key_time": 1763783940, "file_creation_time": 1763784094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 31967 microseconds, and 6529 cpu microseconds.
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.893291) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3476506 bytes OK
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.893313) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.895667) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.895684) EVENT_LOG_v1 {"time_micros": 1763784094895678, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.895701) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3527683, prev total WAL file size 3527683, number of live WAL files 2.
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.896918) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3395KB)], [59(7508KB)]
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784094896974, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 11165303, "oldest_snapshot_seqno": -1}
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.905 253465 INFO nova.compute.manager [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Took 0.21 seconds to detach 1 volumes for instance.
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5818 keys, 9453972 bytes, temperature: kUnknown
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784094949959, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9453972, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9410567, "index_size": 27743, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 145370, "raw_average_key_size": 24, "raw_value_size": 9301414, "raw_average_value_size": 1598, "num_data_blocks": 1124, "num_entries": 5818, "num_filter_entries": 5818, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763784094, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.950 253465 DEBUG oslo_concurrency.lockutils [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.951 253465 DEBUG oslo_concurrency.lockutils [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.950241) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9453972 bytes
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.952373) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.4 rd, 178.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 7.3 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 6372, records dropped: 554 output_compression: NoCompression
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.952402) EVENT_LOG_v1 {"time_micros": 1763784094952389, "job": 32, "event": "compaction_finished", "compaction_time_micros": 53063, "compaction_time_cpu_micros": 19493, "output_level": 6, "num_output_files": 1, "total_output_size": 9453972, "num_input_records": 6372, "num_output_records": 5818, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784094953671, "job": 32, "event": "table_file_deletion", "file_number": 61}
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784094956197, "job": 32, "event": "table_file_deletion", "file_number": 59}
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.896834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.956332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.956341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.956344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.956347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:01:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:01:34.956350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:01:34 compute-0 neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05[279044]: [NOTICE]   (279048) : haproxy version is 2.8.14-c23fe91
Nov 22 04:01:34 compute-0 neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05[279044]: [NOTICE]   (279048) : path to executable is /usr/sbin/haproxy
Nov 22 04:01:34 compute-0 neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05[279044]: [WARNING]  (279048) : Exiting Master process...
Nov 22 04:01:34 compute-0 neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05[279044]: [ALERT]    (279048) : Current worker (279050) exited with code 143 (Terminated)
Nov 22 04:01:34 compute-0 neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05[279044]: [WARNING]  (279048) : All workers exited. Exiting... (0)
Nov 22 04:01:34 compute-0 systemd[1]: libpod-af2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8.scope: Deactivated successfully.
Nov 22 04:01:34 compute-0 conmon[279044]: conmon af2f85c9557056aa3afb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8.scope/container/memory.events
Nov 22 04:01:34 compute-0 podman[281654]: 2025-11-22 04:01:34.968095017 +0000 UTC m=+0.056770836 container died af2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:01:34 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.998 253465 INFO nova.virt.libvirt.driver [-] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Instance destroyed successfully.
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:34.999 253465 DEBUG nova.objects.instance [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lazy-loading 'resources' on Instance uuid fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:01:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-af2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8-userdata-shm.mount: Deactivated successfully.
Nov 22 04:01:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9be3d2b946117ffc4bc9d01158ab12cf31ed33fb5cee30dc990788e710e1ccc-merged.mount: Deactivated successfully.
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.013 253465 DEBUG nova.virt.libvirt.vif [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T03:59:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-696385546',display_name='tempest-TestStampPattern-server-696385546',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-696385546',id=12,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNYgn9CTDvmfK+9lwizGtXeEZlSZuA1AJsMHGR/6t8oyy2KLeA+NyxTmeE6fCgDUhF1kETDxpPXjj8wfb8eB/z4sjIcgn3I98Rj3v+7eP88Wa0lihBTXU++d2vPdWMcG3w==',key_name='tempest-TestStampPattern-116986255',keypairs=<?>,launch_index=0,launched_at=2025-11-22T03:59:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a62857fbf8cf446cac9c207ae6750597',ramdisk_id='',reservation_id='r-xe4gsd50',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-1055115370',owner_user_name='tempest-TestStampPattern-1055115370-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:00:27Z,user_data=None,user_id='0b246fc3abe648cf93dbdc3bd03c5cbb',uuid=fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "address": "fa:16:3e:44:e3:cf", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8a4fb-a5", "ovs_interfaceid": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.013 253465 DEBUG nova.network.os_vif_util [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Converting VIF {"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "address": "fa:16:3e:44:e3:cf", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8a4fb-a5", "ovs_interfaceid": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.014 253465 DEBUG nova.network.os_vif_util [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:44:e3:cf,bridge_name='br-int',has_traffic_filtering=True,id=19b8a4fb-a5a7-4112-8511-2aa985a0ae5a,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8a4fb-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.014 253465 DEBUG os_vif [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:e3:cf,bridge_name='br-int',has_traffic_filtering=True,id=19b8a4fb-a5a7-4112-8511-2aa985a0ae5a,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8a4fb-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.015 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.016 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap19b8a4fb-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.020 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.021 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.023 253465 INFO os_vif [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:e3:cf,bridge_name='br-int',has_traffic_filtering=True,id=19b8a4fb-a5a7-4112-8511-2aa985a0ae5a,network=Network(4692d97f-32c5-4a6f-a095-ba8dda0baf05),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap19b8a4fb-a5')
Nov 22 04:01:35 compute-0 podman[281654]: 2025-11-22 04:01:35.02663368 +0000 UTC m=+0.115309499 container cleanup af2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.040 253465 DEBUG oslo_concurrency.processutils [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:35 compute-0 systemd[1]: libpod-conmon-af2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8.scope: Deactivated successfully.
Nov 22 04:01:35 compute-0 podman[281704]: 2025-11-22 04:01:35.095547193 +0000 UTC m=+0.046771192 container remove af2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:01:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:35.102 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9cde63a3-a0b3-42eb-ae43-3c4a61c4249d]: (4, ('Sat Nov 22 04:01:34 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05 (af2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8)\naf2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8\nSat Nov 22 04:01:35 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05 (af2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8)\naf2f85c9557056aa3afbbe0e64c316eeaa8559fe326c8865cba103d03f3d2ec8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:35.104 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b71d0eb4-728d-4f46-8fe5-74f70d0e30d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:35.105 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4692d97f-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:01:35 compute-0 kernel: tap4692d97f-30: left promiscuous mode
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.107 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.141 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:35.143 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a90e5b-099b-4b57-b3fa-efd8da68cab3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:35.163 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9a3ecaa3-2966-4c8a-8947-dd6736f12bb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:35.165 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1bb6a9-91e7-4e4d-ba54-a03c534a8a04]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:35.184 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2fedc7-2fc7-42b6-872d-6757cc9b9101]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424564, 'reachable_time': 23534, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281739, 'error': None, 'target': 'ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:35.186 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4692d97f-32c5-4a6f-a095-ba8dda0baf05 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:01:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:01:35.186 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[5652e909-f602-4b56-be0a-edb4f8a135ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:35 compute-0 systemd[1]: run-netns-ovnmeta\x2d4692d97f\x2d32c5\x2d4a6f\x2da095\x2dba8dda0baf05.mount: Deactivated successfully.
Nov 22 04:01:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:01:35 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/901969301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.502 253465 DEBUG oslo_concurrency.processutils [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.508 253465 DEBUG nova.compute.provider_tree [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.572 253465 DEBUG nova.scheduler.client.report [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.597 253465 DEBUG oslo_concurrency.lockutils [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.635 253465 INFO nova.scheduler.client.report [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Deleted allocations for instance e6504c9f-4e62-4cc8-9bb0-de2af483fe9e
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.668 253465 INFO nova.virt.libvirt.driver [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Deleting instance files /var/lib/nova/instances/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_del
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.669 253465 INFO nova.virt.libvirt.driver [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Deletion of /var/lib/nova/instances/fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618_del complete
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.725 253465 DEBUG oslo_concurrency.lockutils [None req-e75c27fa-d7e6-4b2b-ae9b-da1d76e7871f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.734 253465 INFO nova.compute.manager [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Took 0.98 seconds to destroy the instance on the hypervisor.
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.734 253465 DEBUG oslo.service.loopingcall [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.735 253465 DEBUG nova.compute.manager [-] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:01:35 compute-0 nova_compute[253461]: 2025-11-22 04:01:35.735 253465 DEBUG nova.network.neutron [-] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:01:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/901969301' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Nov 22 04:01:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Nov 22 04:01:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:01:36
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['backups', 'images', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'vms']
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 190 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 7.6 KiB/s wr, 173 op/s
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.362 253465 DEBUG nova.compute.manager [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Received event network-vif-plugged-03db0410-4a38-4cb4-ac6c-c8112237d93f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.363 253465 DEBUG oslo_concurrency.lockutils [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.363 253465 DEBUG oslo_concurrency.lockutils [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.364 253465 DEBUG oslo_concurrency.lockutils [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "e6504c9f-4e62-4cc8-9bb0-de2af483fe9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.364 253465 DEBUG nova.compute.manager [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] No waiting events found dispatching network-vif-plugged-03db0410-4a38-4cb4-ac6c-c8112237d93f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.364 253465 WARNING nova.compute.manager [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Received unexpected event network-vif-plugged-03db0410-4a38-4cb4-ac6c-c8112237d93f for instance with vm_state deleted and task_state None.
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.365 253465 DEBUG nova.compute.manager [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Received event network-changed-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.365 253465 DEBUG nova.compute.manager [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Refreshing instance network info cache due to event network-changed-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.365 253465 DEBUG oslo_concurrency.lockutils [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.366 253465 DEBUG oslo_concurrency.lockutils [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.366 253465 DEBUG nova.network.neutron [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Refreshing network info cache for port 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.451 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.451 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.452 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.452 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.452 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:01:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:01:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:01:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2041861646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.917 253465 DEBUG nova.compute.manager [req-0bbdac8e-7a1a-496a-a2f1-cde77775edf3 req-500d4b11-2f83-4048-9b12-cd84610a75b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Received event network-vif-unplugged-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.918 253465 DEBUG oslo_concurrency.lockutils [req-0bbdac8e-7a1a-496a-a2f1-cde77775edf3 req-500d4b11-2f83-4048-9b12-cd84610a75b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.918 253465 DEBUG oslo_concurrency.lockutils [req-0bbdac8e-7a1a-496a-a2f1-cde77775edf3 req-500d4b11-2f83-4048-9b12-cd84610a75b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.919 253465 DEBUG oslo_concurrency.lockutils [req-0bbdac8e-7a1a-496a-a2f1-cde77775edf3 req-500d4b11-2f83-4048-9b12-cd84610a75b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.919 253465 DEBUG nova.compute.manager [req-0bbdac8e-7a1a-496a-a2f1-cde77775edf3 req-500d4b11-2f83-4048-9b12-cd84610a75b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] No waiting events found dispatching network-vif-unplugged-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.920 253465 DEBUG nova.compute.manager [req-0bbdac8e-7a1a-496a-a2f1-cde77775edf3 req-500d4b11-2f83-4048-9b12-cd84610a75b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Received event network-vif-unplugged-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.920 253465 DEBUG nova.compute.manager [req-0bbdac8e-7a1a-496a-a2f1-cde77775edf3 req-500d4b11-2f83-4048-9b12-cd84610a75b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Received event network-vif-plugged-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.921 253465 DEBUG oslo_concurrency.lockutils [req-0bbdac8e-7a1a-496a-a2f1-cde77775edf3 req-500d4b11-2f83-4048-9b12-cd84610a75b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.921 253465 DEBUG oslo_concurrency.lockutils [req-0bbdac8e-7a1a-496a-a2f1-cde77775edf3 req-500d4b11-2f83-4048-9b12-cd84610a75b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.922 253465 DEBUG oslo_concurrency.lockutils [req-0bbdac8e-7a1a-496a-a2f1-cde77775edf3 req-500d4b11-2f83-4048-9b12-cd84610a75b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.922 253465 DEBUG nova.compute.manager [req-0bbdac8e-7a1a-496a-a2f1-cde77775edf3 req-500d4b11-2f83-4048-9b12-cd84610a75b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] No waiting events found dispatching network-vif-plugged-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.923 253465 WARNING nova.compute.manager [req-0bbdac8e-7a1a-496a-a2f1-cde77775edf3 req-500d4b11-2f83-4048-9b12-cd84610a75b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Received unexpected event network-vif-plugged-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a for instance with vm_state active and task_state deleting.
Nov 22 04:01:36 compute-0 nova_compute[253461]: 2025-11-22 04:01:36.933 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:37 compute-0 ceph-mon[75011]: osdmap e351: 3 total, 3 up, 3 in
Nov 22 04:01:37 compute-0 ceph-mon[75011]: pgmap v1451: 305 pgs: 305 active+clean; 190 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 171 KiB/s rd, 7.6 KiB/s wr, 173 op/s
Nov 22 04:01:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2041861646' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:37 compute-0 podman[281775]: 2025-11-22 04:01:37.06589685 +0000 UTC m=+0.077575001 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.121 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.122 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4480MB free_disk=59.942630767822266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.122 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.123 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.181 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.181 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.181 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.229 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.300 253465 DEBUG nova.network.neutron [-] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.325 253465 INFO nova.compute.manager [-] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Took 1.59 seconds to deallocate network for instance.
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.367 253465 DEBUG oslo_concurrency.lockutils [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.386 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:01:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2428236900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:01:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2428236900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:01:37 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3079401639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.673 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.679 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.709 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.762 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.762 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.763 253465 DEBUG oslo_concurrency.lockutils [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.821 253465 DEBUG oslo_concurrency.processutils [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.918 253465 DEBUG nova.network.neutron [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Updated VIF entry in instance network info cache for port 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.920 253465 DEBUG nova.network.neutron [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Updating instance_info_cache with network_info: [{"id": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "address": "fa:16:3e:44:e3:cf", "network": {"id": "4692d97f-32c5-4a6f-a095-ba8dda0baf05", "bridge": "br-int", "label": "tempest-TestStampPattern-1216871887-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a62857fbf8cf446cac9c207ae6750597", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap19b8a4fb-a5", "ovs_interfaceid": "19b8a4fb-a5a7-4112-8511-2aa985a0ae5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:01:37 compute-0 nova_compute[253461]: 2025-11-22 04:01:37.974 253465 DEBUG oslo_concurrency.lockutils [req-52bf5075-0315-4989-82ae-ef92e39a4dc3 req-e41ae2f3-aa19-40d3-afad-f0878ede3bfd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:01:38 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2428236900' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:38 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2428236900' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:38 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3079401639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:01:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2354841413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:38 compute-0 nova_compute[253461]: 2025-11-22 04:01:38.270 253465 DEBUG oslo_concurrency.processutils [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:38 compute-0 nova_compute[253461]: 2025-11-22 04:01:38.277 253465 DEBUG nova.compute.provider_tree [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:01:38 compute-0 nova_compute[253461]: 2025-11-22 04:01:38.308 253465 DEBUG nova.scheduler.client.report [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:01:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 150 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 6.3 KiB/s wr, 171 op/s
Nov 22 04:01:38 compute-0 nova_compute[253461]: 2025-11-22 04:01:38.365 253465 DEBUG oslo_concurrency.lockutils [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:38 compute-0 nova_compute[253461]: 2025-11-22 04:01:38.416 253465 INFO nova.scheduler.client.report [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Deleted allocations for instance fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618
Nov 22 04:01:38 compute-0 nova_compute[253461]: 2025-11-22 04:01:38.524 253465 DEBUG oslo_concurrency.lockutils [None req-f332635d-5630-4840-ac0b-65c676f531e5 0b246fc3abe648cf93dbdc3bd03c5cbb a62857fbf8cf446cac9c207ae6750597 - - default default] Lock "fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:38 compute-0 nova_compute[253461]: 2025-11-22 04:01:38.763 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:01:38 compute-0 nova_compute[253461]: 2025-11-22 04:01:38.764 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:01:38 compute-0 nova_compute[253461]: 2025-11-22 04:01:38.764 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:01:38 compute-0 nova_compute[253461]: 2025-11-22 04:01:38.764 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:01:38 compute-0 nova_compute[253461]: 2025-11-22 04:01:38.777 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:01:38 compute-0 nova_compute[253461]: 2025-11-22 04:01:38.778 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:01:38 compute-0 nova_compute[253461]: 2025-11-22 04:01:38.778 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:01:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2354841413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:39 compute-0 ceph-mon[75011]: pgmap v1452: 305 pgs: 305 active+clean; 150 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 164 KiB/s rd, 6.3 KiB/s wr, 171 op/s
Nov 22 04:01:39 compute-0 nova_compute[253461]: 2025-11-22 04:01:39.156 253465 DEBUG nova.compute.manager [req-495b3858-4b12-4335-a3cf-20b04c721d62 req-30dbb2bb-b444-45c1-9b6b-f2b6fd5c1406 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Received event network-vif-deleted-19b8a4fb-a5a7-4112-8511-2aa985a0ae5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:39 compute-0 nova_compute[253461]: 2025-11-22 04:01:39.157 253465 INFO nova.compute.manager [req-495b3858-4b12-4335-a3cf-20b04c721d62 req-30dbb2bb-b444-45c1-9b6b-f2b6fd5c1406 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Neutron deleted interface 19b8a4fb-a5a7-4112-8511-2aa985a0ae5a; detaching it from the instance and deleting it from the info cache
Nov 22 04:01:39 compute-0 nova_compute[253461]: 2025-11-22 04:01:39.157 253465 DEBUG nova.network.neutron [req-495b3858-4b12-4335-a3cf-20b04c721d62 req-30dbb2bb-b444-45c1-9b6b-f2b6fd5c1406 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Nov 22 04:01:39 compute-0 nova_compute[253461]: 2025-11-22 04:01:39.160 253465 DEBUG nova.compute.manager [req-495b3858-4b12-4335-a3cf-20b04c721d62 req-30dbb2bb-b444-45c1-9b6b-f2b6fd5c1406 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Detach interface failed, port_id=19b8a4fb-a5a7-4112-8511-2aa985a0ae5a, reason: Instance fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 04:01:40 compute-0 nova_compute[253461]: 2025-11-22 04:01:40.045 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 90 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 7.7 KiB/s wr, 178 op/s
Nov 22 04:01:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:01:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2558083532' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:01:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2558083532' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:40 compute-0 nova_compute[253461]: 2025-11-22 04:01:40.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:01:40 compute-0 nova_compute[253461]: 2025-11-22 04:01:40.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:01:40 compute-0 ceph-mon[75011]: pgmap v1453: 305 pgs: 305 active+clean; 90 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 158 KiB/s rd, 7.7 KiB/s wr, 178 op/s
Nov 22 04:01:40 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2558083532' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:01:40 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2558083532' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:01:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Nov 22 04:01:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Nov 22 04:01:41 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Nov 22 04:01:41 compute-0 nova_compute[253461]: 2025-11-22 04:01:41.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:01:41 compute-0 nova_compute[253461]: 2025-11-22 04:01:41.589 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763784086.5644667, 45b06598-5fca-47e2-962e-824755f52a2b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:01:41 compute-0 nova_compute[253461]: 2025-11-22 04:01:41.590 253465 INFO nova.compute.manager [-] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] VM Stopped (Lifecycle Event)
Nov 22 04:01:41 compute-0 nova_compute[253461]: 2025-11-22 04:01:41.614 253465 DEBUG nova.compute.manager [None req-aa07d9ea-47cd-4c2b-b717-41df9d410f03 - - - - - -] [instance: 45b06598-5fca-47e2-962e-824755f52a2b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:01:42 compute-0 ceph-mon[75011]: osdmap e352: 3 total, 3 up, 3 in
Nov 22 04:01:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 88 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.4 KiB/s wr, 106 op/s
Nov 22 04:01:42 compute-0 nova_compute[253461]: 2025-11-22 04:01:42.426 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:42 compute-0 nova_compute[253461]: 2025-11-22 04:01:42.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:01:43 compute-0 ceph-mon[75011]: pgmap v1455: 305 pgs: 305 active+clean; 88 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.4 KiB/s wr, 106 op/s
Nov 22 04:01:43 compute-0 nova_compute[253461]: 2025-11-22 04:01:43.596 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:43 compute-0 nova_compute[253461]: 2025-11-22 04:01:43.870 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 88 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 KiB/s wr, 132 op/s
Nov 22 04:01:44 compute-0 ceph-mon[75011]: pgmap v1456: 305 pgs: 305 active+clean; 88 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.1 KiB/s wr, 132 op/s
Nov 22 04:01:45 compute-0 nova_compute[253461]: 2025-11-22 04:01:45.046 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e352 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 88 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.1 KiB/s wr, 110 op/s
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003469509018688546 of space, bias 1.0, pg target 0.10408527056065639 quantized to 32 (current 32)
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:01:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:01:46 compute-0 ceph-mon[75011]: pgmap v1457: 305 pgs: 305 active+clean; 88 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 5.1 KiB/s wr, 110 op/s
Nov 22 04:01:47 compute-0 nova_compute[253461]: 2025-11-22 04:01:47.458 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Nov 22 04:01:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Nov 22 04:01:47 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Nov 22 04:01:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 107 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 596 KiB/s wr, 66 op/s
Nov 22 04:01:48 compute-0 nova_compute[253461]: 2025-11-22 04:01:48.638 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763784093.6381443, e6504c9f-4e62-4cc8-9bb0-de2af483fe9e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:01:48 compute-0 nova_compute[253461]: 2025-11-22 04:01:48.639 253465 INFO nova.compute.manager [-] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] VM Stopped (Lifecycle Event)
Nov 22 04:01:48 compute-0 nova_compute[253461]: 2025-11-22 04:01:48.657 253465 DEBUG nova.compute.manager [None req-881641e1-6a1b-46a2-a63a-dc987d95f8a8 - - - - - -] [instance: e6504c9f-4e62-4cc8-9bb0-de2af483fe9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:01:49 compute-0 ceph-mon[75011]: osdmap e353: 3 total, 3 up, 3 in
Nov 22 04:01:49 compute-0 ceph-mon[75011]: pgmap v1459: 305 pgs: 305 active+clean; 107 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 596 KiB/s wr, 66 op/s
Nov 22 04:01:49 compute-0 nova_compute[253461]: 2025-11-22 04:01:49.988 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763784094.9855654, fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:01:49 compute-0 nova_compute[253461]: 2025-11-22 04:01:49.988 253465 INFO nova.compute.manager [-] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] VM Stopped (Lifecycle Event)
Nov 22 04:01:50 compute-0 nova_compute[253461]: 2025-11-22 04:01:50.015 253465 DEBUG nova.compute.manager [None req-99ef1030-ce4e-4af6-b793-5047ba2673d7 - - - - - -] [instance: fc2ed1e4-b3fb-4cb3-9ca7-5e0b5e9c1618] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:01:50 compute-0 nova_compute[253461]: 2025-11-22 04:01:50.049 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 94 op/s
Nov 22 04:01:50 compute-0 ceph-mon[75011]: pgmap v1460: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 94 op/s
Nov 22 04:01:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:51 compute-0 podman[281842]: 2025-11-22 04:01:51.37434846 +0000 UTC m=+0.054206915 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:01:51 compute-0 podman[281843]: 2025-11-22 04:01:51.428410454 +0000 UTC m=+0.103630851 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 22 04:01:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 870 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Nov 22 04:01:52 compute-0 ceph-mon[75011]: pgmap v1461: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 870 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Nov 22 04:01:52 compute-0 nova_compute[253461]: 2025-11-22 04:01:52.460 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:52 compute-0 nova_compute[253461]: 2025-11-22 04:01:52.746 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:52 compute-0 nova_compute[253461]: 2025-11-22 04:01:52.746 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:52 compute-0 nova_compute[253461]: 2025-11-22 04:01:52.763 253465 DEBUG nova.compute.manager [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:01:52 compute-0 nova_compute[253461]: 2025-11-22 04:01:52.838 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:52 compute-0 nova_compute[253461]: 2025-11-22 04:01:52.839 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:52 compute-0 nova_compute[253461]: 2025-11-22 04:01:52.851 253465 DEBUG nova.virt.hardware [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:01:52 compute-0 nova_compute[253461]: 2025-11-22 04:01:52.852 253465 INFO nova.compute.claims [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:01:52 compute-0 nova_compute[253461]: 2025-11-22 04:01:52.991 253465 DEBUG oslo_concurrency.processutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:01:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1603115418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1603115418' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:01:53 compute-0 nova_compute[253461]: 2025-11-22 04:01:53.446 253465 DEBUG oslo_concurrency.processutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:53 compute-0 nova_compute[253461]: 2025-11-22 04:01:53.453 253465 DEBUG nova.compute.provider_tree [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:01:53 compute-0 nova_compute[253461]: 2025-11-22 04:01:53.471 253465 DEBUG nova.scheduler.client.report [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:01:53 compute-0 nova_compute[253461]: 2025-11-22 04:01:53.495 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:53 compute-0 nova_compute[253461]: 2025-11-22 04:01:53.496 253465 DEBUG nova.compute.manager [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:01:53 compute-0 nova_compute[253461]: 2025-11-22 04:01:53.549 253465 DEBUG nova.compute.manager [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:01:53 compute-0 nova_compute[253461]: 2025-11-22 04:01:53.550 253465 DEBUG nova.network.neutron [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:01:53 compute-0 nova_compute[253461]: 2025-11-22 04:01:53.568 253465 INFO nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:01:53 compute-0 nova_compute[253461]: 2025-11-22 04:01:53.586 253465 DEBUG nova.compute.manager [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:01:53 compute-0 nova_compute[253461]: 2025-11-22 04:01:53.628 253465 INFO nova.virt.block_device [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Booting with volume snapshot b0fd6f69-3f4b-4469-86cb-f727dacaf50b at /dev/vda
Nov 22 04:01:53 compute-0 nova_compute[253461]: 2025-11-22 04:01:53.945 253465 DEBUG nova.policy [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '45ccef35c0c843a59c9dfd0eb67190a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '83cc5de7368b40b984b51f781e85343c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:01:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Nov 22 04:01:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:01:54 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1082769485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:01:54 compute-0 ceph-mon[75011]: pgmap v1462: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Nov 22 04:01:55 compute-0 nova_compute[253461]: 2025-11-22 04:01:55.053 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:55 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1082769485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:01:55 compute-0 nova_compute[253461]: 2025-11-22 04:01:55.987 253465 DEBUG nova.network.neutron [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Successfully created port: d3d71762-3b8d-4284-93b4-3e0010782ff8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:01:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:01:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Nov 22 04:01:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Nov 22 04:01:56 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Nov 22 04:01:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.0 MiB/s wr, 42 op/s
Nov 22 04:01:56 compute-0 nova_compute[253461]: 2025-11-22 04:01:56.665 253465 DEBUG nova.network.neutron [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Successfully updated port: d3d71762-3b8d-4284-93b4-3e0010782ff8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:01:56 compute-0 nova_compute[253461]: 2025-11-22 04:01:56.688 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "refresh_cache-8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:01:56 compute-0 nova_compute[253461]: 2025-11-22 04:01:56.688 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquired lock "refresh_cache-8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:01:56 compute-0 nova_compute[253461]: 2025-11-22 04:01:56.689 253465 DEBUG nova.network.neutron [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:01:56 compute-0 nova_compute[253461]: 2025-11-22 04:01:56.763 253465 DEBUG nova.compute.manager [req-8badddff-b659-4bf9-ad68-3c434b6711e6 req-ed35b4dc-60d7-4077-91cf-21aae74bf973 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Received event network-changed-d3d71762-3b8d-4284-93b4-3e0010782ff8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:01:56 compute-0 nova_compute[253461]: 2025-11-22 04:01:56.764 253465 DEBUG nova.compute.manager [req-8badddff-b659-4bf9-ad68-3c434b6711e6 req-ed35b4dc-60d7-4077-91cf-21aae74bf973 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Refreshing instance network info cache due to event network-changed-d3d71762-3b8d-4284-93b4-3e0010782ff8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:01:56 compute-0 nova_compute[253461]: 2025-11-22 04:01:56.764 253465 DEBUG oslo_concurrency.lockutils [req-8badddff-b659-4bf9-ad68-3c434b6711e6 req-ed35b4dc-60d7-4077-91cf-21aae74bf973 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:01:57 compute-0 nova_compute[253461]: 2025-11-22 04:01:57.043 253465 DEBUG nova.network.neutron [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:01:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Nov 22 04:01:57 compute-0 ceph-mon[75011]: osdmap e354: 3 total, 3 up, 3 in
Nov 22 04:01:57 compute-0 ceph-mon[75011]: pgmap v1464: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.0 MiB/s wr, 42 op/s
Nov 22 04:01:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Nov 22 04:01:57 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Nov 22 04:01:57 compute-0 nova_compute[253461]: 2025-11-22 04:01:57.462 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:01:58 compute-0 ceph-mon[75011]: osdmap e355: 3 total, 3 up, 3 in
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.212 253465 DEBUG nova.network.neutron [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Updating instance_info_cache with network_info: [{"id": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "address": "fa:16:3e:2e:ce:cd", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3d71762-3b", "ovs_interfaceid": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:01:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 KiB/s wr, 14 op/s
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.352 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Releasing lock "refresh_cache-8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.353 253465 DEBUG nova.compute.manager [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Instance network_info: |[{"id": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "address": "fa:16:3e:2e:ce:cd", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3d71762-3b", "ovs_interfaceid": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.354 253465 DEBUG oslo_concurrency.lockutils [req-8badddff-b659-4bf9-ad68-3c434b6711e6 req-ed35b4dc-60d7-4077-91cf-21aae74bf973 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.354 253465 DEBUG nova.network.neutron [req-8badddff-b659-4bf9-ad68-3c434b6711e6 req-ed35b4dc-60d7-4077-91cf-21aae74bf973 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Refreshing network info cache for port d3d71762-3b8d-4284-93b4-3e0010782ff8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.548 253465 DEBUG os_brick.utils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.549 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.566 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.566 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9c459c-6ca1-459e-8cc7-be9faebdd392]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.569 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.582 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.582 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[e18f8ffe-d52c-4e28-bd18-7557a2e38fd4]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.584 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.599 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.600 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[3dbfa049-46bd-4e47-9cd9-b3592a43e978]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.601 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[48ca46d2-986a-49aa-a064-bec42d898441]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.602 253465 DEBUG oslo_concurrency.processutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.634 253465 DEBUG oslo_concurrency.processutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "nvme version" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.637 253465 DEBUG os_brick.initiator.connectors.lightos [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.638 253465 DEBUG os_brick.initiator.connectors.lightos [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.638 253465 DEBUG os_brick.initiator.connectors.lightos [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.639 253465 DEBUG os_brick.utils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] <== get_connector_properties: return (90ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 04:01:58 compute-0 nova_compute[253461]: 2025-11-22 04:01:58.640 253465 DEBUG nova.virt.block_device [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Updating existing volume attachment record: 111ca50a-70c3-4c9a-b40a-e669bac4c15d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:01:59 compute-0 ceph-mon[75011]: pgmap v1466: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.1 KiB/s wr, 14 op/s
Nov 22 04:01:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:01:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3415642789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.733 253465 DEBUG nova.compute.manager [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.735 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.735 253465 INFO nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Creating image(s)
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.736 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.736 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Ensure instance console log exists: /var/lib/nova/instances/8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.736 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.737 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.737 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.739 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Start _get_guest_xml network_info=[{"id": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "address": "fa:16:3e:2e:ce:cd", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3d71762-3b", "ovs_interfaceid": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': '111ca50a-70c3-4c9a-b40a-e669bac4c15d', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-6ef74269-5d74-463e-84ca-f6551e7dae30', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '6ef74269-5d74-463e-84ca-f6551e7dae30', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c', 'attached_at': '', 'detached_at': '', 'volume_id': '6ef74269-5d74-463e-84ca-f6551e7dae30', 'serial': '6ef74269-5d74-463e-84ca-f6551e7dae30'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': True, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.744 253465 WARNING nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.758 253465 DEBUG nova.virt.libvirt.host [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.758 253465 DEBUG nova.virt.libvirt.host [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.767 253465 DEBUG nova.virt.libvirt.host [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.767 253465 DEBUG nova.virt.libvirt.host [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.768 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.768 253465 DEBUG nova.virt.hardware [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.768 253465 DEBUG nova.virt.hardware [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.769 253465 DEBUG nova.virt.hardware [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.769 253465 DEBUG nova.virt.hardware [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.769 253465 DEBUG nova.virt.hardware [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.769 253465 DEBUG nova.virt.hardware [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.770 253465 DEBUG nova.virt.hardware [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.770 253465 DEBUG nova.virt.hardware [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.770 253465 DEBUG nova.virt.hardware [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.770 253465 DEBUG nova.virt.hardware [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.770 253465 DEBUG nova.virt.hardware [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:01:59 compute-0 sudo[281916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:01:59 compute-0 sudo[281916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:01:59 compute-0 sudo[281916]: pam_unix(sudo:session): session closed for user root
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.796 253465 DEBUG nova.storage.rbd_utils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.802 253465 DEBUG oslo_concurrency.processutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.824 253465 DEBUG nova.network.neutron [req-8badddff-b659-4bf9-ad68-3c434b6711e6 req-ed35b4dc-60d7-4077-91cf-21aae74bf973 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Updated VIF entry in instance network info cache for port d3d71762-3b8d-4284-93b4-3e0010782ff8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.825 253465 DEBUG nova.network.neutron [req-8badddff-b659-4bf9-ad68-3c434b6711e6 req-ed35b4dc-60d7-4077-91cf-21aae74bf973 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Updating instance_info_cache with network_info: [{"id": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "address": "fa:16:3e:2e:ce:cd", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3d71762-3b", "ovs_interfaceid": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:01:59 compute-0 nova_compute[253461]: 2025-11-22 04:01:59.840 253465 DEBUG oslo_concurrency.lockutils [req-8badddff-b659-4bf9-ad68-3c434b6711e6 req-ed35b4dc-60d7-4077-91cf-21aae74bf973 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:01:59 compute-0 sudo[281959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:01:59 compute-0 sudo[281959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:01:59 compute-0 sudo[281959]: pam_unix(sudo:session): session closed for user root
Nov 22 04:01:59 compute-0 sudo[281985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:01:59 compute-0 sudo[281985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:01:59 compute-0 sudo[281985]: pam_unix(sudo:session): session closed for user root
Nov 22 04:01:59 compute-0 sudo[282010]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:01:59 compute-0 sudo[282010]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.055 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:02:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654888243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3415642789' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3654888243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.227 253465 DEBUG oslo_concurrency.processutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:02:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/644191719' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:02:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/644191719' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.252 253465 DEBUG nova.virt.libvirt.vif [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:01:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2119404051',display_name='tempest-TestVolumeBootPattern-server-2119404051',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2119404051',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-295mrc0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:01:53Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "address": "fa:16:3e:2e:ce:cd", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3d71762-3b", "ovs_interfaceid": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.253 253465 DEBUG nova.network.os_vif_util [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "address": "fa:16:3e:2e:ce:cd", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3d71762-3b", "ovs_interfaceid": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.254 253465 DEBUG nova.network.os_vif_util [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:ce:cd,bridge_name='br-int',has_traffic_filtering=True,id=d3d71762-3b8d-4284-93b4-3e0010782ff8,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3d71762-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.255 253465 DEBUG nova.objects.instance [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'pci_devices' on Instance uuid 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.270 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:02:00 compute-0 nova_compute[253461]:   <uuid>8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c</uuid>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   <name>instance-00000010</name>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <nova:name>tempest-TestVolumeBootPattern-server-2119404051</nova:name>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:01:59</nova:creationTime>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:02:00 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:02:00 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:02:00 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:02:00 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:02:00 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:02:00 compute-0 nova_compute[253461]:         <nova:user uuid="45ccef35c0c843a59c9dfd0eb67190a6">tempest-TestVolumeBootPattern-1584219565-project-member</nova:user>
Nov 22 04:02:00 compute-0 nova_compute[253461]:         <nova:project uuid="83cc5de7368b40b984b51f781e85343c">tempest-TestVolumeBootPattern-1584219565</nova:project>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:02:00 compute-0 nova_compute[253461]:         <nova:port uuid="d3d71762-3b8d-4284-93b4-3e0010782ff8">
Nov 22 04:02:00 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <system>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <entry name="serial">8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c</entry>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <entry name="uuid">8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c</entry>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     </system>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   <os>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   </os>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   <features>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   </features>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c_disk.config">
Nov 22 04:02:00 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       </source>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:02:00 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-6ef74269-5d74-463e-84ca-f6551e7dae30">
Nov 22 04:02:00 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       </source>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:02:00 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <serial>6ef74269-5d74-463e-84ca-f6551e7dae30</serial>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:2e:ce:cd"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <target dev="tapd3d71762-3b"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c/console.log" append="off"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <video>
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     </video>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:02:00 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:02:00 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:02:00 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:02:00 compute-0 nova_compute[253461]: </domain>
Nov 22 04:02:00 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.272 253465 DEBUG nova.compute.manager [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Preparing to wait for external event network-vif-plugged-d3d71762-3b8d-4284-93b4-3e0010782ff8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.272 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.272 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.272 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.273 253465 DEBUG nova.virt.libvirt.vif [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:01:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2119404051',display_name='tempest-TestVolumeBootPattern-server-2119404051',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2119404051',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-295mrc0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:01:53Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "address": "fa:16:3e:2e:ce:cd", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3d71762-3b", "ovs_interfaceid": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.273 253465 DEBUG nova.network.os_vif_util [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "address": "fa:16:3e:2e:ce:cd", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3d71762-3b", "ovs_interfaceid": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.273 253465 DEBUG nova.network.os_vif_util [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:ce:cd,bridge_name='br-int',has_traffic_filtering=True,id=d3d71762-3b8d-4284-93b4-3e0010782ff8,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3d71762-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.274 253465 DEBUG os_vif [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:ce:cd,bridge_name='br-int',has_traffic_filtering=True,id=d3d71762-3b8d-4284-93b4-3e0010782ff8,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3d71762-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.274 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.274 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.275 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.277 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.277 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd3d71762-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.277 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd3d71762-3b, col_values=(('external_ids', {'iface-id': 'd3d71762-3b8d-4284-93b4-3e0010782ff8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:ce:cd', 'vm-uuid': '8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:00 compute-0 NetworkManager[48916]: <info>  [1763784120.2798] manager: (tapd3d71762-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.281 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.285 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.287 253465 INFO os_vif [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:ce:cd,bridge_name='br-int',has_traffic_filtering=True,id=d3d71762-3b8d-4284-93b4-3e0010782ff8,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3d71762-3b')
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.339 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.339 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.340 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No VIF found with MAC fa:16:3e:2e:ce:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.340 253465 INFO nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Using config drive
Nov 22 04:02:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.2 KiB/s wr, 36 op/s
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.363 253465 DEBUG nova.storage.rbd_utils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:02:00 compute-0 sudo[282010]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:02:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:02:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:02:00 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:02:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:02:00 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:02:00 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev d4c7f276-5bcc-442f-8544-b1c176ee6478 does not exist
Nov 22 04:02:00 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 533dc44e-6c4d-4dfa-bcfd-eeec0c0411aa does not exist
Nov 22 04:02:00 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev bde3a200-abaa-4381-ab0d-286b38af51e5 does not exist
Nov 22 04:02:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:02:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:02:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:02:00 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:02:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:02:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:02:00 compute-0 sudo[282110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:02:00 compute-0 sudo[282110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:00 compute-0 sudo[282110]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.680 253465 INFO nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Creating config drive at /var/lib/nova/instances/8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c/disk.config
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.689 253465 DEBUG oslo_concurrency.processutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpprfmdlxt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:00 compute-0 sudo[282135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:02:00 compute-0 sudo[282135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:00 compute-0 sudo[282135]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:00 compute-0 sudo[282161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:02:00 compute-0 sudo[282161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:00 compute-0 sudo[282161]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.820 253465 DEBUG oslo_concurrency.processutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpprfmdlxt" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
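
The two oslo.concurrency lines above show the exact mkisofs invocation Nova used to build the config-drive ISO. As a minimal standalone sketch (not Nova's code), the same invocation can be reproduced with subprocess; the output path, publisher string, and the "config-2" volume label are taken verbatim from the log, while the staging directory is the temporary metadata tree Nova happened to use here.

# Minimal sketch: rebuild the config-drive ISO the way the log shows it.
import subprocess

staging_dir = "/tmp/tmpprfmdlxt"   # Nova's temporary metadata tree (from the log)
iso_path = ("/var/lib/nova/instances/"
            "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c/disk.config")

cmd = [
    "/usr/bin/mkisofs",
    "-o", iso_path,
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r",
    "-V", "config-2",              # volume label cloud-init looks for
    staging_dir,
]
subprocess.run(cmd, check=True)    # raises CalledProcessError on nonzero exit
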
Nov 22 04:02:00 compute-0 sudo[282188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:02:00 compute-0 sudo[282188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.848 253465 DEBUG nova.storage.rbd_utils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:02:00 compute-0 nova_compute[253461]: 2025-11-22 04:02:00.853 253465 DEBUG oslo_concurrency.processutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c/disk.config 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e355 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.023 253465 DEBUG oslo_concurrency.processutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c/disk.config 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.023 253465 INFO nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Deleting local config drive /var/lib/nova/instances/8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c/disk.config because it was imported into RBD.
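
The CMD line and the INFO line above capture the handoff of the config drive into Ceph: an rbd import into the "vms" pool, then deletion of the local copy. A hedged sketch of the same two steps, with every argument lifted from the logged command:

# Sketch only: import the config drive into RBD, then drop the local ISO,
# mirroring Nova's "imported into RBD" cleanup above.
import os
import subprocess

iso_path = ("/var/lib/nova/instances/"
            "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c/disk.config")

subprocess.run(
    ["rbd", "import", "--pool", "vms", iso_path,
     "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c_disk.config",
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True,
)
os.remove(iso_path)  # Nova deletes the local ISO once RBD holds the image
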
Nov 22 04:02:01 compute-0 kernel: tapd3d71762-3b: entered promiscuous mode
Nov 22 04:02:01 compute-0 ovn_controller[152691]: 2025-11-22T04:02:01Z|00175|binding|INFO|Claiming lport d3d71762-3b8d-4284-93b4-3e0010782ff8 for this chassis.
Nov 22 04:02:01 compute-0 ovn_controller[152691]: 2025-11-22T04:02:01Z|00176|binding|INFO|d3d71762-3b8d-4284-93b4-3e0010782ff8: Claiming fa:16:3e:2e:ce:cd 10.100.0.3
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.080 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:01 compute-0 NetworkManager[48916]: <info>  [1763784121.0818] manager: (tapd3d71762-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.085 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.095 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:ce:cd 10.100.0.3'], port_security=['fa:16:3e:2e:ce:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a457fd1a-7e56-4665-9c38-fd65feb93293', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=d3d71762-3b8d-4284-93b4-3e0010782ff8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.096 162689 INFO neutron.agent.ovn.metadata.agent [-] Port d3d71762-3b8d-4284-93b4-3e0010782ff8 in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 bound to our chassis
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.097 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4670b112-9f63-4a03-8d79-91f581c69c03
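
The matched PortBindingUpdatedEvent logged just above (events=('update',), table='Port_Binding') is an ovsdbapp row event: the agent reacts when a Port_Binding row's chassis column flips to this host. A rough illustrative sketch of an event in that shape follows; the class name and the chassis check here are hypothetical, the real handler lives in neutron.agent.ovn.metadata.agent and carries far more logic, and exact ovsdbapp internals may differ by version.

# Illustrative sketch of an ovsdbapp row event shaped like the one logged.
# Registering it against a live OVN southbound Idl connection is out of scope.
from ovsdbapp.backend.ovs_idl import event as row_event


class PortBoundToChassisEvent(row_event.RowEvent):   # hypothetical name
    def __init__(self, chassis_name):
        self.chassis_name = chassis_name
        super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

    def match_fn(self, event, row, old):
        # Fire only when the port has just been bound to a chassis.
        return bool(row.chassis) and not getattr(old, "chassis", None)

    def run(self, event, row, old):
        print(f"port {row.logical_port} bound; provision metadata on "
              f"{self.chassis_name}")
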
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.107 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[31e04036-28f4-48ae-aca3-aefd636a6666]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.108 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4670b112-91 in ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.109 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4670b112-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.110 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c1601b74-02fd-4224-b49e-814a3ac002a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 systemd-udevd[282309]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.110 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[87bf5064-8abf-4a19-a0ed-6df2e8e65165]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 systemd-machined[215728]: New machine qemu-16-instance-00000010.
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.125 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[7c8356ea-9e9e-4b8b-ac96-bd9b97371c9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 NetworkManager[48916]: <info>  [1763784121.1311] device (tapd3d71762-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:02:01 compute-0 NetworkManager[48916]: <info>  [1763784121.1327] device (tapd3d71762-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:02:01 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.154 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5dcd2c2a-848b-4ebf-b021-f362649d9d11]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.165 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:01 compute-0 podman[282298]: 2025-11-22 04:02:01.166885588 +0000 UTC m=+0.058876481 container create ade5275ab42d66540407cf37b8824ba63c4b980d2af526594252dda07c4e549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.170 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:01 compute-0 ovn_controller[152691]: 2025-11-22T04:02:01Z|00177|binding|INFO|Setting lport d3d71762-3b8d-4284-93b4-3e0010782ff8 ovn-installed in OVS
Nov 22 04:02:01 compute-0 ovn_controller[152691]: 2025-11-22T04:02:01Z|00178|binding|INFO|Setting lport d3d71762-3b8d-4284-93b4-3e0010782ff8 up in Southbound
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.174 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.193 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[2d0d84fb-db5a-4b9c-8e68-1bcde2ffad8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 NetworkManager[48916]: <info>  [1763784121.2000] manager: (tap4670b112-90): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.200 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[21f54770-3764-49ee-9681-952d8bbc04b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 systemd[1]: Started libpod-conmon-ade5275ab42d66540407cf37b8824ba63c4b980d2af526594252dda07c4e549e.scope.
Nov 22 04:02:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Nov 22 04:02:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/644191719' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/644191719' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
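
The two audit lines above are the OpenStack Ceph client polling pool capacity: a "df" and an "osd pool get-quota" mon command. Under the assumption that python-rados is available, the same queries can be reproduced with its mon_command binding; the client id "openstack" and the pool name "volumes" come straight from the log.

# Sketch: replay the two mon commands the audit log shows, via python-rados.
import json

import rados

with rados.Rados(conffile="/etc/ceph/ceph.conf",
                 rados_id="openstack") as cluster:
    for prefix, extra in (("df", {}),
                          ("osd pool get-quota", {"pool": "volumes"})):
        cmd = dict(prefix=prefix, format="json", **extra)
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(prefix, ret, out[:80])   # JSON payload comes back in `out`
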
Nov 22 04:02:01 compute-0 ceph-mon[75011]: pgmap v1467: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.2 KiB/s wr, 36 op/s
Nov 22 04:02:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:02:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:02:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:02:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:02:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:02:01 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.234 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7c88b3-ff0a-4afc-9e98-7482a63c3ca2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.237 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[79f350e8-0275-4b1a-a0a6-e30335cbf39f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Nov 22 04:02:01 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Nov 22 04:02:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:02:01 compute-0 podman[282298]: 2025-11-22 04:02:01.149538804 +0000 UTC m=+0.041529717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:02:01 compute-0 podman[282298]: 2025-11-22 04:02:01.261105166 +0000 UTC m=+0.153096079 container init ade5275ab42d66540407cf37b8824ba63c4b980d2af526594252dda07c4e549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 04:02:01 compute-0 NetworkManager[48916]: <info>  [1763784121.2626] device (tap4670b112-90): carrier: link connected
Nov 22 04:02:01 compute-0 podman[282298]: 2025-11-22 04:02:01.269233909 +0000 UTC m=+0.161224802 container start ade5275ab42d66540407cf37b8824ba63c4b980d2af526594252dda07c4e549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.269 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[fc046fa4-5765-4d33-91dd-07db3fc28b2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 podman[282298]: 2025-11-22 04:02:01.272623676 +0000 UTC m=+0.164614559 container attach ade5275ab42d66540407cf37b8824ba63c4b980d2af526594252dda07c4e549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:02:01 compute-0 clever_turing[282333]: 167 167
Nov 22 04:02:01 compute-0 systemd[1]: libpod-ade5275ab42d66540407cf37b8824ba63c4b980d2af526594252dda07c4e549e.scope: Deactivated successfully.
Nov 22 04:02:01 compute-0 podman[282298]: 2025-11-22 04:02:01.276068269 +0000 UTC m=+0.168059162 container died ade5275ab42d66540407cf37b8824ba63c4b980d2af526594252dda07c4e549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.286 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3e385832-459b-4465-9464-8a47be6d7514]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437386, 'reachable_time': 43796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282355, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.303 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6974074a-8848-4074-a3c8-982ed685a4ba]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:43a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 437386, 'tstamp': 437386}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282357, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-43984b8d055eef6b0a9db7c75fc0a1e7e3040c039f74675048675cedea98ff4f-merged.mount: Deactivated successfully.
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.321 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8dacb343-2e40-43c0-ab83-2804d4c58a58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437386, 'reachable_time': 43796, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282367, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 podman[282298]: 2025-11-22 04:02:01.328300091 +0000 UTC m=+0.220290974 container remove ade5275ab42d66540407cf37b8824ba63c4b980d2af526594252dda07c4e549e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_turing, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Nov 22 04:02:01 compute-0 systemd[1]: libpod-conmon-ade5275ab42d66540407cf37b8824ba63c4b980d2af526594252dda07c4e549e.scope: Deactivated successfully.
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.355 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d664028e-c6bd-4156-aea0-65d047e1589b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.424 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6112a26a-421d-4b5d-8826-9b5ea1196081]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.425 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.425 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.425 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4670b112-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:01 compute-0 NetworkManager[48916]: <info>  [1763784121.4278] manager: (tap4670b112-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Nov 22 04:02:01 compute-0 kernel: tap4670b112-90: entered promiscuous mode
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.427 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.429 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.431 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4670b112-90, col_values=(('external_ids', {'iface-id': 'e72a94a7-9aac-4cfd-886c-1e1e93834214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.434 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:01 compute-0 ovn_controller[152691]: 2025-11-22T04:02:01Z|00179|binding|INFO|Releasing lport e72a94a7-9aac-4cfd-886c-1e1e93834214 from this chassis (sb_readonly=0)
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.437 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.438 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[47f14b3e-848e-48ba-8bc9-417673c7cd56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.439 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:02:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:01.441 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'env', 'PROCESS_TAG=haproxy-4670b112-9f63-4a03-8d79-91f581c69c03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4670b112-9f63-4a03-8d79-91f581c69c03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
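
The haproxy_cfg dump above is generated per network by the metadata agent's driver and then launched inside the ovnmeta- namespace via rootwrap. As a minimal sketch (not Neutron's create_config_file), the same file can be rendered from a plain template; the network UUID, pidfile path, and output path all mirror the log, everything else is string formatting.

# Minimal sketch: render the per-network haproxy config shown in the log.
NETWORK_ID = "4670b112-9f63-4a03-8d79-91f581c69c03"

CFG = f"""global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-{NETWORK_ID}
    user        root
    group       root
    maxconn     1024
    pidfile     /var/lib/neutron/external/pids/{NETWORK_ID}.pid.haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option http-server-close
    option forwardfor
    retries                 3
    timeout http-request    30s
    timeout connect         30s
    timeout client          32s
    timeout server          32s
    timeout http-keep-alive 30s

listen listener
    bind 169.254.169.254:80
    server metadata /var/lib/neutron/metadata_proxy
    http-request add-header X-OVN-Network-ID {NETWORK_ID}
"""

with open(f"/var/lib/neutron/ovn-metadata-proxy/{NETWORK_ID}.conf", "w") as f:
    f.write(CFG)
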
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.459 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.492 253465 DEBUG nova.compute.manager [req-c7dc93ed-eb0c-4e5d-b972-5a27625ac9d5 req-41f33285-38ca-4e7f-8c66-5a28250ffa21 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Received event network-vif-plugged-d3d71762-3b8d-4284-93b4-3e0010782ff8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.492 253465 DEBUG oslo_concurrency.lockutils [req-c7dc93ed-eb0c-4e5d-b972-5a27625ac9d5 req-41f33285-38ca-4e7f-8c66-5a28250ffa21 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.493 253465 DEBUG oslo_concurrency.lockutils [req-c7dc93ed-eb0c-4e5d-b972-5a27625ac9d5 req-41f33285-38ca-4e7f-8c66-5a28250ffa21 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.493 253465 DEBUG oslo_concurrency.lockutils [req-c7dc93ed-eb0c-4e5d-b972-5a27625ac9d5 req-41f33285-38ca-4e7f-8c66-5a28250ffa21 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
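
The acquire/release pair above ("<uuid>-events" held for 0.000s) is oslo.concurrency's lockutils serializing access to the instance's pending-event list. A standalone sketch of the same primitive, with the lock name modelled on the logged key:

# Sketch: the named-lock pattern the log shows around pop_instance_event.
from oslo_concurrency import lockutils

instance_uuid = "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c"

with lockutils.lock(f"{instance_uuid}-events"):
    # Critical section: Nova pops the pending network-vif-plugged event
    # here; any code in this block runs under the same in-process lock.
    pass
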
Nov 22 04:02:01 compute-0 nova_compute[253461]: 2025-11-22 04:02:01.494 253465 DEBUG nova.compute.manager [req-c7dc93ed-eb0c-4e5d-b972-5a27625ac9d5 req-41f33285-38ca-4e7f-8c66-5a28250ffa21 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Processing event network-vif-plugged-d3d71762-3b8d-4284-93b4-3e0010782ff8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:02:01 compute-0 podman[282382]: 2025-11-22 04:02:01.505624012 +0000 UTC m=+0.044065729 container create 53de0251751587bbfe6499524f7ca2e859d3d1c8f66fe62be6229488ae637315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:02:01 compute-0 systemd[1]: Started libpod-conmon-53de0251751587bbfe6499524f7ca2e859d3d1c8f66fe62be6229488ae637315.scope.
Nov 22 04:02:01 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:02:01 compute-0 podman[282382]: 2025-11-22 04:02:01.485709741 +0000 UTC m=+0.024151488 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b960e7927db1e8e121c9d83cdd3e3ee56823ad7006d0f4d192649a80173997/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b960e7927db1e8e121c9d83cdd3e3ee56823ad7006d0f4d192649a80173997/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b960e7927db1e8e121c9d83cdd3e3ee56823ad7006d0f4d192649a80173997/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b960e7927db1e8e121c9d83cdd3e3ee56823ad7006d0f4d192649a80173997/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21b960e7927db1e8e121c9d83cdd3e3ee56823ad7006d0f4d192649a80173997/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:01 compute-0 podman[282382]: 2025-11-22 04:02:01.597810997 +0000 UTC m=+0.136252744 container init 53de0251751587bbfe6499524f7ca2e859d3d1c8f66fe62be6229488ae637315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 04:02:01 compute-0 podman[282382]: 2025-11-22 04:02:01.606260282 +0000 UTC m=+0.144702019 container start 53de0251751587bbfe6499524f7ca2e859d3d1c8f66fe62be6229488ae637315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Nov 22 04:02:01 compute-0 podman[282382]: 2025-11-22 04:02:01.609931929 +0000 UTC m=+0.148373666 container attach 53de0251751587bbfe6499524f7ca2e859d3d1c8f66fe62be6229488ae637315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:02:01 compute-0 podman[282426]: 2025-11-22 04:02:01.825242049 +0000 UTC m=+0.030632321 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:02:01 compute-0 podman[282426]: 2025-11-22 04:02:01.92370636 +0000 UTC m=+0.129096592 container create 9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 22 04:02:01 compute-0 systemd[1]: Started libpod-conmon-9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90.scope.
Nov 22 04:02:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6cb8224b711e81096df241cee9779a5c7aab2477826027bf47e4b1e0fdfa0563/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:02 compute-0 podman[282426]: 2025-11-22 04:02:02.033831906 +0000 UTC m=+0.239222148 container init 9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:02:02 compute-0 podman[282426]: 2025-11-22 04:02:02.040277915 +0000 UTC m=+0.245668137 container start 9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 22 04:02:02 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[282441]: [NOTICE]   (282445) : New worker (282447) forked
Nov 22 04:02:02 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[282441]: [NOTICE]   (282445) : Loading success.
Nov 22 04:02:02 compute-0 ceph-mon[75011]: osdmap e356: 3 total, 3 up, 3 in
Nov 22 04:02:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 6.0 KiB/s wr, 81 op/s
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.496 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.537 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784122.537257, 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.538 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] VM Started (Lifecycle Event)
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.542 253465 DEBUG nova.compute.manager [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.546 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.551 253465 INFO nova.virt.libvirt.driver [-] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Instance spawned successfully.
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.551 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.569 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.573 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
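
The numeric states in the sync message above ("DB power_state: 0, VM power_state: 1") follow nova.compute.power_state. The values below are reproduced from that module as a reference sketch; treat them as an assumption if your Nova version differs.

# Hedged reference: nova.compute.power_state constants and their names.
NOSTATE = 0x00    # DB value before the guest exists
RUNNING = 0x01    # what libvirt reports once the domain starts
PAUSED = 0x03
SHUTDOWN = 0x04
CRASHED = 0x06
SUSPENDED = 0x07

STATE_MAP = {NOSTATE: "pending", RUNNING: "running", PAUSED: "paused",
             SHUTDOWN: "shutdown", CRASHED: "crashed", SUSPENDED: "suspended"}
print(STATE_MAP[1])  # -> "running", matching "VM power_state: 1" in the log
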
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.585 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.585 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.586 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.586 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.587 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.587 253465 DEBUG nova.virt.libvirt.driver [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.610 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.610 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784122.537505, 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.611 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] VM Paused (Lifecycle Event)
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.631 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.635 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784122.5454814, 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.636 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] VM Resumed (Lifecycle Event)
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.644 253465 INFO nova.compute.manager [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Took 2.91 seconds to spawn the instance on the hypervisor.
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.645 253465 DEBUG nova.compute.manager [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.658 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:02:02 compute-0 vibrant_jang[282400]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:02:02 compute-0 vibrant_jang[282400]: --> relative data size: 1.0
Nov 22 04:02:02 compute-0 vibrant_jang[282400]: --> All data devices are unavailable
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.663 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.688 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:02:02 compute-0 systemd[1]: libpod-53de0251751587bbfe6499524f7ca2e859d3d1c8f66fe62be6229488ae637315.scope: Deactivated successfully.
Nov 22 04:02:02 compute-0 podman[282382]: 2025-11-22 04:02:02.696335386 +0000 UTC m=+1.234777123 container died 53de0251751587bbfe6499524f7ca2e859d3d1c8f66fe62be6229488ae637315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef)
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.707 253465 INFO nova.compute.manager [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Took 9.90 seconds to build instance.
Nov 22 04:02:02 compute-0 nova_compute[253461]: 2025-11-22 04:02:02.722 253465 DEBUG oslo_concurrency.lockutils [None req-e4ab0cf8-d656-427c-aa74-3fc4b0d40220 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.974s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-21b960e7927db1e8e121c9d83cdd3e3ee56823ad7006d0f4d192649a80173997-merged.mount: Deactivated successfully.
Nov 22 04:02:02 compute-0 podman[282382]: 2025-11-22 04:02:02.754103298 +0000 UTC m=+1.292545015 container remove 53de0251751587bbfe6499524f7ca2e859d3d1c8f66fe62be6229488ae637315 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 04:02:02 compute-0 systemd[1]: libpod-conmon-53de0251751587bbfe6499524f7ca2e859d3d1c8f66fe62be6229488ae637315.scope: Deactivated successfully.
Nov 22 04:02:02 compute-0 sudo[282188]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:02 compute-0 sudo[282534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:02:02 compute-0 sudo[282534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:02 compute-0 sudo[282534]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:02 compute-0 sudo[282559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:02:02 compute-0 sudo[282559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:02 compute-0 sudo[282559]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:03 compute-0 sudo[282584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:02:03 compute-0 sudo[282584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:03 compute-0 sudo[282584]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:03 compute-0 sudo[282609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:02:03 compute-0 sudo[282609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:02:03 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4150955602' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:02:03 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4150955602' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:03 compute-0 ceph-mon[75011]: pgmap v1469: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 6.0 KiB/s wr, 81 op/s
Nov 22 04:02:03 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4150955602' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:03 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4150955602' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:03 compute-0 podman[282673]: 2025-11-22 04:02:03.515289808 +0000 UTC m=+0.046691508 container create a566e5083c1d5983d5c2a857847abd2b19561a3340942ce5e30f01e532306a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_euler, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 04:02:03 compute-0 systemd[1]: Started libpod-conmon-a566e5083c1d5983d5c2a857847abd2b19561a3340942ce5e30f01e532306a1c.scope.
Nov 22 04:02:03 compute-0 podman[282673]: 2025-11-22 04:02:03.494365224 +0000 UTC m=+0.025766964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:02:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:02:03 compute-0 nova_compute[253461]: 2025-11-22 04:02:03.607 253465 DEBUG nova.compute.manager [req-e7701715-1f02-4856-a161-b2f1dcdefd17 req-df0588ff-c2b9-415f-9dea-b19556eda2b0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Received event network-vif-plugged-d3d71762-3b8d-4284-93b4-3e0010782ff8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:02:03 compute-0 nova_compute[253461]: 2025-11-22 04:02:03.609 253465 DEBUG oslo_concurrency.lockutils [req-e7701715-1f02-4856-a161-b2f1dcdefd17 req-df0588ff-c2b9-415f-9dea-b19556eda2b0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:03 compute-0 nova_compute[253461]: 2025-11-22 04:02:03.610 253465 DEBUG oslo_concurrency.lockutils [req-e7701715-1f02-4856-a161-b2f1dcdefd17 req-df0588ff-c2b9-415f-9dea-b19556eda2b0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:03 compute-0 nova_compute[253461]: 2025-11-22 04:02:03.610 253465 DEBUG oslo_concurrency.lockutils [req-e7701715-1f02-4856-a161-b2f1dcdefd17 req-df0588ff-c2b9-415f-9dea-b19556eda2b0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:03 compute-0 nova_compute[253461]: 2025-11-22 04:02:03.610 253465 DEBUG nova.compute.manager [req-e7701715-1f02-4856-a161-b2f1dcdefd17 req-df0588ff-c2b9-415f-9dea-b19556eda2b0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] No waiting events found dispatching network-vif-plugged-d3d71762-3b8d-4284-93b4-3e0010782ff8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:02:03 compute-0 nova_compute[253461]: 2025-11-22 04:02:03.611 253465 WARNING nova.compute.manager [req-e7701715-1f02-4856-a161-b2f1dcdefd17 req-df0588ff-c2b9-415f-9dea-b19556eda2b0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Received unexpected event network-vif-plugged-d3d71762-3b8d-4284-93b4-3e0010782ff8 for instance with vm_state active and task_state None.
Nov 22 04:02:03 compute-0 podman[282673]: 2025-11-22 04:02:03.631579783 +0000 UTC m=+0.162981473 container init a566e5083c1d5983d5c2a857847abd2b19561a3340942ce5e30f01e532306a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_euler, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 04:02:03 compute-0 podman[282673]: 2025-11-22 04:02:03.643014612 +0000 UTC m=+0.174416312 container start a566e5083c1d5983d5c2a857847abd2b19561a3340942ce5e30f01e532306a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_euler, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:02:03 compute-0 podman[282673]: 2025-11-22 04:02:03.647089403 +0000 UTC m=+0.178491123 container attach a566e5083c1d5983d5c2a857847abd2b19561a3340942ce5e30f01e532306a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_euler, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:02:03 compute-0 elegant_euler[282689]: 167 167
Nov 22 04:02:03 compute-0 podman[282673]: 2025-11-22 04:02:03.651083462 +0000 UTC m=+0.182485162 container died a566e5083c1d5983d5c2a857847abd2b19561a3340942ce5e30f01e532306a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:02:03 compute-0 systemd[1]: libpod-a566e5083c1d5983d5c2a857847abd2b19561a3340942ce5e30f01e532306a1c.scope: Deactivated successfully.
Nov 22 04:02:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-54d1b68fc4c3c464a982c0ef85238bd6716b24047f0cea94da7d446e27eb4b6d-merged.mount: Deactivated successfully.
Nov 22 04:02:03 compute-0 podman[282673]: 2025-11-22 04:02:03.69510386 +0000 UTC m=+0.226505550 container remove a566e5083c1d5983d5c2a857847abd2b19561a3340942ce5e30f01e532306a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_euler, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 04:02:03 compute-0 systemd[1]: libpod-conmon-a566e5083c1d5983d5c2a857847abd2b19561a3340942ce5e30f01e532306a1c.scope: Deactivated successfully.
Nov 22 04:02:03 compute-0 podman[282713]: 2025-11-22 04:02:03.878911331 +0000 UTC m=+0.039528619 container create 5d2679e0140fb9f495cf8c094ab8f255602bf19aae4c341cff4c7d56f8f52348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_rubin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:02:03 compute-0 systemd[1]: Started libpod-conmon-5d2679e0140fb9f495cf8c094ab8f255602bf19aae4c341cff4c7d56f8f52348.scope.
Nov 22 04:02:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1f580bebed258c8bd81573a5ecd73f1fa9972da179c29e7745ff58e7ac48d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1f580bebed258c8bd81573a5ecd73f1fa9972da179c29e7745ff58e7ac48d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1f580bebed258c8bd81573a5ecd73f1fa9972da179c29e7745ff58e7ac48d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1f580bebed258c8bd81573a5ecd73f1fa9972da179c29e7745ff58e7ac48d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:03 compute-0 podman[282713]: 2025-11-22 04:02:03.864407877 +0000 UTC m=+0.025025175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:02:03 compute-0 podman[282713]: 2025-11-22 04:02:03.980078777 +0000 UTC m=+0.140696075 container init 5d2679e0140fb9f495cf8c094ab8f255602bf19aae4c341cff4c7d56f8f52348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:02:03 compute-0 podman[282713]: 2025-11-22 04:02:03.989911885 +0000 UTC m=+0.150529173 container start 5d2679e0140fb9f495cf8c094ab8f255602bf19aae4c341cff4c7d56f8f52348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:02:03 compute-0 podman[282713]: 2025-11-22 04:02:03.994595855 +0000 UTC m=+0.155213143 container attach 5d2679e0140fb9f495cf8c094ab8f255602bf19aae4c341cff4c7d56f8f52348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_rubin, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 04:02:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 25 KiB/s wr, 151 op/s
Nov 22 04:02:04 compute-0 ceph-mon[75011]: pgmap v1470: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 25 KiB/s wr, 151 op/s
Nov 22 04:02:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:02:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1047590971' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:02:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1047590971' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]: {
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:     "0": [
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:         {
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "devices": [
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "/dev/loop3"
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             ],
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_name": "ceph_lv0",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_size": "21470642176",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "name": "ceph_lv0",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "tags": {
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.cluster_name": "ceph",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.crush_device_class": "",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.encrypted": "0",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.osd_id": "0",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.type": "block",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.vdo": "0"
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             },
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "type": "block",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "vg_name": "ceph_vg0"
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:         }
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:     ],
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:     "1": [
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:         {
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "devices": [
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "/dev/loop4"
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             ],
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_name": "ceph_lv1",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_size": "21470642176",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "name": "ceph_lv1",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "tags": {
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.cluster_name": "ceph",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.crush_device_class": "",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.encrypted": "0",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.osd_id": "1",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.type": "block",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.vdo": "0"
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             },
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "type": "block",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "vg_name": "ceph_vg1"
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:         }
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:     ],
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:     "2": [
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:         {
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "devices": [
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "/dev/loop5"
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             ],
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_name": "ceph_lv2",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_size": "21470642176",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "name": "ceph_lv2",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "tags": {
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.cluster_name": "ceph",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.crush_device_class": "",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.encrypted": "0",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.osd_id": "2",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.type": "block",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:                 "ceph.vdo": "0"
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             },
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "type": "block",
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:             "vg_name": "ceph_vg2"
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:         }
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]:     ]
Nov 22 04:02:04 compute-0 nostalgic_rubin[282729]: }
Nov 22 04:02:04 compute-0 systemd[1]: libpod-5d2679e0140fb9f495cf8c094ab8f255602bf19aae4c341cff4c7d56f8f52348.scope: Deactivated successfully.
Nov 22 04:02:04 compute-0 podman[282739]: 2025-11-22 04:02:04.86632293 +0000 UTC m=+0.038630494 container died 5d2679e0140fb9f495cf8c094ab8f255602bf19aae4c341cff4c7d56f8f52348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Nov 22 04:02:04 compute-0 nova_compute[253461]: 2025-11-22 04:02:04.869 253465 DEBUG oslo_concurrency.lockutils [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:04 compute-0 nova_compute[253461]: 2025-11-22 04:02:04.873 253465 DEBUG oslo_concurrency.lockutils [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:04 compute-0 nova_compute[253461]: 2025-11-22 04:02:04.874 253465 DEBUG oslo_concurrency.lockutils [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:04 compute-0 nova_compute[253461]: 2025-11-22 04:02:04.874 253465 DEBUG oslo_concurrency.lockutils [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:04 compute-0 nova_compute[253461]: 2025-11-22 04:02:04.874 253465 DEBUG oslo_concurrency.lockutils [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:04 compute-0 nova_compute[253461]: 2025-11-22 04:02:04.877 253465 INFO nova.compute.manager [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Terminating instance
Nov 22 04:02:04 compute-0 nova_compute[253461]: 2025-11-22 04:02:04.879 253465 DEBUG nova.compute.manager [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:02:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff1f580bebed258c8bd81573a5ecd73f1fa9972da179c29e7745ff58e7ac48d8-merged.mount: Deactivated successfully.
Nov 22 04:02:04 compute-0 kernel: tapd3d71762-3b (unregistering): left promiscuous mode
Nov 22 04:02:04 compute-0 podman[282739]: 2025-11-22 04:02:04.927005068 +0000 UTC m=+0.099312632 container remove 5d2679e0140fb9f495cf8c094ab8f255602bf19aae4c341cff4c7d56f8f52348 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_rubin, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:02:04 compute-0 NetworkManager[48916]: <info>  [1763784124.9313] device (tapd3d71762-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:02:04 compute-0 systemd[1]: libpod-conmon-5d2679e0140fb9f495cf8c094ab8f255602bf19aae4c341cff4c7d56f8f52348.scope: Deactivated successfully.
Nov 22 04:02:04 compute-0 ovn_controller[152691]: 2025-11-22T04:02:04Z|00180|binding|INFO|Releasing lport d3d71762-3b8d-4284-93b4-3e0010782ff8 from this chassis (sb_readonly=0)
Nov 22 04:02:04 compute-0 ovn_controller[152691]: 2025-11-22T04:02:04Z|00181|binding|INFO|Setting lport d3d71762-3b8d-4284-93b4-3e0010782ff8 down in Southbound
Nov 22 04:02:04 compute-0 ovn_controller[152691]: 2025-11-22T04:02:04Z|00182|binding|INFO|Removing iface tapd3d71762-3b ovn-installed in OVS
Nov 22 04:02:04 compute-0 nova_compute[253461]: 2025-11-22 04:02:04.943 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:04 compute-0 nova_compute[253461]: 2025-11-22 04:02:04.945 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:04 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:04.955 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:ce:cd 10.100.0.3'], port_security=['fa:16:3e:2e:ce:cd 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a457fd1a-7e56-4665-9c38-fd65feb93293', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=d3d71762-3b8d-4284-93b4-3e0010782ff8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:02:04 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:04.957 162689 INFO neutron.agent.ovn.metadata.agent [-] Port d3d71762-3b8d-4284-93b4-3e0010782ff8 in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 unbound from our chassis
Nov 22 04:02:04 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:04.959 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4670b112-9f63-4a03-8d79-91f581c69c03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:02:04 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:04.960 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[001dd07a-aaa4-4fbf-bff4-d9a1d97ce5ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:04 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:04.962 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 namespace which is not needed anymore
Nov 22 04:02:04 compute-0 nova_compute[253461]: 2025-11-22 04:02:04.966 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:04 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Nov 22 04:02:04 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 3.918s CPU time.
Nov 22 04:02:04 compute-0 systemd-machined[215728]: Machine qemu-16-instance-00000010 terminated.
Nov 22 04:02:04 compute-0 sudo[282609]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:05 compute-0 sudo[282767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:02:05 compute-0 sudo[282767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:05 compute-0 sudo[282767]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.105 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:05 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[282441]: [NOTICE]   (282445) : haproxy version is 2.8.14-c23fe91
Nov 22 04:02:05 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[282441]: [NOTICE]   (282445) : path to executable is /usr/sbin/haproxy
Nov 22 04:02:05 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[282441]: [WARNING]  (282445) : Exiting Master process...
Nov 22 04:02:05 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[282441]: [WARNING]  (282445) : Exiting Master process...
Nov 22 04:02:05 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[282441]: [ALERT]    (282445) : Current worker (282447) exited with code 143 (Terminated)
Nov 22 04:02:05 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[282441]: [WARNING]  (282445) : All workers exited. Exiting... (0)
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.114 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:05 compute-0 systemd[1]: libpod-9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90.scope: Deactivated successfully.
Nov 22 04:02:05 compute-0 podman[282805]: 2025-11-22 04:02:05.123174672 +0000 UTC m=+0.053865152 container died 9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.124 253465 INFO nova.virt.libvirt.driver [-] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Instance destroyed successfully.
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.124 253465 DEBUG nova.objects.instance [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'resources' on Instance uuid 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:02:05 compute-0 sudo[282806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:02:05 compute-0 sudo[282806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:05 compute-0 sudo[282806]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.140 253465 DEBUG nova.virt.libvirt.vif [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:01:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2119404051',display_name='tempest-TestVolumeBootPattern-server-2119404051',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2119404051',id=16,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:02:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-295mrc0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:02:02Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "address": "fa:16:3e:2e:ce:cd", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3d71762-3b", "ovs_interfaceid": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.141 253465 DEBUG nova.network.os_vif_util [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "address": "fa:16:3e:2e:ce:cd", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd3d71762-3b", "ovs_interfaceid": "d3d71762-3b8d-4284-93b4-3e0010782ff8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.142 253465 DEBUG nova.network.os_vif_util [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:ce:cd,bridge_name='br-int',has_traffic_filtering=True,id=d3d71762-3b8d-4284-93b4-3e0010782ff8,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3d71762-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.142 253465 DEBUG os_vif [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:ce:cd,bridge_name='br-int',has_traffic_filtering=True,id=d3d71762-3b8d-4284-93b4-3e0010782ff8,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3d71762-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.144 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.145 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd3d71762-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.147 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.149 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.152 253465 INFO os_vif [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:ce:cd,bridge_name='br-int',has_traffic_filtering=True,id=d3d71762-3b8d-4284-93b4-3e0010782ff8,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd3d71762-3b')
Nov 22 04:02:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90-userdata-shm.mount: Deactivated successfully.
Nov 22 04:02:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-6cb8224b711e81096df241cee9779a5c7aab2477826027bf47e4b1e0fdfa0563-merged.mount: Deactivated successfully.
Nov 22 04:02:05 compute-0 podman[282805]: 2025-11-22 04:02:05.17699827 +0000 UTC m=+0.107688750 container cleanup 9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:02:05 compute-0 systemd[1]: libpod-conmon-9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90.scope: Deactivated successfully.
Nov 22 04:02:05 compute-0 sudo[282865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:02:05 compute-0 sudo[282865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:05 compute-0 sudo[282865]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:05 compute-0 podman[282909]: 2025-11-22 04:02:05.253582908 +0000 UTC m=+0.051985647 container remove 9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:02:05 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:05.259 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9927bc04-8858-48ce-a2fd-961b1babe641]: (4, ('Sat Nov 22 04:02:05 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 (9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90)\n9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90\nSat Nov 22 04:02:05 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 (9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90)\n9eed80aefaba177616648bb884a9689ac07a58879ed859d757f38b4f8a7f3f90\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:05 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:05.261 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[14a0891e-1e70-4e1a-89de-dac721f33201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:05 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:05.262 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.263 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:05 compute-0 sudo[282927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:02:05 compute-0 sudo[282927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:05 compute-0 kernel: tap4670b112-90: left promiscuous mode
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.292 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:05 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:05.295 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[443c667d-586e-4e20-8018-c0cd6bd1251b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:05 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:05.312 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c419f999-11a6-4852-a0fa-1e702a79c3ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:05 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:05.313 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f9964b77-63b4-4d20-aaf5-c0f91980b138]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:05 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:05.330 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ea090126-e8ff-40c6-a881-3cfb7816e3ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 437378, 'reachable_time': 20756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282954, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:05 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:05.333 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:02:05 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:05.333 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[0a2b4a9b-4b9e-49ff-b72d-8fc9dafb18a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d4670b112\x2d9f63\x2d4a03\x2d8d79\x2d91f581c69c03.mount: Deactivated successfully.
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.397 253465 INFO nova.virt.libvirt.driver [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Deleting instance files /var/lib/nova/instances/8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c_del
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.399 253465 INFO nova.virt.libvirt.driver [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Deletion of /var/lib/nova/instances/8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c_del complete
Nov 22 04:02:05 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1047590971' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:05 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1047590971' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.471 253465 INFO nova.compute.manager [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Took 0.59 seconds to destroy the instance on the hypervisor.
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.471 253465 DEBUG oslo.service.loopingcall [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.472 253465 DEBUG nova.compute.manager [-] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.472 253465 DEBUG nova.network.neutron [-] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:02:05 compute-0 podman[282998]: 2025-11-22 04:02:05.634901723 +0000 UTC m=+0.045554921 container create cb813f1bbfa2e47606195cdc70f6e1aecace2fa4e3d9d8e8627d5d34d2a2807a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:02:05 compute-0 systemd[1]: Started libpod-conmon-cb813f1bbfa2e47606195cdc70f6e1aecace2fa4e3d9d8e8627d5d34d2a2807a.scope.
Nov 22 04:02:05 compute-0 podman[282998]: 2025-11-22 04:02:05.616787883 +0000 UTC m=+0.027441111 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:02:05 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.713 253465 DEBUG nova.compute.manager [req-a54f1156-ffd6-49ac-8944-ee11449d19cb req-66038e2d-8074-45d2-823c-71271bada9e5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Received event network-vif-unplugged-d3d71762-3b8d-4284-93b4-3e0010782ff8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.714 253465 DEBUG oslo_concurrency.lockutils [req-a54f1156-ffd6-49ac-8944-ee11449d19cb req-66038e2d-8074-45d2-823c-71271bada9e5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.714 253465 DEBUG oslo_concurrency.lockutils [req-a54f1156-ffd6-49ac-8944-ee11449d19cb req-66038e2d-8074-45d2-823c-71271bada9e5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.714 253465 DEBUG oslo_concurrency.lockutils [req-a54f1156-ffd6-49ac-8944-ee11449d19cb req-66038e2d-8074-45d2-823c-71271bada9e5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.714 253465 DEBUG nova.compute.manager [req-a54f1156-ffd6-49ac-8944-ee11449d19cb req-66038e2d-8074-45d2-823c-71271bada9e5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] No waiting events found dispatching network-vif-unplugged-d3d71762-3b8d-4284-93b4-3e0010782ff8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:02:05 compute-0 nova_compute[253461]: 2025-11-22 04:02:05.714 253465 DEBUG nova.compute.manager [req-a54f1156-ffd6-49ac-8944-ee11449d19cb req-66038e2d-8074-45d2-823c-71271bada9e5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Received event network-vif-unplugged-d3d71762-3b8d-4284-93b4-3e0010782ff8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:02:05 compute-0 podman[282998]: 2025-11-22 04:02:05.732148606 +0000 UTC m=+0.142801824 container init cb813f1bbfa2e47606195cdc70f6e1aecace2fa4e3d9d8e8627d5d34d2a2807a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:02:05 compute-0 podman[282998]: 2025-11-22 04:02:05.742917862 +0000 UTC m=+0.153571059 container start cb813f1bbfa2e47606195cdc70f6e1aecace2fa4e3d9d8e8627d5d34d2a2807a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 04:02:05 compute-0 podman[282998]: 2025-11-22 04:02:05.748620614 +0000 UTC m=+0.159273842 container attach cb813f1bbfa2e47606195cdc70f6e1aecace2fa4e3d9d8e8627d5d34d2a2807a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 04:02:05 compute-0 sleepy_hofstadter[283014]: 167 167
Nov 22 04:02:05 compute-0 systemd[1]: libpod-cb813f1bbfa2e47606195cdc70f6e1aecace2fa4e3d9d8e8627d5d34d2a2807a.scope: Deactivated successfully.
Nov 22 04:02:05 compute-0 podman[282998]: 2025-11-22 04:02:05.751068814 +0000 UTC m=+0.161722022 container died cb813f1bbfa2e47606195cdc70f6e1aecace2fa4e3d9d8e8627d5d34d2a2807a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:02:05 compute-0 podman[282998]: 2025-11-22 04:02:05.786490577 +0000 UTC m=+0.197143785 container remove cb813f1bbfa2e47606195cdc70f6e1aecace2fa4e3d9d8e8627d5d34d2a2807a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sleepy_hofstadter, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:02:05 compute-0 systemd[1]: libpod-conmon-cb813f1bbfa2e47606195cdc70f6e1aecace2fa4e3d9d8e8627d5d34d2a2807a.scope: Deactivated successfully.
Nov 22 04:02:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:02:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/370296798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:02:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/370296798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-13ffed49b75610461d091170c365c80b1723b9894da1eed8e42b0ca65239ccc6-merged.mount: Deactivated successfully.
Nov 22 04:02:05 compute-0 podman[283037]: 2025-11-22 04:02:05.950626136 +0000 UTC m=+0.043619484 container create d7c4582cdf3dca0459590ddb52d01f4defe82419f80bedd6e964b28e540d20a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_visvesvaraya, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:02:05 compute-0 systemd[1]: Started libpod-conmon-d7c4582cdf3dca0459590ddb52d01f4defe82419f80bedd6e964b28e540d20a8.scope.
Nov 22 04:02:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e356 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Nov 22 04:02:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Nov 22 04:02:06 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Nov 22 04:02:06 compute-0 podman[283037]: 2025-11-22 04:02:05.93159201 +0000 UTC m=+0.024585388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:02:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24bdc7f0d47282e92196bc76371282a75c61aedc2206185b8f45698b8c46448d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24bdc7f0d47282e92196bc76371282a75c61aedc2206185b8f45698b8c46448d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24bdc7f0d47282e92196bc76371282a75c61aedc2206185b8f45698b8c46448d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24bdc7f0d47282e92196bc76371282a75c61aedc2206185b8f45698b8c46448d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:06 compute-0 podman[283037]: 2025-11-22 04:02:06.059443725 +0000 UTC m=+0.152437123 container init d7c4582cdf3dca0459590ddb52d01f4defe82419f80bedd6e964b28e540d20a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_visvesvaraya, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 22 04:02:06 compute-0 podman[283037]: 2025-11-22 04:02:06.071398204 +0000 UTC m=+0.164391552 container start d7c4582cdf3dca0459590ddb52d01f4defe82419f80bedd6e964b28e540d20a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_visvesvaraya, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:02:06 compute-0 podman[283037]: 2025-11-22 04:02:06.103685161 +0000 UTC m=+0.196678559 container attach d7c4582cdf3dca0459590ddb52d01f4defe82419f80bedd6e964b28e540d20a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_visvesvaraya, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:02:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:02:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:02:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:02:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:02:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:02:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:02:06 compute-0 nova_compute[253461]: 2025-11-22 04:02:06.314 253465 DEBUG nova.network.neutron [-] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:02:06 compute-0 nova_compute[253461]: 2025-11-22 04:02:06.340 253465 INFO nova.compute.manager [-] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Took 0.87 seconds to deallocate network for instance.
Nov 22 04:02:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 24 KiB/s wr, 138 op/s
Nov 22 04:02:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/370296798' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/370296798' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:06 compute-0 ceph-mon[75011]: osdmap e357: 3 total, 3 up, 3 in
Nov 22 04:02:06 compute-0 ceph-mon[75011]: pgmap v1472: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 24 KiB/s wr, 138 op/s
Nov 22 04:02:06 compute-0 nova_compute[253461]: 2025-11-22 04:02:06.513 253465 INFO nova.compute.manager [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Took 0.17 seconds to detach 1 volumes for instance.
Nov 22 04:02:06 compute-0 nova_compute[253461]: 2025-11-22 04:02:06.514 253465 DEBUG nova.compute.manager [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Deleting volume: 6ef74269-5d74-463e-84ca-f6551e7dae30 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 22 04:02:06 compute-0 nova_compute[253461]: 2025-11-22 04:02:06.683 253465 DEBUG oslo_concurrency.lockutils [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:06 compute-0 nova_compute[253461]: 2025-11-22 04:02:06.683 253465 DEBUG oslo_concurrency.lockutils [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:06 compute-0 nova_compute[253461]: 2025-11-22 04:02:06.743 253465 DEBUG oslo_concurrency.processutils [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]: {
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "osd_id": 1,
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "type": "bluestore"
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:     },
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "osd_id": 0,
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "type": "bluestore"
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:     },
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "osd_id": 2,
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:         "type": "bluestore"
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]:     }
Nov 22 04:02:07 compute-0 thirsty_visvesvaraya[283053]: }
Nov 22 04:02:07 compute-0 systemd[1]: libpod-d7c4582cdf3dca0459590ddb52d01f4defe82419f80bedd6e964b28e540d20a8.scope: Deactivated successfully.
Nov 22 04:02:07 compute-0 podman[283037]: 2025-11-22 04:02:07.084231825 +0000 UTC m=+1.177225173 container died d7c4582cdf3dca0459590ddb52d01f4defe82419f80bedd6e964b28e540d20a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:02:07 compute-0 systemd[1]: libpod-d7c4582cdf3dca0459590ddb52d01f4defe82419f80bedd6e964b28e540d20a8.scope: Consumed 1.008s CPU time.
Nov 22 04:02:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-24bdc7f0d47282e92196bc76371282a75c61aedc2206185b8f45698b8c46448d-merged.mount: Deactivated successfully.
Nov 22 04:02:07 compute-0 podman[283037]: 2025-11-22 04:02:07.168059513 +0000 UTC m=+1.261052871 container remove d7c4582cdf3dca0459590ddb52d01f4defe82419f80bedd6e964b28e540d20a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_visvesvaraya, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:02:07 compute-0 systemd[1]: libpod-conmon-d7c4582cdf3dca0459590ddb52d01f4defe82419f80bedd6e964b28e540d20a8.scope: Deactivated successfully.
Nov 22 04:02:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:02:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1304622763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:02:07 compute-0 sudo[282927]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:02:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:02:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:02:07 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.220 253465 DEBUG oslo_concurrency.processutils [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:07 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev b5d9e874-fb83-4023-a6b8-a547ec5b1a1e does not exist
Nov 22 04:02:07 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 718d05e8-de10-4536-a2a9-e5dfeff4c0a6 does not exist
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.228 253465 DEBUG nova.compute.provider_tree [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:02:07 compute-0 podman[283107]: 2025-11-22 04:02:07.229752315 +0000 UTC m=+0.094403665 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.243 253465 DEBUG nova.scheduler.client.report [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.264 253465 DEBUG oslo_concurrency.lockutils [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:07 compute-0 sudo[283138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:02:07 compute-0 sudo[283138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:07 compute-0 sudo[283138]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:02:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/442108942' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:02:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/442108942' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.316 253465 INFO nova.scheduler.client.report [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Deleted allocations for instance 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c
Nov 22 04:02:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:02:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2030946179' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:02:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2030946179' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:07 compute-0 sudo[283165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:02:07 compute-0 sudo[283165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:02:07 compute-0 sudo[283165]: pam_unix(sudo:session): session closed for user root
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.388 253465 DEBUG oslo_concurrency.lockutils [None req-ef45715f-8f18-468e-9af5-4de1fa3b3608 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.514s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.498 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.829 253465 DEBUG nova.compute.manager [req-5e821b79-a33f-4ab7-a87b-14e5a3cb1d19 req-fcc1f154-2c9e-40e2-bdb0-56c9a0c9d7b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Received event network-vif-plugged-d3d71762-3b8d-4284-93b4-3e0010782ff8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.830 253465 DEBUG oslo_concurrency.lockutils [req-5e821b79-a33f-4ab7-a87b-14e5a3cb1d19 req-fcc1f154-2c9e-40e2-bdb0-56c9a0c9d7b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.831 253465 DEBUG oslo_concurrency.lockutils [req-5e821b79-a33f-4ab7-a87b-14e5a3cb1d19 req-fcc1f154-2c9e-40e2-bdb0-56c9a0c9d7b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.831 253465 DEBUG oslo_concurrency.lockutils [req-5e821b79-a33f-4ab7-a87b-14e5a3cb1d19 req-fcc1f154-2c9e-40e2-bdb0-56c9a0c9d7b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.831 253465 DEBUG nova.compute.manager [req-5e821b79-a33f-4ab7-a87b-14e5a3cb1d19 req-fcc1f154-2c9e-40e2-bdb0-56c9a0c9d7b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] No waiting events found dispatching network-vif-plugged-d3d71762-3b8d-4284-93b4-3e0010782ff8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.832 253465 WARNING nova.compute.manager [req-5e821b79-a33f-4ab7-a87b-14e5a3cb1d19 req-fcc1f154-2c9e-40e2-bdb0-56c9a0c9d7b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Received unexpected event network-vif-plugged-d3d71762-3b8d-4284-93b4-3e0010782ff8 for instance with vm_state deleted and task_state None.
Nov 22 04:02:07 compute-0 nova_compute[253461]: 2025-11-22 04:02:07.832 253465 DEBUG nova.compute.manager [req-5e821b79-a33f-4ab7-a87b-14e5a3cb1d19 req-fcc1f154-2c9e-40e2-bdb0-56c9a0c9d7b2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Received event network-vif-deleted-d3d71762-3b8d-4284-93b4-3e0010782ff8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:02:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1304622763' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:02:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:02:08 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:02:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/442108942' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/442108942' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2030946179' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2030946179' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 145 op/s
Nov 22 04:02:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:02:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1250896149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:02:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1250896149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Nov 22 04:02:09 compute-0 ceph-mon[75011]: pgmap v1473: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 145 op/s
Nov 22 04:02:09 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1250896149' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:09 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1250896149' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Nov 22 04:02:09 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Nov 22 04:02:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:02:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3145572988' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:02:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3145572988' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Nov 22 04:02:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Nov 22 04:02:10 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Nov 22 04:02:10 compute-0 ceph-mon[75011]: osdmap e358: 3 total, 3 up, 3 in
Nov 22 04:02:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3145572988' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3145572988' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:10 compute-0 nova_compute[253461]: 2025-11-22 04:02:10.147 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 KiB/s wr, 253 op/s
Nov 22 04:02:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:02:10 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3209954409' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:02:10 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3209954409' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:11 compute-0 ceph-mon[75011]: osdmap e359: 3 total, 3 up, 3 in
Nov 22 04:02:11 compute-0 ceph-mon[75011]: pgmap v1476: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 KiB/s wr, 253 op/s
Nov 22 04:02:11 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3209954409' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:02:11 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3209954409' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:02:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 115 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.8 KiB/s wr, 276 op/s
Nov 22 04:02:12 compute-0 ceph-mon[75011]: pgmap v1477: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 115 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.8 KiB/s wr, 276 op/s
Nov 22 04:02:12 compute-0 nova_compute[253461]: 2025-11-22 04:02:12.532 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 88 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.0 KiB/s wr, 257 op/s
Nov 22 04:02:14 compute-0 ceph-mon[75011]: pgmap v1478: 305 pgs: 305 active+clean; 88 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.0 KiB/s wr, 257 op/s
Nov 22 04:02:15 compute-0 nova_compute[253461]: 2025-11-22 04:02:15.151 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e359 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Nov 22 04:02:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Nov 22 04:02:16 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Nov 22 04:02:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 88 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.0 KiB/s wr, 132 op/s
Nov 22 04:02:17 compute-0 ceph-mon[75011]: osdmap e360: 3 total, 3 up, 3 in
Nov 22 04:02:17 compute-0 ceph-mon[75011]: pgmap v1480: 305 pgs: 305 active+clean; 88 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.0 KiB/s wr, 132 op/s
Nov 22 04:02:17 compute-0 nova_compute[253461]: 2025-11-22 04:02:17.532 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 88 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 KiB/s wr, 66 op/s
Nov 22 04:02:18 compute-0 ceph-mon[75011]: pgmap v1481: 305 pgs: 305 active+clean; 88 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 KiB/s wr, 66 op/s
Nov 22 04:02:20 compute-0 nova_compute[253461]: 2025-11-22 04:02:20.120 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763784125.1188984, 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:02:20 compute-0 nova_compute[253461]: 2025-11-22 04:02:20.121 253465 INFO nova.compute.manager [-] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] VM Stopped (Lifecycle Event)
Nov 22 04:02:20 compute-0 nova_compute[253461]: 2025-11-22 04:02:20.140 253465 DEBUG nova.compute.manager [None req-825b31be-93e9-40df-9c7d-2b3d3ca5f436 - - - - - -] [instance: 8957f108-f35d-4c2a-bf13-bd6a6aa3fc9c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:02:20 compute-0 nova_compute[253461]: 2025-11-22 04:02:20.155 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 120 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 66 op/s
Nov 22 04:02:20 compute-0 ceph-mon[75011]: pgmap v1482: 305 pgs: 305 active+clean; 120 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 MiB/s wr, 66 op/s
Nov 22 04:02:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 72 op/s
Nov 22 04:02:22 compute-0 podman[283190]: 2025-11-22 04:02:22.37719821 +0000 UTC m=+0.058383428 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent)
Nov 22 04:02:22 compute-0 ceph-mon[75011]: pgmap v1483: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 72 op/s
Nov 22 04:02:22 compute-0 podman[283191]: 2025-11-22 04:02:22.47913678 +0000 UTC m=+0.148668869 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:02:22 compute-0 nova_compute[253461]: 2025-11-22 04:02:22.533 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:23.014 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:23.014 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:23.014 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 43 op/s
Nov 22 04:02:24 compute-0 ceph-mon[75011]: pgmap v1484: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.1 MiB/s wr, 43 op/s
Nov 22 04:02:24 compute-0 nova_compute[253461]: 2025-11-22 04:02:24.839 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "30f0e486-2dc6-492c-9891-5f02237d7435" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:24 compute-0 nova_compute[253461]: 2025-11-22 04:02:24.840 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:24 compute-0 nova_compute[253461]: 2025-11-22 04:02:24.862 253465 DEBUG nova.compute.manager [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:02:24 compute-0 nova_compute[253461]: 2025-11-22 04:02:24.876 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:24 compute-0 nova_compute[253461]: 2025-11-22 04:02:24.876 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:24 compute-0 nova_compute[253461]: 2025-11-22 04:02:24.913 253465 DEBUG nova.compute.manager [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:02:24 compute-0 nova_compute[253461]: 2025-11-22 04:02:24.980 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:24 compute-0 nova_compute[253461]: 2025-11-22 04:02:24.981 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:24 compute-0 nova_compute[253461]: 2025-11-22 04:02:24.992 253465 DEBUG nova.virt.hardware [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:02:24 compute-0 nova_compute[253461]: 2025-11-22 04:02:24.993 253465 INFO nova.compute.claims [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.029 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.123 253465 DEBUG oslo_concurrency.processutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.157 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:02:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411692628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.569 253465 DEBUG oslo_concurrency.processutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.575 253465 DEBUG nova.compute.provider_tree [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.590 253465 DEBUG nova.scheduler.client.report [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:02:25 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1411692628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.614 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.615 253465 DEBUG nova.compute.manager [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.619 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.632 253465 DEBUG nova.virt.hardware [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.633 253465 INFO nova.compute.claims [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.670 253465 DEBUG nova.compute.manager [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.671 253465 DEBUG nova.network.neutron [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.691 253465 INFO nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.707 253465 DEBUG nova.compute.manager [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.756 253465 INFO nova.virt.block_device [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Booting with volume c8851b72-0ea0-4abf-b3c9-07e9e110de7d at /dev/vda
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.786 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:02:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4040056984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.929 253465 DEBUG nova.policy [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '45ccef35c0c843a59c9dfd0eb67190a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '83cc5de7368b40b984b51f781e85343c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.935 253465 DEBUG os_brick.utils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.937 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.956 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.956 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4ea3fc-b8a0-4fd3-b78d-665243e9a2fd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.958 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.967 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.968 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[74248f49-ee12-494a-ab4a-fe3dfd50bf2e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.970 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.984 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.984 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[d51f20c8-ccfa-4a6a-9182-0479567dc949]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.986 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[f0396aac-e85b-4005-a3eb-ad7e72f41bcc]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:25 compute-0 nova_compute[253461]: 2025-11-22 04:02:25.987 253465 DEBUG oslo_concurrency.processutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e360 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.025 253465 DEBUG oslo_concurrency.processutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.029 253465 DEBUG os_brick.initiator.connectors.lightos [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.029 253465 DEBUG os_brick.initiator.connectors.lightos [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.030 253465 DEBUG os_brick.initiator.connectors.lightos [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.030 253465 DEBUG os_brick.utils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] <== get_connector_properties: return (95ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.031 253465 DEBUG nova.virt.block_device [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Updating existing volume attachment record: b330de2e-715a-4563-beab-8c423c87b48f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:02:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:02:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2909041634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.241 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.247 253465 DEBUG nova.compute.provider_tree [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.272 253465 DEBUG nova.scheduler.client.report [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.303 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.304 253465 DEBUG nova.compute.manager [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:02:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 42 op/s
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.377 253465 DEBUG nova.compute.manager [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.377 253465 DEBUG nova.network.neutron [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.407 253465 INFO nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.430 253465 DEBUG nova.compute.manager [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.572 253465 DEBUG nova.compute.manager [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.574 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.575 253465 INFO nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Creating image(s)
Nov 22 04:02:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.603 253465 DEBUG nova.storage.rbd_utils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] rbd image f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:02:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4040056984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2909041634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:02:26 compute-0 ceph-mon[75011]: pgmap v1485: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 42 op/s
Nov 22 04:02:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Nov 22 04:02:26 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.652 253465 DEBUG nova.storage.rbd_utils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] rbd image f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.683 253465 DEBUG nova.storage.rbd_utils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] rbd image f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.688 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.718 253465 DEBUG nova.policy [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a2ea2fdf84c34961a57ed463c6daa7ba', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ed2837d5c0344b88b5ba7799c801241', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:02:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:02:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/336579072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.773 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.774 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.774 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.775 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.799 253465 DEBUG nova.storage.rbd_utils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] rbd image f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.803 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:26 compute-0 nova_compute[253461]: 2025-11-22 04:02:26.970 253465 DEBUG nova.network.neutron [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Successfully created port: b394a80d-1857-49b1-bd4f-a2a675cc7ebe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.228 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.306 253465 DEBUG nova.storage.rbd_utils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] resizing rbd image f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:02:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:27.384 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:02:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:27.385 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.462 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.465 253465 DEBUG nova.compute.manager [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.467 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.467 253465 INFO nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Creating image(s)
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.468 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.468 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Ensure instance console log exists: /var/lib/nova/instances/30f0e486-2dc6-492c-9891-5f02237d7435/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.469 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.469 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.469 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.474 253465 DEBUG nova.objects.instance [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'migration_context' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.490 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.490 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Ensure instance console log exists: /var/lib/nova/instances/f916655a-aa1c-4071-b05b-7bd2a8661ce0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.491 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.491 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.491 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.531 253465 DEBUG nova.network.neutron [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Successfully created port: 905dd436-f21d-4498-9bc7-a159e966bc32 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.535 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Nov 22 04:02:27 compute-0 ceph-mon[75011]: osdmap e361: 3 total, 3 up, 3 in
Nov 22 04:02:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/336579072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Nov 22 04:02:27 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.642 253465 DEBUG nova.network.neutron [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Successfully updated port: b394a80d-1857-49b1-bd4f-a2a675cc7ebe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.669 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "refresh_cache-30f0e486-2dc6-492c-9891-5f02237d7435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.669 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquired lock "refresh_cache-30f0e486-2dc6-492c-9891-5f02237d7435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.669 253465 DEBUG nova.network.neutron [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.769 253465 DEBUG nova.compute.manager [req-d96a9424-05ef-47ad-81e0-e285f4b50970 req-9731c87e-ab1c-46c0-9542-4978c404b652 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Received event network-changed-b394a80d-1857-49b1-bd4f-a2a675cc7ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.770 253465 DEBUG nova.compute.manager [req-d96a9424-05ef-47ad-81e0-e285f4b50970 req-9731c87e-ab1c-46c0-9542-4978c404b652 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Refreshing instance network info cache due to event network-changed-b394a80d-1857-49b1-bd4f-a2a675cc7ebe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.770 253465 DEBUG oslo_concurrency.lockutils [req-d96a9424-05ef-47ad-81e0-e285f4b50970 req-9731c87e-ab1c-46c0-9542-4978c404b652 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-30f0e486-2dc6-492c-9891-5f02237d7435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:02:27 compute-0 nova_compute[253461]: 2025-11-22 04:02:27.841 253465 DEBUG nova.network.neutron [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.265 253465 DEBUG nova.network.neutron [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Successfully updated port: 905dd436-f21d-4498-9bc7-a159e966bc32 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.285 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "refresh_cache-f916655a-aa1c-4071-b05b-7bd2a8661ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.286 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquired lock "refresh_cache-f916655a-aa1c-4071-b05b-7bd2a8661ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.286 253465 DEBUG nova.network.neutron [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.357 253465 DEBUG nova.compute.manager [req-adebf550-8e3b-4ada-87f2-1dca2ac9ceaa req-bff9a961-8f58-4eea-9641-adda4d88adef f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Received event network-changed-905dd436-f21d-4498-9bc7-a159e966bc32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.357 253465 DEBUG nova.compute.manager [req-adebf550-8e3b-4ada-87f2-1dca2ac9ceaa req-bff9a961-8f58-4eea-9641-adda4d88adef f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Refreshing instance network info cache due to event network-changed-905dd436-f21d-4498-9bc7-a159e966bc32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.358 253465 DEBUG oslo_concurrency.lockutils [req-adebf550-8e3b-4ada-87f2-1dca2ac9ceaa req-bff9a961-8f58-4eea-9641-adda4d88adef f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-f916655a-aa1c-4071-b05b-7bd2a8661ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:02:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 875 KiB/s wr, 40 op/s
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.481 253465 DEBUG nova.network.neutron [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:02:28 compute-0 ceph-mon[75011]: osdmap e362: 3 total, 3 up, 3 in
Nov 22 04:02:28 compute-0 ceph-mon[75011]: pgmap v1488: 305 pgs: 305 active+clean; 134 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 875 KiB/s wr, 40 op/s
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.696 253465 DEBUG nova.network.neutron [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Updating instance_info_cache with network_info: [{"id": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "address": "fa:16:3e:85:6c:3c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb394a80d-18", "ovs_interfaceid": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.720 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Releasing lock "refresh_cache-30f0e486-2dc6-492c-9891-5f02237d7435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.721 253465 DEBUG nova.compute.manager [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Instance network_info: |[{"id": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "address": "fa:16:3e:85:6c:3c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb394a80d-18", "ovs_interfaceid": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.721 253465 DEBUG oslo_concurrency.lockutils [req-d96a9424-05ef-47ad-81e0-e285f4b50970 req-9731c87e-ab1c-46c0-9542-4978c404b652 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-30f0e486-2dc6-492c-9891-5f02237d7435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.722 253465 DEBUG nova.network.neutron [req-d96a9424-05ef-47ad-81e0-e285f4b50970 req-9731c87e-ab1c-46c0-9542-4978c404b652 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Refreshing network info cache for port b394a80d-1857-49b1-bd4f-a2a675cc7ebe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.728 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Start _get_guest_xml network_info=[{"id": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "address": "fa:16:3e:85:6c:3c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb394a80d-18", "ovs_interfaceid": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': 'b330de2e-715a-4563-beab-8c423c87b48f', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-c8851b72-0ea0-4abf-b3c9-07e9e110de7d', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'c8851b72-0ea0-4abf-b3c9-07e9e110de7d', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '30f0e486-2dc6-492c-9891-5f02237d7435', 'attached_at': '', 'detached_at': '', 'volume_id': 'c8851b72-0ea0-4abf-b3c9-07e9e110de7d', 'serial': 'c8851b72-0ea0-4abf-b3c9-07e9e110de7d'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': True, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.734 253465 WARNING nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.740 253465 DEBUG nova.virt.libvirt.host [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.741 253465 DEBUG nova.virt.libvirt.host [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:02:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:02:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/132616792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.751 253465 DEBUG nova.virt.libvirt.host [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.752 253465 DEBUG nova.virt.libvirt.host [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.752 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.753 253465 DEBUG nova.virt.hardware [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.753 253465 DEBUG nova.virt.hardware [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.754 253465 DEBUG nova.virt.hardware [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.754 253465 DEBUG nova.virt.hardware [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.754 253465 DEBUG nova.virt.hardware [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.755 253465 DEBUG nova.virt.hardware [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.755 253465 DEBUG nova.virt.hardware [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.755 253465 DEBUG nova.virt.hardware [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.756 253465 DEBUG nova.virt.hardware [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.756 253465 DEBUG nova.virt.hardware [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.756 253465 DEBUG nova.virt.hardware [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.796 253465 DEBUG nova.storage.rbd_utils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 30f0e486-2dc6-492c-9891-5f02237d7435_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:02:28 compute-0 nova_compute[253461]: 2025-11-22 04:02:28.803 253465 DEBUG oslo_concurrency.processutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:02:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1161078257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.310 253465 DEBUG oslo_concurrency.processutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.314 253465 DEBUG nova.network.neutron [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Updating instance_info_cache with network_info: [{"id": "905dd436-f21d-4498-9bc7-a159e966bc32", "address": "fa:16:3e:67:9b:8d", "network": {"id": "20419c46-b854-4274-a893-985996c423ff", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-67120831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ed2837d5c0344b88b5ba7799c801241", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap905dd436-f2", "ovs_interfaceid": "905dd436-f21d-4498-9bc7-a159e966bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.393 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Releasing lock "refresh_cache-f916655a-aa1c-4071-b05b-7bd2a8661ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.393 253465 DEBUG nova.compute.manager [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Instance network_info: |[{"id": "905dd436-f21d-4498-9bc7-a159e966bc32", "address": "fa:16:3e:67:9b:8d", "network": {"id": "20419c46-b854-4274-a893-985996c423ff", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-67120831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ed2837d5c0344b88b5ba7799c801241", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap905dd436-f2", "ovs_interfaceid": "905dd436-f21d-4498-9bc7-a159e966bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.394 253465 DEBUG oslo_concurrency.lockutils [req-adebf550-8e3b-4ada-87f2-1dca2ac9ceaa req-bff9a961-8f58-4eea-9641-adda4d88adef f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-f916655a-aa1c-4071-b05b-7bd2a8661ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.395 253465 DEBUG nova.network.neutron [req-adebf550-8e3b-4ada-87f2-1dca2ac9ceaa req-bff9a961-8f58-4eea-9641-adda4d88adef f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Refreshing network info cache for port 905dd436-f21d-4498-9bc7-a159e966bc32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.401 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Start _get_guest_xml network_info=[{"id": "905dd436-f21d-4498-9bc7-a159e966bc32", "address": "fa:16:3e:67:9b:8d", "network": {"id": "20419c46-b854-4274-a893-985996c423ff", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-67120831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ed2837d5c0344b88b5ba7799c801241", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap905dd436-f2", "ovs_interfaceid": "905dd436-f21d-4498-9bc7-a159e966bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.406 253465 WARNING nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.410 253465 DEBUG nova.virt.libvirt.vif [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1287261252',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1287261252',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1287261252',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA0wizjV88v9pjFVdz4W0Dqu87LyHlDZL/mj+Xssyoqdm5h1EI/pY5eZoXAS94VRdlC25e0MWyvAUI01U92avGCuXRAJMD+18vkkRHL8+54054r1q8yWW+jLr1jlNKumHg==',key_name='tempest-keypair-1850478184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-2rdi0c72',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:02:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=30f0e486-2dc6-492c-9891-5f02237d7435,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "address": "fa:16:3e:85:6c:3c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb394a80d-18", "ovs_interfaceid": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.410 253465 DEBUG nova.network.os_vif_util [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "address": "fa:16:3e:85:6c:3c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb394a80d-18", "ovs_interfaceid": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.411 253465 DEBUG nova.network.os_vif_util [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:6c:3c,bridge_name='br-int',has_traffic_filtering=True,id=b394a80d-1857-49b1-bd4f-a2a675cc7ebe,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb394a80d-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.413 253465 DEBUG nova.objects.instance [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'pci_devices' on Instance uuid 30f0e486-2dc6-492c-9891-5f02237d7435 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.422 253465 DEBUG nova.virt.libvirt.host [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.423 253465 DEBUG nova.virt.libvirt.host [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.431 253465 DEBUG nova.virt.libvirt.host [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.431 253465 DEBUG nova.virt.libvirt.host [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.432 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.433 253465 DEBUG nova.virt.hardware [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.433 253465 DEBUG nova.virt.hardware [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.434 253465 DEBUG nova.virt.hardware [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.434 253465 DEBUG nova.virt.hardware [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.435 253465 DEBUG nova.virt.hardware [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.435 253465 DEBUG nova.virt.hardware [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.436 253465 DEBUG nova.virt.hardware [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.436 253465 DEBUG nova.virt.hardware [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.436 253465 DEBUG nova.virt.hardware [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.437 253465 DEBUG nova.virt.hardware [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.437 253465 DEBUG nova.virt.hardware [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.445 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.507 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:02:29 compute-0 nova_compute[253461]:   <uuid>30f0e486-2dc6-492c-9891-5f02237d7435</uuid>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   <name>instance-00000011</name>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-1287261252</nova:name>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:02:28</nova:creationTime>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:02:29 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:02:29 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:02:29 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:02:29 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:02:29 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:02:29 compute-0 nova_compute[253461]:         <nova:user uuid="45ccef35c0c843a59c9dfd0eb67190a6">tempest-TestVolumeBootPattern-1584219565-project-member</nova:user>
Nov 22 04:02:29 compute-0 nova_compute[253461]:         <nova:project uuid="83cc5de7368b40b984b51f781e85343c">tempest-TestVolumeBootPattern-1584219565</nova:project>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:02:29 compute-0 nova_compute[253461]:         <nova:port uuid="b394a80d-1857-49b1-bd4f-a2a675cc7ebe">
Nov 22 04:02:29 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <system>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <entry name="serial">30f0e486-2dc6-492c-9891-5f02237d7435</entry>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <entry name="uuid">30f0e486-2dc6-492c-9891-5f02237d7435</entry>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     </system>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   <os>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   </os>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   <features>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   </features>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/30f0e486-2dc6-492c-9891-5f02237d7435_disk.config">
Nov 22 04:02:29 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       </source>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:02:29 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-c8851b72-0ea0-4abf-b3c9-07e9e110de7d">
Nov 22 04:02:29 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       </source>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:02:29 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <serial>c8851b72-0ea0-4abf-b3c9-07e9e110de7d</serial>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:85:6c:3c"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <target dev="tapb394a80d-18"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/30f0e486-2dc6-492c-9891-5f02237d7435/console.log" append="off"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <video>
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     </video>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:02:29 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:02:29 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:02:29 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:02:29 compute-0 nova_compute[253461]: </domain>
Nov 22 04:02:29 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.511 253465 DEBUG nova.compute.manager [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Preparing to wait for external event network-vif-plugged-b394a80d-1857-49b1-bd4f-a2a675cc7ebe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.512 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.512 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.513 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.514 253465 DEBUG nova.virt.libvirt.vif [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1287261252',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1287261252',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1287261252',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA0wizjV88v9pjFVdz4W0Dqu87LyHlDZL/mj+Xssyoqdm5h1EI/pY5eZoXAS94VRdlC25e0MWyvAUI01U92avGCuXRAJMD+18vkkRHL8+54054r1q8yWW+jLr1jlNKumHg==',key_name='tempest-keypair-1850478184',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-2rdi0c72',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:02:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=30f0e486-2dc6-492c-9891-5f02237d7435,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "address": "fa:16:3e:85:6c:3c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb394a80d-18", "ovs_interfaceid": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.515 253465 DEBUG nova.network.os_vif_util [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "address": "fa:16:3e:85:6c:3c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb394a80d-18", "ovs_interfaceid": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.516 253465 DEBUG nova.network.os_vif_util [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:6c:3c,bridge_name='br-int',has_traffic_filtering=True,id=b394a80d-1857-49b1-bd4f-a2a675cc7ebe,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb394a80d-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.517 253465 DEBUG os_vif [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:6c:3c,bridge_name='br-int',has_traffic_filtering=True,id=b394a80d-1857-49b1-bd4f-a2a675cc7ebe,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb394a80d-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.518 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.519 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.520 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.525 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.526 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb394a80d-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.526 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb394a80d-18, col_values=(('external_ids', {'iface-id': 'b394a80d-1857-49b1-bd4f-a2a675cc7ebe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:6c:3c', 'vm-uuid': '30f0e486-2dc6-492c-9891-5f02237d7435'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.529 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:29 compute-0 NetworkManager[48916]: <info>  [1763784149.5307] manager: (tapb394a80d-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.536 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.540 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.547 253465 INFO os_vif [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:6c:3c,bridge_name='br-int',has_traffic_filtering=True,id=b394a80d-1857-49b1-bd4f-a2a675cc7ebe,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb394a80d-18')
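
The three ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand on Interface.external_ids) are the entirety of the os-vif OVS plug; the bridge add is a no-op ("Transaction caused no change") because br-int already exists. A minimal standalone sketch of the same sequence, assuming a local OVSDB unix socket at /run/openvswitch/db.sock (os-vif reads the actual endpoint from its config) and batching into one transaction where the log runs them separately:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open_vSwitch database; socket path assumed.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Idempotent: may_exist=True makes this a no-op on a live host.
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapb394a80d-18', may_exist=True))
        # external_ids is what lets ovn-controller map the OVS interface
        # back to the Neutron port (iface-id) and claim it, as seen later.
        txn.add(api.db_set(
            'Interface', 'tapb394a80d-18',
            ('external_ids', {
                'iface-id': 'b394a80d-1857-49b1-bd4f-a2a675cc7ebe',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:85:6c:3c',
                'vm-uuid': '30f0e486-2dc6-492c-9891-5f02237d7435'})))
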
Nov 22 04:02:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/132616792' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1161078257' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.699 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.700 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.700 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No VIF found with MAC fa:16:3e:85:6c:3c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.701 253465 INFO nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Using config drive
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.738 253465 DEBUG nova.storage.rbd_utils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 30f0e486-2dc6-492c-9891-5f02237d7435_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:02:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:02:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3933008465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:29 compute-0 nova_compute[253461]: 2025-11-22 04:02:29.971 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.005 253465 DEBUG nova.storage.rbd_utils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] rbd image f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.013 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.215 253465 INFO nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Creating config drive at /var/lib/nova/instances/30f0e486-2dc6-492c-9891-5f02237d7435/disk.config
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.226 253465 DEBUG oslo_concurrency.processutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30f0e486-2dc6-492c-9891-5f02237d7435/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5z6k6yx7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.314 253465 DEBUG nova.network.neutron [req-d96a9424-05ef-47ad-81e0-e285f4b50970 req-9731c87e-ab1c-46c0-9542-4978c404b652 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Updated VIF entry in instance network info cache for port b394a80d-1857-49b1-bd4f-a2a675cc7ebe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.315 253465 DEBUG nova.network.neutron [req-d96a9424-05ef-47ad-81e0-e285f4b50970 req-9731c87e-ab1c-46c0-9542-4978c404b652 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Updating instance_info_cache with network_info: [{"id": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "address": "fa:16:3e:85:6c:3c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb394a80d-18", "ovs_interfaceid": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.345 253465 DEBUG oslo_concurrency.lockutils [req-d96a9424-05ef-47ad-81e0-e285f4b50970 req-9731c87e-ab1c-46c0-9542-4978c404b652 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-30f0e486-2dc6-492c-9891-5f02237d7435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:02:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 199 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.7 MiB/s wr, 53 op/s
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.369 253465 DEBUG oslo_concurrency.processutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30f0e486-2dc6-492c-9891-5f02237d7435/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5z6k6yx7" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.407 253465 DEBUG nova.storage.rbd_utils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 30f0e486-2dc6-492c-9891-5f02237d7435_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.413 253465 DEBUG oslo_concurrency.processutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30f0e486-2dc6-492c-9891-5f02237d7435/disk.config 30f0e486-2dc6-492c-9891-5f02237d7435_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:02:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3833346012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.536 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
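
The paired "Running cmd"/"CMD ... returned" lines are oslo.concurrency's processutils wrapper around a plain subprocess; nova's rbd_utils shells out to "ceph mon dump" to discover monitor addresses. The direct equivalent under the same client identity:

    from oslo_concurrency import processutils

    # Returns (stdout, stderr); raises ProcessExecutionError on rc != 0.
    out, _err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
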
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.537 253465 DEBUG nova.virt.libvirt.vif [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-448994149',display_name='tempest-SnapshotDataIntegrityTests-server-448994149',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-448994149',id=18,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMdHPvqdgfYANQmokSC7429xVzDKgMwZE4oiZaPOpXH5lgO3KNV4xaut64/pEBvzIQTnQWSGFpIS7A+K3rfQ+++WPw0I8OMiD86CFB9DXTD6TBgfwIpCH8imYNPR9HbvfQ==',key_name='tempest-keypair-1058175192',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ed2837d5c0344b88b5ba7799c801241',ramdisk_id='',reservation_id='r-3wk7snjy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-58717939',owner_user_name='tempest-SnapshotDataIntegrityTests-58717939-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:02:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a2ea2fdf84c34961a57ed463c6daa7ba',uuid=f916655a-aa1c-4071-b05b-7bd2a8661ce0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "905dd436-f21d-4498-9bc7-a159e966bc32", "address": "fa:16:3e:67:9b:8d", "network": {"id": "20419c46-b854-4274-a893-985996c423ff", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-67120831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ed2837d5c0344b88b5ba7799c801241", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap905dd436-f2", "ovs_interfaceid": 
"905dd436-f21d-4498-9bc7-a159e966bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.538 253465 DEBUG nova.network.os_vif_util [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Converting VIF {"id": "905dd436-f21d-4498-9bc7-a159e966bc32", "address": "fa:16:3e:67:9b:8d", "network": {"id": "20419c46-b854-4274-a893-985996c423ff", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-67120831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ed2837d5c0344b88b5ba7799c801241", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap905dd436-f2", "ovs_interfaceid": "905dd436-f21d-4498-9bc7-a159e966bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.539 253465 DEBUG nova.network.os_vif_util [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:9b:8d,bridge_name='br-int',has_traffic_filtering=True,id=905dd436-f21d-4498-9bc7-a159e966bc32,network=Network(20419c46-b854-4274-a893-985996c423ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap905dd436-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.540 253465 DEBUG nova.objects.instance [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'pci_devices' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.566 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:02:30 compute-0 nova_compute[253461]:   <uuid>f916655a-aa1c-4071-b05b-7bd2a8661ce0</uuid>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   <name>instance-00000012</name>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <nova:name>tempest-SnapshotDataIntegrityTests-server-448994149</nova:name>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:02:29</nova:creationTime>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:02:30 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:02:30 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:02:30 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:02:30 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:02:30 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:02:30 compute-0 nova_compute[253461]:         <nova:user uuid="a2ea2fdf84c34961a57ed463c6daa7ba">tempest-SnapshotDataIntegrityTests-58717939-project-member</nova:user>
Nov 22 04:02:30 compute-0 nova_compute[253461]:         <nova:project uuid="2ed2837d5c0344b88b5ba7799c801241">tempest-SnapshotDataIntegrityTests-58717939</nova:project>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:02:30 compute-0 nova_compute[253461]:         <nova:port uuid="905dd436-f21d-4498-9bc7-a159e966bc32">
Nov 22 04:02:30 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <system>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <entry name="serial">f916655a-aa1c-4071-b05b-7bd2a8661ce0</entry>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <entry name="uuid">f916655a-aa1c-4071-b05b-7bd2a8661ce0</entry>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     </system>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   <os>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   </os>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   <features>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   </features>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk">
Nov 22 04:02:30 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       </source>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:02:30 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk.config">
Nov 22 04:02:30 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       </source>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:02:30 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:67:9b:8d"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <target dev="tap905dd436-f2"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/f916655a-aa1c-4071-b05b-7bd2a8661ce0/console.log" append="off"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <video>
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     </video>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:02:30 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:02:30 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:02:30 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:02:30 compute-0 nova_compute[253461]: </domain>
Nov 22 04:02:30 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
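
With _get_guest_xml done, the driver hands that <domain> document to libvirt. A minimal sketch of the handoff through libvirt-python, assuming the system URI (the real call path also wires up volumes and VIFs around it, and may start the guest paused while waiting for network events):

    import libvirt

    with open('domain.xml') as f:       # the <domain> XML logged above
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)           # persist the definition
    dom.createWithFlags(0)              # boot instance-00000012
    conn.close()
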
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.567 253465 DEBUG nova.compute.manager [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Preparing to wait for external event network-vif-plugged-905dd436-f21d-4498-9bc7-a159e966bc32 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.567 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.567 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.568 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
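
"Preparing to wait for external event network-vif-plugged-..." sets up the handshake in which the compute manager blocks until Neutron reports the port wired up, so the guest is not started against a dead network. A toy model of the mechanism (real nova keys events per instance under the lock shown above and runs on eventlet, not threading; names here are illustrative only):

    import threading

    events = {}

    def prepare_for_instance_event(name):
        events[name] = threading.Event()

    def process_external_event(name):
        # Invoked when Neutron POSTs to os-server-external-events.
        events[name].set()

    prepare_for_instance_event('network-vif-plugged-905dd436')
    # ... plug the VIF, define and start the guest ...
    if not events['network-vif-plugged-905dd436'].wait(timeout=300):
        raise TimeoutError('timed out waiting for network-vif-plugged')
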
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.568 253465 DEBUG nova.virt.libvirt.vif [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-448994149',display_name='tempest-SnapshotDataIntegrityTests-server-448994149',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-448994149',id=18,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMdHPvqdgfYANQmokSC7429xVzDKgMwZE4oiZaPOpXH5lgO3KNV4xaut64/pEBvzIQTnQWSGFpIS7A+K3rfQ+++WPw0I8OMiD86CFB9DXTD6TBgfwIpCH8imYNPR9HbvfQ==',key_name='tempest-keypair-1058175192',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ed2837d5c0344b88b5ba7799c801241',ramdisk_id='',reservation_id='r-3wk7snjy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-58717939',owner_user_name='tempest-SnapshotDataIntegrityTests-58717939-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:02:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a2ea2fdf84c34961a57ed463c6daa7ba',uuid=f916655a-aa1c-4071-b05b-7bd2a8661ce0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "905dd436-f21d-4498-9bc7-a159e966bc32", "address": "fa:16:3e:67:9b:8d", "network": {"id": "20419c46-b854-4274-a893-985996c423ff", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-67120831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ed2837d5c0344b88b5ba7799c801241", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap905dd436-f2", "ovs_interfaceid": 
"905dd436-f21d-4498-9bc7-a159e966bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.569 253465 DEBUG nova.network.os_vif_util [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Converting VIF {"id": "905dd436-f21d-4498-9bc7-a159e966bc32", "address": "fa:16:3e:67:9b:8d", "network": {"id": "20419c46-b854-4274-a893-985996c423ff", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-67120831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ed2837d5c0344b88b5ba7799c801241", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap905dd436-f2", "ovs_interfaceid": "905dd436-f21d-4498-9bc7-a159e966bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.569 253465 DEBUG nova.network.os_vif_util [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:9b:8d,bridge_name='br-int',has_traffic_filtering=True,id=905dd436-f21d-4498-9bc7-a159e966bc32,network=Network(20419c46-b854-4274-a893-985996c423ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap905dd436-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.570 253465 DEBUG os_vif [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:9b:8d,bridge_name='br-int',has_traffic_filtering=True,id=905dd436-f21d-4498-9bc7-a159e966bc32,network=Network(20419c46-b854-4274-a893-985996c423ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap905dd436-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.570 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.571 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.571 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.573 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.574 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap905dd436-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.575 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap905dd436-f2, col_values=(('external_ids', {'iface-id': '905dd436-f21d-4498-9bc7-a159e966bc32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:9b:8d', 'vm-uuid': 'f916655a-aa1c-4071-b05b-7bd2a8661ce0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.577 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:30 compute-0 NetworkManager[48916]: <info>  [1763784150.5789] manager: (tap905dd436-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.581 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.584 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.585 253465 INFO os_vif [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:9b:8d,bridge_name='br-int',has_traffic_filtering=True,id=905dd436-f21d-4498-9bc7-a159e966bc32,network=Network(20419c46-b854-4274-a893-985996c423ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap905dd436-f2')
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.682 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.683 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.683 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No VIF found with MAC fa:16:3e:67:9b:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.684 253465 INFO nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Using config drive
Nov 22 04:02:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3933008465' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:30 compute-0 ceph-mon[75011]: pgmap v1489: 305 pgs: 305 active+clean; 199 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.7 MiB/s wr, 53 op/s
Nov 22 04:02:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3833346012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:30 compute-0 nova_compute[253461]: 2025-11-22 04:02:30.765 253465 DEBUG nova.storage.rbd_utils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] rbd image f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:02:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.115 253465 DEBUG oslo_concurrency.processutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30f0e486-2dc6-492c-9891-5f02237d7435/disk.config 30f0e486-2dc6-492c-9891-5f02237d7435_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.702s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.115 253465 INFO nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Deleting local config drive /var/lib/nova/instances/30f0e486-2dc6-492c-9891-5f02237d7435/disk.config because it was imported into RBD.
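
The import-then-delete pattern is how the config drive ends up Ceph-backed: mkisofs builds the ISO locally, "rbd import" copies it into the vms pool, and the local file is removed. Roughly the same import through the python-rbd bindings (a sketch, assuming the cluster and client.openstack credentials from the log):

    import rados
    import rbd

    path = ('/var/lib/nova/instances/'
            '30f0e486-2dc6-492c-9891-5f02237d7435/disk.config')
    name = '30f0e486-2dc6-492c-9891-5f02237d7435_disk.config'

    with open(path, 'rb') as f:
        data = f.read()
    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            # old_format=False gives a format-2 image, matching
            # the --image-format=2 flag in the logged command.
            rbd.RBD().create(ioctx, name, len(data), old_format=False)
            with rbd.Image(ioctx, name) as image:
                image.write(data, 0)   # after this the local file can go
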
Nov 22 04:02:31 compute-0 kernel: tapb394a80d-18: entered promiscuous mode
Nov 22 04:02:31 compute-0 NetworkManager[48916]: <info>  [1763784151.1934] manager: (tapb394a80d-18): new Tun device (/org/freedesktop/NetworkManager/Devices/96)
Nov 22 04:02:31 compute-0 ovn_controller[152691]: 2025-11-22T04:02:31Z|00183|binding|INFO|Claiming lport b394a80d-1857-49b1-bd4f-a2a675cc7ebe for this chassis.
Nov 22 04:02:31 compute-0 ovn_controller[152691]: 2025-11-22T04:02:31Z|00184|binding|INFO|b394a80d-1857-49b1-bd4f-a2a675cc7ebe: Claiming fa:16:3e:85:6c:3c 10.100.0.12
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.194 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:31 compute-0 systemd-machined[215728]: New machine qemu-17-instance-00000011.
Nov 22 04:02:31 compute-0 ovn_controller[152691]: 2025-11-22T04:02:31Z|00185|binding|INFO|Setting lport b394a80d-1857-49b1-bd4f-a2a675cc7ebe ovn-installed in OVS
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.232 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.240 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:6c:3c 10.100.0.12'], port_security=['fa:16:3e:85:6c:3c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30f0e486-2dc6-492c-9891-5f02237d7435', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '20bce94a-76bb-4cce-8d86-d3a6c4976306', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=b394a80d-1857-49b1-bd4f-a2a675cc7ebe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.241 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:31 compute-0 ovn_controller[152691]: 2025-11-22T04:02:31Z|00186|binding|INFO|Setting lport b394a80d-1857-49b1-bd4f-a2a675cc7ebe up in Southbound
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.242 162689 INFO neutron.agent.ovn.metadata.agent [-] Port b394a80d-1857-49b1-bd4f-a2a675cc7ebe in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 bound to our chassis
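
The "Matched UPDATE: PortBindingUpdatedEvent" entry a few lines up is ovsdbapp's row-event machinery firing as the Port_Binding row gains a chassis. The general shape of such an event class, simplified from what the metadata agent registers (the real one lives in neutron.agent.ovn.metadata.agent and carries more checks):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        """Fire when a Port_Binding row is claimed by a chassis."""

        def __init__(self, agent):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.agent = agent

        def match_fn(self, event, row, old):
            # Only rows that just gained a chassis are interesting,
            # mirroring old=Port_Binding(chassis=[]) in the log.
            return bool(row.chassis) and not getattr(old, 'chassis', [])

        def run(self, event, row, old):
            self.agent.provision_datapath(row.datapath)
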
Nov 22 04:02:31 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.245 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.267 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[33a61ea3-5135-47f1-a34e-49dcef713e11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.268 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4670b112-91 in ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
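
Provisioning metadata for a network means a dedicated ovnmeta- namespace plus a VETH pair with one end inside it (the -91 end here) and the other attached to br-int, so the per-network haproxy can answer 169.254.169.254. A bare-bones version of the VETH step via pyroute2 (the agent goes through neutron's privileged ip_lib instead; interface and namespace names taken from the log):

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03'
    netns.create(ns)   # the agent ensures the namespace exists first

    with IPRoute() as ipr:
        # tap4670b112-90 stays in the root namespace; its peer
        # tap4670b112-91 is created directly inside the namespace.
        ipr.link('add', ifname='tap4670b112-90', kind='veth',
                 peer={'ifname': 'tap4670b112-91', 'net_ns_fd': ns})
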
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.271 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4670b112-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.271 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[17dbbbb3-de75-4511-be58-bf693c7d1521]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.272 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9d1d7681-365b-46d2-8ad0-1b37f8eb40e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 systemd-udevd[283652]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.286 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[5b046aeb-b7af-4ae9-a050-9e7f919df4fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 NetworkManager[48916]: <info>  [1763784151.3048] device (tapb394a80d-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:02:31 compute-0 NetworkManager[48916]: <info>  [1763784151.3062] device (tapb394a80d-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.316 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[13c53747-a37b-4050-9330-5d354194674c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.355 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[6224241b-07e1-4735-8902-853e6ebbca8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 systemd-udevd[283661]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:02:31 compute-0 NetworkManager[48916]: <info>  [1763784151.3654] manager: (tap4670b112-90): new Veth device (/org/freedesktop/NetworkManager/Devices/97)
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.364 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8f44abdb-0aa5-4b57-86f5-c2b38279050b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.387 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.406 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[03e8ff37-1d75-438f-bee3-6f91cebe54fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.411 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[7d713d05-e1af-4f4a-9881-a507768aefb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 NetworkManager[48916]: <info>  [1763784151.4393] device (tap4670b112-90): carrier: link connected
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.442 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[18a28dea-47fb-40d5-8e58-d20304062694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.467 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[82720804-c5ec-40f9-a159-d22dbbd0d20d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440403, 'reachable_time': 42317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283687, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.499 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e568de53-f16a-4e04-a0c5-854b80db7fc2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:43a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440403, 'tstamp': 440403}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283703, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.518 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4344abe0-594e-4b03-b949-7bd83dc0c408]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440403, 'reachable_time': 42317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283707, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.548 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb379bd-68a7-4c42-812d-ee2c944760b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.628 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[47cb10da-abf4-4d85-9ece-52a123ff7a89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.629 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.630 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.630 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4670b112-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.632 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:31 compute-0 NetworkManager[48916]: <info>  [1763784151.6329] manager: (tap4670b112-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/98)
Nov 22 04:02:31 compute-0 kernel: tap4670b112-90: entered promiscuous mode
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.638 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.641 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4670b112-90, col_values=(('external_ids', {'iface-id': 'e72a94a7-9aac-4cfd-886c-1e1e93834214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.642 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:31 compute-0 ovn_controller[152691]: 2025-11-22T04:02:31Z|00187|binding|INFO|Releasing lport e72a94a7-9aac-4cfd-886c-1e1e93834214 from this chassis (sb_readonly=0)
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.659 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.664 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.665 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.666 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6feee1f5-cd3a-41e9-b3ab-cde5bad63441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.666 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:02:31 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:31.667 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'env', 'PROCESS_TAG=haproxy-4670b112-9f63-4a03-8d79-91f581c69c03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4670b112-9f63-4a03-8d79-91f581c69c03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.891 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784151.8912866, 30f0e486-2dc6-492c-9891-5f02237d7435 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.892 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] VM Started (Lifecycle Event)
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.941 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.949 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784151.8914573, 30f0e486-2dc6-492c-9891-5f02237d7435 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.949 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] VM Paused (Lifecycle Event)
Nov 22 04:02:31 compute-0 nova_compute[253461]: 2025-11-22 04:02:31.998 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.004 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.044 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:02:32 compute-0 podman[283768]: 2025-11-22 04:02:32.148482831 +0000 UTC m=+0.108944232 container create 9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:02:32 compute-0 podman[283768]: 2025-11-22 04:02:32.065792408 +0000 UTC m=+0.026253869 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:02:32 compute-0 systemd[1]: Started libpod-conmon-9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f.scope.
Nov 22 04:02:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:02:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c5f4cb1dd56dd164ea6b95b83df09f83cddb450258c784b5f4cd99808f1431/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:32 compute-0 podman[283768]: 2025-11-22 04:02:32.30064536 +0000 UTC m=+0.261106811 container init 9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:02:32 compute-0 podman[283768]: 2025-11-22 04:02:32.310990091 +0000 UTC m=+0.271451492 container start 9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 22 04:02:32 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[283783]: [NOTICE]   (283787) : New worker (283789) forked
Nov 22 04:02:32 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[283783]: [NOTICE]   (283787) : Loading success.
Nov 22 04:02:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 328 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 21 MiB/s wr, 167 op/s
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.538 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.586 253465 INFO nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Creating config drive at /var/lib/nova/instances/f916655a-aa1c-4071-b05b-7bd2a8661ce0/disk.config
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.601 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f916655a-aa1c-4071-b05b-7bd2a8661ce0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5hskexlb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.686 253465 DEBUG nova.compute.manager [req-7b875cc1-4e4c-4e56-8842-95cd2f70a3a1 req-5e404a5a-53fd-4cdc-ba1d-d7adc3f3e4f8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Received event network-vif-plugged-b394a80d-1857-49b1-bd4f-a2a675cc7ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.687 253465 DEBUG oslo_concurrency.lockutils [req-7b875cc1-4e4c-4e56-8842-95cd2f70a3a1 req-5e404a5a-53fd-4cdc-ba1d-d7adc3f3e4f8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.690 253465 DEBUG oslo_concurrency.lockutils [req-7b875cc1-4e4c-4e56-8842-95cd2f70a3a1 req-5e404a5a-53fd-4cdc-ba1d-d7adc3f3e4f8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.690 253465 DEBUG oslo_concurrency.lockutils [req-7b875cc1-4e4c-4e56-8842-95cd2f70a3a1 req-5e404a5a-53fd-4cdc-ba1d-d7adc3f3e4f8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.691 253465 DEBUG nova.compute.manager [req-7b875cc1-4e4c-4e56-8842-95cd2f70a3a1 req-5e404a5a-53fd-4cdc-ba1d-d7adc3f3e4f8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Processing event network-vif-plugged-b394a80d-1857-49b1-bd4f-a2a675cc7ebe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.692 253465 DEBUG nova.compute.manager [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.697 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784152.6967788, 30f0e486-2dc6-492c-9891-5f02237d7435 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.698 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] VM Resumed (Lifecycle Event)
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.702 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.706 253465 INFO nova.virt.libvirt.driver [-] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Instance spawned successfully.
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.707 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.755 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f916655a-aa1c-4071-b05b-7bd2a8661ce0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5hskexlb" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.800 253465 DEBUG nova.storage.rbd_utils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] rbd image f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.817 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f916655a-aa1c-4071-b05b-7bd2a8661ce0/disk.config f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.848 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.855 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.856 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.856 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.857 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.857 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.858 253465 DEBUG nova.virt.libvirt.driver [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.863 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:02:32 compute-0 nova_compute[253461]: 2025-11-22 04:02:32.943 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.005 253465 INFO nova.compute.manager [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Took 5.54 seconds to spawn the instance on the hypervisor.
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.006 253465 DEBUG nova.compute.manager [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.108 253465 INFO nova.compute.manager [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Took 8.18 seconds to build instance.
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.131 253465 DEBUG oslo_concurrency.lockutils [None req-f5573db5-10e6-443c-bfe7-28047fe62ee5 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.291s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.145 253465 DEBUG oslo_concurrency.processutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f916655a-aa1c-4071-b05b-7bd2a8661ce0/disk.config f916655a-aa1c-4071-b05b-7bd2a8661ce0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.147 253465 INFO nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Deleting local config drive /var/lib/nova/instances/f916655a-aa1c-4071-b05b-7bd2a8661ce0/disk.config because it was imported into RBD.
Nov 22 04:02:33 compute-0 NetworkManager[48916]: <info>  [1763784153.2078] manager: (tap905dd436-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/99)
Nov 22 04:02:33 compute-0 kernel: tap905dd436-f2: entered promiscuous mode
Nov 22 04:02:33 compute-0 systemd-udevd[283676]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.211 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:33 compute-0 ovn_controller[152691]: 2025-11-22T04:02:33Z|00188|binding|INFO|Claiming lport 905dd436-f21d-4498-9bc7-a159e966bc32 for this chassis.
Nov 22 04:02:33 compute-0 ovn_controller[152691]: 2025-11-22T04:02:33Z|00189|binding|INFO|905dd436-f21d-4498-9bc7-a159e966bc32: Claiming fa:16:3e:67:9b:8d 10.100.0.11
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.235 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:9b:8d 10.100.0.11'], port_security=['fa:16:3e:67:9b:8d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f916655a-aa1c-4071-b05b-7bd2a8661ce0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20419c46-b854-4274-a893-985996c423ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ed2837d5c0344b88b5ba7799c801241', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0d5a7c53-cf71-48a7-9702-55f86ae6b22a d4a0b50e-e25d-45d7-8066-0806f23d5429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc364e99-58eb-4fc0-816d-2e7face6b382, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=905dd436-f21d-4498-9bc7-a159e966bc32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.238 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 905dd436-f21d-4498-9bc7-a159e966bc32 in datapath 20419c46-b854-4274-a893-985996c423ff bound to our chassis
Nov 22 04:02:33 compute-0 NetworkManager[48916]: <info>  [1763784153.2428] device (tap905dd436-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.244 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 20419c46-b854-4274-a893-985996c423ff
Nov 22 04:02:33 compute-0 NetworkManager[48916]: <info>  [1763784153.2452] device (tap905dd436-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.258 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[0e00a62e-95b3-408c-a986-153b74ae748a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.259 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap20419c46-b1 in ovnmeta-20419c46-b854-4274-a893-985996c423ff namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.261 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap20419c46-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.261 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4ba7e825-9609-43d2-97c4-cb340faf9167]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.262 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9688e8ce-6667-487b-9a7b-e349d7185dae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 systemd-machined[215728]: New machine qemu-18-instance-00000012.
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.272 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf3b170-b157-41fd-a5c5-c49221e0bab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.294 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd115a1-d451-46a4-8eb2-69a757005800]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.300 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:33 compute-0 ovn_controller[152691]: 2025-11-22T04:02:33Z|00190|binding|INFO|Setting lport 905dd436-f21d-4498-9bc7-a159e966bc32 ovn-installed in OVS
Nov 22 04:02:33 compute-0 ovn_controller[152691]: 2025-11-22T04:02:33Z|00191|binding|INFO|Setting lport 905dd436-f21d-4498-9bc7-a159e966bc32 up in Southbound
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.305 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.322 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[45c655fa-0aa2-4287-96b3-758e0dc68d2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 NetworkManager[48916]: <info>  [1763784153.3290] manager: (tap20419c46-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/100)
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.329 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8b080995-235f-45e9-8fb7-ae73f41399a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.367 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[e473cf50-8204-45fe-9006-a8bdf9319146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.371 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[a391a366-a9d1-4f68-9791-fd4bea6f4842]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 NetworkManager[48916]: <info>  [1763784153.3976] device (tap20419c46-b0): carrier: link connected
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.407 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[2a270806-07a2-4603-a683-9060c55f0a06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.430 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[82bc1783-b232-4442-823e-36076b54ff96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20419c46-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:cd:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440599, 'reachable_time': 22540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283869, 'error': None, 'target': 'ovnmeta-20419c46-b854-4274-a893-985996c423ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.448 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ca263a-0bca-4ea4-a29d-225cf22e75bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:cd7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440599, 'tstamp': 440599}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283870, 'error': None, 'target': 'ovnmeta-20419c46-b854-4274-a893-985996c423ff', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 ceph-mon[75011]: pgmap v1490: 305 pgs: 305 active+clean; 328 MiB data, 547 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 21 MiB/s wr, 167 op/s
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.474 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e0031cb1-f5ff-4a3b-b9ab-73a29ef8e909]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap20419c46-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:cd:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440599, 'reachable_time': 22540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283871, 'error': None, 'target': 'ovnmeta-20419c46-b854-4274-a893-985996c423ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.518 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6a4a10-3e70-4418-91f8-76a4f22ddb5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.618 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[addc002a-d2cf-45a9-842d-cb9e3ae4071f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.620 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20419c46-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.620 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.621 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap20419c46-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:33 compute-0 NetworkManager[48916]: <info>  [1763784153.6236] manager: (tap20419c46-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/101)
Nov 22 04:02:33 compute-0 kernel: tap20419c46-b0: entered promiscuous mode
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.625 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.625 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap20419c46-b0, col_values=(('external_ids', {'iface-id': '6958247b-71c0-41d2-a518-2490d0fb2be6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:02:33 compute-0 ovn_controller[152691]: 2025-11-22T04:02:33Z|00192|binding|INFO|Releasing lport 6958247b-71c0-41d2-a518-2490d0fb2be6 from this chassis (sb_readonly=0)
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.648 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.648 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/20419c46-b854-4274-a893-985996c423ff.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/20419c46-b854-4274-a893-985996c423ff.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.649 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d242b4-36d9-4849-9792-23eb61bc7d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.650 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-20419c46-b854-4274-a893-985996c423ff
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/20419c46-b854-4274-a893-985996c423ff.pid.haproxy
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 20419c46-b854-4274-a893-985996c423ff
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:02:33 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:02:33.651 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-20419c46-b854-4274-a893-985996c423ff', 'env', 'PROCESS_TAG=haproxy-20419c46-b854-4274-a893-985996c423ff', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/20419c46-b854-4274-a893-985996c423ff.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.940 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784153.9398434, f916655a-aa1c-4071-b05b-7bd2a8661ce0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.941 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] VM Started (Lifecycle Event)
Nov 22 04:02:33 compute-0 nova_compute[253461]: 2025-11-22 04:02:33.990 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.014 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784153.9401195, f916655a-aa1c-4071-b05b-7bd2a8661ce0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.015 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] VM Paused (Lifecycle Event)
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.081 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.087 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.108 253465 DEBUG nova.network.neutron [req-adebf550-8e3b-4ada-87f2-1dca2ac9ceaa req-bff9a961-8f58-4eea-9641-adda4d88adef f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Updated VIF entry in instance network info cache for port 905dd436-f21d-4498-9bc7-a159e966bc32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.110 253465 DEBUG nova.network.neutron [req-adebf550-8e3b-4ada-87f2-1dca2ac9ceaa req-bff9a961-8f58-4eea-9641-adda4d88adef f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Updating instance_info_cache with network_info: [{"id": "905dd436-f21d-4498-9bc7-a159e966bc32", "address": "fa:16:3e:67:9b:8d", "network": {"id": "20419c46-b854-4274-a893-985996c423ff", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-67120831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ed2837d5c0344b88b5ba7799c801241", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap905dd436-f2", "ovs_interfaceid": "905dd436-f21d-4498-9bc7-a159e966bc32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.117 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.149 253465 DEBUG oslo_concurrency.lockutils [req-adebf550-8e3b-4ada-87f2-1dca2ac9ceaa req-bff9a961-8f58-4eea-9641-adda4d88adef f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-f916655a-aa1c-4071-b05b-7bd2a8661ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:02:34 compute-0 podman[283945]: 2025-11-22 04:02:34.175363952 +0000 UTC m=+0.072646214 container create 0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:02:34 compute-0 systemd[1]: Started libpod-conmon-0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6.scope.
Nov 22 04:02:34 compute-0 podman[283945]: 2025-11-22 04:02:34.135781518 +0000 UTC m=+0.033063850 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:02:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ec5dd98917944b48bfdf0bd09091d5989677044325f1b3fe05517be241af43a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:02:34 compute-0 podman[283945]: 2025-11-22 04:02:34.339223848 +0000 UTC m=+0.236506200 container init 0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:02:34 compute-0 podman[283945]: 2025-11-22 04:02:34.346485061 +0000 UTC m=+0.243767363 container start 0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:02:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 576 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 52 MiB/s wr, 198 op/s
Nov 22 04:02:34 compute-0 neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff[283960]: [NOTICE]   (283964) : New worker (283966) forked
Nov 22 04:02:34 compute-0 neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff[283960]: [NOTICE]   (283964) : Loading success.
Nov 22 04:02:34 compute-0 ceph-mon[75011]: pgmap v1491: 305 pgs: 305 active+clean; 576 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 52 MiB/s wr, 198 op/s
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.812 253465 DEBUG nova.compute.manager [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Received event network-vif-plugged-b394a80d-1857-49b1-bd4f-a2a675cc7ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.812 253465 DEBUG oslo_concurrency.lockutils [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.813 253465 DEBUG oslo_concurrency.lockutils [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.814 253465 DEBUG oslo_concurrency.lockutils [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.814 253465 DEBUG nova.compute.manager [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] No waiting events found dispatching network-vif-plugged-b394a80d-1857-49b1-bd4f-a2a675cc7ebe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.814 253465 WARNING nova.compute.manager [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Received unexpected event network-vif-plugged-b394a80d-1857-49b1-bd4f-a2a675cc7ebe for instance with vm_state active and task_state None.
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.815 253465 DEBUG nova.compute.manager [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Received event network-vif-plugged-905dd436-f21d-4498-9bc7-a159e966bc32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.815 253465 DEBUG oslo_concurrency.lockutils [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.816 253465 DEBUG oslo_concurrency.lockutils [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.816 253465 DEBUG oslo_concurrency.lockutils [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.817 253465 DEBUG nova.compute.manager [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Processing event network-vif-plugged-905dd436-f21d-4498-9bc7-a159e966bc32 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.817 253465 DEBUG nova.compute.manager [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Received event network-vif-plugged-905dd436-f21d-4498-9bc7-a159e966bc32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.818 253465 DEBUG oslo_concurrency.lockutils [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.818 253465 DEBUG oslo_concurrency.lockutils [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.819 253465 DEBUG oslo_concurrency.lockutils [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.819 253465 DEBUG nova.compute.manager [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] No waiting events found dispatching network-vif-plugged-905dd436-f21d-4498-9bc7-a159e966bc32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.819 253465 WARNING nova.compute.manager [req-825da9d7-eb3a-4403-82d4-3125a204aa7f req-5ce8bb3f-f422-43fe-9b3a-67fbe8d3b883 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Received unexpected event network-vif-plugged-905dd436-f21d-4498-9bc7-a159e966bc32 for instance with vm_state building and task_state spawning.
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.820 253465 DEBUG nova.compute.manager [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.826 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.828 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784154.8283126, f916655a-aa1c-4071-b05b-7bd2a8661ce0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.829 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] VM Resumed (Lifecycle Event)
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.845 253465 INFO nova.virt.libvirt.driver [-] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Instance spawned successfully.
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.846 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.853 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.857 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.895 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.900 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.900 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.901 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.902 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.902 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:34 compute-0 nova_compute[253461]: 2025-11-22 04:02:34.903 253465 DEBUG nova.virt.libvirt.driver [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:02:35 compute-0 nova_compute[253461]: 2025-11-22 04:02:35.051 253465 INFO nova.compute.manager [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Took 8.48 seconds to spawn the instance on the hypervisor.
Nov 22 04:02:35 compute-0 nova_compute[253461]: 2025-11-22 04:02:35.052 253465 DEBUG nova.compute.manager [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:02:35 compute-0 nova_compute[253461]: 2025-11-22 04:02:35.156 253465 INFO nova.compute.manager [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Took 10.15 seconds to build instance.
Nov 22 04:02:35 compute-0 nova_compute[253461]: 2025-11-22 04:02:35.180 253465 DEBUG oslo_concurrency.lockutils [None req-6429bbad-76ff-48ea-8ac5-a30c6966b96f a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.303s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:35 compute-0 nova_compute[253461]: 2025-11-22 04:02:35.577 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:02:36
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'images']
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 733 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 436 KiB/s rd, 59 MiB/s wr, 182 op/s
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:02:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:02:36 compute-0 nova_compute[253461]: 2025-11-22 04:02:36.689 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:36 compute-0 NetworkManager[48916]: <info>  [1763784156.6919] manager: (patch-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Nov 22 04:02:36 compute-0 NetworkManager[48916]: <info>  [1763784156.6928] manager: (patch-br-int-to-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Nov 22 04:02:36 compute-0 nova_compute[253461]: 2025-11-22 04:02:36.897 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:36 compute-0 ovn_controller[152691]: 2025-11-22T04:02:36Z|00193|binding|INFO|Releasing lport e72a94a7-9aac-4cfd-886c-1e1e93834214 from this chassis (sb_readonly=0)
Nov 22 04:02:36 compute-0 ovn_controller[152691]: 2025-11-22T04:02:36Z|00194|binding|INFO|Releasing lport 6958247b-71c0-41d2-a518-2490d0fb2be6 from this chassis (sb_readonly=0)
Nov 22 04:02:36 compute-0 nova_compute[253461]: 2025-11-22 04:02:36.931 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:37 compute-0 ceph-mon[75011]: pgmap v1492: 305 pgs: 305 active+clean; 733 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 436 KiB/s rd, 59 MiB/s wr, 182 op/s
Nov 22 04:02:37 compute-0 podman[283976]: 2025-11-22 04:02:37.436189089 +0000 UTC m=+0.102300174 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:02:37 compute-0 nova_compute[253461]: 2025-11-22 04:02:37.585 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:37 compute-0 nova_compute[253461]: 2025-11-22 04:02:37.674 253465 DEBUG nova.compute.manager [req-24703557-c306-4151-b294-b645414430f3 req-69e0aea7-9458-48ec-bd39-feb6b20e017b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Received event network-changed-b394a80d-1857-49b1-bd4f-a2a675cc7ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:02:37 compute-0 nova_compute[253461]: 2025-11-22 04:02:37.675 253465 DEBUG nova.compute.manager [req-24703557-c306-4151-b294-b645414430f3 req-69e0aea7-9458-48ec-bd39-feb6b20e017b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Refreshing instance network info cache due to event network-changed-b394a80d-1857-49b1-bd4f-a2a675cc7ebe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:02:37 compute-0 nova_compute[253461]: 2025-11-22 04:02:37.675 253465 DEBUG oslo_concurrency.lockutils [req-24703557-c306-4151-b294-b645414430f3 req-69e0aea7-9458-48ec-bd39-feb6b20e017b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-30f0e486-2dc6-492c-9891-5f02237d7435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:02:37 compute-0 nova_compute[253461]: 2025-11-22 04:02:37.675 253465 DEBUG oslo_concurrency.lockutils [req-24703557-c306-4151-b294-b645414430f3 req-69e0aea7-9458-48ec-bd39-feb6b20e017b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-30f0e486-2dc6-492c-9891-5f02237d7435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:02:37 compute-0 nova_compute[253461]: 2025-11-22 04:02:37.675 253465 DEBUG nova.network.neutron [req-24703557-c306-4151-b294-b645414430f3 req-69e0aea7-9458-48ec-bd39-feb6b20e017b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Refreshing network info cache for port b394a80d-1857-49b1-bd4f-a2a675cc7ebe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:02:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 917 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 71 MiB/s wr, 328 op/s
Nov 22 04:02:38 compute-0 nova_compute[253461]: 2025-11-22 04:02:38.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:02:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Nov 22 04:02:38 compute-0 nova_compute[253461]: 2025-11-22 04:02:38.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:02:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Nov 22 04:02:38 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Nov 22 04:02:38 compute-0 nova_compute[253461]: 2025-11-22 04:02:38.455 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:38 compute-0 nova_compute[253461]: 2025-11-22 04:02:38.456 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:38 compute-0 nova_compute[253461]: 2025-11-22 04:02:38.457 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:38 compute-0 nova_compute[253461]: 2025-11-22 04:02:38.458 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:02:38 compute-0 nova_compute[253461]: 2025-11-22 04:02:38.459 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:38 compute-0 nova_compute[253461]: 2025-11-22 04:02:38.690 253465 DEBUG nova.compute.manager [req-4694a727-5c02-43e2-a5fd-5fe97bfb3e32 req-a39cede9-81cf-416b-a32b-2cacdf7602ce f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Received event network-changed-905dd436-f21d-4498-9bc7-a159e966bc32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:02:38 compute-0 nova_compute[253461]: 2025-11-22 04:02:38.690 253465 DEBUG nova.compute.manager [req-4694a727-5c02-43e2-a5fd-5fe97bfb3e32 req-a39cede9-81cf-416b-a32b-2cacdf7602ce f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Refreshing instance network info cache due to event network-changed-905dd436-f21d-4498-9bc7-a159e966bc32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:02:38 compute-0 nova_compute[253461]: 2025-11-22 04:02:38.691 253465 DEBUG oslo_concurrency.lockutils [req-4694a727-5c02-43e2-a5fd-5fe97bfb3e32 req-a39cede9-81cf-416b-a32b-2cacdf7602ce f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-f916655a-aa1c-4071-b05b-7bd2a8661ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:02:38 compute-0 nova_compute[253461]: 2025-11-22 04:02:38.691 253465 DEBUG oslo_concurrency.lockutils [req-4694a727-5c02-43e2-a5fd-5fe97bfb3e32 req-a39cede9-81cf-416b-a32b-2cacdf7602ce f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-f916655a-aa1c-4071-b05b-7bd2a8661ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:02:38 compute-0 nova_compute[253461]: 2025-11-22 04:02:38.691 253465 DEBUG nova.network.neutron [req-4694a727-5c02-43e2-a5fd-5fe97bfb3e32 req-a39cede9-81cf-416b-a32b-2cacdf7602ce f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Refreshing network info cache for port 905dd436-f21d-4498-9bc7-a159e966bc32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:02:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:02:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285713441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:02:38 compute-0 nova_compute[253461]: 2025-11-22 04:02:38.973 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.072 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.072 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.077 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.078 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.310 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.312 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4188MB free_disk=59.96718978881836GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.312 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.312 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.384 253465 DEBUG nova.network.neutron [req-24703557-c306-4151-b294-b645414430f3 req-69e0aea7-9458-48ec-bd39-feb6b20e017b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Updated VIF entry in instance network info cache for port b394a80d-1857-49b1-bd4f-a2a675cc7ebe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.385 253465 DEBUG nova.network.neutron [req-24703557-c306-4151-b294-b645414430f3 req-69e0aea7-9458-48ec-bd39-feb6b20e017b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Updating instance_info_cache with network_info: [{"id": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "address": "fa:16:3e:85:6c:3c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb394a80d-18", "ovs_interfaceid": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.386 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 30f0e486-2dc6-492c-9891-5f02237d7435 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.386 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.387 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.387 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.402 253465 DEBUG oslo_concurrency.lockutils [req-24703557-c306-4151-b294-b645414430f3 req-69e0aea7-9458-48ec-bd39-feb6b20e017b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-30f0e486-2dc6-492c-9891-5f02237d7435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:02:39 compute-0 ceph-mon[75011]: pgmap v1493: 305 pgs: 305 active+clean; 917 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 71 MiB/s wr, 328 op/s
Nov 22 04:02:39 compute-0 ceph-mon[75011]: osdmap e363: 3 total, 3 up, 3 in
Nov 22 04:02:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/285713441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.445 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:02:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:02:39 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/835226405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.904 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.912 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.932 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.963 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:02:39 compute-0 nova_compute[253461]: 2025-11-22 04:02:39.964 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:02:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 98 MiB/s wr, 438 op/s
Nov 22 04:02:40 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/835226405' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:02:40 compute-0 nova_compute[253461]: 2025-11-22 04:02:40.513 253465 DEBUG nova.network.neutron [req-4694a727-5c02-43e2-a5fd-5fe97bfb3e32 req-a39cede9-81cf-416b-a32b-2cacdf7602ce f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Updated VIF entry in instance network info cache for port 905dd436-f21d-4498-9bc7-a159e966bc32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:02:40 compute-0 nova_compute[253461]: 2025-11-22 04:02:40.514 253465 DEBUG nova.network.neutron [req-4694a727-5c02-43e2-a5fd-5fe97bfb3e32 req-a39cede9-81cf-416b-a32b-2cacdf7602ce f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Updating instance_info_cache with network_info: [{"id": "905dd436-f21d-4498-9bc7-a159e966bc32", "address": "fa:16:3e:67:9b:8d", "network": {"id": "20419c46-b854-4274-a893-985996c423ff", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-67120831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ed2837d5c0344b88b5ba7799c801241", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap905dd436-f2", "ovs_interfaceid": "905dd436-f21d-4498-9bc7-a159e966bc32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:02:40 compute-0 nova_compute[253461]: 2025-11-22 04:02:40.553 253465 DEBUG oslo_concurrency.lockutils [req-4694a727-5c02-43e2-a5fd-5fe97bfb3e32 req-a39cede9-81cf-416b-a32b-2cacdf7602ce f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-f916655a-aa1c-4071-b05b-7bd2a8661ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:02:40 compute-0 nova_compute[253461]: 2025-11-22 04:02:40.579 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:40 compute-0 nova_compute[253461]: 2025-11-22 04:02:40.965 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:02:40 compute-0 nova_compute[253461]: 2025-11-22 04:02:40.966 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:02:40 compute-0 nova_compute[253461]: 2025-11-22 04:02:40.967 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:02:40 compute-0 nova_compute[253461]: 2025-11-22 04:02:40.967 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:02:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e363 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:41 compute-0 nova_compute[253461]: 2025-11-22 04:02:41.419 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "refresh_cache-30f0e486-2dc6-492c-9891-5f02237d7435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:02:41 compute-0 nova_compute[253461]: 2025-11-22 04:02:41.420 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquired lock "refresh_cache-30f0e486-2dc6-492c-9891-5f02237d7435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:02:41 compute-0 nova_compute[253461]: 2025-11-22 04:02:41.420 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 04:02:41 compute-0 nova_compute[253461]: 2025-11-22 04:02:41.421 253465 DEBUG nova.objects.instance [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 30f0e486-2dc6-492c-9891-5f02237d7435 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:02:41 compute-0 ceph-mon[75011]: pgmap v1495: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 98 MiB/s wr, 438 op/s
Nov 22 04:02:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:02:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/294172236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 88 MiB/s wr, 359 op/s
Nov 22 04:02:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Nov 22 04:02:42 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/294172236' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:42 compute-0 ceph-mon[75011]: pgmap v1496: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 88 MiB/s wr, 359 op/s
Nov 22 04:02:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Nov 22 04:02:42 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Nov 22 04:02:42 compute-0 nova_compute[253461]: 2025-11-22 04:02:42.587 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:42 compute-0 nova_compute[253461]: 2025-11-22 04:02:42.677 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Updating instance_info_cache with network_info: [{"id": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "address": "fa:16:3e:85:6c:3c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb394a80d-18", "ovs_interfaceid": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:02:42 compute-0 nova_compute[253461]: 2025-11-22 04:02:42.693 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Releasing lock "refresh_cache-30f0e486-2dc6-492c-9891-5f02237d7435" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:02:42 compute-0 nova_compute[253461]: 2025-11-22 04:02:42.693 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:02:42 compute-0 nova_compute[253461]: 2025-11-22 04:02:42.693 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:02:42 compute-0 nova_compute[253461]: 2025-11-22 04:02:42.694 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:02:42 compute-0 nova_compute[253461]: 2025-11-22 04:02:42.694 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:02:42 compute-0 nova_compute[253461]: 2025-11-22 04:02:42.694 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
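[annotation] The "Running periodic task ..." lines above come from oslo.service's task runner: each decorated method fires when run_periodic_tasks() is invoked from the service loop, and _reclaim_queued_deletes gates itself on a non-positive interval exactly as the DEBUG line shows. A minimal sketch under those assumptions (class and method names illustrative, not Nova's real wiring):

    from oslo_config import cfg
    from oslo_service import periodic_task

    conf = cfg.ConfigOpts()
    conf([])  # initialize without CLI args; enough for this sketch

    reclaim_instance_interval = 0  # stands in for CONF.reclaim_instance_interval

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(conf)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # refresh one instance's network info per pass

        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            if reclaim_instance_interval <= 0:
                return  # "skipping...", as in the DEBUG line above

    Manager().run_periodic_tasks(context=None)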
Nov 22 04:02:43 compute-0 nova_compute[253461]: 2025-11-22 04:02:43.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:02:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Nov 22 04:02:43 compute-0 ceph-mon[75011]: osdmap e364: 3 total, 3 up, 3 in
Nov 22 04:02:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Nov 22 04:02:43 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Nov 22 04:02:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 48 MiB/s wr, 238 op/s
Nov 22 04:02:44 compute-0 nova_compute[253461]: 2025-11-22 04:02:44.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:02:44 compute-0 ceph-mon[75011]: osdmap e365: 3 total, 3 up, 3 in
Nov 22 04:02:44 compute-0 ceph-mon[75011]: pgmap v1499: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 48 MiB/s wr, 238 op/s
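[annotation] The recurring "pgmap vNNNN:" entries from ceph-mgr (and echoed by ceph-mon) carry the cluster's live throughput. When skimming long stretches of this log, a throwaway regex (illustrative only) lifts the figures out of each entry:

    import re

    PGMAP = re.compile(
        r'pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*; '
        r'(?P<rd>\S+ \S+/s) rd, (?P<wr>\S+ \S+/s) wr, (?P<ops>\d+) op/s')

    m = PGMAP.search('pgmap v1499: 305 pgs: 305 active+clean; 1.2 GiB data, '
                     '1.4 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, '
                     '48 MiB/s wr, 238 op/s')
    print(m.group('ver'), m.group('wr'), m.group('ops'))   # 1499 48 MiB/s 238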
Nov 22 04:02:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:02:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2927983912' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:45 compute-0 nova_compute[253461]: 2025-11-22 04:02:45.424 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:02:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Nov 22 04:02:45 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2927983912' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Nov 22 04:02:45 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Nov 22 04:02:45 compute-0 nova_compute[253461]: 2025-11-22 04:02:45.581 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Nov 22 04:02:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Nov 22 04:02:46 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Nov 22 04:02:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 KiB/s wr, 23 op/s
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00035159302353975384 of space, bias 1.0, pg target 0.10547790706192615 quantized to 32 (current 32)
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.01736306122615094 of space, bias 1.0, pg target 5.208918367845282 quantized to 32 (current 32)
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.503703522823468e-05 quantized to 32 (current 32)
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19652199526274663 quantized to 32 (current 32)
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006002962818258775 quantized to 16 (current 16)
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.503703522823468e-05 quantized to 32 (current 32)
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006378147994399948 quantized to 32 (current 32)
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:02:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015007407045646937 quantized to 32 (current 32)
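[annotation] The pg_autoscaler figures above are consistent with a simplified reading of its sizing rule: raw target = capacity_ratio x bias x (root PG target / pool replica size), where the root target here is 100 PGs per OSD x 3 OSDs = 300; the raw value is then rounded toward a power of two and left alone unless it drifts far from the current pg_num. A sketch of that arithmetic (an approximation of mgr/pg_autoscaler, not its full logic; pool_size=1 is an assumption inferred from the multiplier seen in these lines):

    def raw_pg_target(capacity_ratio, bias, num_osds=3,
                      target_pg_per_osd=100, pool_size=1):
        # capacity_ratio and bias are taken verbatim from the log lines above.
        return capacity_ratio * bias * target_pg_per_osd * num_osds / pool_size

    print(raw_pg_target(0.01736306122615094, 1.0))    # ~5.2089, pool 'volumes'
    print(raw_pg_target(7.185749983720779e-06, 1.0))  # ~0.00216, pool '.mgr'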
Nov 22 04:02:46 compute-0 ceph-mon[75011]: osdmap e366: 3 total, 3 up, 3 in
Nov 22 04:02:46 compute-0 ceph-mon[75011]: osdmap e367: 3 total, 3 up, 3 in
Nov 22 04:02:46 compute-0 ceph-mon[75011]: pgmap v1502: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 KiB/s wr, 23 op/s
Nov 22 04:02:47 compute-0 ovn_controller[152691]: 2025-11-22T04:02:47Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:6c:3c 10.100.0.12
Nov 22 04:02:47 compute-0 ovn_controller[152691]: 2025-11-22T04:02:47Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:6c:3c 10.100.0.12
Nov 22 04:02:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:02:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/51535760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:47 compute-0 nova_compute[253461]: 2025-11-22 04:02:47.589 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:48 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/51535760' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:02:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 1.7 MiB/s wr, 116 op/s
Nov 22 04:02:49 compute-0 ceph-mon[75011]: pgmap v1503: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 1.7 MiB/s wr, 116 op/s
Nov 22 04:02:49 compute-0 ovn_controller[152691]: 2025-11-22T04:02:49Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:9b:8d 10.100.0.11
Nov 22 04:02:49 compute-0 ovn_controller[152691]: 2025-11-22T04:02:49Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:9b:8d 10.100.0.11
Nov 22 04:02:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 1.3 GiB data, 1.6 GiB used, 58 GiB / 60 GiB avail; 1.0 MiB/s rd, 23 MiB/s wr, 294 op/s
Nov 22 04:02:50 compute-0 nova_compute[253461]: 2025-11-22 04:02:50.584 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:51 compute-0 ceph-mon[75011]: pgmap v1504: 305 pgs: 305 active+clean; 1.3 GiB data, 1.6 GiB used, 58 GiB / 60 GiB avail; 1.0 MiB/s rd, 23 MiB/s wr, 294 op/s
Nov 22 04:02:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 1.5 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 1002 KiB/s rd, 34 MiB/s wr, 277 op/s
Nov 22 04:02:52 compute-0 ceph-mon[75011]: pgmap v1505: 305 pgs: 305 active+clean; 1.5 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 1002 KiB/s rd, 34 MiB/s wr, 277 op/s
Nov 22 04:02:52 compute-0 nova_compute[253461]: 2025-11-22 04:02:52.618 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:53 compute-0 podman[284047]: 2025-11-22 04:02:53.427152383 +0000 UTC m=+0.093318925 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 22 04:02:53 compute-0 podman[284048]: 2025-11-22 04:02:53.491220759 +0000 UTC m=+0.151512419 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
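[annotation] The two health_status=healthy events above come from podman's built-in healthchecks; each container's config_data mounts /var/lib/openstack/healthchecks/... into the container and registers /openstack/healthcheck as the health command. The current state can be read back with a Go-template inspect (a sketch; the container name is taken from the event above, and the .State.Health field layout is assumed from podman's docker-compatible inspect output):

    import json
    import subprocess

    out = subprocess.run(
        ['podman', 'inspect', '--format', '{{json .State.Health}}',
         'ovn_metadata_agent'],
        capture_output=True, text=True, check=True).stdout
    health = json.loads(out)
    print(health['Status'], health['FailingStreak'])   # healthy 0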
Nov 22 04:02:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 1.7 GiB data, 1.9 GiB used, 58 GiB / 60 GiB avail; 1010 KiB/s rd, 55 MiB/s wr, 380 op/s
Nov 22 04:02:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Nov 22 04:02:55 compute-0 ceph-mon[75011]: pgmap v1506: 305 pgs: 305 active+clean; 1.7 GiB data, 1.9 GiB used, 58 GiB / 60 GiB avail; 1010 KiB/s rd, 55 MiB/s wr, 380 op/s
Nov 22 04:02:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Nov 22 04:02:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Nov 22 04:02:55 compute-0 nova_compute[253461]: 2025-11-22 04:02:55.585 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e368 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:02:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 1.8 GiB data, 2.0 GiB used, 58 GiB / 60 GiB avail; 902 KiB/s rd, 65 MiB/s wr, 343 op/s
Nov 22 04:02:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Nov 22 04:02:56 compute-0 ceph-mon[75011]: osdmap e368: 3 total, 3 up, 3 in
Nov 22 04:02:56 compute-0 ceph-mon[75011]: pgmap v1508: 305 pgs: 305 active+clean; 1.8 GiB data, 2.0 GiB used, 58 GiB / 60 GiB avail; 902 KiB/s rd, 65 MiB/s wr, 343 op/s
Nov 22 04:02:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Nov 22 04:02:56 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Nov 22 04:02:57 compute-0 ceph-mon[75011]: osdmap e369: 3 total, 3 up, 3 in
Nov 22 04:02:57 compute-0 nova_compute[253461]: 2025-11-22 04:02:57.621 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:02:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 2.0 GiB data, 2.2 GiB used, 58 GiB / 60 GiB avail; 249 KiB/s rd, 90 MiB/s wr, 195 op/s
Nov 22 04:02:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Nov 22 04:02:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Nov 22 04:02:58 compute-0 ceph-mon[75011]: pgmap v1510: 305 pgs: 305 active+clean; 2.0 GiB data, 2.2 GiB used, 58 GiB / 60 GiB avail; 249 KiB/s rd, 90 MiB/s wr, 195 op/s
Nov 22 04:02:58 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Nov 22 04:02:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Nov 22 04:02:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Nov 22 04:02:59 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Nov 22 04:02:59 compute-0 ceph-mon[75011]: osdmap e370: 3 total, 3 up, 3 in
Nov 22 04:03:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:03:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/506527237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:03:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:03:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/506527237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:03:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 152 KiB/s rd, 89 MiB/s wr, 254 op/s
Nov 22 04:03:00 compute-0 nova_compute[253461]: 2025-11-22 04:03:00.587 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:00 compute-0 ceph-mon[75011]: osdmap e371: 3 total, 3 up, 3 in
Nov 22 04:03:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/506527237' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:03:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/506527237' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:03:00 compute-0 ceph-mon[75011]: pgmap v1513: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 152 KiB/s rd, 89 MiB/s wr, 254 op/s
Nov 22 04:03:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 124 KiB/s rd, 71 MiB/s wr, 208 op/s
Nov 22 04:03:02 compute-0 nova_compute[253461]: 2025-11-22 04:03:02.625 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:02 compute-0 nova_compute[253461]: 2025-11-22 04:03:02.865 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:02 compute-0 nova_compute[253461]: 2025-11-22 04:03:02.865 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:02 compute-0 nova_compute[253461]: 2025-11-22 04:03:02.885 253465 DEBUG nova.compute.manager [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:03:02 compute-0 nova_compute[253461]: 2025-11-22 04:03:02.972 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:02 compute-0 nova_compute[253461]: 2025-11-22 04:03:02.972 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
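[annotation] The four lock lines above are oslo.concurrency's standard acquire/release pattern: one lock keyed on the instance UUID serializes the whole build, and a nested "compute_resources" lock guards the resource tracker. A minimal sketch of the same pattern (call sites illustrative, not Nova's actual code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('3371e7b7-8ad9-42ef-8a8d-8afa9840abfa')
    def _locked_do_build_and_run_instance():
        # Nested lock, as in the "compute_resources" acquire/release pair above.
        with lockutils.lock('compute_resources'):
            pass  # claim CPU/RAM/disk, as ResourceTracker.instance_claim does

    _locked_do_build_and_run_instance()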
Nov 22 04:03:02 compute-0 nova_compute[253461]: 2025-11-22 04:03:02.982 253465 DEBUG nova.virt.hardware [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:03:02 compute-0 nova_compute[253461]: 2025-11-22 04:03:02.982 253465 INFO nova.compute.claims [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:03:03 compute-0 nova_compute[253461]: 2025-11-22 04:03:03.123 253465 DEBUG oslo_concurrency.processutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:03 compute-0 ceph-mon[75011]: pgmap v1514: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 124 KiB/s rd, 71 MiB/s wr, 208 op/s
Nov 22 04:03:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:03:03 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/377850184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:03:03 compute-0 nova_compute[253461]: 2025-11-22 04:03:03.648 253465 DEBUG oslo_concurrency.processutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
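[annotation] The processutils pair above shows Nova shelling out for Ceph capacity before finishing the resource claim; the 59 GiB avail reported by the cluster is what surfaces as DISK_GB total in the inventory line below. An equivalent standalone call (key names follow ceph's JSON schema; error handling omitted):

    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)['stats']
    print(stats['total_avail_bytes'] // 1024**3)   # ~59 (GiB)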
Nov 22 04:03:03 compute-0 nova_compute[253461]: 2025-11-22 04:03:03.654 253465 DEBUG nova.compute.provider_tree [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:03:03 compute-0 nova_compute[253461]: 2025-11-22 04:03:03.677 253465 DEBUG nova.scheduler.client.report [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:03:03 compute-0 nova_compute[253461]: 2025-11-22 04:03:03.707 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:03 compute-0 nova_compute[253461]: 2025-11-22 04:03:03.708 253465 DEBUG nova.compute.manager [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:03:03 compute-0 nova_compute[253461]: 2025-11-22 04:03:03.774 253465 INFO nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:03:03 compute-0 nova_compute[253461]: 2025-11-22 04:03:03.777 253465 DEBUG nova.compute.manager [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:03:03 compute-0 nova_compute[253461]: 2025-11-22 04:03:03.777 253465 DEBUG nova.network.neutron [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:03:03 compute-0 nova_compute[253461]: 2025-11-22 04:03:03.805 253465 DEBUG nova.compute.manager [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:03:03 compute-0 nova_compute[253461]: 2025-11-22 04:03:03.858 253465 INFO nova.virt.block_device [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Booting with volume snapshot 5fbb7c07-ca4c-4e6a-9a60-a101eac1fe33 at /dev/vda
Nov 22 04:03:04 compute-0 nova_compute[253461]: 2025-11-22 04:03:04.028 253465 DEBUG nova.policy [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '45ccef35c0c843a59c9dfd0eb67190a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '83cc5de7368b40b984b51f781e85343c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
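[annotation] The policy DEBUG above is expected for an unprivileged tenant: the credentials carry only the 'reader' and 'member' roles, while network:attach_external_network defaults to an admin-only rule, so the check fails and the boot proceeds without external-network attach rights. A minimal oslo.policy sketch of that outcome (the rule defaults here are stand-ins for the real policy files, not Nova's registered policies):

    from oslo_config import cfg
    from oslo_policy import policy

    conf = cfg.ConfigOpts()
    conf([])
    enforcer = policy.Enforcer(conf)
    enforcer.register_default(
        policy.RuleDefault('context_is_admin', 'role:admin'))
    enforcer.register_default(
        policy.RuleDefault('network:attach_external_network',
                           'rule:context_is_admin'))

    creds = {'roles': ['reader', 'member'],
             'project_id': '83cc5de7368b40b984b51f781e85343c'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # -> False, matching the "Policy check ... failed" line above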
Nov 22 04:03:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 1.1 MiB/s rd, 55 MiB/s wr, 169 op/s
Nov 22 04:03:04 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/377850184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:03:05 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:05.491 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:03:05 compute-0 nova_compute[253461]: 2025-11-22 04:03:05.492 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:05 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:05.493 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:03:05 compute-0 ceph-mon[75011]: pgmap v1515: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 1.1 MiB/s rd, 55 MiB/s wr, 169 op/s
Nov 22 04:03:05 compute-0 nova_compute[253461]: 2025-11-22 04:03:05.574 253465 DEBUG nova.network.neutron [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Successfully created port: a8b62e2a-0384-4ffd-a779-f44e0b6673c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:03:05 compute-0 nova_compute[253461]: 2025-11-22 04:03:05.589 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Nov 22 04:03:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Nov 22 04:03:06 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Nov 22 04:03:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:03:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:03:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:03:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:03:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:03:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:03:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 1.1 MiB/s rd, 25 MiB/s wr, 182 op/s
Nov 22 04:03:07 compute-0 ceph-mon[75011]: osdmap e372: 3 total, 3 up, 3 in
Nov 22 04:03:07 compute-0 ceph-mon[75011]: pgmap v1517: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 1.1 MiB/s rd, 25 MiB/s wr, 182 op/s
Nov 22 04:03:07 compute-0 sudo[284109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:03:07 compute-0 sudo[284109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:07 compute-0 sudo[284109]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:07 compute-0 podman[284133]: 2025-11-22 04:03:07.59789685 +0000 UTC m=+0.069503378 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 04:03:07 compute-0 sudo[284141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:03:07 compute-0 sudo[284141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:07 compute-0 sudo[284141]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:07 compute-0 nova_compute[253461]: 2025-11-22 04:03:07.627 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:07 compute-0 nova_compute[253461]: 2025-11-22 04:03:07.674 253465 DEBUG nova.network.neutron [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Successfully updated port: a8b62e2a-0384-4ffd-a779-f44e0b6673c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:03:07 compute-0 sudo[284178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:03:07 compute-0 nova_compute[253461]: 2025-11-22 04:03:07.694 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "refresh_cache-3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:03:07 compute-0 nova_compute[253461]: 2025-11-22 04:03:07.694 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquired lock "refresh_cache-3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:03:07 compute-0 nova_compute[253461]: 2025-11-22 04:03:07.695 253465 DEBUG nova.network.neutron [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:03:07 compute-0 sudo[284178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:07 compute-0 sudo[284178]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:07 compute-0 nova_compute[253461]: 2025-11-22 04:03:07.770 253465 DEBUG nova.compute.manager [req-371f97a9-8296-411b-ba89-4177c2a062eb req-367eac24-82cc-467a-90e8-a798809afbde f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Received event network-changed-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:07 compute-0 nova_compute[253461]: 2025-11-22 04:03:07.770 253465 DEBUG nova.compute.manager [req-371f97a9-8296-411b-ba89-4177c2a062eb req-367eac24-82cc-467a-90e8-a798809afbde f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Refreshing instance network info cache due to event network-changed-a8b62e2a-0384-4ffd-a779-f44e0b6673c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:03:07 compute-0 nova_compute[253461]: 2025-11-22 04:03:07.771 253465 DEBUG oslo_concurrency.lockutils [req-371f97a9-8296-411b-ba89-4177c2a062eb req-367eac24-82cc-467a-90e8-a798809afbde f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:03:07 compute-0 sudo[284203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:03:07 compute-0 sudo[284203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:07 compute-0 nova_compute[253461]: 2025-11-22 04:03:07.894 253465 DEBUG nova.network.neutron [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:03:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.8 MiB/s wr, 53 op/s
Nov 22 04:03:08 compute-0 sudo[284203]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.420 253465 DEBUG os_brick.utils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.422 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.443 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.443 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[911e43f1-f72d-43aa-9b5a-5e67382f51e6]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.445 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.458 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.458 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d95dd1-e12f-424b-bdeb-fb2aa0441931]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.460 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.472 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.472 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3bcadd-3c10-46cb-ad9d-cd83328bcee0]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.474 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4cdb02-e39b-4283-909c-c99644d8058a]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.475 253465 DEBUG oslo_concurrency.processutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:03:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:03:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:03:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:03:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.500 253465 DEBUG oslo_concurrency.processutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:03:08 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 9152768f-f220-45b9-9540-1e29f8739861 does not exist
Nov 22 04:03:08 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 6ffde74f-e43b-4ebd-a369-a5073ba27990 does not exist
Nov 22 04:03:08 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 3045ffba-e61a-4fd9-a03b-36ad0e01ce87 does not exist
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.503 253465 DEBUG os_brick.initiator.connectors.lightos [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.504 253465 DEBUG os_brick.initiator.connectors.lightos [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.504 253465 DEBUG os_brick.initiator.connectors.lightos [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.504 253465 DEBUG os_brick.utils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] <== get_connector_properties: return (83ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
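[annotation] The trace pair above ("==> get_connector_properties ... <== ... return (83ms)") is os-brick's public entry point gathering this host's initiator identity; the privsep commands in between (multipathd show status, reading initiatorname.iscsi, findmnt) are what it runs under the hood. The call as logged:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    # props carries the iSCSI IQN, NVMe host NQN, and multipath flags,
    # matching the dict in the return trace above.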
Nov 22 04:03:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:03:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.505 253465 DEBUG nova.virt.block_device [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Updating existing volume attachment record: b6121b29-60a4-457c-a637-77695be7ab73 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:03:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:03:08 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:03:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:03:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:03:08 compute-0 sudo[284266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:03:08 compute-0 sudo[284266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:08 compute-0 sudo[284266]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:08 compute-0 sudo[284291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:03:08 compute-0 sudo[284291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:08 compute-0 sudo[284291]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:08 compute-0 sudo[284316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:03:08 compute-0 sudo[284316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:08 compute-0 sudo[284316]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:08 compute-0 sudo[284341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:03:08 compute-0 sudo[284341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.865 253465 DEBUG nova.network.neutron [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Updating instance_info_cache with network_info: [{"id": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "address": "fa:16:3e:76:4c:3e", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8b62e2a-03", "ovs_interfaceid": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.887 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Releasing lock "refresh_cache-3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.887 253465 DEBUG nova.compute.manager [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Instance network_info: |[{"id": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "address": "fa:16:3e:76:4c:3e", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8b62e2a-03", "ovs_interfaceid": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.888 253465 DEBUG oslo_concurrency.lockutils [req-371f97a9-8296-411b-ba89-4177c2a062eb req-367eac24-82cc-467a-90e8-a798809afbde f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:03:08 compute-0 nova_compute[253461]: 2025-11-22 04:03:08.888 253465 DEBUG nova.network.neutron [req-371f97a9-8296-411b-ba89-4177c2a062eb req-367eac24-82cc-467a-90e8-a798809afbde f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Refreshing network info cache for port a8b62e2a-0384-4ffd-a779-f44e0b6673c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
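[Note] The network_info payloads logged in this cache-refresh sequence are plain JSON. A stdlib-only sketch that pulls the port ID, fixed IP, and MTU out of such a blob, trimmed to the fields shown above:

    import json

    # Trimmed copy of the network_info JSON logged above.
    raw = '''[{"id": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6",
               "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                        "ips": [{"address": "10.100.0.9"}]}],
                           "meta": {"mtu": 1442}}}]'''

    for vif in json.loads(raw):
        mtu = vif["network"]["meta"]["mtu"]
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["address"], "mtu", mtu)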
Nov 22 04:03:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:03:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2544714478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:09 compute-0 podman[284405]: 2025-11-22 04:03:09.185904213 +0000 UTC m=+0.054272992 container create f25fbb2c47b9e3e0119c879f600bd7a441eae2f38e4cc5fa18d311dd6c3969df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_proskuriakova, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:03:09 compute-0 systemd[1]: Started libpod-conmon-f25fbb2c47b9e3e0119c879f600bd7a441eae2f38e4cc5fa18d311dd6c3969df.scope.
Nov 22 04:03:09 compute-0 podman[284405]: 2025-11-22 04:03:09.162554529 +0000 UTC m=+0.030923357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:03:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:03:09 compute-0 podman[284405]: 2025-11-22 04:03:09.282232391 +0000 UTC m=+0.150601200 container init f25fbb2c47b9e3e0119c879f600bd7a441eae2f38e4cc5fa18d311dd6c3969df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_proskuriakova, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:03:09 compute-0 podman[284405]: 2025-11-22 04:03:09.289473901 +0000 UTC m=+0.157842680 container start f25fbb2c47b9e3e0119c879f600bd7a441eae2f38e4cc5fa18d311dd6c3969df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_proskuriakova, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:03:09 compute-0 podman[284405]: 2025-11-22 04:03:09.29297296 +0000 UTC m=+0.161341739 container attach f25fbb2c47b9e3e0119c879f600bd7a441eae2f38e4cc5fa18d311dd6c3969df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Nov 22 04:03:09 compute-0 nifty_proskuriakova[284422]: 167 167
Nov 22 04:03:09 compute-0 systemd[1]: libpod-f25fbb2c47b9e3e0119c879f600bd7a441eae2f38e4cc5fa18d311dd6c3969df.scope: Deactivated successfully.
Nov 22 04:03:09 compute-0 conmon[284422]: conmon f25fbb2c47b9e3e0119c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f25fbb2c47b9e3e0119c879f600bd7a441eae2f38e4cc5fa18d311dd6c3969df.scope/container/memory.events
Nov 22 04:03:09 compute-0 podman[284405]: 2025-11-22 04:03:09.298724261 +0000 UTC m=+0.167093040 container died f25fbb2c47b9e3e0119c879f600bd7a441eae2f38e4cc5fa18d311dd6c3969df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 04:03:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0c5dd2f05fafd3c314a616e1e9e64228a439c7671dc43440e4d639b233189f5-merged.mount: Deactivated successfully.
Nov 22 04:03:09 compute-0 podman[284405]: 2025-11-22 04:03:09.348692142 +0000 UTC m=+0.217060921 container remove f25fbb2c47b9e3e0119c879f600bd7a441eae2f38e4cc5fa18d311dd6c3969df (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:03:09 compute-0 systemd[1]: libpod-conmon-f25fbb2c47b9e3e0119c879f600bd7a441eae2f38e4cc5fa18d311dd6c3969df.scope: Deactivated successfully.
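[Note] The create/init/start/attach/died/remove burst above is a deliberately short-lived container: cephadm runs the Ceph image once just to read the ceph uid/gid pair (the "167 167" printed by nifty_proskuriakova). A sketch of the same one-shot pattern, assuming `podman run --rm` and that a stat of /var/lib/ceph inside the image yields that pair:

    import subprocess

    # Sketch: throwaway container that prints the image's ceph uid/gid,
    # reproducing the one-shot lifecycle logged above (assumptions noted).
    image = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())   # expected: "167 167", as in the log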
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.432 253465 DEBUG nova.compute.manager [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:03:09 compute-0 ceph-mon[75011]: pgmap v1518: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.8 MiB/s wr, 53 op/s
Nov 22 04:03:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:03:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:03:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:03:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:03:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:03:09 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:03:09 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2544714478' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.437 253465 DEBUG nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.438 253465 INFO nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Creating image(s)
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.440 253465 DEBUG nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.440 253465 DEBUG nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Ensure instance console log exists: /var/lib/nova/instances/3371e7b7-8ad9-42ef-8a8d-8afa9840abfa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.441 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.443 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.448 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.455 253465 DEBUG nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Start _get_guest_xml network_info=[{"id": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "address": "fa:16:3e:76:4c:3e", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8b62e2a-03", "ovs_interfaceid": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-11-22T04:02:55Z,direct_url=<?>,disk_format='qcow2',id=ad2fe85a-6178-45f7-8e3e-f68b71bf07a9,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-446734482',owner='83cc5de7368b40b984b51f781e85343c',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-11-22T04:02:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': 'b6121b29-60a4-457c-a637-77695be7ab73', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-3add484d-4e04-495a-8e14-b8c72a0a6ae5', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '3add484d-4e04-495a-8e14-b8c72a0a6ae5', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '3371e7b7-8ad9-42ef-8a8d-8afa9840abfa', 'attached_at': '', 'detached_at': '', 'volume_id': '3add484d-4e04-495a-8e14-b8c72a0a6ae5', 'serial': '3add484d-4e04-495a-8e14-b8c72a0a6ae5'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': True, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.467 253465 WARNING nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.473 253465 DEBUG nova.virt.libvirt.host [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.474 253465 DEBUG nova.virt.libvirt.host [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.477 253465 DEBUG nova.virt.libvirt.host [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.477 253465 DEBUG nova.virt.libvirt.host [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.478 253465 DEBUG nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.478 253465 DEBUG nova.virt.hardware [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-11-22T04:02:55Z,direct_url=<?>,disk_format='qcow2',id=ad2fe85a-6178-45f7-8e3e-f68b71bf07a9,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-446734482',owner='83cc5de7368b40b984b51f781e85343c',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-11-22T04:02:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.478 253465 DEBUG nova.virt.hardware [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.479 253465 DEBUG nova.virt.hardware [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.479 253465 DEBUG nova.virt.hardware [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.479 253465 DEBUG nova.virt.hardware [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.479 253465 DEBUG nova.virt.hardware [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.480 253465 DEBUG nova.virt.hardware [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.480 253465 DEBUG nova.virt.hardware [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.480 253465 DEBUG nova.virt.hardware [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.480 253465 DEBUG nova.virt.hardware [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.481 253465 DEBUG nova.virt.hardware [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
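[Note] The hardware.py lines above enumerate every (sockets, cores, threads) triple whose product equals the flavor's vCPU count, bounded by the 65536 limits, then sort by preference; for one vCPU only 1:1:1 survives. A simplified sketch of that enumeration (not nova's exact ordering):

    # Sketch of the topology search logged above (simplified).
    def possible_topologies(vcpus, limit=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, limit) + 1)
                for c in range(1, min(vcpus, limit) + 1)
                for t in range(1, min(vcpus, limit) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)] -- the single topology logged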
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.509 253465 DEBUG nova.storage.rbd_utils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.513 253465 DEBUG oslo_concurrency.processutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:09 compute-0 podman[284464]: 2025-11-22 04:03:09.620896221 +0000 UTC m=+0.079862831 container create cfddd11c77a04e431bee1144fe93f001031bb26e41aea0c67407a3cfe0f17bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_johnson, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:03:09 compute-0 systemd[1]: Started libpod-conmon-cfddd11c77a04e431bee1144fe93f001031bb26e41aea0c67407a3cfe0f17bc0.scope.
Nov 22 04:03:09 compute-0 podman[284464]: 2025-11-22 04:03:09.56959565 +0000 UTC m=+0.028562300 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:03:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec92874c57c2fe39b659e20ff2408fd3fbc6c788f756c6507dcf8009c2d90b7e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec92874c57c2fe39b659e20ff2408fd3fbc6c788f756c6507dcf8009c2d90b7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec92874c57c2fe39b659e20ff2408fd3fbc6c788f756c6507dcf8009c2d90b7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec92874c57c2fe39b659e20ff2408fd3fbc6c788f756c6507dcf8009c2d90b7e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec92874c57c2fe39b659e20ff2408fd3fbc6c788f756c6507dcf8009c2d90b7e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:09 compute-0 podman[284464]: 2025-11-22 04:03:09.728906884 +0000 UTC m=+0.187873533 container init cfddd11c77a04e431bee1144fe93f001031bb26e41aea0c67407a3cfe0f17bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:03:09 compute-0 podman[284464]: 2025-11-22 04:03:09.738040102 +0000 UTC m=+0.197006732 container start cfddd11c77a04e431bee1144fe93f001031bb26e41aea0c67407a3cfe0f17bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_johnson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Nov 22 04:03:09 compute-0 podman[284464]: 2025-11-22 04:03:09.741861277 +0000 UTC m=+0.200827917 container attach cfddd11c77a04e431bee1144fe93f001031bb26e41aea0c67407a3cfe0f17bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_johnson, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.884 253465 DEBUG nova.network.neutron [req-371f97a9-8296-411b-ba89-4177c2a062eb req-367eac24-82cc-467a-90e8-a798809afbde f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Updated VIF entry in instance network info cache for port a8b62e2a-0384-4ffd-a779-f44e0b6673c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.885 253465 DEBUG nova.network.neutron [req-371f97a9-8296-411b-ba89-4177c2a062eb req-367eac24-82cc-467a-90e8-a798809afbde f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Updating instance_info_cache with network_info: [{"id": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "address": "fa:16:3e:76:4c:3e", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8b62e2a-03", "ovs_interfaceid": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.902 253465 DEBUG oslo_concurrency.lockutils [req-371f97a9-8296-411b-ba89-4177c2a062eb req-367eac24-82cc-467a-90e8-a798809afbde f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:03:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:03:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178786177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:03:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2128182608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.967 253465 DEBUG oslo_concurrency.processutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
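[Note] The 0.455 s subprocess above is how nova's rbd code discovers monitor endpoints before the <host> elements are written into the disk XML further down. A sketch of the same lookup; the JSON shape (a "mons" list whose "addr" fields are ip:port/nonce strings) is standard `ceph mon dump` output:

    import json
    import subprocess

    # Sketch: parse 'ceph mon dump --format=json' into host:port pairs,
    # mirroring the command logged above.
    dump = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    )
    for mon in json.loads(dump.stdout)["mons"]:
        addr = mon["addr"].rsplit("/", 1)[0]     # strip the /nonce suffix
        host, port = addr.rsplit(":", 1)
        print(host, port)                        # e.g. 192.168.122.100 6789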
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.996 253465 DEBUG nova.virt.libvirt.vif [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:03:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-87060425',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-87060425',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-87060425',id=19,image_ref='ad2fe85a-6178-45f7-8e3e-f68b71bf07a9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQv/esmYGdRQZC3ac7bYtX1moFbWotxG+nzpJyHh3CYDqWlyJWHthvqj91YRodckkzdS5+YrwrlfoMYUi8LZM2LFxBHoZmxokPnRnngd5iIrS8THJAy29ohgPc20jGlrA==',key_name='tempest-keypair-1381765859',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-sj2q5n40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1584219565',image_owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:03:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=3371e7b7-8ad9-42ef-8a8d-8afa9840abfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "address": "fa:16:3e:76:4c:3e", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8b62e2a-03", "ovs_interfaceid": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.996 253465 DEBUG nova.network.os_vif_util [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "address": "fa:16:3e:76:4c:3e", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8b62e2a-03", "ovs_interfaceid": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.997 253465 DEBUG nova.network.os_vif_util [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:4c:3e,bridge_name='br-int',has_traffic_filtering=True,id=a8b62e2a-0384-4ffd-a779-f44e0b6673c6,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8b62e2a-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:03:09 compute-0 nova_compute[253461]: 2025-11-22 04:03:09.999 253465 DEBUG nova.objects.instance [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'pci_devices' on Instance uuid 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.014 253465 DEBUG nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:03:10 compute-0 nova_compute[253461]:   <uuid>3371e7b7-8ad9-42ef-8a8d-8afa9840abfa</uuid>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   <name>instance-00000013</name>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-87060425</nova:name>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:03:09</nova:creationTime>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:03:10 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:03:10 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:03:10 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:03:10 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:03:10 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:03:10 compute-0 nova_compute[253461]:         <nova:user uuid="45ccef35c0c843a59c9dfd0eb67190a6">tempest-TestVolumeBootPattern-1584219565-project-member</nova:user>
Nov 22 04:03:10 compute-0 nova_compute[253461]:         <nova:project uuid="83cc5de7368b40b984b51f781e85343c">tempest-TestVolumeBootPattern-1584219565</nova:project>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="ad2fe85a-6178-45f7-8e3e-f68b71bf07a9"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:03:10 compute-0 nova_compute[253461]:         <nova:port uuid="a8b62e2a-0384-4ffd-a779-f44e0b6673c6">
Nov 22 04:03:10 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <system>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <entry name="serial">3371e7b7-8ad9-42ef-8a8d-8afa9840abfa</entry>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <entry name="uuid">3371e7b7-8ad9-42ef-8a8d-8afa9840abfa</entry>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     </system>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   <os>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   </os>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   <features>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   </features>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/3371e7b7-8ad9-42ef-8a8d-8afa9840abfa_disk.config">
Nov 22 04:03:10 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       </source>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:03:10 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-3add484d-4e04-495a-8e14-b8c72a0a6ae5">
Nov 22 04:03:10 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       </source>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:03:10 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <serial>3add484d-4e04-495a-8e14-b8c72a0a6ae5</serial>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:76:4c:3e"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <target dev="tapa8b62e2a-03"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/3371e7b7-8ad9-42ef-8a8d-8afa9840abfa/console.log" append="off"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <video>
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     </video>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <input type="keyboard" bus="usb"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:03:10 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:03:10 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:03:10 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:03:10 compute-0 nova_compute[253461]: </domain>
Nov 22 04:03:10 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
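[Note] The domain XML dumped above is plain libvirt XML and can be inspected with the stdlib. A sketch that lists each rbd-backed disk with its pool/image name and monitor endpoints; the saved-file path is an assumption, while the element names are exactly those in the dump:

    import xml.etree.ElementTree as ET

    # Sketch: extract the rbd disks from the guest XML dumped above.
    root = ET.parse("instance-00000013.xml").getroot()   # saved copy (assumption)
    for disk in root.findall("./devices/disk"):
        source = disk.find("source")
        if source is None or source.get("protocol") != "rbd":
            continue
        hosts = ["%s:%s" % (h.get("name"), h.get("port"))
                 for h in source.findall("host")]
        print(disk.find("target").get("dev"), source.get("name"), hosts)
    # -> sda vms/3371e7b7-8ad9-42ef-8a8d-8afa9840abfa_disk.config ['192.168.122.100:6789']
    # -> vda volumes/volume-3add484d-4e04-495a-8e14-b8c72a0a6ae5 ['192.168.122.100:6789']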
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.014 253465 DEBUG nova.compute.manager [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Preparing to wait for external event network-vif-plugged-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.014 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.015 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.015 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
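[annotation] The three lockutils lines above are oslo.concurrency's standard acquire/run/release trace around a synchronized callable: the per-instance "-events" lock serializes registration of expected external events. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; it is a simplification, not Nova's actual _create_or_get_event:

    import threading

    from oslo_concurrency import lockutils

    _events = {}

    @lockutils.synchronized('3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events')
    def _create_or_get_event(name):
        # One threading.Event per expected external event, created under the
        # instance-scoped lock so the waiter and the later pop see a
        # consistent mapping.
        return _events.setdefault(name, threading.Event())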
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.016 253465 DEBUG nova.virt.libvirt.vif [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:03:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-87060425',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-87060425',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-87060425',id=19,image_ref='ad2fe85a-6178-45f7-8e3e-f68b71bf07a9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQv/esmYGdRQZC3ac7bYtX1moFbWotxG+nzpJyHh3CYDqWlyJWHthvqj91YRodckkzdS5+YrwrlfoMYUi8LZM2LFxBHoZmxokPnRnngd5iIrS8THJAy29ohgPc20jGlrA==',key_name='tempest-keypair-1381765859',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-sj2q5n40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1584219565',image_owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:03:03Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=3371e7b7-8ad9-42ef-8a8d-8afa9840abfa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "address": "fa:16:3e:76:4c:3e", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8b62e2a-03", "ovs_interfaceid": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.016 253465 DEBUG nova.network.os_vif_util [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "address": "fa:16:3e:76:4c:3e", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8b62e2a-03", "ovs_interfaceid": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.017 253465 DEBUG nova.network.os_vif_util [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:4c:3e,bridge_name='br-int',has_traffic_filtering=True,id=a8b62e2a-0384-4ffd-a779-f44e0b6673c6,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8b62e2a-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.017 253465 DEBUG os_vif [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:4c:3e,bridge_name='br-int',has_traffic_filtering=True,id=a8b62e2a-0384-4ffd-a779-f44e0b6673c6,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8b62e2a-03') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
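[annotation] Nova has just converted its own VIF dict into an os-vif VIFOpenVSwitch object and handed it to os_vif.plug(). A trimmed sketch of driving the same library directly, using the ids from this log; the field set is reduced to what the repr above shows, it needs root and a running ovsdb, and it is illustrative rather than Nova's code:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()          # loads the 'ovs' plugin via stevedore

    port = vif.VIFOpenVSwitch(
        id='a8b62e2a-0384-4ffd-a779-f44e0b6673c6',
        address='fa:16:3e:76:4c:3e',
        plugin='ovs',
        vif_name='tapa8b62e2a-03',
        bridge_name='br-int',
        has_traffic_filtering=True,
        network=network.Network(id='4670b112-9f63-4a03-8d79-91f581c69c03',
                                bridge='br-int'),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='a8b62e2a-0384-4ffd-a779-f44e0b6673c6'),
    )
    inst = instance_info.InstanceInfo(
        uuid='3371e7b7-8ad9-42ef-8a8d-8afa9840abfa',
        name='instance-00000013')

    os_vif.plug(port, inst)      # idempotent: re-plugging an existing port is a no-op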
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.018 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.019 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.019 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.024 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.024 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8b62e2a-03, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.025 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa8b62e2a-03, col_values=(('external_ids', {'iface-id': 'a8b62e2a-0384-4ffd-a779-f44e0b6673c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:4c:3e', 'vm-uuid': '3371e7b7-8ad9-42ef-8a8d-8afa9840abfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
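[annotation] The two ovsdbapp commands above (AddPortCommand, then DbSetCommand on the Interface row) are what "plugging" amounts to on the OVS side: create the tap port on br-int and stamp its external_ids so ovn-controller can match it to a logical port. A sketch of issuing the same transaction with ovsdbapp; the socket path is an assumption (the stock Open vSwitch default), and the external_ids are trimmed to the two keys OVN actually matches on:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed default socket path

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same two commands the log shows, batched into one OVSDB transaction.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapa8b62e2a-03', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapa8b62e2a-03',
            ('external_ids',
             {'iface-id': 'a8b62e2a-0384-4ffd-a779-f44e0b6673c6',
              'attached-mac': 'fa:16:3e:76:4c:3e'})))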
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.027 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:10 compute-0 NetworkManager[48916]: <info>  [1763784190.0288] manager: (tapa8b62e2a-03): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.032 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.041 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.042 253465 INFO os_vif [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:4c:3e,bridge_name='br-int',has_traffic_filtering=True,id=a8b62e2a-0384-4ffd-a779-f44e0b6673c6,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8b62e2a-03')
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.092 253465 DEBUG nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.092 253465 DEBUG nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.093 253465 DEBUG nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No VIF found with MAC fa:16:3e:76:4c:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.093 253465 INFO nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Using config drive
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.118 253465 DEBUG nova.storage.rbd_utils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:03:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 68 op/s
Nov 22 04:03:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Nov 22 04:03:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Nov 22 04:03:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1178786177' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2128182608' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:10 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.751 253465 INFO nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Creating config drive at /var/lib/nova/instances/3371e7b7-8ad9-42ef-8a8d-8afa9840abfa/disk.config
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.757 253465 DEBUG oslo_concurrency.processutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3371e7b7-8ad9-42ef-8a8d-8afa9840abfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxytejdje execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.887 253465 DEBUG oslo_concurrency.processutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3371e7b7-8ad9-42ef-8a8d-8afa9840abfa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxytejdje" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
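[annotation] The config drive is just an ISO9660 image with Joliet (-J) and Rock Ridge (-r) extensions and the volume label "config-2", which is the label cloud-init and CirrOS probe for. A self-contained sketch of the same mkisofs invocation; the staging content is reduced to a single meta_data.json, the output path is illustrative, the publisher string is trimmed, and mkisofs must be on PATH:

    import json
    import os
    import subprocess
    import tempfile

    staging = tempfile.mkdtemp(prefix='cfgdrv-')
    latest = os.path.join(staging, 'openstack', 'latest')
    os.makedirs(latest)
    with open(os.path.join(latest, 'meta_data.json'), 'w') as f:
        json.dump({'uuid': '3371e7b7-8ad9-42ef-8a8d-8afa9840abfa'}, f)

    subprocess.run(
        ['mkisofs', '-o', '/tmp/disk.config', '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-publisher', 'OpenStack Compute', '-quiet',
         '-J', '-r', '-V', 'config-2', staging],
        check=True)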
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.912 253465 DEBUG nova.storage.rbd_utils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:03:10 compute-0 nova_compute[253461]: 2025-11-22 04:03:10.916 253465 DEBUG oslo_concurrency.processutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3371e7b7-8ad9-42ef-8a8d-8afa9840abfa/disk.config 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:10 compute-0 affectionate_johnson[284500]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:03:10 compute-0 affectionate_johnson[284500]: --> relative data size: 1.0
Nov 22 04:03:10 compute-0 affectionate_johnson[284500]: --> All data devices are unavailable
Nov 22 04:03:10 compute-0 systemd[1]: libpod-cfddd11c77a04e431bee1144fe93f001031bb26e41aea0c67407a3cfe0f17bc0.scope: Deactivated successfully.
Nov 22 04:03:10 compute-0 podman[284464]: 2025-11-22 04:03:10.996949864 +0000 UTC m=+1.455916474 container died cfddd11c77a04e431bee1144fe93f001031bb26e41aea0c67407a3cfe0f17bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:03:10 compute-0 systemd[1]: libpod-cfddd11c77a04e431bee1144fe93f001031bb26e41aea0c67407a3cfe0f17bc0.scope: Consumed 1.105s CPU time.
Nov 22 04:03:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:11 compute-0 ceph-mon[75011]: pgmap v1519: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.5 MiB/s wr, 68 op/s
Nov 22 04:03:11 compute-0 ceph-mon[75011]: osdmap e373: 3 total, 3 up, 3 in
Nov 22 04:03:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec92874c57c2fe39b659e20ff2408fd3fbc6c788f756c6507dcf8009c2d90b7e-merged.mount: Deactivated successfully.
Nov 22 04:03:12 compute-0 podman[284464]: 2025-11-22 04:03:12.183342892 +0000 UTC m=+2.642309542 container remove cfddd11c77a04e431bee1144fe93f001031bb26e41aea0c67407a3cfe0f17bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 04:03:12 compute-0 systemd[1]: libpod-conmon-cfddd11c77a04e431bee1144fe93f001031bb26e41aea0c67407a3cfe0f17bc0.scope: Deactivated successfully.
Nov 22 04:03:12 compute-0 sudo[284341]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.237 253465 DEBUG oslo_concurrency.processutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3371e7b7-8ad9-42ef-8a8d-8afa9840abfa/disk.config 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.240 253465 INFO nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Deleting local config drive /var/lib/nova/instances/3371e7b7-8ad9-42ef-8a8d-8afa9840abfa/disk.config because it was imported into RBD.
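[annotation] Because this deployment backs ephemeral storage with Ceph, the freshly built ISO is imported into the "vms" RBD pool and the local copy is deleted; the guest then attaches the RBD image, not a file. A sketch mirroring the logged command via subprocess; it assumes the client.openstack keyring referenced by /etc/ceph/ceph.conf is readable:

    import os
    import subprocess

    src = ('/var/lib/nova/instances/'
           '3371e7b7-8ad9-42ef-8a8d-8afa9840abfa/disk.config')

    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', src,
         '3371e7b7-8ad9-42ef-8a8d-8afa9840abfa_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)

    os.unlink(src)   # local copy is redundant once the image lives in RBD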
Nov 22 04:03:12 compute-0 kernel: tapa8b62e2a-03: entered promiscuous mode
Nov 22 04:03:12 compute-0 NetworkManager[48916]: <info>  [1763784192.3093] manager: (tapa8b62e2a-03): new Tun device (/org/freedesktop/NetworkManager/Devices/105)
Nov 22 04:03:12 compute-0 ovn_controller[152691]: 2025-11-22T04:03:12Z|00195|binding|INFO|Claiming lport a8b62e2a-0384-4ffd-a779-f44e0b6673c6 for this chassis.
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.310 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:12 compute-0 ovn_controller[152691]: 2025-11-22T04:03:12Z|00196|binding|INFO|a8b62e2a-0384-4ffd-a779-f44e0b6673c6: Claiming fa:16:3e:76:4c:3e 10.100.0.9
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.321 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:4c:3e 10.100.0.9'], port_security=['fa:16:3e:76:4c:3e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3371e7b7-8ad9-42ef-8a8d-8afa9840abfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fa078025-1027-4d6a-8f11-b270ceaa6a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=a8b62e2a-0384-4ffd-a779-f44e0b6673c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.324 162689 INFO neutron.agent.ovn.metadata.agent [-] Port a8b62e2a-0384-4ffd-a779-f44e0b6673c6 in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 bound to our chassis
Nov 22 04:03:12 compute-0 sudo[284607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.328 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:03:12 compute-0 sudo[284607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:12 compute-0 sudo[284607]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:12 compute-0 ovn_controller[152691]: 2025-11-22T04:03:12Z|00197|binding|INFO|Setting lport a8b62e2a-0384-4ffd-a779-f44e0b6673c6 ovn-installed in OVS
Nov 22 04:03:12 compute-0 ovn_controller[152691]: 2025-11-22T04:03:12Z|00198|binding|INFO|Setting lport a8b62e2a-0384-4ffd-a779-f44e0b6673c6 up in Southbound
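[annotation] ovn-controller has now claimed the logical port for this chassis and flipped it "up" in the Southbound database; this Port_Binding update is what ultimately triggers the network-vif-plugged event back to Nova. A sketch of checking that binding from the Southbound DB with ovsdbapp; the socket path is an assumption for a node-local SB database:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    SB = 'unix:/run/ovn/ovnsb_db.sock'        # assumed local SB socket

    idl = connection.OvsdbIdl.from_server(SB, 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))

    rows = sb.db_find(
        'Port_Binding',
        ('logical_port', '=', 'a8b62e2a-0384-4ffd-a779-f44e0b6673c6'),
    ).execute(check_error=True)
    for row in rows:
        print(row['chassis'], row['up'])      # expect this chassis and [True]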
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.348 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.349 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[cee70886-a415-47fa-be60-f9f891936dbf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.350 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:12 compute-0 systemd-machined[215728]: New machine qemu-19-instance-00000013.
Nov 22 04:03:12 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Nov 22 04:03:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 94 op/s
Nov 22 04:03:12 compute-0 systemd-udevd[284671]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.396 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[63ea8ab3-0834-4503-a6c2-575a673b3797]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.402 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[741b509f-ffb8-4f80-97fc-a36425aa3e08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:12 compute-0 NetworkManager[48916]: <info>  [1763784192.4138] device (tapa8b62e2a-03): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:03:12 compute-0 sudo[284645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:03:12 compute-0 NetworkManager[48916]: <info>  [1763784192.4151] device (tapa8b62e2a-03): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:03:12 compute-0 sudo[284645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:12 compute-0 sudo[284645]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.436 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[a4fd754a-ce6f-41ce-8dcf-1b5ff0df1b1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.460 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[464c5846-a4ac-415e-8707-fe834adc4f12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440403, 'reachable_time': 42317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284688, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.479 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ccac207e-c589-41b4-804e-2bb9c0f0021c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4670b112-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440418, 'tstamp': 440418}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284706, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4670b112-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440421, 'tstamp': 440421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284706, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
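[annotation] The two RTM_NEWADDR replies above show the metadata agent's privsep daemon (which uses netlink, in pyroute2's message format) confirming that the per-network namespace tap carries both the link-local metadata address 169.254.169.254/32 and an address in the tenant subnet, 10.100.0.2/28. A sketch of the equivalent check with pyroute2; it needs root and assumes the ovnmeta namespace from this log still exists:

    from pyroute2 import NetNS

    NS = 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03'

    with NetNS(NS) as ns:
        idx = ns.link_lookup(ifname='tap4670b112-91')[0]
        for msg in ns.get_addr(index=idx):
            print(dict(msg['attrs'])['IFA_ADDRESS'])
    # expected, per the replies above: 169.254.169.254 and 10.100.0.2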
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.482 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.484 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.485 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.486 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4670b112-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.487 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.488 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4670b112-90, col_values=(('external_ids', {'iface-id': 'e72a94a7-9aac-4cfd-886c-1e1e93834214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:12 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:12.488 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:03:12 compute-0 sudo[284683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:03:12 compute-0 sudo[284683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:12 compute-0 sudo[284683]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.550 253465 DEBUG nova.compute.manager [req-a79968da-c00f-463c-b02d-3e152ce09bc8 req-3f750ebd-a3db-46b9-8f77-7636b0e1c13a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Received event network-vif-plugged-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.551 253465 DEBUG oslo_concurrency.lockutils [req-a79968da-c00f-463c-b02d-3e152ce09bc8 req-3f750ebd-a3db-46b9-8f77-7636b0e1c13a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.551 253465 DEBUG oslo_concurrency.lockutils [req-a79968da-c00f-463c-b02d-3e152ce09bc8 req-3f750ebd-a3db-46b9-8f77-7636b0e1c13a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.552 253465 DEBUG oslo_concurrency.lockutils [req-a79968da-c00f-463c-b02d-3e152ce09bc8 req-3f750ebd-a3db-46b9-8f77-7636b0e1c13a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.552 253465 DEBUG nova.compute.manager [req-a79968da-c00f-463c-b02d-3e152ce09bc8 req-3f750ebd-a3db-46b9-8f77-7636b0e1c13a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Processing event network-vif-plugged-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
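[annotation] This closes the loop opened at 04:03:10: the spawn thread registered a waiter for network-vif-plugged, Neutron delivered the event over Nova's external-event API once OVN bound the port, and pop_instance_event wakes the waiter. A simplified model of that prepare/pop dance, not Nova's code:

    import threading

    class InstanceEvents:
        """Toy version of the prepare_for_instance_event/pop_instance_event pair."""

        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}

        def prepare(self, name):
            # "Preparing to wait for external event ..." registers an Event.
            with self._lock:
                return self._events.setdefault(name, threading.Event())

        def pop(self, name):
            # Called when Neutron reports the event; wakes the spawn thread.
            with self._lock:
                ev = self._events.pop(name, None)
            if ev is not None:
                ev.set()

    events = InstanceEvents()
    name = 'network-vif-plugged-a8b62e2a-0384-4ffd-a779-f44e0b6673c6'
    waiter = events.prepare(name)
    events.pop(name)
    assert waiter.wait(timeout=300)   # returns immediately here, as in the log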
Nov 22 04:03:12 compute-0 sudo[284711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:03:12 compute-0 sudo[284711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
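[annotation] Interleaved with the instance build, cephadm keeps launching short-lived ceph containers (the affectionate_johnson/laughing_mestorf podman entries) to inventory OSD devices; the sudo line above is its ceph-volume lvm list probe. A sketch of running the same query by hand, assuming root and a cephadm-managed cluster with this fsid:

    import json
    import subprocess

    out = subprocess.run(
        ['cephadm', 'shell', '--fsid', '7adcc38b-6484-5de6-b879-33a0309153df',
         '--', 'ceph-volume', 'lvm', 'list', '--format', 'json'],
        capture_output=True, text=True, check=True).stdout

    osds = json.loads(out)
    print(sorted(osds))   # OSD ids keyed to their backing logical volumes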
Nov 22 04:03:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Nov 22 04:03:12 compute-0 ceph-mon[75011]: pgmap v1521: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 94 op/s
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.629 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Nov 22 04:03:12 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.912 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784192.91162, 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.912 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] VM Started (Lifecycle Event)
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.916 253465 DEBUG nova.compute.manager [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.920 253465 DEBUG nova.virt.libvirt.driver [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.923 253465 INFO nova.virt.libvirt.driver [-] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Instance spawned successfully.
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.923 253465 INFO nova.compute.manager [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Took 3.49 seconds to spawn the instance on the hypervisor.
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.924 253465 DEBUG nova.compute.manager [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.932 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.935 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.962 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.963 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784192.914957, 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:03:12 compute-0 nova_compute[253461]: 2025-11-22 04:03:12.963 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] VM Paused (Lifecycle Event)
Nov 22 04:03:12 compute-0 podman[284815]: 2025-11-22 04:03:12.969092711 +0000 UTC m=+0.052828878 container create 502f387c74b60afb5856f7751ba7e175d4c455b161ceea023cda6854960a7638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mestorf, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:03:13 compute-0 systemd[1]: Started libpod-conmon-502f387c74b60afb5856f7751ba7e175d4c455b161ceea023cda6854960a7638.scope.
Nov 22 04:03:13 compute-0 nova_compute[253461]: 2025-11-22 04:03:13.004 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:03:13 compute-0 nova_compute[253461]: 2025-11-22 04:03:13.007 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784192.919073, 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:03:13 compute-0 nova_compute[253461]: 2025-11-22 04:03:13.007 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] VM Resumed (Lifecycle Event)
Nov 22 04:03:13 compute-0 nova_compute[253461]: 2025-11-22 04:03:13.019 253465 INFO nova.compute.manager [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Took 10.08 seconds to build instance.
Nov 22 04:03:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:03:13 compute-0 nova_compute[253461]: 2025-11-22 04:03:13.036 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:03:13 compute-0 nova_compute[253461]: 2025-11-22 04:03:13.040 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
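[annotation] The Started/Paused/Resumed burst above is libvirt's normal start sequence surfacing as Nova lifecycle events; the sync is skipped while task_state is still 'spawning' and becomes a no-op once the Resumed event arrives with everything consistent. A toy decision function, not Nova's code, matching the two outcomes logged here (power_state 0 = NOSTATE, 1 = RUNNING):

    def sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:
            return 'skip: pending task %s' % task_state
        if db_power_state != vm_power_state:
            return 'write VM power_state %d to DB' % vm_power_state
        return 'in sync'

    print(sync_power_state('spawning', 0, 1))  # the "Started" event: Skip.
    print(sync_power_state(None, 1, 1))        # the "Resumed" event: in sync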
Nov 22 04:03:13 compute-0 nova_compute[253461]: 2025-11-22 04:03:13.044 253465 DEBUG oslo_concurrency.lockutils [None req-67281926-dc45-46d3-ad26-1bc37e7b9f2b 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:13 compute-0 podman[284815]: 2025-11-22 04:03:12.952200181 +0000 UTC m=+0.035936368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:03:13 compute-0 podman[284815]: 2025-11-22 04:03:13.050479168 +0000 UTC m=+0.134215335 container init 502f387c74b60afb5856f7751ba7e175d4c455b161ceea023cda6854960a7638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 04:03:13 compute-0 podman[284815]: 2025-11-22 04:03:13.061732141 +0000 UTC m=+0.145468308 container start 502f387c74b60afb5856f7751ba7e175d4c455b161ceea023cda6854960a7638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mestorf, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:03:13 compute-0 podman[284815]: 2025-11-22 04:03:13.064968326 +0000 UTC m=+0.148704523 container attach 502f387c74b60afb5856f7751ba7e175d4c455b161ceea023cda6854960a7638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:03:13 compute-0 laughing_mestorf[284831]: 167 167
Nov 22 04:03:13 compute-0 systemd[1]: libpod-502f387c74b60afb5856f7751ba7e175d4c455b161ceea023cda6854960a7638.scope: Deactivated successfully.
Nov 22 04:03:13 compute-0 conmon[284831]: conmon 502f387c74b60afb5856 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-502f387c74b60afb5856f7751ba7e175d4c455b161ceea023cda6854960a7638.scope/container/memory.events
Nov 22 04:03:13 compute-0 podman[284815]: 2025-11-22 04:03:13.069033771 +0000 UTC m=+0.152769938 container died 502f387c74b60afb5856f7751ba7e175d4c455b161ceea023cda6854960a7638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mestorf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:03:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7d20987ca79ec2c39928f404b3b615a8760e3658d7a5a276de1a17921a173e0-merged.mount: Deactivated successfully.
Nov 22 04:03:13 compute-0 podman[284815]: 2025-11-22 04:03:13.108943544 +0000 UTC m=+0.192679711 container remove 502f387c74b60afb5856f7751ba7e175d4c455b161ceea023cda6854960a7638 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:03:13 compute-0 systemd[1]: libpod-conmon-502f387c74b60afb5856f7751ba7e175d4c455b161ceea023cda6854960a7638.scope: Deactivated successfully.
Nov 22 04:03:13 compute-0 podman[284854]: 2025-11-22 04:03:13.314148585 +0000 UTC m=+0.034362056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:03:13 compute-0 nova_compute[253461]: 2025-11-22 04:03:13.647 253465 DEBUG oslo_concurrency.lockutils [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:13 compute-0 nova_compute[253461]: 2025-11-22 04:03:13.648 253465 DEBUG oslo_concurrency.lockutils [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.053 253465 DEBUG nova.objects.instance [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'flavor' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:03:14 compute-0 podman[284854]: 2025-11-22 04:03:14.138483383 +0000 UTC m=+0.858696824 container create 5d24763fc06c2c23871293b39810fbed6726a08f77ac1685b6531e9784a1d369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Nov 22 04:03:14 compute-0 ceph-mon[75011]: osdmap e374: 3 total, 3 up, 3 in
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.184 253465 DEBUG oslo_concurrency.lockutils [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:14 compute-0 systemd[1]: Started libpod-conmon-5d24763fc06c2c23871293b39810fbed6726a08f77ac1685b6531e9784a1d369.scope.
Nov 22 04:03:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db38c667eee2a25ae0cd0ef64cff958d615bbaec9f291a17e9e00b3971bed64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db38c667eee2a25ae0cd0ef64cff958d615bbaec9f291a17e9e00b3971bed64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db38c667eee2a25ae0cd0ef64cff958d615bbaec9f291a17e9e00b3971bed64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6db38c667eee2a25ae0cd0ef64cff958d615bbaec9f291a17e9e00b3971bed64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:14 compute-0 podman[284854]: 2025-11-22 04:03:14.285731476 +0000 UTC m=+1.005944937 container init 5d24763fc06c2c23871293b39810fbed6726a08f77ac1685b6531e9784a1d369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_euclid, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 04:03:14 compute-0 podman[284854]: 2025-11-22 04:03:14.296564874 +0000 UTC m=+1.016778305 container start 5d24763fc06c2c23871293b39810fbed6726a08f77ac1685b6531e9784a1d369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_euclid, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:03:14 compute-0 podman[284854]: 2025-11-22 04:03:14.300393987 +0000 UTC m=+1.020607428 container attach 5d24763fc06c2c23871293b39810fbed6726a08f77ac1685b6531e9784a1d369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 04:03:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.0 MiB/s wr, 108 op/s
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.628 253465 DEBUG oslo_concurrency.lockutils [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.629 253465 DEBUG oslo_concurrency.lockutils [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.629 253465 INFO nova.compute.manager [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Attaching volume 84e8b44a-db23-453c-9288-1a8cf478419e to /dev/vdb
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.740 253465 DEBUG nova.compute.manager [req-c742161a-bb12-4e62-921a-c53d853d250f req-6bee3230-402b-4450-a30c-3bfd9bd8c35b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Received event network-vif-plugged-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.740 253465 DEBUG oslo_concurrency.lockutils [req-c742161a-bb12-4e62-921a-c53d853d250f req-6bee3230-402b-4450-a30c-3bfd9bd8c35b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.741 253465 DEBUG oslo_concurrency.lockutils [req-c742161a-bb12-4e62-921a-c53d853d250f req-6bee3230-402b-4450-a30c-3bfd9bd8c35b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.741 253465 DEBUG oslo_concurrency.lockutils [req-c742161a-bb12-4e62-921a-c53d853d250f req-6bee3230-402b-4450-a30c-3bfd9bd8c35b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.741 253465 DEBUG nova.compute.manager [req-c742161a-bb12-4e62-921a-c53d853d250f req-6bee3230-402b-4450-a30c-3bfd9bd8c35b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] No waiting events found dispatching network-vif-plugged-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.741 253465 WARNING nova.compute.manager [req-c742161a-bb12-4e62-921a-c53d853d250f req-6bee3230-402b-4450-a30c-3bfd9bd8c35b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Received unexpected event network-vif-plugged-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 for instance with vm_state active and task_state None.
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.789 253465 DEBUG os_brick.utils [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.790 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.804 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.805 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[ba48198d-5cbb-4928-9d66-d338fbc65f8b]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.806 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.814 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.814 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[5514cd33-8c14-44ce-b3e6-d876525f1b0a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.815 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.825 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.825 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[7157aa55-17e1-406d-8d29-fbbc88e027a2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.826 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[15da3b21-50a0-4741-8d3d-2a1c4555b2c2]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.827 253465 DEBUG oslo_concurrency.processutils [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.852 253465 DEBUG oslo_concurrency.processutils [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.855 253465 DEBUG os_brick.initiator.connectors.lightos [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.855 253465 DEBUG os_brick.initiator.connectors.lightos [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.855 253465 DEBUG os_brick.initiator.connectors.lightos [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.856 253465 DEBUG os_brick.utils [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] <== get_connector_properties: return (65ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 04:03:14 compute-0 nova_compute[253461]: 2025-11-22 04:03:14.856 253465 DEBUG nova.virt.block_device [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Updating existing volume attachment record: 81cc3eec-bd08-4697-9b14-922a6e77e079 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:03:15 compute-0 nova_compute[253461]: 2025-11-22 04:03:15.065 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]: {
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:     "0": [
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:         {
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "devices": [
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "/dev/loop3"
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             ],
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_name": "ceph_lv0",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_size": "21470642176",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "name": "ceph_lv0",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "tags": {
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.cluster_name": "ceph",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.crush_device_class": "",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.encrypted": "0",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.osd_id": "0",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.type": "block",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.vdo": "0"
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             },
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "type": "block",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "vg_name": "ceph_vg0"
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:         }
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:     ],
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:     "1": [
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:         {
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "devices": [
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "/dev/loop4"
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             ],
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_name": "ceph_lv1",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_size": "21470642176",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "name": "ceph_lv1",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "tags": {
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.cluster_name": "ceph",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.crush_device_class": "",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.encrypted": "0",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.osd_id": "1",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.type": "block",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.vdo": "0"
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             },
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "type": "block",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "vg_name": "ceph_vg1"
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:         }
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:     ],
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:     "2": [
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:         {
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "devices": [
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "/dev/loop5"
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             ],
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_name": "ceph_lv2",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_size": "21470642176",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "name": "ceph_lv2",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "tags": {
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.cluster_name": "ceph",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.crush_device_class": "",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.encrypted": "0",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.osd_id": "2",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.type": "block",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:                 "ceph.vdo": "0"
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             },
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "type": "block",
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:             "vg_name": "ceph_vg2"
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:         }
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]:     ]
Nov 22 04:03:15 compute-0 quizzical_euclid[284871]: }
Nov 22 04:03:15 compute-0 systemd[1]: libpod-5d24763fc06c2c23871293b39810fbed6726a08f77ac1685b6531e9784a1d369.scope: Deactivated successfully.
Nov 22 04:03:15 compute-0 podman[284854]: 2025-11-22 04:03:15.166803208 +0000 UTC m=+1.887016639 container died 5d24763fc06c2c23871293b39810fbed6726a08f77ac1685b6531e9784a1d369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:03:15 compute-0 ceph-mon[75011]: pgmap v1523: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.0 MiB/s wr, 108 op/s
Nov 22 04:03:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6db38c667eee2a25ae0cd0ef64cff958d615bbaec9f291a17e9e00b3971bed64-merged.mount: Deactivated successfully.
Nov 22 04:03:15 compute-0 podman[284854]: 2025-11-22 04:03:15.252368205 +0000 UTC m=+1.972581636 container remove 5d24763fc06c2c23871293b39810fbed6726a08f77ac1685b6531e9784a1d369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 04:03:15 compute-0 systemd[1]: libpod-conmon-5d24763fc06c2c23871293b39810fbed6726a08f77ac1685b6531e9784a1d369.scope: Deactivated successfully.
Nov 22 04:03:15 compute-0 sudo[284711]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:15 compute-0 sudo[284901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:03:15 compute-0 sudo[284901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:15 compute-0 sudo[284901]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:03:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3617978953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:15 compute-0 sudo[284926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:03:15 compute-0 sudo[284926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:15 compute-0 sudo[284926]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:15.495 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:15 compute-0 sudo[284951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:03:15 compute-0 sudo[284951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:15 compute-0 sudo[284951]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:15 compute-0 nova_compute[253461]: 2025-11-22 04:03:15.570 253465 DEBUG nova.objects.instance [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'flavor' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:03:15 compute-0 sudo[284976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:03:15 compute-0 sudo[284976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:15 compute-0 nova_compute[253461]: 2025-11-22 04:03:15.598 253465 DEBUG nova.virt.libvirt.driver [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Attempting to attach volume 84e8b44a-db23-453c-9288-1a8cf478419e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 22 04:03:15 compute-0 nova_compute[253461]: 2025-11-22 04:03:15.603 253465 DEBUG nova.virt.libvirt.guest [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 04:03:15 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:03:15 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-84e8b44a-db23-453c-9288-1a8cf478419e">
Nov 22 04:03:15 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:03:15 compute-0 nova_compute[253461]:   </source>
Nov 22 04:03:15 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 04:03:15 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:03:15 compute-0 nova_compute[253461]:   </auth>
Nov 22 04:03:15 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:03:15 compute-0 nova_compute[253461]:   <serial>84e8b44a-db23-453c-9288-1a8cf478419e</serial>
Nov 22 04:03:15 compute-0 nova_compute[253461]: </disk>
Nov 22 04:03:15 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 04:03:15 compute-0 nova_compute[253461]: 2025-11-22 04:03:15.791 253465 DEBUG nova.virt.libvirt.driver [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:03:15 compute-0 nova_compute[253461]: 2025-11-22 04:03:15.791 253465 DEBUG nova.virt.libvirt.driver [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:03:15 compute-0 nova_compute[253461]: 2025-11-22 04:03:15.792 253465 DEBUG nova.virt.libvirt.driver [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:03:15 compute-0 nova_compute[253461]: 2025-11-22 04:03:15.792 253465 DEBUG nova.virt.libvirt.driver [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No VIF found with MAC fa:16:3e:67:9b:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:03:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e374 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:16 compute-0 nova_compute[253461]: 2025-11-22 04:03:16.132 253465 DEBUG oslo_concurrency.lockutils [None req-8da87651-8b9d-44d2-9431-fcdd8301c714 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:16 compute-0 podman[285061]: 2025-11-22 04:03:16.188938312 +0000 UTC m=+0.063546833 container create e035787700ecba9b09f8e21f6851e5d40bf4f76a84f266513a43163e029b77c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 04:03:16 compute-0 podman[285061]: 2025-11-22 04:03:16.150379924 +0000 UTC m=+0.024988466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:03:16 compute-0 systemd[1]: Started libpod-conmon-e035787700ecba9b09f8e21f6851e5d40bf4f76a84f266513a43163e029b77c1.scope.
Nov 22 04:03:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3617978953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:03:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.3 MiB/s wr, 139 op/s
Nov 22 04:03:16 compute-0 podman[285061]: 2025-11-22 04:03:16.405861186 +0000 UTC m=+0.280469758 container init e035787700ecba9b09f8e21f6851e5d40bf4f76a84f266513a43163e029b77c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:03:16 compute-0 podman[285061]: 2025-11-22 04:03:16.417258133 +0000 UTC m=+0.291866675 container start e035787700ecba9b09f8e21f6851e5d40bf4f76a84f266513a43163e029b77c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:03:16 compute-0 eloquent_villani[285077]: 167 167
Nov 22 04:03:16 compute-0 systemd[1]: libpod-e035787700ecba9b09f8e21f6851e5d40bf4f76a84f266513a43163e029b77c1.scope: Deactivated successfully.
Nov 22 04:03:16 compute-0 podman[285061]: 2025-11-22 04:03:16.472457623 +0000 UTC m=+0.347066145 container attach e035787700ecba9b09f8e21f6851e5d40bf4f76a84f266513a43163e029b77c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:03:16 compute-0 podman[285061]: 2025-11-22 04:03:16.473558748 +0000 UTC m=+0.348167270 container died e035787700ecba9b09f8e21f6851e5d40bf4f76a84f266513a43163e029b77c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:03:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c32f04621776cb18f3f5d00d7d97ed4a9c84b9dcf1e1fcbe83ef39d21d93adc0-merged.mount: Deactivated successfully.
Nov 22 04:03:16 compute-0 podman[285061]: 2025-11-22 04:03:16.750576141 +0000 UTC m=+0.625184713 container remove e035787700ecba9b09f8e21f6851e5d40bf4f76a84f266513a43163e029b77c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_villani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 04:03:16 compute-0 systemd[1]: libpod-conmon-e035787700ecba9b09f8e21f6851e5d40bf4f76a84f266513a43163e029b77c1.scope: Deactivated successfully.
Nov 22 04:03:16 compute-0 nova_compute[253461]: 2025-11-22 04:03:16.875 253465 DEBUG nova.compute.manager [req-af018516-742f-4907-b4ff-502f142e994f req-d269bb6f-6da0-448b-8e94-a955f69c8427 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Received event network-changed-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:16 compute-0 nova_compute[253461]: 2025-11-22 04:03:16.876 253465 DEBUG nova.compute.manager [req-af018516-742f-4907-b4ff-502f142e994f req-d269bb6f-6da0-448b-8e94-a955f69c8427 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Refreshing instance network info cache due to event network-changed-a8b62e2a-0384-4ffd-a779-f44e0b6673c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:03:16 compute-0 nova_compute[253461]: 2025-11-22 04:03:16.876 253465 DEBUG oslo_concurrency.lockutils [req-af018516-742f-4907-b4ff-502f142e994f req-d269bb6f-6da0-448b-8e94-a955f69c8427 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:03:16 compute-0 nova_compute[253461]: 2025-11-22 04:03:16.877 253465 DEBUG oslo_concurrency.lockutils [req-af018516-742f-4907-b4ff-502f142e994f req-d269bb6f-6da0-448b-8e94-a955f69c8427 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:03:16 compute-0 nova_compute[253461]: 2025-11-22 04:03:16.877 253465 DEBUG nova.network.neutron [req-af018516-742f-4907-b4ff-502f142e994f req-d269bb6f-6da0-448b-8e94-a955f69c8427 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Refreshing network info cache for port a8b62e2a-0384-4ffd-a779-f44e0b6673c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:03:17 compute-0 podman[285100]: 2025-11-22 04:03:17.065767066 +0000 UTC m=+0.108032417 container create 8dba3c84cf8fa65249fd30df8615f592cb2bbfbe46fc8dfbf9fdf5f7dffb9d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:03:17 compute-0 podman[285100]: 2025-11-22 04:03:16.994244221 +0000 UTC m=+0.036509552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:03:17 compute-0 systemd[1]: Started libpod-conmon-8dba3c84cf8fa65249fd30df8615f592cb2bbfbe46fc8dfbf9fdf5f7dffb9d95.scope.
Nov 22 04:03:17 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:03:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38509767a978aff934ee355aeec0e1b9a93d83366684639e76737ee97ec3ba72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38509767a978aff934ee355aeec0e1b9a93d83366684639e76737ee97ec3ba72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38509767a978aff934ee355aeec0e1b9a93d83366684639e76737ee97ec3ba72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/38509767a978aff934ee355aeec0e1b9a93d83366684639e76737ee97ec3ba72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:17 compute-0 podman[285100]: 2025-11-22 04:03:17.26765253 +0000 UTC m=+0.309917911 container init 8dba3c84cf8fa65249fd30df8615f592cb2bbfbe46fc8dfbf9fdf5f7dffb9d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:03:17 compute-0 podman[285100]: 2025-11-22 04:03:17.281240579 +0000 UTC m=+0.323505920 container start 8dba3c84cf8fa65249fd30df8615f592cb2bbfbe46fc8dfbf9fdf5f7dffb9d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:03:17 compute-0 podman[285100]: 2025-11-22 04:03:17.331401698 +0000 UTC m=+0.373667049 container attach 8dba3c84cf8fa65249fd30df8615f592cb2bbfbe46fc8dfbf9fdf5f7dffb9d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:03:17 compute-0 ceph-mon[75011]: pgmap v1524: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.3 MiB/s wr, 139 op/s
Nov 22 04:03:17 compute-0 nova_compute[253461]: 2025-11-22 04:03:17.631 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.4 MiB/s wr, 127 op/s
Nov 22 04:03:18 compute-0 clever_joliot[285117]: {
Nov 22 04:03:18 compute-0 clever_joliot[285117]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "osd_id": 1,
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "type": "bluestore"
Nov 22 04:03:18 compute-0 clever_joliot[285117]:     },
Nov 22 04:03:18 compute-0 clever_joliot[285117]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "osd_id": 0,
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "type": "bluestore"
Nov 22 04:03:18 compute-0 clever_joliot[285117]:     },
Nov 22 04:03:18 compute-0 clever_joliot[285117]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "osd_id": 2,
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:03:18 compute-0 clever_joliot[285117]:         "type": "bluestore"
Nov 22 04:03:18 compute-0 clever_joliot[285117]:     }
Nov 22 04:03:18 compute-0 clever_joliot[285117]: }
Nov 22 04:03:18 compute-0 podman[285100]: 2025-11-22 04:03:18.434510941 +0000 UTC m=+1.476776252 container died 8dba3c84cf8fa65249fd30df8615f592cb2bbfbe46fc8dfbf9fdf5f7dffb9d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 04:03:18 compute-0 systemd[1]: libpod-8dba3c84cf8fa65249fd30df8615f592cb2bbfbe46fc8dfbf9fdf5f7dffb9d95.scope: Deactivated successfully.
Nov 22 04:03:18 compute-0 systemd[1]: libpod-8dba3c84cf8fa65249fd30df8615f592cb2bbfbe46fc8dfbf9fdf5f7dffb9d95.scope: Consumed 1.092s CPU time.
Nov 22 04:03:18 compute-0 ceph-mon[75011]: pgmap v1525: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.4 MiB/s wr, 127 op/s
Nov 22 04:03:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-38509767a978aff934ee355aeec0e1b9a93d83366684639e76737ee97ec3ba72-merged.mount: Deactivated successfully.
Nov 22 04:03:18 compute-0 podman[285100]: 2025-11-22 04:03:18.913664087 +0000 UTC m=+1.955929388 container remove 8dba3c84cf8fa65249fd30df8615f592cb2bbfbe46fc8dfbf9fdf5f7dffb9d95 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_joliot, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 04:03:18 compute-0 sudo[284976]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:03:18 compute-0 systemd[1]: libpod-conmon-8dba3c84cf8fa65249fd30df8615f592cb2bbfbe46fc8dfbf9fdf5f7dffb9d95.scope: Deactivated successfully.
Nov 22 04:03:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:03:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:03:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:03:19 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 00f6cedc-6aa7-4af8-b795-73660b6bf980 does not exist
Nov 22 04:03:19 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ee6e4fdf-9551-4a3b-81b1-57254e3cf85a does not exist
Nov 22 04:03:19 compute-0 sudo[285163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:03:19 compute-0 sudo[285163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:19 compute-0 sudo[285163]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:19 compute-0 sudo[285188]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:03:19 compute-0 sudo[285188]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:03:19 compute-0 sudo[285188]: pam_unix(sudo:session): session closed for user root
Nov 22 04:03:19 compute-0 nova_compute[253461]: 2025-11-22 04:03:19.545 253465 DEBUG nova.network.neutron [req-af018516-742f-4907-b4ff-502f142e994f req-d269bb6f-6da0-448b-8e94-a955f69c8427 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Updated VIF entry in instance network info cache for port a8b62e2a-0384-4ffd-a779-f44e0b6673c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:03:19 compute-0 nova_compute[253461]: 2025-11-22 04:03:19.547 253465 DEBUG nova.network.neutron [req-af018516-742f-4907-b4ff-502f142e994f req-d269bb6f-6da0-448b-8e94-a955f69c8427 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Updating instance_info_cache with network_info: [{"id": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "address": "fa:16:3e:76:4c:3e", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8b62e2a-03", "ovs_interfaceid": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:03:19 compute-0 nova_compute[253461]: 2025-11-22 04:03:19.632 253465 DEBUG oslo_concurrency.lockutils [req-af018516-742f-4907-b4ff-502f142e994f req-d269bb6f-6da0-448b-8e94-a955f69c8427 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:03:20 compute-0 nova_compute[253461]: 2025-11-22 04:03:20.068 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Nov 22 04:03:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 4.5 MiB/s rd, 2.8 MiB/s wr, 158 op/s
Nov 22 04:03:21 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:03:21 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:03:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Nov 22 04:03:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 144 op/s
Nov 22 04:03:22 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Nov 22 04:03:22 compute-0 nova_compute[253461]: 2025-11-22 04:03:22.635 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:23.014 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:23.015 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:23.016 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
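The three lockutils lines above trace one acquire/held/release cycle. A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed; the lock name is the one from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Critical section: the "waited 0.001s" / "held 0.001s" figures
        # in the log bracket exactly this body.
        pass

    check_child_processes()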
Nov 22 04:03:23 compute-0 ceph-mon[75011]: pgmap v1526: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 4.5 MiB/s rd, 2.8 MiB/s wr, 158 op/s
Nov 22 04:03:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.4 MiB/s rd, 373 KiB/s wr, 118 op/s
Nov 22 04:03:24 compute-0 podman[285213]: 2025-11-22 04:03:24.449972727 +0000 UTC m=+0.112484159 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:03:24 compute-0 podman[285214]: 2025-11-22 04:03:24.486866933 +0000 UTC m=+0.149328311 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
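The two podman health_status events above come from the container healthcheck timers. A sketch of querying the same state by hand; the Go-template path .State.Health.Status is an assumption that holds on recent podman (older releases spell it .State.Healthcheck.Status):

    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format",
         "{{.State.Health.Status}}", "ovn_metadata_agent"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # expected "healthy", matching the event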
Nov 22 04:03:24 compute-0 ceph-mon[75011]: pgmap v1527: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 144 op/s
Nov 22 04:03:24 compute-0 ceph-mon[75011]: osdmap e375: 3 total, 3 up, 3 in
Nov 22 04:03:25 compute-0 nova_compute[253461]: 2025-11-22 04:03:25.071 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:26 compute-0 nova_compute[253461]: 2025-11-22 04:03:26.190 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Acquiring lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:26 compute-0 nova_compute[253461]: 2025-11-22 04:03:26.190 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.1 MiB/s rd, 106 KiB/s wr, 95 op/s
Nov 22 04:03:26 compute-0 nova_compute[253461]: 2025-11-22 04:03:26.538 253465 DEBUG nova.compute.manager [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:03:26 compute-0 ceph-mon[75011]: pgmap v1529: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.4 MiB/s rd, 373 KiB/s wr, 118 op/s
Nov 22 04:03:27 compute-0 nova_compute[253461]: 2025-11-22 04:03:27.260 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:27 compute-0 nova_compute[253461]: 2025-11-22 04:03:27.261 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:27 compute-0 nova_compute[253461]: 2025-11-22 04:03:27.273 253465 DEBUG nova.virt.hardware [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:03:27 compute-0 nova_compute[253461]: 2025-11-22 04:03:27.273 253465 INFO nova.compute.claims [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:03:27 compute-0 nova_compute[253461]: 2025-11-22 04:03:27.636 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:28 compute-0 nova_compute[253461]: 2025-11-22 04:03:28.007 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:28 compute-0 ceph-mon[75011]: pgmap v1530: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.1 MiB/s rd, 106 KiB/s wr, 95 op/s
Nov 22 04:03:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.6 MiB/s rd, 89 KiB/s wr, 68 op/s
Nov 22 04:03:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:03:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/977947656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:03:28 compute-0 nova_compute[253461]: 2025-11-22 04:03:28.675 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
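Before claiming disk, nova shells out to "ceph df" exactly as logged above. A minimal re-run of that probe, assuming a reachable cluster and the client.openstack keyring shown in the audit line; the "stats" keys are standard ceph df JSON output:

    import json
    import subprocess

    raw = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(raw)["stats"]
    # Roughly the 60 GiB total / 57 GiB avail the pgmap lines report.
    print(stats["total_bytes"], stats["total_avail_bytes"])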
Nov 22 04:03:28 compute-0 nova_compute[253461]: 2025-11-22 04:03:28.687 253465 DEBUG nova.compute.provider_tree [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:03:28 compute-0 nova_compute[253461]: 2025-11-22 04:03:28.808 253465 DEBUG nova.scheduler.client.report [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
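Worked example of what that inventory means in practice: placement derives usable capacity as (total - reserved) * allocation_ratio, so the figures below follow directly from the logged data (stdlib only):

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2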
Nov 22 04:03:29 compute-0 nova_compute[253461]: 2025-11-22 04:03:29.407 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:29 compute-0 nova_compute[253461]: 2025-11-22 04:03:29.408 253465 DEBUG nova.compute.manager [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:03:29 compute-0 ceph-mon[75011]: pgmap v1531: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.6 MiB/s rd, 89 KiB/s wr, 68 op/s
Nov 22 04:03:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/977947656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:03:30 compute-0 nova_compute[253461]: 2025-11-22 04:03:30.074 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 649 KiB/s rd, 349 KiB/s wr, 32 op/s
Nov 22 04:03:30 compute-0 nova_compute[253461]: 2025-11-22 04:03:30.545 253465 DEBUG nova.compute.manager [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:03:30 compute-0 nova_compute[253461]: 2025-11-22 04:03:30.546 253465 DEBUG nova.network.neutron [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:03:30 compute-0 nova_compute[253461]: 2025-11-22 04:03:30.787 253465 INFO nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:03:30 compute-0 ovn_controller[152691]: 2025-11-22T04:03:30Z|00032|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.9
Nov 22 04:03:30 compute-0 ovn_controller[152691]: 2025-11-22T04:03:30Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:76:4c:3e 10.100.0.9
Nov 22 04:03:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:31 compute-0 nova_compute[253461]: 2025-11-22 04:03:31.746 253465 DEBUG nova.policy [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '26cfaadc9db64dde98981b57d48fd714', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6c34534e935e44e883b5f01b09c03631', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:03:32 compute-0 ceph-mon[75011]: pgmap v1532: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 649 KiB/s rd, 349 KiB/s wr, 32 op/s
Nov 22 04:03:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 988 KiB/s rd, 611 KiB/s wr, 52 op/s
Nov 22 04:03:32 compute-0 nova_compute[253461]: 2025-11-22 04:03:32.636 253465 DEBUG nova.compute.manager [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:03:32 compute-0 nova_compute[253461]: 2025-11-22 04:03:32.642 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:33 compute-0 ceph-mon[75011]: pgmap v1533: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 988 KiB/s rd, 611 KiB/s wr, 52 op/s
Nov 22 04:03:33 compute-0 nova_compute[253461]: 2025-11-22 04:03:33.703 253465 INFO nova.virt.block_device [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Booting with volume fe14904a-5cd8-45f9-9cb9-937b154bd3de at /dev/vdb
Nov 22 04:03:33 compute-0 nova_compute[253461]: 2025-11-22 04:03:33.910 253465 DEBUG os_brick.utils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:03:33 compute-0 nova_compute[253461]: 2025-11-22 04:03:33.911 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:33 compute-0 nova_compute[253461]: 2025-11-22 04:03:33.931 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:33 compute-0 nova_compute[253461]: 2025-11-22 04:03:33.931 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[ab3bfe89-a84c-4bb8-a5b3-a08d9976a3da]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:33 compute-0 nova_compute[253461]: 2025-11-22 04:03:33.933 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:33 compute-0 nova_compute[253461]: 2025-11-22 04:03:33.947 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:33 compute-0 nova_compute[253461]: 2025-11-22 04:03:33.948 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[717de36e-5053-4423-ba08-d1429c6e9cd8]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:33 compute-0 nova_compute[253461]: 2025-11-22 04:03:33.950 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:33 compute-0 nova_compute[253461]: 2025-11-22 04:03:33.965 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:33 compute-0 nova_compute[253461]: 2025-11-22 04:03:33.965 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[590c8df6-8d16-4446-a2af-446effc20cfc]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:33 compute-0 nova_compute[253461]: 2025-11-22 04:03:33.967 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[3732fd4e-d5b4-4a44-87c7-a04882d21af7]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:33 compute-0 nova_compute[253461]: 2025-11-22 04:03:33.968 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:34 compute-0 nova_compute[253461]: 2025-11-22 04:03:34.005 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:34 compute-0 nova_compute[253461]: 2025-11-22 04:03:34.009 253465 DEBUG os_brick.initiator.connectors.lightos [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:03:34 compute-0 nova_compute[253461]: 2025-11-22 04:03:34.010 253465 DEBUG os_brick.initiator.connectors.lightos [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:03:34 compute-0 nova_compute[253461]: 2025-11-22 04:03:34.010 253465 DEBUG os_brick.initiator.connectors.lightos [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:03:34 compute-0 nova_compute[253461]: 2025-11-22 04:03:34.011 253465 DEBUG os_brick.utils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] <== get_connector_properties: return (100ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
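The os-brick trace above gathers host facts by running the commands shown (multipathd, the initiator-name file, findmnt). A stripped-down sketch of the same collection; the command list is taken from the log, and root privileges are assumed for multipathd:

    import subprocess

    def run(cmd):
        try:
            return subprocess.check_output(cmd, text=True).strip()
        except (OSError, subprocess.CalledProcessError):
            return None

    initiator = run(["cat", "/etc/iscsi/initiatorname.iscsi"])
    props = {
        "initiator": initiator.replace("InitiatorName=", "") if initiator else None,
        "multipath": run(["multipathd", "show", "status"]) is not None,
        "root_source": run(["findmnt", "-v", "/", "-n", "-o", "SOURCE"]),
    }
    print(props)  # cf. the get_connector_properties return at 04:03:34.011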
Nov 22 04:03:34 compute-0 nova_compute[253461]: 2025-11-22 04:03:34.011 253465 DEBUG nova.virt.block_device [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Updating existing volume attachment record: 93b971a7-d4c9-4783-bb1b-fc488818a84e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:03:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 985 KiB/s rd, 515 KiB/s wr, 56 op/s
Nov 22 04:03:35 compute-0 nova_compute[253461]: 2025-11-22 04:03:35.109 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:03:35 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3588527530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:35 compute-0 ceph-mon[75011]: pgmap v1534: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 985 KiB/s rd, 515 KiB/s wr, 56 op/s
Nov 22 04:03:35 compute-0 ovn_controller[152691]: 2025-11-22T04:03:35Z|00034|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.12 does not match offer 10.100.0.9
Nov 22 04:03:35 compute-0 ovn_controller[152691]: 2025-11-22T04:03:35Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:76:4c:3e 10.100.0.9
Nov 22 04:03:35 compute-0 nova_compute[253461]: 2025-11-22 04:03:35.722 253465 DEBUG nova.network.neutron [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Successfully created port: bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:03:35 compute-0 ovn_controller[152691]: 2025-11-22T04:03:35Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:76:4c:3e 10.100.0.9
Nov 22 04:03:35 compute-0 ovn_controller[152691]: 2025-11-22T04:03:35Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:76:4c:3e 10.100.0.9
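The NAK/OFFER/ACK sequence above is the standard recovery for a stale lease: the guest REQUESTs 10.100.0.12 (most likely remembered from an earlier lease), OVN NAKs it because the port is bound to 10.100.0.9, and the client falls back to DISCOVER and accepts the fresh offer. A toy sketch of that decision; the handler name is illustrative, not OVN code:

    BOUND_IP = "10.100.0.9"          # address bound to fa:16:3e:76:4c:3e

    def handle_dhcprequest(requested_ip):
        # OVN's pinctrl only ACKs a REQUEST for the address it offered.
        return "DHCPACK" if requested_ip == BOUND_IP else "DHCPNAK"

    print(handle_dhcprequest("10.100.0.12"))  # DHCPNAK, as at 04:03:35
    print(handle_dhcprequest("10.100.0.9"))   # DHCPACK, as at 04:03:35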
Nov 22 04:03:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.217 253465 DEBUG nova.compute.manager [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.220 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.221 253465 INFO nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Creating image(s)
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:03:36
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['vms', 'backups', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'volumes']
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.253 253465 DEBUG nova.storage.rbd_utils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] rbd image ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.365 253465 DEBUG nova.storage.rbd_utils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] rbd image ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:03:36 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3588527530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.0 MiB/s rd, 513 KiB/s wr, 63 op/s
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.496 253465 DEBUG nova.storage.rbd_utils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] rbd image ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.501 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:03:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.647 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
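Note the prlimit wrapper in that command: oslo caps qemu-img at 1 GiB of address space and 30 s of CPU so a malformed image cannot wedge the service. A minimal sketch of issuing the same guarded probe, assuming oslo.concurrency is installed:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1 << 30, cpu_time=30))
    print(out)  # JSON description of the cached base image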
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.649 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.651 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.651 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.686 253465 DEBUG nova.storage.rbd_utils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] rbd image ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.691 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.896 253465 DEBUG nova.network.neutron [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Successfully updated port: bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.971 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Acquiring lock "refresh_cache-ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.971 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Acquired lock "refresh_cache-ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:03:36 compute-0 nova_compute[253461]: 2025-11-22 04:03:36.972 253465 DEBUG nova.network.neutron [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:03:37 compute-0 nova_compute[253461]: 2025-11-22 04:03:37.120 253465 DEBUG nova.compute.manager [req-ec814303-af1a-49e4-b3a0-d06be8fe66e0 req-98334e52-c39a-4e05-8979-fb16b56a92be f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Received event network-changed-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:37 compute-0 nova_compute[253461]: 2025-11-22 04:03:37.120 253465 DEBUG nova.compute.manager [req-ec814303-af1a-49e4-b3a0-d06be8fe66e0 req-98334e52-c39a-4e05-8979-fb16b56a92be f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Refreshing instance network info cache due to event network-changed-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:03:37 compute-0 nova_compute[253461]: 2025-11-22 04:03:37.121 253465 DEBUG oslo_concurrency.lockutils [req-ec814303-af1a-49e4-b3a0-d06be8fe66e0 req-98334e52-c39a-4e05-8979-fb16b56a92be f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:03:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Nov 22 04:03:37 compute-0 nova_compute[253461]: 2025-11-22 04:03:37.503 253465 DEBUG nova.network.neutron [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:03:37 compute-0 ceph-mon[75011]: pgmap v1535: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.0 MiB/s rd, 513 KiB/s wr, 63 op/s
Nov 22 04:03:37 compute-0 nova_compute[253461]: 2025-11-22 04:03:37.686 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Nov 22 04:03:37 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Nov 22 04:03:38 compute-0 nova_compute[253461]: 2025-11-22 04:03:38.315 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 883 KiB/s rd, 626 KiB/s wr, 70 op/s
Nov 22 04:03:38 compute-0 nova_compute[253461]: 2025-11-22 04:03:38.400 253465 DEBUG nova.storage.rbd_utils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] resizing rbd image ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
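The rbd_utils lines above show the import followed by a resize to the flavor's 1 GiB root disk. A sketch of the same resize through the python-rbd bindings (python3-rados/python3-rbd assumed installed; pool, image name, and size are from the log):

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            # Opening a missing image raises rbd.ImageNotFound -- the
            # "does not exist" checks above are this probe in disguise.
            with rbd.Image(ioctx, "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk") as image:
                image.resize(1073741824)  # 1 GiB
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()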
Nov 22 04:03:38 compute-0 podman[285384]: 2025-11-22 04:03:38.453355102 +0000 UTC m=+0.118115949 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 04:03:38 compute-0 nova_compute[253461]: 2025-11-22 04:03:38.542 253465 DEBUG nova.network.neutron [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Updating instance_info_cache with network_info: [{"id": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "address": "fa:16:3e:2f:be:44", "network": {"id": "776bc55d-f481-40b2-b547-d9145682b578", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1618808475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c34534e935e44e883b5f01b09c03631", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5ae7d5-ec", "ovs_interfaceid": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:03:38 compute-0 nova_compute[253461]: 2025-11-22 04:03:38.627 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Releasing lock "refresh_cache-ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:03:38 compute-0 nova_compute[253461]: 2025-11-22 04:03:38.628 253465 DEBUG nova.compute.manager [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Instance network_info: |[{"id": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "address": "fa:16:3e:2f:be:44", "network": {"id": "776bc55d-f481-40b2-b547-d9145682b578", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1618808475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c34534e935e44e883b5f01b09c03631", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5ae7d5-ec", "ovs_interfaceid": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:03:38 compute-0 nova_compute[253461]: 2025-11-22 04:03:38.628 253465 DEBUG oslo_concurrency.lockutils [req-ec814303-af1a-49e4-b3a0-d06be8fe66e0 req-98334e52-c39a-4e05-8979-fb16b56a92be f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:03:38 compute-0 nova_compute[253461]: 2025-11-22 04:03:38.629 253465 DEBUG nova.network.neutron [req-ec814303-af1a-49e4-b3a0-d06be8fe66e0 req-98334e52-c39a-4e05-8979-fb16b56a92be f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Refreshing network info cache for port bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:03:39 compute-0 ceph-mon[75011]: osdmap e376: 3 total, 3 up, 3 in
Nov 22 04:03:39 compute-0 ceph-mon[75011]: pgmap v1537: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 883 KiB/s rd, 626 KiB/s wr, 70 op/s
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.256 253465 DEBUG nova.objects.instance [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lazy-loading 'migration_context' on Instance uuid ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.323 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.323 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Ensure instance console log exists: /var/lib/nova/instances/ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.324 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.324 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.325 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.330 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Start _get_guest_xml network_info=[{"id": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "address": "fa:16:3e:2f:be:44", "network": {"id": "776bc55d-f481-40b2-b547-d9145682b578", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1618808475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c34534e935e44e883b5f01b09c03631", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5ae7d5-ec", "ovs_interfaceid": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': -1, 'attachment_id': '93b971a7-d4c9-4783-bb1b-fc488818a84e', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-fe14904a-5cd8-45f9-9cb9-937b154bd3de', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'fe14904a-5cd8-45f9-9cb9-937b154bd3de', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb', 'attached_at': '', 'detached_at': '', 'volume_id': 'fe14904a-5cd8-45f9-9cb9-937b154bd3de', 'serial': 'fe14904a-5cd8-45f9-9cb9-937b154bd3de'}, 'device_type': 'disk', 'mount_device': '/dev/vdb', 'delete_on_termination': False, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.337 253465 WARNING nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.348 253465 DEBUG nova.virt.libvirt.host [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.349 253465 DEBUG nova.virt.libvirt.host [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.353 253465 DEBUG nova.virt.libvirt.host [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.354 253465 DEBUG nova.virt.libvirt.host [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
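The v1 search fails and the v2 search succeeds because this host mounts the unified cgroup hierarchy, where available controllers are listed in a single file. A stdlib sketch of the v2 probe; the path is the standard cgroup2 mount point:

    from pathlib import Path

    controllers = Path("/sys/fs/cgroup/cgroup.controllers")
    has_cpu = controllers.exists() and "cpu" in controllers.read_text().split()
    print("CPU controller found" if has_cpu else "CPU controller missing")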
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.354 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.354 253465 DEBUG nova.virt.hardware [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.355 253465 DEBUG nova.virt.hardware [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.355 253465 DEBUG nova.virt.hardware [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.356 253465 DEBUG nova.virt.hardware [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.356 253465 DEBUG nova.virt.hardware [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.356 253465 DEBUG nova.virt.hardware [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.357 253465 DEBUG nova.virt.hardware [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.357 253465 DEBUG nova.virt.hardware [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.357 253465 DEBUG nova.virt.hardware [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.358 253465 DEBUG nova.virt.hardware [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.358 253465 DEBUG nova.virt.hardware [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
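
With all flavor and image limits and preferences at 0:0:0, nova falls back to enumerating every sockets:cores:threads factorization of the vCPU count; for 1 vCPU only 1:1:1 qualifies, matching "Got 1 possible topologies". A simplified stand-in for that enumeration (the real routine also honors preferred values and sorts the results):

    # Simplified sketch of nova.virt.hardware._get_possible_cpu_topologies.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)] -> "Got 1 possible topologies"
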
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.362 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.717 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "refresh_cache-f916655a-aa1c-4071-b05b-7bd2a8661ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.717 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquired lock "refresh_cache-f916655a-aa1c-4071-b05b-7bd2a8661ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.717 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
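
The Acquiring/Acquired/Releasing triplets around "refresh_cache-<uuid>" are oslo.concurrency's named internal locks; the log lines correspond to entering and leaving a context manager. Roughly:

    from oslo_concurrency import lockutils

    # Named internal lock, as in the lockutils lines above.
    instance_uuid = "f916655a-aa1c-4071-b05b-7bd2a8661ce0"
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        # refresh the instance's network info cache while holding the lock
        ...
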
Nov 22 04:03:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:03:39 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3539293668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.883 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
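
The "ceph mon dump" subprocess timed above is how nova's RBD driver discovers monitor endpoints. A sketch of the same call and a plausible way to read its JSON (the "mons" list and its field names follow the common schema, which can vary by Ceph release):

    import json, subprocess

    out = subprocess.check_output([
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    mons = json.loads(out)["mons"]
    print([m["name"] for m in mons])   # e.g. ['compute-0'] on this one-mon cluster
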
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.913 253465 DEBUG nova.storage.rbd_utils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] rbd image ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.917 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.996 253465 DEBUG nova.network.neutron [req-ec814303-af1a-49e4-b3a0-d06be8fe66e0 req-98334e52-c39a-4e05-8979-fb16b56a92be f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Updated VIF entry in instance network info cache for port bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:03:39 compute-0 nova_compute[253461]: 2025-11-22 04:03:39.997 253465 DEBUG nova.network.neutron [req-ec814303-af1a-49e4-b3a0-d06be8fe66e0 req-98334e52-c39a-4e05-8979-fb16b56a92be f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Updating instance_info_cache with network_info: [{"id": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "address": "fa:16:3e:2f:be:44", "network": {"id": "776bc55d-f481-40b2-b547-d9145682b578", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1618808475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c34534e935e44e883b5f01b09c03631", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5ae7d5-ec", "ovs_interfaceid": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.113 253465 DEBUG oslo_concurrency.lockutils [req-ec814303-af1a-49e4-b3a0-d06be8fe66e0 req-98334e52-c39a-4e05-8979-fb16b56a92be f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
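
The network_info blob cached above carries everything the VIF plug will need later. A self-contained sketch, trimmed to the fields actually used (port id, MAC, fixed IP, MTU):

    # Trimmed, self-contained copy of the cached blob (full version above).
    network_info = [{
        "id": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b",
        "address": "fa:16:3e:2f:be:44",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.13"}]}],
                    "meta": {"mtu": 1442}},
    }]
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips, vif["network"]["meta"]["mtu"])
    # -> bf5ae7d5-... fa:16:3e:2f:be:44 ['10.100.0.13'] 1442
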
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.161 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:40 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3539293668' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 663 KiB/s rd, 1.5 MiB/s wr, 84 op/s
Nov 22 04:03:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:03:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/503785463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.493 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.582 253465 DEBUG nova.virt.libvirt.vif [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:03:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-18199089',display_name='tempest-instance-18199089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-18199089',id=20,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD7WVaUzte4eck/32cI5/D3b6IWBj9NHa/z9P6h607tL1j9YRB0NqLr79roduhGB1Q1SIokAGX+Z/nYy/K43tUvXF/SwdNwwDMJD28IZ0C/bzNSB6t/xWJEDxdGM3unfsw==',key_name='tempest-keypair-679965368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c34534e935e44e883b5f01b09c03631',ramdisk_id='',reservation_id='r-2pk1sn2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-769621883',owner_user_name='tempest-VolumesBackupsTest-769621883-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='26cfaadc9db64dde98981b57d48fd714',uuid=ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "address": "fa:16:3e:2f:be:44", "network": {"id": "776bc55d-f481-40b2-b547-d9145682b578", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1618808475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c34534e935e44e883b5f01b09c03631", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5ae7d5-ec", "ovs_interfaceid": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.583 253465 DEBUG nova.network.os_vif_util [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Converting VIF {"id": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "address": "fa:16:3e:2f:be:44", "network": {"id": "776bc55d-f481-40b2-b547-d9145682b578", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1618808475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c34534e935e44e883b5f01b09c03631", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5ae7d5-ec", "ovs_interfaceid": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.584 253465 DEBUG nova.network.os_vif_util [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:be:44,bridge_name='br-int',has_traffic_filtering=True,id=bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b,network=Network(776bc55d-f481-40b2-b547-d9145682b578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5ae7d5-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.586 253465 DEBUG nova.objects.instance [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lazy-loading 'pci_devices' on Instance uuid ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.657 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:03:40 compute-0 nova_compute[253461]:   <uuid>ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb</uuid>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   <name>instance-00000014</name>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <nova:name>tempest-instance-18199089</nova:name>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:03:39</nova:creationTime>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <nova:user uuid="26cfaadc9db64dde98981b57d48fd714">tempest-VolumesBackupsTest-769621883-project-member</nova:user>
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <nova:project uuid="6c34534e935e44e883b5f01b09c03631">tempest-VolumesBackupsTest-769621883</nova:project>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <nova:port uuid="bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b">
Nov 22 04:03:40 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <system>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <entry name="serial">ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb</entry>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <entry name="uuid">ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb</entry>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     </system>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   <os>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   </os>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   <features>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   </features>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk">
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       </source>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk.config">
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       </source>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-fe14904a-5cd8-45f9-9cb9-937b154bd3de">
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       </source>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:03:40 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <target dev="vdb" bus="virtio"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <serial>fe14904a-5cd8-45f9-9cb9-937b154bd3de</serial>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:2f:be:44"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <target dev="tapbf5ae7d5-ec"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb/console.log" append="off"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <video>
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     </video>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:03:40 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:03:40 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:03:40 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:03:40 compute-0 nova_compute[253461]: </domain>
Nov 22 04:03:40 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
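
That concludes the generated domain definition: three RBD-backed devices (root disk vda, config-drive cdrom sda, attached volume vdb), one OVS-backed virtio interface, and a q35 machine with 24 pcie-root-port controllers for hotplug headroom. A sketch of pulling the device layout back out with ElementTree, using a trimmed copy of the document above:

    import xml.etree.ElementTree as ET

    # Trimmed copy of the <domain> XML logged above (rbd sources elided).
    domain_xml = """
    <domain type="kvm">
      <name>instance-00000014</name>
      <devices>
        <disk type="network" device="disk"><target dev="vda" bus="virtio"/></disk>
        <disk type="network" device="cdrom"><target dev="sda" bus="sata"/></disk>
        <disk type="network" device="disk"><target dev="vdb" bus="virtio"/></disk>
      </devices>
    </domain>
    """
    dom = ET.fromstring(domain_xml)
    print(dom.findtext("name"))          # instance-00000014
    for disk in dom.iter("disk"):
        print(disk.get("device"), disk.find("target").get("dev"))
    # disk vda / cdrom sda / disk vdb -- the three RBD-backed devices above
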
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.657 253465 DEBUG nova.compute.manager [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Preparing to wait for external event network-vif-plugged-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.658 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Acquiring lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.658 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.658 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
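
Nova registers the network-vif-plugged event before plugging the port so the neutron notification cannot race past it. A hypothetical stand-in for that prepare-then-wait pattern, using threading.Event (nova's InstanceEvents keeps per-instance event objects under the "<uuid>-events" lock shown above):

    import threading

    # Hypothetical stand-in: register the event *before* plugging, wait after.
    events = {}

    def prepare_for_instance_event(name):
        events[name] = threading.Event()
        return events[name]

    ev = prepare_for_instance_event(
        "network-vif-plugged-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b")
    # ... plug the VIF (below), start the guest, then:
    # ev.wait(timeout=vif_plugging_timeout)
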
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.659 253465 DEBUG nova.virt.libvirt.vif [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:03:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-18199089',display_name='tempest-instance-18199089',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-18199089',id=20,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD7WVaUzte4eck/32cI5/D3b6IWBj9NHa/z9P6h607tL1j9YRB0NqLr79roduhGB1Q1SIokAGX+Z/nYy/K43tUvXF/SwdNwwDMJD28IZ0C/bzNSB6t/xWJEDxdGM3unfsw==',key_name='tempest-keypair-679965368',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6c34534e935e44e883b5f01b09c03631',ramdisk_id='',reservation_id='r-2pk1sn2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-769621883',owner_user_name='tempest-VolumesBackupsTest-769621883-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:03:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='26cfaadc9db64dde98981b57d48fd714',uuid=ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "address": "fa:16:3e:2f:be:44", "network": {"id": "776bc55d-f481-40b2-b547-d9145682b578", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1618808475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c34534e935e44e883b5f01b09c03631", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5ae7d5-ec", "ovs_interfaceid": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.659 253465 DEBUG nova.network.os_vif_util [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Converting VIF {"id": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "address": "fa:16:3e:2f:be:44", "network": {"id": "776bc55d-f481-40b2-b547-d9145682b578", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1618808475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c34534e935e44e883b5f01b09c03631", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5ae7d5-ec", "ovs_interfaceid": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.660 253465 DEBUG nova.network.os_vif_util [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2f:be:44,bridge_name='br-int',has_traffic_filtering=True,id=bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b,network=Network(776bc55d-f481-40b2-b547-d9145682b578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5ae7d5-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.661 253465 DEBUG os_vif [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:be:44,bridge_name='br-int',has_traffic_filtering=True,id=bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b,network=Network(776bc55d-f481-40b2-b547-d9145682b578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5ae7d5-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.661 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.662 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.662 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.666 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.666 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbf5ae7d5-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.667 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbf5ae7d5-ec, col_values=(('external_ids', {'iface-id': 'bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2f:be:44', 'vm-uuid': 'ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.669 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:40 compute-0 NetworkManager[48916]: <info>  [1763784220.6702] manager: (tapbf5ae7d5-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.672 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.679 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.680 253465 INFO os_vif [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2f:be:44,bridge_name='br-int',has_traffic_filtering=True,id=bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b,network=Network(776bc55d-f481-40b2-b547-d9145682b578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5ae7d5-ec')
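
The two ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand on the Interface row) are what os-vif performed over OVSDB. Rendered as the equivalent ovs-vsctl invocations for clarity; os-vif speaks OVSDB directly rather than shelling out, so this is illustrative only:

    import subprocess

    port = "tapbf5ae7d5-ec"
    subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br-int"], check=True)
    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", port,
        "--", "set", "Interface", port,
        "external_ids:iface-id=bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:2f:be:44",
        "external_ids:vm-uuid=ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb",
    ], check=True)
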
Nov 22 04:03:40 compute-0 nova_compute[253461]: 2025-11-22 04:03:40.928 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Updating instance_info_cache with network_info: [{"id": "905dd436-f21d-4498-9bc7-a159e966bc32", "address": "fa:16:3e:67:9b:8d", "network": {"id": "20419c46-b854-4274-a893-985996c423ff", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-67120831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ed2837d5c0344b88b5ba7799c801241", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap905dd436-f2", "ovs_interfaceid": "905dd436-f21d-4498-9bc7-a159e966bc32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.015 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Releasing lock "refresh_cache-f916655a-aa1c-4071-b05b-7bd2a8661ce0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.016 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.017 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.018 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.023 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.023 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.024 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.024 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] No VIF found with MAC fa:16:3e:2f:be:44, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.024 253465 INFO nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Using config drive
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.051 253465 DEBUG nova.storage.rbd_utils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] rbd image ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:03:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.201 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.201 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.202 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.202 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.203 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:41 compute-0 ceph-mon[75011]: pgmap v1538: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 663 KiB/s rd, 1.5 MiB/s wr, 84 op/s
Nov 22 04:03:41 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/503785463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:03:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2038979144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:03:41 compute-0 nova_compute[253461]: 2025-11-22 04:03:41.917 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.715s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
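
The resource audit's "ceph df" call feeds the free-disk figure reported in the hypervisor resource view below. A sketch of reading the cluster-wide totals from its JSON ("stats" carries byte counts in the standard ceph df output; exact field names can drift between Ceph releases):

    import json, subprocess

    out = subprocess.check_output([
        "ceph", "df", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
    ])
    stats = json.loads(out)["stats"]
    print(f'{stats["total_avail_bytes"] / 1024**3:.1f} GiB free')  # ~57 GiB per pgmap
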
Nov 22 04:03:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 331 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Nov 22 04:03:42 compute-0 nova_compute[253461]: 2025-11-22 04:03:42.690 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:42 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2038979144' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:03:42 compute-0 ceph-mon[75011]: pgmap v1539: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 331 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Nov 22 04:03:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 147 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 22 04:03:44 compute-0 ceph-mon[75011]: pgmap v1540: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 147 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Nov 22 04:03:45 compute-0 nova_compute[253461]: 2025-11-22 04:03:45.669 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e376 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 77 KiB/s rd, 2.2 MiB/s wr, 58 op/s
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011134097032297974 of space, bias 1.0, pg target 0.3340229109689392 quantized to 32 (current 32)
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.034910535504981445 of space, bias 1.0, pg target 10.473160651494434 quantized to 32 (current 32)
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0003461242226671876 of space, bias 1.0, pg target 0.10037602457348441 quantized to 32 (current 32)
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006663670272514163 of space, bias 1.0, pg target 0.1932464379029107 quantized to 32 (current 32)
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005901217685745913 quantized to 16 (current 16)
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.376522107182392e-05 quantized to 32 (current 32)
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006270043791105033 quantized to 32 (current 32)
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:03:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
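
Each autoscaler line reports a pool's share of raw space, its bias, and a pg target that is then quantized. The 'volumes' figure is consistent with pg_target = usage_ratio x bias x (osd_count x mon_target_pg_per_osd) for 3 OSDs at the default 100 PGs per OSD; that formula and those counts are inferred from the numbers, not read from the log. A worked check under that assumption:

    # Worked check of the 'volumes' line, assuming
    #   pg_target = usage_ratio * bias * (osd_count * mon_target_pg_per_osd)
    # with 3 OSDs and the default mon_target_pg_per_osd = 100 (assumed values).
    ratio, bias, pg_budget = 0.034910535504981445, 1.0, 3 * 100
    print(ratio * bias * pg_budget)   # 10.4731606... matches "pg target 10.473160651494434"
    # The target is then quantized to a power of two and clamped to the pool
    # minimum -- hence "quantized to 32" here and "quantized to 16" for the
    # cephfs metadata pool.
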
Nov 22 04:03:46 compute-0 ceph-mon[75011]: pgmap v1541: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 77 KiB/s rd, 2.2 MiB/s wr, 58 op/s
Nov 22 04:03:47 compute-0 nova_compute[253461]: 2025-11-22 04:03:47.692 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 73 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Nov 22 04:03:48 compute-0 ceph-mon[75011]: pgmap v1542: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 73 KiB/s rd, 2.1 MiB/s wr, 53 op/s
Nov 22 04:03:48 compute-0 nova_compute[253461]: 2025-11-22 04:03:48.977 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:03:48 compute-0 nova_compute[253461]: 2025-11-22 04:03:48.978 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:03:48 compute-0 nova_compute[253461]: 2025-11-22 04:03:48.983 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:03:48 compute-0 nova_compute[253461]: 2025-11-22 04:03:48.983 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:03:48 compute-0 nova_compute[253461]: 2025-11-22 04:03:48.983 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:03:48 compute-0 nova_compute[253461]: 2025-11-22 04:03:48.988 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:03:48 compute-0 nova_compute[253461]: 2025-11-22 04:03:48.988 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:03:48 compute-0 nova_compute[253461]: 2025-11-22 04:03:48.995 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:03:48 compute-0 nova_compute[253461]: 2025-11-22 04:03:48.996 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:03:48 compute-0 nova_compute[253461]: 2025-11-22 04:03:48.996 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.073 253465 INFO nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Creating config drive at /var/lib/nova/instances/ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb/disk.config
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.087 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmpxcazeg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.225 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmpxcazeg" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.253 253465 DEBUG nova.storage.rbd_utils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] rbd image ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.257 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb/disk.config ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.302 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.304 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3829MB free_disk=59.9316291809082GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.304 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.304 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.400 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 30f0e486-2dc6-492c-9891-5f02237d7435 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.401 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.401 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.401 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.402 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.402 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.517 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.601 253465 DEBUG oslo_concurrency.lockutils [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.603 253465 DEBUG oslo_concurrency.lockutils [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.604 253465 DEBUG oslo_concurrency.lockutils [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.604 253465 DEBUG oslo_concurrency.lockutils [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.605 253465 DEBUG oslo_concurrency.lockutils [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.608 253465 INFO nova.compute.manager [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Terminating instance
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.610 253465 DEBUG nova.compute.manager [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:03:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Nov 22 04:03:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Nov 22 04:03:49 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.779 253465 DEBUG oslo_concurrency.processutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb/disk.config ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.780 253465 INFO nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Deleting local config drive /var/lib/nova/instances/ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb/disk.config because it was imported into RBD.
Nov 22 04:03:49 compute-0 NetworkManager[48916]: <info>  [1763784229.8677] manager: (tapbf5ae7d5-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Nov 22 04:03:49 compute-0 kernel: tapbf5ae7d5-ec: entered promiscuous mode
Nov 22 04:03:49 compute-0 ovn_controller[152691]: 2025-11-22T04:03:49Z|00199|binding|INFO|Claiming lport bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b for this chassis.
Nov 22 04:03:49 compute-0 ovn_controller[152691]: 2025-11-22T04:03:49Z|00200|binding|INFO|bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b: Claiming fa:16:3e:2f:be:44 10.100.0.13
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.878 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:49 compute-0 systemd-machined[215728]: New machine qemu-20-instance-00000014.
Nov 22 04:03:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:49.907 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:be:44 10.100.0.13'], port_security=['fa:16:3e:2f:be:44 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-776bc55d-f481-40b2-b547-d9145682b578', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c34534e935e44e883b5f01b09c03631', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03dd4866-f386-47ce-ae14-e17586ea2c60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aed504bc-f70d-4127-9419-e4d4c6cd3ca6, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:03:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:49.909 162689 INFO neutron.agent.ovn.metadata.agent [-] Port bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b in datapath 776bc55d-f481-40b2-b547-d9145682b578 bound to our chassis
Nov 22 04:03:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:49.913 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 776bc55d-f481-40b2-b547-d9145682b578
Nov 22 04:03:49 compute-0 ovn_controller[152691]: 2025-11-22T04:03:49Z|00201|binding|INFO|Setting lport bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b ovn-installed in OVS
Nov 22 04:03:49 compute-0 ovn_controller[152691]: 2025-11-22T04:03:49Z|00202|binding|INFO|Setting lport bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b up in Southbound
Nov 22 04:03:49 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000014.
Nov 22 04:03:49 compute-0 nova_compute[253461]: 2025-11-22 04:03:49.928 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:49.931 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c44f75bf-e8da-4bec-811d-098ac5e94d29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:49.932 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap776bc55d-f1 in ovnmeta-776bc55d-f481-40b2-b547-d9145682b578 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:03:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:49.935 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap776bc55d-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:03:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:49.935 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[29b042e1-0795-4827-a786-94cd15926c65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:49.937 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[91c8a520-aa34-4a54-8e66-5145c406e752]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:49.958 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[a74a5834-f371-4555-b856-80d203cbac6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:49 compute-0 systemd-udevd[285660]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:03:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:49.988 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ac8d6366-8b08-4214-8ad5-d2d41c913cef]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:50 compute-0 kernel: tapa8b62e2a-03 (unregistering): left promiscuous mode
Nov 22 04:03:50 compute-0 NetworkManager[48916]: <info>  [1763784230.0115] device (tapa8b62e2a-03): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:03:50 compute-0 NetworkManager[48916]: <info>  [1763784230.0183] device (tapbf5ae7d5-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:03:50 compute-0 NetworkManager[48916]: <info>  [1763784230.0192] device (tapbf5ae7d5-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:03:50 compute-0 ovn_controller[152691]: 2025-11-22T04:03:50Z|00203|binding|INFO|Releasing lport a8b62e2a-0384-4ffd-a779-f44e0b6673c6 from this chassis (sb_readonly=0)
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.029 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:50 compute-0 ovn_controller[152691]: 2025-11-22T04:03:50Z|00204|binding|INFO|Setting lport a8b62e2a-0384-4ffd-a779-f44e0b6673c6 down in Southbound
Nov 22 04:03:50 compute-0 ovn_controller[152691]: 2025-11-22T04:03:50Z|00205|binding|INFO|Removing iface tapa8b62e2a-03 ovn-installed in OVS
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.047 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.048 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[02ae1f0b-1e44-4dc9-b7a5-567326703c4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:50 compute-0 NetworkManager[48916]: <info>  [1763784230.0572] manager: (tap776bc55d-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/108)
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.055 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4b199777-901a-4d3d-aea1-a6b1a55a4127]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:50 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Nov 22 04:03:50 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 15.107s CPU time.
Nov 22 04:03:50 compute-0 systemd-machined[215728]: Machine qemu-19-instance-00000013 terminated.
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.071 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:4c:3e 10.100.0.9'], port_security=['fa:16:3e:76:4c:3e 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '3371e7b7-8ad9-42ef-8a8d-8afa9840abfa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fa078025-1027-4d6a-8f11-b270ceaa6a1c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=a8b62e2a-0384-4ffd-a779-f44e0b6673c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.101 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[83078d1d-6104-4e36-8422-e4d78f23a67e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.104 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0d0467-2d56-4ed7-b65e-43bfd79cb10a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:03:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3948400098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.132 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.139 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:03:50 compute-0 NetworkManager[48916]: <info>  [1763784230.1421] device (tap776bc55d-f0): carrier: link connected
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.149 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[da6a2612-2a81-4e2c-bb29-103eb61fe2c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.167 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7ff960-edce-4c2b-9873-2946e43a71d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap776bc55d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:4c:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448274, 'reachable_time': 19293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285693, 'error': None, 'target': 'ovnmeta-776bc55d-f481-40b2-b547-d9145682b578', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.184 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e48a39b1-7fba-4385-8361-2fa9d19fb1c2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7c:4ce4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448274, 'tstamp': 448274}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285694, 'error': None, 'target': 'ovnmeta-776bc55d-f481-40b2-b547-d9145682b578', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.205 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5cefab-2ea0-4acf-b8df-879fa5deebf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap776bc55d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:4c:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448274, 'reachable_time': 19293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 285702, 'error': None, 'target': 'ovnmeta-776bc55d-f481-40b2-b547-d9145682b578', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:50 compute-0 NetworkManager[48916]: <info>  [1763784230.2297] manager: (tapa8b62e2a-03): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.242 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.242 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[a31e9b0a-8037-4272-ad6c-37c9f1d64ace]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.249 253465 INFO nova.virt.libvirt.driver [-] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Instance destroyed successfully.
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.250 253465 DEBUG nova.objects.instance [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'resources' on Instance uuid 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.290 253465 DEBUG nova.virt.libvirt.vif [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:03:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-87060425',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-87060425',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-87060425',id=19,image_ref='ad2fe85a-6178-45f7-8e3e-f68b71bf07a9',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQv/esmYGdRQZC3ac7bYtX1moFbWotxG+nzpJyHh3CYDqWlyJWHthvqj91YRodckkzdS5+YrwrlfoMYUi8LZM2LFxBHoZmxokPnRnngd5iIrS8THJAy29ohgPc20jGlrA==',key_name='tempest-keypair-1381765859',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:03:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-sj2q5n40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-1584219565',image_owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:03:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=3371e7b7-8ad9-42ef-8a8d-8afa9840abfa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "address": "fa:16:3e:76:4c:3e", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8b62e2a-03", "ovs_interfaceid": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.291 253465 DEBUG nova.network.os_vif_util [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "address": "fa:16:3e:76:4c:3e", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8b62e2a-03", "ovs_interfaceid": "a8b62e2a-0384-4ffd-a779-f44e0b6673c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.292 253465 DEBUG nova.network.os_vif_util [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:76:4c:3e,bridge_name='br-int',has_traffic_filtering=True,id=a8b62e2a-0384-4ffd-a779-f44e0b6673c6,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8b62e2a-03') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.293 253465 DEBUG os_vif [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:4c:3e,bridge_name='br-int',has_traffic_filtering=True,id=a8b62e2a-0384-4ffd-a779-f44e0b6673c6,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8b62e2a-03') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.294 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.295 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8b62e2a-03, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.297 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.298 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.993s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.298 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.300 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.303 253465 INFO os_vif [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:76:4c:3e,bridge_name='br-int',has_traffic_filtering=True,id=a8b62e2a-0384-4ffd-a779-f44e0b6673c6,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8b62e2a-03')
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.317 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[48e613e6-b080-4a7a-9fcf-570dee84f3b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.319 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap776bc55d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.319 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.319 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap776bc55d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.322 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:50 compute-0 NetworkManager[48916]: <info>  [1763784230.3223] manager: (tap776bc55d-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Nov 22 04:03:50 compute-0 kernel: tap776bc55d-f0: entered promiscuous mode
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.324 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.326 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap776bc55d-f0, col_values=(('external_ids', {'iface-id': 'ef25ece2-fc9e-40ae-a8e2-52dd2d7bdb09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:50 compute-0 ovn_controller[152691]: 2025-11-22T04:03:50Z|00206|binding|INFO|Releasing lport ef25ece2-fc9e-40ae-a8e2-52dd2d7bdb09 from this chassis (sb_readonly=0)
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.329 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.355 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.356 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/776bc55d-f481-40b2-b547-d9145682b578.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/776bc55d-f481-40b2-b547-d9145682b578.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.357 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d81c4bba-81ba-4a99-815f-0a477556f532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.359 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-776bc55d-f481-40b2-b547-d9145682b578
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/776bc55d-f481-40b2-b547-d9145682b578.pid.haproxy
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 776bc55d-f481-40b2-b547-d9145682b578
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:03:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:50.360 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-776bc55d-f481-40b2-b547-d9145682b578', 'env', 'PROCESS_TAG=haproxy-776bc55d-f481-40b2-b547-d9145682b578', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/776bc55d-f481-40b2-b547-d9145682b578.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 04:03:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 19 KiB/s rd, 1.1 MiB/s wr, 27 op/s
Nov 22 04:03:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:03:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6754 writes, 30K keys, 6754 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6754 writes, 6754 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2000 writes, 9335 keys, 2000 commit groups, 1.0 writes per commit group, ingest: 11.95 MB, 0.02 MB/s
                                           Interval WAL: 2000 writes, 2000 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     50.3      0.68              0.13        16    0.043       0      0       0.0       0.0
                                             L6      1/0    9.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.4    130.1    106.6      1.09              0.39        15    0.072     73K   8462       0.0       0.0
                                            Sum      1/0    9.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.4     79.8     84.9      1.77              0.51        31    0.057     73K   8462       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     90.8     94.1      0.48              0.16         8    0.060     24K   2635       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    130.1    106.6      1.09              0.39        15    0.072     73K   8462       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     50.4      0.68              0.13        15    0.045       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.034, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.15 GB write, 0.06 MB/s write, 0.14 GB read, 0.06 MB/s read, 1.8 seconds
                                           Interval compaction: 0.04 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5574942991f0#2 capacity: 304.00 MB usage: 15.38 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000138 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1029,14.78 MB,4.86028%) FilterBlock(32,208.11 KB,0.0668526%) IndexBlock(32,411.17 KB,0.132084%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
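One common way to read the W-Amp column in the compaction table above is total bytes written (flush plus compactions, the "Cumulative compaction" line) divided by bytes ingested via flush (the "Flush(GB): cumulative" line). A quick check against this dump's figures reproduces the Sum row's value:

    # Figures from the RocksDB dump above (GB)
    flushed = 0.034          # Flush(GB): cumulative -- bytes entering the LSM tree
    total_written = 0.15     # Cumulative compaction: 0.15 GB write

    w_amp = total_written / flushed
    print(f"write amplification ~ {w_amp:.1f}")   # ~4.4, matching the Sum row's W-Amp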
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.497 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784230.4972272, ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.498 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] VM Started (Lifecycle Event)
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.563 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.567 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784230.497304, ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.567 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] VM Paused (Lifecycle Event)
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.711 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.711 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.711 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.712 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.712 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.712 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.712 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.835 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:03:50 compute-0 nova_compute[253461]: 2025-11-22 04:03:50.839 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:03:50 compute-0 podman[285816]: 2025-11-22 04:03:50.805056211 +0000 UTC m=+0.030593053 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:03:50 compute-0 ceph-mon[75011]: osdmap e377: 3 total, 3 up, 3 in
Nov 22 04:03:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3948400098' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:03:50 compute-0 ceph-mon[75011]: pgmap v1544: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 19 KiB/s rd, 1.1 MiB/s wr, 27 op/s
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.064 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.068 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.068 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:03:51 compute-0 podman[285816]: 2025-11-22 04:03:51.09065558 +0000 UTC m=+0.316192362 container create 50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.103 253465 DEBUG oslo_concurrency.lockutils [None req-b5172fa0-6738-4641-bb03-5d196197deca a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.104 253465 DEBUG oslo_concurrency.lockutils [None req-b5172fa0-6738-4641-bb03-5d196197deca a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
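The Acquiring/acquired/released triplet around do_detach_volume is oslo.concurrency's named-lock pattern, keyed here on the instance UUID so all volume operations for one instance serialize. A minimal sketch of that pattern (the lock name mirrors the UUID in the log; the decorator form is one of several lockutils offers):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "f916655a-aa1c-4071-b05b-7bd2a8661ce0"

    # Serializes all callers that share this lock name, as in the log above.
    @lockutils.synchronized(INSTANCE_UUID)
    def do_detach_volume():
        ...  # detach work runs with the per-instance lock held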
Nov 22 04:03:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.121 253465 INFO nova.compute.manager [None req-b5172fa0-6738-4641-bb03-5d196197deca a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Detaching volume 84e8b44a-db23-453c-9288-1a8cf478419e
Nov 22 04:03:51 compute-0 systemd[1]: Started libpod-conmon-50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c.scope.
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.154 253465 DEBUG nova.compute.manager [req-c84e6193-acf0-4502-8b31-6bd37af6fd2f req-f7513875-8743-48aa-8738-e59b694aa99e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Received event network-vif-plugged-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.154 253465 DEBUG oslo_concurrency.lockutils [req-c84e6193-acf0-4502-8b31-6bd37af6fd2f req-f7513875-8743-48aa-8738-e59b694aa99e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.154 253465 DEBUG oslo_concurrency.lockutils [req-c84e6193-acf0-4502-8b31-6bd37af6fd2f req-f7513875-8743-48aa-8738-e59b694aa99e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.155 253465 DEBUG oslo_concurrency.lockutils [req-c84e6193-acf0-4502-8b31-6bd37af6fd2f req-f7513875-8743-48aa-8738-e59b694aa99e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.155 253465 DEBUG nova.compute.manager [req-c84e6193-acf0-4502-8b31-6bd37af6fd2f req-f7513875-8743-48aa-8738-e59b694aa99e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Processing event network-vif-plugged-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.155 253465 DEBUG nova.compute.manager [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.161 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784231.161283, ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.161 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] VM Resumed (Lifecycle Event)
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.165 253465 DEBUG nova.compute.manager [req-45bcdc92-2c77-4e66-9e97-4bf587598668 req-856961c5-fadc-448e-b1a6-c987c47c5225 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Received event network-vif-unplugged-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.166 253465 DEBUG oslo_concurrency.lockutils [req-45bcdc92-2c77-4e66-9e97-4bf587598668 req-856961c5-fadc-448e-b1a6-c987c47c5225 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.166 253465 DEBUG oslo_concurrency.lockutils [req-45bcdc92-2c77-4e66-9e97-4bf587598668 req-856961c5-fadc-448e-b1a6-c987c47c5225 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.166 253465 DEBUG oslo_concurrency.lockutils [req-45bcdc92-2c77-4e66-9e97-4bf587598668 req-856961c5-fadc-448e-b1a6-c987c47c5225 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.166 253465 DEBUG nova.compute.manager [req-45bcdc92-2c77-4e66-9e97-4bf587598668 req-856961c5-fadc-448e-b1a6-c987c47c5225 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] No waiting events found dispatching network-vif-unplugged-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.166 253465 DEBUG nova.compute.manager [req-45bcdc92-2c77-4e66-9e97-4bf587598668 req-856961c5-fadc-448e-b1a6-c987c47c5225 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Received event network-vif-unplugged-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.167 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.171 253465 INFO nova.virt.libvirt.driver [-] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Instance spawned successfully.
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.171 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:03:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aff45aef3ccdb78687e1c0aa2ad7767959907ab1afffef6bf93f9c2d3bebb39/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.184 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.189 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.194 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.195 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.195 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.195 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.196 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.196 253465 DEBUG nova.virt.libvirt.driver [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:03:51 compute-0 podman[285816]: 2025-11-22 04:03:51.197544166 +0000 UTC m=+0.423080978 container init 50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:03:51 compute-0 podman[285816]: 2025-11-22 04:03:51.203321029 +0000 UTC m=+0.428857811 container start 50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.213 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:03:51 compute-0 neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578[285831]: [NOTICE]   (285835) : New worker (285837) forked
Nov 22 04:03:51 compute-0 neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578[285831]: [NOTICE]   (285835) : Loading success.
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.263 162689 INFO neutron.agent.ovn.metadata.agent [-] Port a8b62e2a-0384-4ffd-a779-f44e0b6673c6 in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 unbound from our chassis
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.265 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.268 253465 INFO nova.virt.block_device [None req-b5172fa0-6738-4641-bb03-5d196197deca a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Attempting to driver detach volume 84e8b44a-db23-453c-9288-1a8cf478419e from mountpoint /dev/vdb
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.278 253465 INFO nova.compute.manager [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Took 15.06 seconds to spawn the instance on the hypervisor.
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.279 253465 DEBUG nova.compute.manager [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.280 253465 DEBUG nova.virt.libvirt.driver [None req-b5172fa0-6738-4641-bb03-5d196197deca a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Attempting to detach device vdb from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.280 253465 DEBUG nova.virt.libvirt.guest [None req-b5172fa0-6738-4641-bb03-5d196197deca a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:03:51 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:03:51 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-84e8b44a-db23-453c-9288-1a8cf478419e">
Nov 22 04:03:51 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:03:51 compute-0 nova_compute[253461]:   </source>
Nov 22 04:03:51 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:03:51 compute-0 nova_compute[253461]:   <serial>84e8b44a-db23-453c-9288-1a8cf478419e</serial>
Nov 22 04:03:51 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:03:51 compute-0 nova_compute[253461]: </disk>
Nov 22 04:03:51 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.285 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4d85d1f0-a676-4d53-b7f2-556f3b8d2d7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.293 253465 INFO nova.virt.libvirt.driver [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Deleting instance files /var/lib/nova/instances/3371e7b7-8ad9-42ef-8a8d-8afa9840abfa_del
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.294 253465 INFO nova.virt.libvirt.driver [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Deletion of /var/lib/nova/instances/3371e7b7-8ad9-42ef-8a8d-8afa9840abfa_del complete
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.299 253465 INFO nova.virt.libvirt.driver [None req-b5172fa0-6738-4641-bb03-5d196197deca a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Successfully detached device vdb from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the persistent domain config.
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.299 253465 DEBUG nova.virt.libvirt.driver [None req-b5172fa0-6738-4641-bb03-5d196197deca a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.300 253465 DEBUG nova.virt.libvirt.guest [None req-b5172fa0-6738-4641-bb03-5d196197deca a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:03:51 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:03:51 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-84e8b44a-db23-453c-9288-1a8cf478419e">
Nov 22 04:03:51 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:03:51 compute-0 nova_compute[253461]:   </source>
Nov 22 04:03:51 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:03:51 compute-0 nova_compute[253461]:   <serial>84e8b44a-db23-453c-9288-1a8cf478419e</serial>
Nov 22 04:03:51 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:03:51 compute-0 nova_compute[253461]: </disk>
Nov 22 04:03:51 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.324 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4627a6-af51-4236-ab55-bc508efdfabb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.327 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[236d2907-bf7d-4115-96eb-9cb9ee6fcd12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.352 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[2908bde4-1577-42fd-a6d8-8b359fd9aa49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.368 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[181fd991-2189-433d-998a-e138b086598a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440403, 'reachable_time': 42317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285852, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
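The privsep reply above is a raw pyroute2 netlink dump for tap4670b112-91 taken inside the ovnmeta- namespace. A condensed sketch of fetching the same attributes directly, assuming pyroute2's NetNS handle and the names from the log:

    from pyroute2 import NetNS

    with NetNS("ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03") as ns:
        idx = ns.link_lookup(ifname="tap4670b112-91")[0]
        (link,) = ns.get_links(idx)
        mac = link.get_attr("IFLA_ADDRESS")      # fa:16:3e:58:43:a0 in the dump
        state = link.get_attr("IFLA_OPERSTATE")  # UP
        print(mac, state)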
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.375 253465 INFO nova.compute.manager [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Took 24.15 seconds to build instance.
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.377 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763784231.3764634, f916655a-aa1c-4071-b05b-7bd2a8661ce0 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.377 253465 DEBUG nova.virt.libvirt.driver [None req-b5172fa0-6738-4641-bb03-5d196197deca a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.380 253465 INFO nova.virt.libvirt.driver [None req-b5172fa0-6738-4641-bb03-5d196197deca a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Successfully detached device vdb from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the live domain config.
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.380 253465 INFO nova.compute.manager [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Took 1.77 seconds to destroy the instance on the hypervisor.
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.380 253465 DEBUG oslo.service.loopingcall [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.381 253465 DEBUG nova.compute.manager [-] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.381 253465 DEBUG nova.network.neutron [-] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.389 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c772880c-2703-4593-a7ce-75403b8af1ab]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4670b112-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440418, 'tstamp': 440418}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285855, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4670b112-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440421, 'tstamp': 440421}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285855, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.392 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.393 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.395 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.395 253465 DEBUG oslo_concurrency.lockutils [None req-7b5fbc85-1104-4517-9ecd-ea9e75468404 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 25.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.396 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4670b112-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.397 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.397 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4670b112-90, col_values=(('external_ids', {'iface-id': 'e72a94a7-9aac-4cfd-886c-1e1e93834214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.397 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:03:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:51.398 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
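The three ovsdbapp commands in the transactions above (DelPortCommand, AddPortCommand, DbSetCommand) map one-to-one onto ovs-vsctl operations; the "Transaction caused no change" lines show they are idempotent. A sketch of the same sequence via subprocess, with the bridge, port, and iface-id values from the log:

    import subprocess

    def vsctl(*args):
        subprocess.run(["ovs-vsctl", *args], check=True)

    # Same idempotent sequence the agent's ovsdbapp transactions perform.
    vsctl("--if-exists", "del-port", "br-ex", "tap4670b112-90")
    vsctl("--may-exist", "add-port", "br-int", "tap4670b112-90")
    vsctl("set", "Interface", "tap4670b112-90",
          "external_ids:iface-id=e72a94a7-9aac-4cfd-886c-1e1e93834214")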
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.640 253465 DEBUG nova.objects.instance [None req-b5172fa0-6738-4641-bb03-5d196197deca a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'flavor' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:03:51 compute-0 nova_compute[253461]: 2025-11-22 04:03:51.690 253465 DEBUG oslo_concurrency.lockutils [None req-b5172fa0-6738-4641-bb03-5d196197deca a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:52 compute-0 nova_compute[253461]: 2025-11-22 04:03:52.185 253465 DEBUG nova.network.neutron [-] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:03:52 compute-0 nova_compute[253461]: 2025-11-22 04:03:52.204 253465 INFO nova.compute.manager [-] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Took 0.82 seconds to deallocate network for instance.
Nov 22 04:03:52 compute-0 nova_compute[253461]: 2025-11-22 04:03:52.384 253465 INFO nova.compute.manager [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Took 0.18 seconds to detach 1 volumes for instance.
Nov 22 04:03:52 compute-0 nova_compute[253461]: 2025-11-22 04:03:52.385 253465 DEBUG nova.compute.manager [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Deleting volume: 3add484d-4e04-495a-8e14-b8c72a0a6ae5 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 22 04:03:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 144 KiB/s rd, 16 KiB/s wr, 29 op/s
Nov 22 04:03:52 compute-0 nova_compute[253461]: 2025-11-22 04:03:52.638 253465 DEBUG oslo_concurrency.lockutils [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:52 compute-0 nova_compute[253461]: 2025-11-22 04:03:52.639 253465 DEBUG oslo_concurrency.lockutils [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:52 compute-0 nova_compute[253461]: 2025-11-22 04:03:52.727 253465 DEBUG oslo_concurrency.processutils [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:52 compute-0 nova_compute[253461]: 2025-11-22 04:03:52.756 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:03:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3676589449' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:03:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:03:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3676589449' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:03:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:03:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508839034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.282 253465 DEBUG nova.compute.manager [req-c362acef-26d0-4020-8367-929a114fdfd1 req-260334ca-17ef-4273-8051-5f2fbc32a289 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Received event network-vif-plugged-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.283 253465 DEBUG oslo_concurrency.lockutils [req-c362acef-26d0-4020-8367-929a114fdfd1 req-260334ca-17ef-4273-8051-5f2fbc32a289 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.284 253465 DEBUG oslo_concurrency.lockutils [req-c362acef-26d0-4020-8367-929a114fdfd1 req-260334ca-17ef-4273-8051-5f2fbc32a289 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.284 253465 DEBUG oslo_concurrency.lockutils [req-c362acef-26d0-4020-8367-929a114fdfd1 req-260334ca-17ef-4273-8051-5f2fbc32a289 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.284 253465 DEBUG nova.compute.manager [req-c362acef-26d0-4020-8367-929a114fdfd1 req-260334ca-17ef-4273-8051-5f2fbc32a289 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] No waiting events found dispatching network-vif-plugged-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.285 253465 WARNING nova.compute.manager [req-c362acef-26d0-4020-8367-929a114fdfd1 req-260334ca-17ef-4273-8051-5f2fbc32a289 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Received unexpected event network-vif-plugged-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b for instance with vm_state active and task_state None.
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.300 253465 DEBUG oslo_concurrency.processutils [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
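The two processutils lines show Nova shelling out to `ceph df --format=json` (0.572s here) to size its RBD-backed storage. A sketch of running and reading that output; the top-level "stats" key names are assumed from Ceph's usual JSON schema rather than taken from this log:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]
    # Assumed field names: cluster-wide totals in bytes.
    print(stats["total_bytes"], stats["total_avail_bytes"])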
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.306 253465 DEBUG nova.compute.provider_tree [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.310 253465 DEBUG nova.compute.manager [req-0947b56b-b104-4a81-81b3-757d4594937b req-e0f86305-68ee-4402-a9e0-e81191f860a8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Received event network-vif-plugged-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.311 253465 DEBUG oslo_concurrency.lockutils [req-0947b56b-b104-4a81-81b3-757d4594937b req-e0f86305-68ee-4402-a9e0-e81191f860a8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.311 253465 DEBUG oslo_concurrency.lockutils [req-0947b56b-b104-4a81-81b3-757d4594937b req-e0f86305-68ee-4402-a9e0-e81191f860a8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.312 253465 DEBUG oslo_concurrency.lockutils [req-0947b56b-b104-4a81-81b3-757d4594937b req-e0f86305-68ee-4402-a9e0-e81191f860a8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.312 253465 DEBUG nova.compute.manager [req-0947b56b-b104-4a81-81b3-757d4594937b req-e0f86305-68ee-4402-a9e0-e81191f860a8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] No waiting events found dispatching network-vif-plugged-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.312 253465 WARNING nova.compute.manager [req-0947b56b-b104-4a81-81b3-757d4594937b req-e0f86305-68ee-4402-a9e0-e81191f860a8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Received unexpected event network-vif-plugged-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 for instance with vm_state deleted and task_state None.
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.313 253465 DEBUG nova.compute.manager [req-0947b56b-b104-4a81-81b3-757d4594937b req-e0f86305-68ee-4402-a9e0-e81191f860a8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Received event network-vif-deleted-a8b62e2a-0384-4ffd-a779-f44e0b6673c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.333 253465 DEBUG nova.scheduler.client.report [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
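The inventory dict above determines schedulable capacity in placement as (total - reserved) * allocation_ratio per resource class; plugging in the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2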
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.362 253465 DEBUG oslo_concurrency.lockutils [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.398 253465 INFO nova.scheduler.client.report [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Deleted allocations for instance 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa
Nov 22 04:03:53 compute-0 ceph-mon[75011]: pgmap v1545: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 144 KiB/s rd, 16 KiB/s wr, 29 op/s
Nov 22 04:03:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3676589449' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:03:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3676589449' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:03:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3508839034' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:03:53 compute-0 nova_compute[253461]: 2025-11-22 04:03:53.464 253465 DEBUG oslo_concurrency.lockutils [None req-5edc7937-2fa9-486c-9dad-c2210c7e2a6f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "3371e7b7-8ad9-42ef-8a8d-8afa9840abfa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.352 253465 DEBUG oslo_concurrency.lockutils [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.353 253465 DEBUG oslo_concurrency.lockutils [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.371 253465 DEBUG nova.objects.instance [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'flavor' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:03:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 854 KiB/s rd, 27 KiB/s wr, 81 op/s
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.410 253465 DEBUG oslo_concurrency.lockutils [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:54 compute-0 ceph-mon[75011]: pgmap v1546: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 854 KiB/s rd, 27 KiB/s wr, 81 op/s
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.650 253465 DEBUG oslo_concurrency.lockutils [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.654 253465 DEBUG oslo_concurrency.lockutils [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.655 253465 INFO nova.compute.manager [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Attaching volume d7803436-0547-4ad3-bf97-a8025d1d8ea8 to /dev/vdb
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.796 253465 DEBUG os_brick.utils [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.797 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.808 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.809 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[3abc5275-914d-430c-94a6-cdb642118c01]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.810 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.817 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.817 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[34d13472-ae26-4dfc-a251-436b79b5a48f]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.819 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.828 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.828 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[a1623123-64c3-4a86-8ca9-a391c358b930]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.830 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[2876dd7d-ff32-4d3a-8fd1-b44febfa97f0]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.830 253465 DEBUG oslo_concurrency.processutils [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.854 253465 DEBUG oslo_concurrency.processutils [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
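
Each of these probes is a plain subprocess run through oslo.concurrency's processutils (dispatched via the privsep daemon in the log); a standalone sketch of the same call:

    # Standalone equivalent of the "Running cmd (subprocess)" /
    # "CMD ... returned: 0" pairs above; the log runs these through
    # the privsep daemon, which this sketch omits.
    from oslo_concurrency import processutils

    out, err = processutils.execute('multipathd', 'show', 'status')
    print(out)  # e.g. "path checker states:\n\npaths: 0\nbusy: False\n"
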
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.856 253465 DEBUG os_brick.initiator.connectors.lightos [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.856 253465 DEBUG os_brick.initiator.connectors.lightos [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.857 253465 DEBUG os_brick.initiator.connectors.lightos [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.857 253465 DEBUG os_brick.utils [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] <== get_connector_properties: return (60ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
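
The "==>" / "<==" trace above brackets a single os-brick call; a minimal sketch of it, with the argument values taken from the "==> get_connector_properties" line:

    # Sketch of the os-brick call traced above. Values mirror the log;
    # the call probes multipathd, the iSCSI initiator name, the root
    # filesystem source, and nvme support, as the surrounding lines show.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com',
    )
    print(props['initiator'], props['nqn'])  # cf. the "<==" return dict
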
Nov 22 04:03:54 compute-0 nova_compute[253461]: 2025-11-22 04:03:54.858 253465 DEBUG nova.virt.block_device [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Updating existing volume attachment record: 52d90197-62dc-413b-b6f9-c62b321a7a05 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:03:55 compute-0 nova_compute[253461]: 2025-11-22 04:03:55.297 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:55 compute-0 nova_compute[253461]: 2025-11-22 04:03:55.423 253465 DEBUG nova.compute.manager [req-48063b4f-1e5e-4893-88dd-e51ba12d4b99 req-6f7cbd1b-4acf-4cc8-9308-8c1065935cf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Received event network-changed-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:55 compute-0 nova_compute[253461]: 2025-11-22 04:03:55.423 253465 DEBUG nova.compute.manager [req-48063b4f-1e5e-4893-88dd-e51ba12d4b99 req-6f7cbd1b-4acf-4cc8-9308-8c1065935cf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Refreshing instance network info cache due to event network-changed-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:03:55 compute-0 nova_compute[253461]: 2025-11-22 04:03:55.423 253465 DEBUG oslo_concurrency.lockutils [req-48063b4f-1e5e-4893-88dd-e51ba12d4b99 req-6f7cbd1b-4acf-4cc8-9308-8c1065935cf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:03:55 compute-0 nova_compute[253461]: 2025-11-22 04:03:55.423 253465 DEBUG oslo_concurrency.lockutils [req-48063b4f-1e5e-4893-88dd-e51ba12d4b99 req-6f7cbd1b-4acf-4cc8-9308-8c1065935cf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:03:55 compute-0 nova_compute[253461]: 2025-11-22 04:03:55.423 253465 DEBUG nova.network.neutron [req-48063b4f-1e5e-4893-88dd-e51ba12d4b99 req-6f7cbd1b-4acf-4cc8-9308-8c1065935cf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Refreshing network info cache for port bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:03:55 compute-0 podman[285885]: 2025-11-22 04:03:55.432108257 +0000 UTC m=+0.100992912 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:03:55 compute-0 podman[285886]: 2025-11-22 04:03:55.472080819 +0000 UTC m=+0.134449849 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:03:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Nov 22 04:03:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Nov 22 04:03:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Nov 22 04:03:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:03:55 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/465539359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
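
The mon_command dispatch above is what any librados client issues for "mon dump"; a sketch using the python-rados binding (the conffile path is an assumed default, not taken from the log):

    # Sketch of the "mon dump" command dispatched above, via python-rados.
    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                          name='client.openstack')
    cluster.connect()
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({'prefix': 'mon dump', 'format': 'json'}), b'')
    print(ret, json.loads(outbuf).get('epoch'))
    cluster.shutdown()
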
Nov 22 04:03:55 compute-0 nova_compute[253461]: 2025-11-22 04:03:55.674 253465 DEBUG nova.objects.instance [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'flavor' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:03:55 compute-0 nova_compute[253461]: 2025-11-22 04:03:55.696 253465 DEBUG nova.virt.libvirt.driver [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Attempting to attach volume d7803436-0547-4ad3-bf97-a8025d1d8ea8 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 22 04:03:55 compute-0 nova_compute[253461]: 2025-11-22 04:03:55.700 253465 DEBUG nova.virt.libvirt.guest [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 04:03:55 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:03:55 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-d7803436-0547-4ad3-bf97-a8025d1d8ea8">
Nov 22 04:03:55 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:03:55 compute-0 nova_compute[253461]:   </source>
Nov 22 04:03:55 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 04:03:55 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:03:55 compute-0 nova_compute[253461]:   </auth>
Nov 22 04:03:55 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:03:55 compute-0 nova_compute[253461]:   <serial>d7803436-0547-4ad3-bf97-a8025d1d8ea8</serial>
Nov 22 04:03:55 compute-0 nova_compute[253461]: </disk>
Nov 22 04:03:55 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
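
nova's guest.attach_device ultimately hands this XML to libvirt; a minimal sketch with the libvirt-python binding, using the disk XML nova logged and a hypothetical domain name (the log does not show it):

    # Sketch of the libvirt call behind the "attach device xml" block above.
    import libvirt

    disk_xml = """<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-d7803436-0547-4ad3-bf97-a8025d1d8ea8">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <auth username="openstack">
        <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
      </auth>
      <target dev="vdb" bus="virtio"/>
      <serial>d7803436-0547-4ad3-bf97-a8025d1d8ea8</serial>
    </disk>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000012')  # hypothetical domain name
    # Attach to the running guest and persist the device in its config.
    dom.attachDeviceFlags(
        disk_xml,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG)
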
Nov 22 04:03:55 compute-0 nova_compute[253461]: 2025-11-22 04:03:55.862 253465 DEBUG nova.virt.libvirt.driver [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:03:55 compute-0 nova_compute[253461]: 2025-11-22 04:03:55.863 253465 DEBUG nova.virt.libvirt.driver [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:03:55 compute-0 nova_compute[253461]: 2025-11-22 04:03:55.863 253465 DEBUG nova.virt.libvirt.driver [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:03:55 compute-0 nova_compute[253461]: 2025-11-22 04:03:55.863 253465 DEBUG nova.virt.libvirt.driver [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No VIF found with MAC fa:16:3e:67:9b:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.060 253465 DEBUG oslo_concurrency.lockutils [None req-ee237cbb-a005-4065-85e2-219a01b17455 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:03:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.8 MiB/s rd, 29 KiB/s wr, 152 op/s
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.490 253465 DEBUG nova.network.neutron [req-48063b4f-1e5e-4893-88dd-e51ba12d4b99 req-6f7cbd1b-4acf-4cc8-9308-8c1065935cf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Updated VIF entry in instance network info cache for port bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.491 253465 DEBUG nova.network.neutron [req-48063b4f-1e5e-4893-88dd-e51ba12d4b99 req-6f7cbd1b-4acf-4cc8-9308-8c1065935cf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Updating instance_info_cache with network_info: [{"id": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "address": "fa:16:3e:2f:be:44", "network": {"id": "776bc55d-f481-40b2-b547-d9145682b578", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1618808475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c34534e935e44e883b5f01b09c03631", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5ae7d5-ec", "ovs_interfaceid": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:03:56 compute-0 ceph-mon[75011]: osdmap e378: 3 total, 3 up, 3 in
Nov 22 04:03:56 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/465539359' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:03:56 compute-0 ceph-mon[75011]: pgmap v1548: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.8 MiB/s rd, 29 KiB/s wr, 152 op/s
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.519 253465 DEBUG oslo_concurrency.lockutils [req-48063b4f-1e5e-4893-88dd-e51ba12d4b99 req-6f7cbd1b-4acf-4cc8-9308-8c1065935cf8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.803 253465 DEBUG oslo_concurrency.lockutils [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "30f0e486-2dc6-492c-9891-5f02237d7435" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.804 253465 DEBUG oslo_concurrency.lockutils [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.805 253465 DEBUG oslo_concurrency.lockutils [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.805 253465 DEBUG oslo_concurrency.lockutils [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.806 253465 DEBUG oslo_concurrency.lockutils [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.808 253465 INFO nova.compute.manager [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Terminating instance
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.810 253465 DEBUG nova.compute.manager [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:03:56 compute-0 kernel: tapb394a80d-18 (unregistering): left promiscuous mode
Nov 22 04:03:56 compute-0 NetworkManager[48916]: <info>  [1763784236.8779] device (tapb394a80d-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.886 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:56 compute-0 ovn_controller[152691]: 2025-11-22T04:03:56Z|00207|binding|INFO|Releasing lport b394a80d-1857-49b1-bd4f-a2a675cc7ebe from this chassis (sb_readonly=0)
Nov 22 04:03:56 compute-0 ovn_controller[152691]: 2025-11-22T04:03:56Z|00208|binding|INFO|Setting lport b394a80d-1857-49b1-bd4f-a2a675cc7ebe down in Southbound
Nov 22 04:03:56 compute-0 ovn_controller[152691]: 2025-11-22T04:03:56Z|00209|binding|INFO|Removing iface tapb394a80d-18 ovn-installed in OVS
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.891 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:56.902 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:6c:3c 10.100.0.12'], port_security=['fa:16:3e:85:6c:3c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30f0e486-2dc6-492c-9891-5f02237d7435', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20bce94a-76bb-4cce-8d86-d3a6c4976306', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=b394a80d-1857-49b1-bd4f-a2a675cc7ebe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:03:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:56.904 162689 INFO neutron.agent.ovn.metadata.agent [-] Port b394a80d-1857-49b1-bd4f-a2a675cc7ebe in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 unbound from our chassis
Nov 22 04:03:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:56.907 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4670b112-9f63-4a03-8d79-91f581c69c03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:03:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:56.908 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9236c960-a464-4778-9bc5-c6fe6d79c751]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:56.908 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 namespace which is not needed anymore
Nov 22 04:03:56 compute-0 nova_compute[253461]: 2025-11-22 04:03:56.931 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:56 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Nov 22 04:03:56 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 17.706s CPU time.
Nov 22 04:03:56 compute-0 systemd-machined[215728]: Machine qemu-17-instance-00000011 terminated.
Nov 22 04:03:57 compute-0 kernel: tapb394a80d-18: entered promiscuous mode
Nov 22 04:03:57 compute-0 NetworkManager[48916]: <info>  [1763784237.0299] manager: (tapb394a80d-18): new Tun device (/org/freedesktop/NetworkManager/Devices/111)
Nov 22 04:03:57 compute-0 ovn_controller[152691]: 2025-11-22T04:03:57Z|00210|binding|INFO|Claiming lport b394a80d-1857-49b1-bd4f-a2a675cc7ebe for this chassis.
Nov 22 04:03:57 compute-0 ovn_controller[152691]: 2025-11-22T04:03:57Z|00211|binding|INFO|b394a80d-1857-49b1-bd4f-a2a675cc7ebe: Claiming fa:16:3e:85:6c:3c 10.100.0.12
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.031 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:57 compute-0 kernel: tapb394a80d-18 (unregistering): left promiscuous mode
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.055 253465 INFO nova.virt.libvirt.driver [-] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Instance destroyed successfully.
Nov 22 04:03:57 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[283783]: [NOTICE]   (283787) : haproxy version is 2.8.14-c23fe91
Nov 22 04:03:57 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[283783]: [NOTICE]   (283787) : path to executable is /usr/sbin/haproxy
Nov 22 04:03:57 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[283783]: [WARNING]  (283787) : Exiting Master process...
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.061 253465 DEBUG nova.objects.instance [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'resources' on Instance uuid 30f0e486-2dc6-492c-9891-5f02237d7435 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:03:57 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[283783]: [ALERT]    (283787) : Current worker (283789) exited with code 143 (Terminated)
Nov 22 04:03:57 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[283783]: [WARNING]  (283787) : All workers exited. Exiting... (0)
Nov 22 04:03:57 compute-0 ovn_controller[152691]: 2025-11-22T04:03:57Z|00212|if_status|INFO|Dropped 2 log messages in last 288 seconds (most recently, 288 seconds ago) due to excessive rate
Nov 22 04:03:57 compute-0 ovn_controller[152691]: 2025-11-22T04:03:57Z|00213|if_status|INFO|Not setting lport b394a80d-1857-49b1-bd4f-a2a675cc7ebe down as sb is readonly
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.066 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:57 compute-0 systemd[1]: libpod-9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f.scope: Deactivated successfully.
Nov 22 04:03:57 compute-0 podman[285972]: 2025-11-22 04:03:57.076134436 +0000 UTC m=+0.065747583 container died 9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:03:57 compute-0 ovn_controller[152691]: 2025-11-22T04:03:57Z|00214|binding|INFO|Releasing lport b394a80d-1857-49b1-bd4f-a2a675cc7ebe from this chassis (sb_readonly=0)
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.096 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:6c:3c 10.100.0.12'], port_security=['fa:16:3e:85:6c:3c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30f0e486-2dc6-492c-9891-5f02237d7435', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20bce94a-76bb-4cce-8d86-d3a6c4976306', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=b394a80d-1857-49b1-bd4f-a2a675cc7ebe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.101 253465 DEBUG nova.virt.libvirt.vif [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1287261252',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1287261252',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1287261252',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBA0wizjV88v9pjFVdz4W0Dqu87LyHlDZL/mj+Xssyoqdm5h1EI/pY5eZoXAS94VRdlC25e0MWyvAUI01U92avGCuXRAJMD+18vkkRHL8+54054r1q8yWW+jLr1jlNKumHg==',key_name='tempest-keypair-1850478184',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:02:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-2rdi0c72',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:02:33Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=30f0e486-2dc6-492c-9891-5f02237d7435,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "address": "fa:16:3e:85:6c:3c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb394a80d-18", "ovs_interfaceid": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.102 253465 DEBUG nova.network.os_vif_util [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "address": "fa:16:3e:85:6c:3c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb394a80d-18", "ovs_interfaceid": "b394a80d-1857-49b1-bd4f-a2a675cc7ebe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.104 253465 DEBUG nova.network.os_vif_util [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:6c:3c,bridge_name='br-int',has_traffic_filtering=True,id=b394a80d-1857-49b1-bd4f-a2a675cc7ebe,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb394a80d-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.104 253465 DEBUG os_vif [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:6c:3c,bridge_name='br-int',has_traffic_filtering=True,id=b394a80d-1857-49b1-bd4f-a2a675cc7ebe,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb394a80d-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:03:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f-userdata-shm.mount: Deactivated successfully.
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.108 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.109 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb394a80d-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
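
The DelPortCommand transaction above maps onto ovsdbapp's Open_vSwitch schema API; a sketch of the equivalent standalone call (the database socket path is an assumed default):

    # Sketch of the DelPortCommand transaction above via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Same semantics as the logged command: drop the tap port from
    # br-int, tolerating its absence.
    api.del_port('tapb394a80d-18', bridge='br-int',
                 if_exists=True).execute(check_error=True)
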
Nov 22 04:03:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6c5f4cb1dd56dd164ea6b95b83df09f83cddb450258c784b5f4cd99808f1431-merged.mount: Deactivated successfully.
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.112 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.114 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.118 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:6c:3c 10.100.0.12'], port_security=['fa:16:3e:85:6c:3c 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '30f0e486-2dc6-492c-9891-5f02237d7435', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '20bce94a-76bb-4cce-8d86-d3a6c4976306', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=b394a80d-1857-49b1-bd4f-a2a675cc7ebe) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.121 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:57 compute-0 podman[285972]: 2025-11-22 04:03:57.121459677 +0000 UTC m=+0.111072824 container cleanup 9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2)
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.127 253465 INFO os_vif [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:6c:3c,bridge_name='br-int',has_traffic_filtering=True,id=b394a80d-1857-49b1-bd4f-a2a675cc7ebe,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb394a80d-18')
Nov 22 04:03:57 compute-0 systemd[1]: libpod-conmon-9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f.scope: Deactivated successfully.
Nov 22 04:03:57 compute-0 podman[286012]: 2025-11-22 04:03:57.1915352 +0000 UTC m=+0.049921796 container remove 9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.200 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[23edcca1-79cf-43b7-9dbe-c79b6080254d]: (4, ('Sat Nov 22 04:03:56 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 (9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f)\n9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f\nSat Nov 22 04:03:57 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 (9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f)\n9fa0b03309887a84759ca28799c81776a77638d51b8c8c7076cbcb67834dbe2f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.201 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4dce73e9-2287-470c-aa06-823d2e8d9ee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.202 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.203 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:57 compute-0 kernel: tap4670b112-90: left promiscuous mode
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.220 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.226 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[45521a81-ac57-430c-b2ee-91e9989916df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.240 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d85a2af0-176c-4b24-8f7d-fc3c457dffac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.241 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[cee68eec-afee-4710-8176-32ab867c64b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.257 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[886947f7-08bf-4c5f-a49e-7469c3f04be1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440394, 'reachable_time': 27313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286045, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
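[Editor's note] The oversized privsep reply above is a netlink RTM_NEWLINK dump of the loopback device inside the ovnmeta- namespace, serialized back over the privsep channel. A minimal sketch of producing such a dump with pyroute2 (the library neutron's ip_lib uses under the hood; the namespace name is copied from the log):

    from pyroute2 import NetNS

    # Open a netlink socket inside the metadata namespace and list links;
    # each message carries the same 'attrs' pairs seen in the reply above.
    with NetNS('ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_MTU'))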
Nov 22 04:03:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d4670b112\x2d9f63\x2d4a03\x2d8d79\x2d91f581c69c03.mount: Deactivated successfully.
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.262 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.263 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[4ec8be76-bda8-403c-bf2d-f6f207efda0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.263 162689 INFO neutron.agent.ovn.metadata.agent [-] Port b394a80d-1857-49b1-bd4f-a2a675cc7ebe in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 unbound from our chassis
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.265 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4670b112-9f63-4a03-8d79-91f581c69c03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.268 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[fc2ebffc-8d13-45d9-bb15-06dcc592181e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.268 162689 INFO neutron.agent.ovn.metadata.agent [-] Port b394a80d-1857-49b1-bd4f-a2a675cc7ebe in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 unbound from our chassis
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.270 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4670b112-9f63-4a03-8d79-91f581c69c03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:03:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:03:57.270 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[fcbb38fc-5ba7-4bfe-8bfd-4af98be60bf1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
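[Editor's note] The lines above show the metadata agent tearing down the now-empty namespace: remove_netns runs in the privsep daemon, systemd reaps the namespace bind mount, and the port is reported unbound from the chassis. A hedged sketch of the underlying removal step (names copied from the log; pyroute2 assumed available):

    from pyroute2 import netns

    NS = 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03'

    # remove_netns() boils down to unlinking /var/run/netns/<name>;
    # guard on existence so a repeated call stays idempotent.
    if NS in netns.listnetns():
        netns.remove(NS)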
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.406 253465 INFO nova.virt.libvirt.driver [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Deleting instance files /var/lib/nova/instances/30f0e486-2dc6-492c-9891-5f02237d7435_del
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.418 253465 INFO nova.virt.libvirt.driver [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Deletion of /var/lib/nova/instances/30f0e486-2dc6-492c-9891-5f02237d7435_del complete
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.539 253465 INFO nova.compute.manager [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Took 0.73 seconds to destroy the instance on the hypervisor.
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.542 253465 DEBUG oslo.service.loopingcall [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.543 253465 DEBUG nova.compute.manager [-] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.543 253465 DEBUG nova.network.neutron [-] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.731 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.802 253465 DEBUG nova.compute.manager [req-e2b4bdd3-af0f-4390-a606-818e7af03738 req-15ce1619-80a3-40cf-b281-0485b5bd1617 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Received event network-vif-unplugged-b394a80d-1857-49b1-bd4f-a2a675cc7ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.803 253465 DEBUG oslo_concurrency.lockutils [req-e2b4bdd3-af0f-4390-a606-818e7af03738 req-15ce1619-80a3-40cf-b281-0485b5bd1617 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.803 253465 DEBUG oslo_concurrency.lockutils [req-e2b4bdd3-af0f-4390-a606-818e7af03738 req-15ce1619-80a3-40cf-b281-0485b5bd1617 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.804 253465 DEBUG oslo_concurrency.lockutils [req-e2b4bdd3-af0f-4390-a606-818e7af03738 req-15ce1619-80a3-40cf-b281-0485b5bd1617 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.804 253465 DEBUG nova.compute.manager [req-e2b4bdd3-af0f-4390-a606-818e7af03738 req-15ce1619-80a3-40cf-b281-0485b5bd1617 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] No waiting events found dispatching network-vif-unplugged-b394a80d-1857-49b1-bd4f-a2a675cc7ebe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:03:57 compute-0 nova_compute[253461]: 2025-11-22 04:03:57.804 253465 DEBUG nova.compute.manager [req-e2b4bdd3-af0f-4390-a606-818e7af03738 req-15ce1619-80a3-40cf-b281-0485b5bd1617 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Received event network-vif-unplugged-b394a80d-1857-49b1-bd4f-a2a675cc7ebe for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
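[Editor's note] The Acquiring/acquired/released triplets above are oslo.concurrency's lockutils guarding the per-instance event queue while the network-vif-unplugged event is popped. A minimal sketch of the same pattern (illustrative only, not Nova's actual code):

    from oslo_concurrency import lockutils

    instance_uuid = '30f0e486-2dc6-492c-9891-5f02237d7435'

    # Serialize access to this instance's pending-event list, exactly
    # like the "<uuid>-events" lock in the log lines above.
    with lockutils.lock('%s-events' % instance_uuid):
        pass  # pop the waiting event here, if one is registered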
Nov 22 04:03:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.7 MiB/s rd, 26 KiB/s wr, 181 op/s
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.222 253465 DEBUG oslo_concurrency.lockutils [None req-4fccb244-7d69-45f9-800f-3faf556841c4 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.223 253465 DEBUG oslo_concurrency.lockutils [None req-4fccb244-7d69-45f9-800f-3faf556841c4 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.249 253465 INFO nova.compute.manager [None req-4fccb244-7d69-45f9-800f-3faf556841c4 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Detaching volume d7803436-0547-4ad3-bf97-a8025d1d8ea8
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.253 253465 DEBUG nova.network.neutron [-] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.276 253465 INFO nova.compute.manager [-] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Took 1.73 seconds to deallocate network for instance.
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.405 253465 DEBUG nova.compute.manager [req-776ec700-2213-411d-8675-09be748f127f req-181a9114-d8df-4233-8f7c-df382d225bbf f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Received event network-vif-deleted-b394a80d-1857-49b1-bd4f-a2a675cc7ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.422 253465 INFO nova.virt.block_device [None req-4fccb244-7d69-45f9-800f-3faf556841c4 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Attempting to driver detach volume d7803436-0547-4ad3-bf97-a8025d1d8ea8 from mountpoint /dev/vdb
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.434 253465 DEBUG nova.virt.libvirt.driver [None req-4fccb244-7d69-45f9-800f-3faf556841c4 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Attempting to detach device vdb from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.436 253465 DEBUG nova.virt.libvirt.guest [None req-4fccb244-7d69-45f9-800f-3faf556841c4 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:03:59 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:03:59 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-d7803436-0547-4ad3-bf97-a8025d1d8ea8">
Nov 22 04:03:59 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:03:59 compute-0 nova_compute[253461]:   </source>
Nov 22 04:03:59 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:03:59 compute-0 nova_compute[253461]:   <serial>d7803436-0547-4ad3-bf97-a8025d1d8ea8</serial>
Nov 22 04:03:59 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:03:59 compute-0 nova_compute[253461]: </disk>
Nov 22 04:03:59 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.444 253465 INFO nova.virt.libvirt.driver [None req-4fccb244-7d69-45f9-800f-3faf556841c4 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Successfully detached device vdb from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the persistent domain config.
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.445 253465 DEBUG nova.virt.libvirt.driver [None req-4fccb244-7d69-45f9-800f-3faf556841c4 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.446 253465 DEBUG nova.virt.libvirt.guest [None req-4fccb244-7d69-45f9-800f-3faf556841c4 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:03:59 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:03:59 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-d7803436-0547-4ad3-bf97-a8025d1d8ea8">
Nov 22 04:03:59 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:03:59 compute-0 nova_compute[253461]:   </source>
Nov 22 04:03:59 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:03:59 compute-0 nova_compute[253461]:   <serial>d7803436-0547-4ad3-bf97-a8025d1d8ea8</serial>
Nov 22 04:03:59 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:03:59 compute-0 nova_compute[253461]: </disk>
Nov 22 04:03:59 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:03:59 compute-0 ceph-mon[75011]: pgmap v1549: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.7 MiB/s rd, 26 KiB/s wr, 181 op/s
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.469 253465 INFO nova.compute.manager [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Took 0.19 seconds to detach 1 volumes for instance.
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.471 253465 DEBUG nova.compute.manager [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Deleting volume: c8851b72-0ea0-4abf-b3c9-07e9e110de7d _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.575 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763784239.5752974, f916655a-aa1c-4071-b05b-7bd2a8661ce0 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.577 253465 DEBUG nova.virt.libvirt.driver [None req-4fccb244-7d69-45f9-800f-3faf556841c4 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.580 253465 INFO nova.virt.libvirt.driver [None req-4fccb244-7d69-45f9-800f-3faf556841c4 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Successfully detached device vdb from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the live domain config.
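[Editor's note] The detach sequence above is two-phase: the disk is first dropped from the persistent domain definition, then from the live domain (with up to 8 retries, per the "(1/8)" marker), and the driver waits for libvirt's asynchronous device-removed event, which is the DeviceRemovedEvent line above. A minimal sketch with libvirt-python (the qemu:///system URI is the conventional Nova connection, assumed here; XML abbreviated, the full element is logged above):

    import libvirt

    # libvirt matches the device to detach by target dev
    DISK_XML = ('<disk type="network" device="disk">'
                '<target dev="vdb" bus="virtio"/></disk>')

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('f916655a-aa1c-4071-b05b-7bd2a8661ce0')

    # Phase 1: remove from the persistent config only
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    # Phase 2: ask the running guest to release it; completion arrives
    # later as a DEVICE_REMOVED event
    dom.detachDeviceFlags(DISK_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)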
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.716 253465 DEBUG oslo_concurrency.lockutils [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.717 253465 DEBUG oslo_concurrency.lockutils [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.784 253465 DEBUG nova.objects.instance [None req-4fccb244-7d69-45f9-800f-3faf556841c4 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'flavor' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.800 253465 DEBUG oslo_concurrency.processutils [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.840 253465 DEBUG oslo_concurrency.lockutils [None req-4fccb244-7d69-45f9-800f-3faf556841c4 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.888 253465 DEBUG nova.compute.manager [req-54b7b43c-7643-4975-b728-60261be250a9 req-d1564e26-e6f5-45c3-871c-6a42c9634caa f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Received event network-vif-plugged-b394a80d-1857-49b1-bd4f-a2a675cc7ebe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.889 253465 DEBUG oslo_concurrency.lockutils [req-54b7b43c-7643-4975-b728-60261be250a9 req-d1564e26-e6f5-45c3-871c-6a42c9634caa f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.889 253465 DEBUG oslo_concurrency.lockutils [req-54b7b43c-7643-4975-b728-60261be250a9 req-d1564e26-e6f5-45c3-871c-6a42c9634caa f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.890 253465 DEBUG oslo_concurrency.lockutils [req-54b7b43c-7643-4975-b728-60261be250a9 req-d1564e26-e6f5-45c3-871c-6a42c9634caa f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.890 253465 DEBUG nova.compute.manager [req-54b7b43c-7643-4975-b728-60261be250a9 req-d1564e26-e6f5-45c3-871c-6a42c9634caa f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] No waiting events found dispatching network-vif-plugged-b394a80d-1857-49b1-bd4f-a2a675cc7ebe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:03:59 compute-0 nova_compute[253461]: 2025-11-22 04:03:59.890 253465 WARNING nova.compute.manager [req-54b7b43c-7643-4975-b728-60261be250a9 req-d1564e26-e6f5-45c3-871c-6a42c9634caa f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Received unexpected event network-vif-plugged-b394a80d-1857-49b1-bd4f-a2a675cc7ebe for instance with vm_state deleted and task_state None.
Nov 22 04:04:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:04:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3133449611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:04:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3133449611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:04:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3716369285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:04:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3716369285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:04:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/779864445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:00 compute-0 nova_compute[253461]: 2025-11-22 04:04:00.336 253465 DEBUG oslo_concurrency.processutils [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
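[Editor's note] While updating resource usage, Nova refreshes Ceph capacity by shelling out to the ceph CLI; each invocation also shows up as the mon_command dispatch lines from ceph-mon above. A sketch of the same probe with the exact command line logged by processutils:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)
    # cluster-wide totals backing the DISK_GB inventory seen below
    print(stats['stats']['total_bytes'],
          stats['stats']['total_avail_bytes'])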
Nov 22 04:04:00 compute-0 nova_compute[253461]: 2025-11-22 04:04:00.342 253465 DEBUG nova.compute.provider_tree [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:04:00 compute-0 nova_compute[253461]: 2025-11-22 04:04:00.369 253465 DEBUG nova.scheduler.client.report [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:04:00 compute-0 nova_compute[253461]: 2025-11-22 04:04:00.394 253465 DEBUG oslo_concurrency.lockutils [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.4 MiB/s rd, 24 KiB/s wr, 189 op/s
Nov 22 04:04:00 compute-0 nova_compute[253461]: 2025-11-22 04:04:00.453 253465 INFO nova.scheduler.client.report [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Deleted allocations for instance 30f0e486-2dc6-492c-9891-5f02237d7435
Nov 22 04:04:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3133449611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3133449611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3716369285' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3716369285' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/779864445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:00 compute-0 ceph-mon[75011]: pgmap v1550: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.4 MiB/s rd, 24 KiB/s wr, 189 op/s
Nov 22 04:04:00 compute-0 nova_compute[253461]: 2025-11-22 04:04:00.552 253465 DEBUG oslo_concurrency.lockutils [None req-838659f6-a667-4efd-9da1-8012eec5a6fb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "30f0e486-2dc6-492c-9891-5f02237d7435" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Nov 22 04:04:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Nov 22 04:04:01 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Nov 22 04:04:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:01.400 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
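[Editor's note] The transaction above is the metadata agent bumping neutron:ovn-metadata-sb-cfg on its Chassis_Private record so neutron-server can tell which southbound config the agent has processed. A hedged ovsdbapp sketch (the tcp endpoint is illustrative; table, record UUID, and values are from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'tcp:127.0.0.1:6642', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    # Equivalent of the logged DbSetCommand on Chassis_Private
    sb.db_set('Chassis_Private',
              '7d76f7df-fc3b-449d-b505-65b8b0ef9c3a',
              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'})
              ).execute(check_error=True)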
Nov 22 04:04:02 compute-0 nova_compute[253461]: 2025-11-22 04:04:02.114 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:02 compute-0 ceph-mon[75011]: osdmap e379: 3 total, 3 up, 3 in
Nov 22 04:04:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 88 KiB/s wr, 167 op/s
Nov 22 04:04:02 compute-0 nova_compute[253461]: 2025-11-22 04:04:02.601 253465 DEBUG oslo_concurrency.lockutils [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:02 compute-0 nova_compute[253461]: 2025-11-22 04:04:02.602 253465 DEBUG oslo_concurrency.lockutils [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:02 compute-0 nova_compute[253461]: 2025-11-22 04:04:02.618 253465 DEBUG nova.objects.instance [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'flavor' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:04:02 compute-0 nova_compute[253461]: 2025-11-22 04:04:02.660 253465 DEBUG oslo_concurrency.lockutils [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:02 compute-0 nova_compute[253461]: 2025-11-22 04:04:02.787 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:02 compute-0 nova_compute[253461]: 2025-11-22 04:04:02.869 253465 DEBUG oslo_concurrency.lockutils [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:02 compute-0 nova_compute[253461]: 2025-11-22 04:04:02.870 253465 DEBUG oslo_concurrency.lockutils [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:02 compute-0 nova_compute[253461]: 2025-11-22 04:04:02.870 253465 INFO nova.compute.manager [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Attaching volume 0f383e03-e86f-42f0-884c-e26929f8bf00 to /dev/vdb
Nov 22 04:04:03 compute-0 ceph-mon[75011]: pgmap v1552: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 88 KiB/s wr, 167 op/s
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.181 253465 DEBUG os_brick.utils [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.183 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.191 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.192 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5ae3f6-e354-4566-84e5-c352d42718fd]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.193 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.199 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.006s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.200 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad24fcb-30d5-4a5d-8b62-e9e7a8cb2861]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.201 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.209 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.210 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[16671a5c-4cbe-4721-9350-06275b952d06]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.211 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[597c5582-4fe7-445d-b8f5-16a43e7bb4fc]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.212 253465 DEBUG oslo_concurrency.processutils [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.246 253465 DEBUG oslo_concurrency.processutils [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.251 253465 DEBUG os_brick.initiator.connectors.lightos [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.252 253465 DEBUG os_brick.initiator.connectors.lightos [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.252 253465 DEBUG os_brick.initiator.connectors.lightos [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.253 253465 DEBUG os_brick.utils [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
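[Editor's note] The ==>/<== trace pair above wraps os_brick's get_connector_properties(), which probes multipathd, the iSCSI initiator name, the root filesystem source, and the NVMe host identity to assemble the connector dict in the return line. A minimal sketch with the arguments exactly as logged:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com')
    # props carries the initiator IQN, host NQN, multipath flags, etc.
    print(props['initiator'], props['nqn'])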
Nov 22 04:04:03 compute-0 nova_compute[253461]: 2025-11-22 04:04:03.254 253465 DEBUG nova.virt.block_device [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Updating existing volume attachment record: 0c964289-98e7-4866-9944-4eb1750ae628 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:04:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:04:03 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2056312523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:04:04 compute-0 nova_compute[253461]: 2025-11-22 04:04:04.049 253465 DEBUG nova.objects.instance [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'flavor' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:04:04 compute-0 nova_compute[253461]: 2025-11-22 04:04:04.082 253465 DEBUG nova.virt.libvirt.driver [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Attempting to attach volume 0f383e03-e86f-42f0-884c-e26929f8bf00 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 22 04:04:04 compute-0 nova_compute[253461]: 2025-11-22 04:04:04.084 253465 DEBUG nova.virt.libvirt.guest [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 04:04:04 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:04:04 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-0f383e03-e86f-42f0-884c-e26929f8bf00">
Nov 22 04:04:04 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:04:04 compute-0 nova_compute[253461]:   </source>
Nov 22 04:04:04 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 04:04:04 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:04:04 compute-0 nova_compute[253461]:   </auth>
Nov 22 04:04:04 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:04:04 compute-0 nova_compute[253461]:   <serial>0f383e03-e86f-42f0-884c-e26929f8bf00</serial>
Nov 22 04:04:04 compute-0 nova_compute[253461]: </disk>
Nov 22 04:04:04 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Nov 22 04:04:04 compute-0 nova_compute[253461]: 2025-11-22 04:04:04.236 253465 DEBUG nova.virt.libvirt.driver [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:04:04 compute-0 nova_compute[253461]: 2025-11-22 04:04:04.237 253465 DEBUG nova.virt.libvirt.driver [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:04:04 compute-0 nova_compute[253461]: 2025-11-22 04:04:04.237 253465 DEBUG nova.virt.libvirt.driver [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:04:04 compute-0 nova_compute[253461]: 2025-11-22 04:04:04.238 253465 DEBUG nova.virt.libvirt.driver [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No VIF found with MAC fa:16:3e:67:9b:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:04:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Nov 22 04:04:04 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2056312523' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:04:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Nov 22 04:04:04 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Nov 22 04:04:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 199 op/s
Nov 22 04:04:04 compute-0 nova_compute[253461]: 2025-11-22 04:04:04.477 253465 DEBUG oslo_concurrency.lockutils [None req-67be8877-963c-4410-a901-fd34fe0a662c a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:05 compute-0 ovn_controller[152691]: 2025-11-22T04:04:05Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2f:be:44 10.100.0.13
Nov 22 04:04:05 compute-0 ovn_controller[152691]: 2025-11-22T04:04:05Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2f:be:44 10.100.0.13
Nov 22 04:04:05 compute-0 nova_compute[253461]: 2025-11-22 04:04:05.247 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763784230.2461421, 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:04:05 compute-0 nova_compute[253461]: 2025-11-22 04:04:05.248 253465 INFO nova.compute.manager [-] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] VM Stopped (Lifecycle Event)
Nov 22 04:04:05 compute-0 nova_compute[253461]: 2025-11-22 04:04:05.264 253465 DEBUG nova.compute.manager [None req-62a7b1aa-d637-490b-80b3-9df7fc31f302 - - - - - -] [instance: 3371e7b7-8ad9-42ef-8a8d-8afa9840abfa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:04:05 compute-0 ceph-mon[75011]: osdmap e380: 3 total, 3 up, 3 in
Nov 22 04:04:05 compute-0 ceph-mon[75011]: pgmap v1554: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.2 MiB/s wr, 199 op/s
Nov 22 04:04:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:04:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:04:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:04:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:04:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:04:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:04:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 525 KiB/s rd, 2.5 MiB/s wr, 192 op/s
Nov 22 04:04:06 compute-0 ceph-mon[75011]: pgmap v1555: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 525 KiB/s rd, 2.5 MiB/s wr, 192 op/s
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.029 253465 DEBUG oslo_concurrency.lockutils [None req-c10478ed-1cce-437a-b3f9-79f07d822b6b a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.029 253465 DEBUG oslo_concurrency.lockutils [None req-c10478ed-1cce-437a-b3f9-79f07d822b6b a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.048 253465 INFO nova.compute.manager [None req-c10478ed-1cce-437a-b3f9-79f07d822b6b a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Detaching volume 0f383e03-e86f-42f0-884c-e26929f8bf00
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.117 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.191 253465 INFO nova.virt.block_device [None req-c10478ed-1cce-437a-b3f9-79f07d822b6b a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Attempting to driver detach volume 0f383e03-e86f-42f0-884c-e26929f8bf00 from mountpoint /dev/vdb
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.202 253465 DEBUG nova.virt.libvirt.driver [None req-c10478ed-1cce-437a-b3f9-79f07d822b6b a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Attempting to detach device vdb from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.202 253465 DEBUG nova.virt.libvirt.guest [None req-c10478ed-1cce-437a-b3f9-79f07d822b6b a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:04:07 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:04:07 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-0f383e03-e86f-42f0-884c-e26929f8bf00">
Nov 22 04:04:07 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:04:07 compute-0 nova_compute[253461]:   </source>
Nov 22 04:04:07 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:04:07 compute-0 nova_compute[253461]:   <serial>0f383e03-e86f-42f0-884c-e26929f8bf00</serial>
Nov 22 04:04:07 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:04:07 compute-0 nova_compute[253461]: </disk>
Nov 22 04:04:07 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.210 253465 INFO nova.virt.libvirt.driver [None req-c10478ed-1cce-437a-b3f9-79f07d822b6b a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Successfully detached device vdb from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the persistent domain config.
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.211 253465 DEBUG nova.virt.libvirt.driver [None req-c10478ed-1cce-437a-b3f9-79f07d822b6b a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.211 253465 DEBUG nova.virt.libvirt.guest [None req-c10478ed-1cce-437a-b3f9-79f07d822b6b a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:04:07 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:04:07 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-0f383e03-e86f-42f0-884c-e26929f8bf00">
Nov 22 04:04:07 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:04:07 compute-0 nova_compute[253461]:   </source>
Nov 22 04:04:07 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:04:07 compute-0 nova_compute[253461]:   <serial>0f383e03-e86f-42f0-884c-e26929f8bf00</serial>
Nov 22 04:04:07 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:04:07 compute-0 nova_compute[253461]: </disk>
Nov 22 04:04:07 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.341 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763784247.3405278, f916655a-aa1c-4071-b05b-7bd2a8661ce0 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.343 253465 DEBUG nova.virt.libvirt.driver [None req-c10478ed-1cce-437a-b3f9-79f07d822b6b a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.346 253465 INFO nova.virt.libvirt.driver [None req-c10478ed-1cce-437a-b3f9-79f07d822b6b a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Successfully detached device vdb from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the live domain config.
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.571 253465 DEBUG nova.objects.instance [None req-c10478ed-1cce-437a-b3f9-79f07d822b6b a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'flavor' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.606 253465 DEBUG oslo_concurrency.lockutils [None req-c10478ed-1cce-437a-b3f9-79f07d822b6b a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:07 compute-0 nova_compute[253461]: 2025-11-22 04:04:07.790 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 637 KiB/s rd, 3.2 MiB/s wr, 174 op/s
Nov 22 04:04:08 compute-0 ceph-mon[75011]: pgmap v1556: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 637 KiB/s rd, 3.2 MiB/s wr, 174 op/s
Nov 22 04:04:09 compute-0 podman[286100]: 2025-11-22 04:04:09.414979716 +0000 UTC m=+0.082679021 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
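[Editor's note] The container health_status record above is emitted by podman's periodic healthcheck timer running the configured /openstack/healthcheck test inside the multipathd container. The same probe can be run by hand; a sketch (container name from the log):

    import subprocess

    # Equivalent to: podman healthcheck run multipathd
    # Exit status 0 means healthy, non-zero means the check failed.
    rc = subprocess.call(['podman', 'healthcheck', 'run', 'multipathd'])
    print('healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)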
Nov 22 04:04:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 723 KiB/s rd, 2.9 MiB/s wr, 218 op/s
Nov 22 04:04:10 compute-0 nova_compute[253461]: 2025-11-22 04:04:10.463 253465 DEBUG oslo_concurrency.lockutils [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:10 compute-0 nova_compute[253461]: 2025-11-22 04:04:10.464 253465 DEBUG oslo_concurrency.lockutils [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:10 compute-0 nova_compute[253461]: 2025-11-22 04:04:10.496 253465 DEBUG nova.objects.instance [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'flavor' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:04:10 compute-0 ceph-mon[75011]: pgmap v1557: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 723 KiB/s rd, 2.9 MiB/s wr, 218 op/s
Nov 22 04:04:10 compute-0 nova_compute[253461]: 2025-11-22 04:04:10.595 253465 DEBUG oslo_concurrency.lockutils [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:10 compute-0 nova_compute[253461]: 2025-11-22 04:04:10.863 253465 DEBUG oslo_concurrency.lockutils [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:10 compute-0 nova_compute[253461]: 2025-11-22 04:04:10.864 253465 DEBUG oslo_concurrency.lockutils [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:10 compute-0 nova_compute[253461]: 2025-11-22 04:04:10.864 253465 INFO nova.compute.manager [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Attaching volume 541e4f8e-9294-433f-bcf1-39b52170a39a to /dev/vdb
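[editor: the reserve_block_device_name/attach_volume sequence above is the server-side half of a volume attach. A sketch of the client call that triggers it, with the UUIDs taken from the log; the --device hint may be ignored by libvirt, as the driver itself notes for /dev/vda later in this log:]

    import subprocess

    # openstack CLI equivalent of the attach request being processed above
    subprocess.run([
        "openstack", "server", "add", "volume",
        "f916655a-aa1c-4071-b05b-7bd2a8661ce0",   # instance UUID from the log
        "541e4f8e-9294-433f-bcf1-39b52170a39a",   # volume UUID from the log
        "--device", "/dev/vdb",
    ], check=True)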
Nov 22 04:04:10 compute-0 nova_compute[253461]: 2025-11-22 04:04:10.999 253465 DEBUG os_brick.utils [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.000 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.012 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.012 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[5496df5a-d0a2-4c36-b2fb-852b055dbe1d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.014 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.021 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.021 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[32b8ea38-c328-41b5-b958-b34b787c9bf0]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.023 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.030 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.030 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[d0fea641-0b98-4392-946f-0e71ed861d56]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.032 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[9e30a483-493a-4084-8c7f-0d8d9a09163e]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.033 253465 DEBUG oslo_concurrency.processutils [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.060 253465 DEBUG oslo_concurrency.processutils [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.064 253465 DEBUG os_brick.initiator.connectors.lightos [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.065 253465 DEBUG os_brick.initiator.connectors.lightos [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.065 253465 DEBUG os_brick.initiator.connectors.lightos [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.066 253465 DEBUG os_brick.utils [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
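[editor: the trace above is os-brick assembling the connector properties dict (iSCSI initiator IQN, NVMe host NQN, system UUID, multipath flags) that nova hands to cinder. A minimal sketch of the traced call; argument values are copied from the '==>' line:]

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com',
    )
    # returns the dict logged at '<== get_connector_properties'
    print(props['initiator'], props['nqn'])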
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.066 253465 DEBUG nova.virt.block_device [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Updating existing volume attachment record: ba31b627-1937-414a-91a0-11c43e9a3226 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:04:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e380 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Nov 22 04:04:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Nov 22 04:04:11 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Nov 22 04:04:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:04:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1281758689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.905 253465 DEBUG nova.objects.instance [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'flavor' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.929 253465 DEBUG nova.virt.libvirt.driver [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Attempting to attach volume 541e4f8e-9294-433f-bcf1-39b52170a39a with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
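[editor: the warning above fires because discard is requested while target_bus is virtio, which this driver version does not treat as trim-capable; exposing disks over virtio-scsi is the usual remedy. A hedged sketch of the image properties that select that bus — 'my-image' is a placeholder and this is an assumed fix, not one taken from this log:]

    import subprocess

    subprocess.run([
        "openstack", "image", "set",
        "--property", "hw_scsi_model=virtio-scsi",  # virtio-scsi controller
        "--property", "hw_disk_bus=scsi",           # attach disks on that bus
        "my-image",
    ], check=True)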
Nov 22 04:04:11 compute-0 nova_compute[253461]: 2025-11-22 04:04:11.932 253465 DEBUG nova.virt.libvirt.guest [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 04:04:11 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:04:11 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-541e4f8e-9294-433f-bcf1-39b52170a39a">
Nov 22 04:04:11 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:04:11 compute-0 nova_compute[253461]:   </source>
Nov 22 04:04:11 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 04:04:11 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:04:11 compute-0 nova_compute[253461]:   </auth>
Nov 22 04:04:11 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:04:11 compute-0 nova_compute[253461]:   <serial>541e4f8e-9294-433f-bcf1-39b52170a39a</serial>
Nov 22 04:04:11 compute-0 nova_compute[253461]: </disk>
Nov 22 04:04:11 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
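[editor: the XML block above is what nova.virt.libvirt.guest.attach_device hands to libvirt. A hedged sketch of the underlying libvirt-python call; the URI and lookup are illustrative and the XML is abbreviated to the block logged above:]

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("f916655a-aa1c-4071-b05b-7bd2a8661ce0")
    disk_xml = '<disk type="network" device="disk">...</disk>'  # as logged
    # AFFECT_LIVE hot-plugs into the running guest; AFFECT_CONFIG persists it
    dom.attachDeviceFlags(
        disk_xml,
        libvirt.VIR_DOMAIN_AFFECT_LIVE | libvirt.VIR_DOMAIN_AFFECT_CONFIG,
    )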
Nov 22 04:04:12 compute-0 nova_compute[253461]: 2025-11-22 04:04:12.047 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763784237.0461237, 30f0e486-2dc6-492c-9891-5f02237d7435 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:04:12 compute-0 nova_compute[253461]: 2025-11-22 04:04:12.048 253465 INFO nova.compute.manager [-] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] VM Stopped (Lifecycle Event)
Nov 22 04:04:12 compute-0 nova_compute[253461]: 2025-11-22 04:04:12.078 253465 DEBUG nova.compute.manager [None req-7a6d2ae0-56f4-421a-9000-0c9e99abd703 - - - - - -] [instance: 30f0e486-2dc6-492c-9891-5f02237d7435] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:04:12 compute-0 nova_compute[253461]: 2025-11-22 04:04:12.120 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:12 compute-0 ceph-mon[75011]: osdmap e381: 3 total, 3 up, 3 in
Nov 22 04:04:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1281758689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:04:12 compute-0 nova_compute[253461]: 2025-11-22 04:04:12.162 253465 DEBUG nova.virt.libvirt.driver [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:04:12 compute-0 nova_compute[253461]: 2025-11-22 04:04:12.163 253465 DEBUG nova.virt.libvirt.driver [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:04:12 compute-0 nova_compute[253461]: 2025-11-22 04:04:12.163 253465 DEBUG nova.virt.libvirt.driver [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:04:12 compute-0 nova_compute[253461]: 2025-11-22 04:04:12.163 253465 DEBUG nova.virt.libvirt.driver [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] No VIF found with MAC fa:16:3e:67:9b:8d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:04:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 143 op/s
Nov 22 04:04:12 compute-0 nova_compute[253461]: 2025-11-22 04:04:12.599 253465 DEBUG oslo_concurrency.lockutils [None req-50d58b40-7c2b-40a4-8632-aed9d938c4a6 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:12 compute-0 nova_compute[253461]: 2025-11-22 04:04:12.794 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:13 compute-0 ceph-mon[75011]: pgmap v1559: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 143 op/s
Nov 22 04:04:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.4 MiB/s rd, 1003 KiB/s wr, 128 op/s
Nov 22 04:04:14 compute-0 ceph-mon[75011]: pgmap v1560: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.4 MiB/s rd, 1003 KiB/s wr, 128 op/s
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.044 253465 DEBUG oslo_concurrency.lockutils [None req-e5aa2bd8-d09b-4d5e-b22a-5b63214b9fff a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.044 253465 DEBUG oslo_concurrency.lockutils [None req-e5aa2bd8-d09b-4d5e-b22a-5b63214b9fff a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.057 253465 INFO nova.compute.manager [None req-e5aa2bd8-d09b-4d5e-b22a-5b63214b9fff a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Detaching volume 541e4f8e-9294-433f-bcf1-39b52170a39a
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.206 253465 INFO nova.virt.block_device [None req-e5aa2bd8-d09b-4d5e-b22a-5b63214b9fff a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Attempting to driver detach volume 541e4f8e-9294-433f-bcf1-39b52170a39a from mountpoint /dev/vdb
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.218 253465 DEBUG nova.virt.libvirt.driver [None req-e5aa2bd8-d09b-4d5e-b22a-5b63214b9fff a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Attempting to detach device vdb from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.219 253465 DEBUG nova.virt.libvirt.guest [None req-e5aa2bd8-d09b-4d5e-b22a-5b63214b9fff a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:04:15 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:04:15 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-541e4f8e-9294-433f-bcf1-39b52170a39a">
Nov 22 04:04:15 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:04:15 compute-0 nova_compute[253461]:   </source>
Nov 22 04:04:15 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:04:15 compute-0 nova_compute[253461]:   <serial>541e4f8e-9294-433f-bcf1-39b52170a39a</serial>
Nov 22 04:04:15 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:04:15 compute-0 nova_compute[253461]: </disk>
Nov 22 04:04:15 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.229 253465 INFO nova.virt.libvirt.driver [None req-e5aa2bd8-d09b-4d5e-b22a-5b63214b9fff a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Successfully detached device vdb from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the persistent domain config.
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.229 253465 DEBUG nova.virt.libvirt.driver [None req-e5aa2bd8-d09b-4d5e-b22a-5b63214b9fff a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.230 253465 DEBUG nova.virt.libvirt.guest [None req-e5aa2bd8-d09b-4d5e-b22a-5b63214b9fff a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:04:15 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:04:15 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-541e4f8e-9294-433f-bcf1-39b52170a39a">
Nov 22 04:04:15 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:04:15 compute-0 nova_compute[253461]:   </source>
Nov 22 04:04:15 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:04:15 compute-0 nova_compute[253461]:   <serial>541e4f8e-9294-433f-bcf1-39b52170a39a</serial>
Nov 22 04:04:15 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:04:15 compute-0 nova_compute[253461]: </disk>
Nov 22 04:04:15 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.370 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763784255.3702567, f916655a-aa1c-4071-b05b-7bd2a8661ce0 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.373 253465 DEBUG nova.virt.libvirt.driver [None req-e5aa2bd8-d09b-4d5e-b22a-5b63214b9fff a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.378 253465 INFO nova.virt.libvirt.driver [None req-e5aa2bd8-d09b-4d5e-b22a-5b63214b9fff a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Successfully detached device vdb from instance f916655a-aa1c-4071-b05b-7bd2a8661ce0 from the live domain config.
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.622 253465 DEBUG nova.objects.instance [None req-e5aa2bd8-d09b-4d5e-b22a-5b63214b9fff a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'flavor' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:04:15 compute-0 nova_compute[253461]: 2025-11-22 04:04:15.682 253465 DEBUG oslo_concurrency.lockutils [None req-e5aa2bd8-d09b-4d5e-b22a-5b63214b9fff a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
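[editor: the detach above is two-phase — drop the disk from the persistent domain config, then from the live domain, and only trust the live detach once libvirt emits the DeviceRemovedEvent for alias virtio-disk1 (with up to 8 retries, per the '(1/8)' counter). A hedged libvirt-python sketch of the same shape; event-loop setup is minimal and the XML is abbreviated:]

    import libvirt

    libvirt.virEventRegisterDefaultImpl()  # required before event callbacks
    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("f916655a-aa1c-4071-b05b-7bd2a8661ce0")
    disk_xml = '<disk type="network" device="disk">...</disk>'  # as logged

    def removed(conn, dom, dev, opaque):
        print("libvirt confirmed removal of", dev)  # e.g. "virtio-disk1"

    conn.domainEventRegisterAny(
        dom, libvirt.VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED, removed, None)
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)  # persistent
    dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)    # live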
Nov 22 04:04:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.4 MiB/s rd, 770 KiB/s wr, 107 op/s
Nov 22 04:04:16 compute-0 ceph-mon[75011]: pgmap v1561: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.4 MiB/s rd, 770 KiB/s wr, 107 op/s
Nov 22 04:04:17 compute-0 nova_compute[253461]: 2025-11-22 04:04:17.123 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:04:17 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1420573138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:04:17 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1420573138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:17 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1420573138' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:17 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1420573138' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:17 compute-0 nova_compute[253461]: 2025-11-22 04:04:17.796 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 97 op/s
Nov 22 04:04:18 compute-0 ceph-mon[75011]: pgmap v1562: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 MiB/s wr, 97 op/s
Nov 22 04:04:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:04:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3139198657' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:04:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3139198657' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
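[editor: the handle_command/dispatch pairs above are the 'client.openstack' user polling pool capacity and quota over the mon interface. A sketch of the same two commands via python-rados; the conffile path and client id are taken from context, not verified against this deployment:]

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    for cmd in ({"prefix": "df", "format": "json"},
                {"prefix": "osd pool get-quota", "pool": "volumes",
                 "format": "json"}):
        # mirrors the mon_command(...) dispatches logged above
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(cmd["prefix"], "->", ret)
    cluster.shutdown()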
Nov 22 04:04:19 compute-0 sudo[286151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:04:19 compute-0 sudo[286151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:19 compute-0 sudo[286151]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:19 compute-0 sudo[286176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:04:19 compute-0 sudo[286176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:19 compute-0 sudo[286176]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:19 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3139198657' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:19 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3139198657' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:19 compute-0 sudo[286201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:04:19 compute-0 sudo[286201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:19 compute-0 sudo[286201]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:19 compute-0 sudo[286226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:04:19 compute-0 sudo[286226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:20 compute-0 sudo[286226]: pam_unix(sudo:session): session closed for user root
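[editor: the sudo lines above are cephadm's SSH orchestration — probe with /bin/true, locate python3, then run the deployed cephadm binary's gather-facts step. A hedged sketch of reproducing that step by hand; a system-wide cephadm behaves like the hashed copy under /var/lib/ceph:]

    import json
    import subprocess

    out = subprocess.run(["sudo", "cephadm", "gather-facts"],
                         capture_output=True, text=True, check=True).stdout
    facts = json.loads(out)   # hostname, memory, NICs, block devices, ...
    print(facts.get("hostname"))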
Nov 22 04:04:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:04:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:04:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:04:20 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:04:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:04:20 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:04:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:04:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3573130375' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:20 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev b4a12d21-0921-4ad2-946d-232f5d2baa02 does not exist
Nov 22 04:04:20 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev f2c70964-7848-4282-8a55-c52795048b73 does not exist
Nov 22 04:04:20 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ae6ca1ad-e312-428a-a8c7-964b000f23eb does not exist
Nov 22 04:04:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:04:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:04:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:04:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3573130375' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:04:20 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:04:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:04:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:04:20 compute-0 sudo[286284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:04:20 compute-0 sudo[286284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 105 op/s
Nov 22 04:04:20 compute-0 sudo[286284]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:20 compute-0 sudo[286309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:04:20 compute-0 sudo[286309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:20 compute-0 sudo[286309]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:04:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:04:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:04:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3573130375' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:04:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3573130375' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:04:20 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:04:20 compute-0 ceph-mon[75011]: pgmap v1563: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 105 op/s
Nov 22 04:04:20 compute-0 sudo[286334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:04:20 compute-0 sudo[286334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:20 compute-0 sudo[286334]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:20 compute-0 sudo[286359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:04:20 compute-0 sudo[286359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
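[editor: the command above is cephadm preparing OSDs on three pre-built LVs via a throwaway ceph container. A hedged dry-run sketch — ceph-volume's lvm batch accepts --report to preview the plan without creating anything; the LV paths are copied from the logged command line, and running it outside cephadm's container assumes ceph-volume is installed on the host:]

    import subprocess

    subprocess.run([
        "sudo", "ceph-volume", "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0",
        "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
        "--report",   # print the intended OSD layout instead of applying it
    ], check=True)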
Nov 22 04:04:20 compute-0 nova_compute[253461]: 2025-11-22 04:04:20.916 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "0a4b1bbf-edde-478c-91f0-40e5825475fd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:20 compute-0 nova_compute[253461]: 2025-11-22 04:04:20.919 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:20 compute-0 nova_compute[253461]: 2025-11-22 04:04:20.946 253465 DEBUG nova.compute.manager [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:04:21 compute-0 nova_compute[253461]: 2025-11-22 04:04:21.027 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:21 compute-0 nova_compute[253461]: 2025-11-22 04:04:21.028 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:21 compute-0 nova_compute[253461]: 2025-11-22 04:04:21.035 253465 DEBUG nova.virt.hardware [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:04:21 compute-0 nova_compute[253461]: 2025-11-22 04:04:21.035 253465 INFO nova.compute.claims [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Claim successful on node compute-0.ctlplane.example.com
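[editor: the numa_fit debug above means the flavor requested no guest NUMA topology, so the resource claim skipped NUMA fitting. Requesting one is a flavor extra spec; a hedged sketch where 'my-flavor' is a placeholder:]

    import subprocess

    subprocess.run([
        "openstack", "flavor", "set",
        "--property", "hw:numa_nodes=1",   # ask for a 1-node guest topology
        "my-flavor",
    ], check=True)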
Nov 22 04:04:21 compute-0 podman[286425]: 2025-11-22 04:04:21.066697283 +0000 UTC m=+0.049187369 container create ddfa5f7f1895f4df4a922a8053c79c82ef052c3f0b48d69db116bda1695bf64a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Nov 22 04:04:21 compute-0 systemd[1]: Started libpod-conmon-ddfa5f7f1895f4df4a922a8053c79c82ef052c3f0b48d69db116bda1695bf64a.scope.
Nov 22 04:04:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:04:21 compute-0 podman[286425]: 2025-11-22 04:04:21.046093525 +0000 UTC m=+0.028583581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:04:21 compute-0 podman[286425]: 2025-11-22 04:04:21.157257311 +0000 UTC m=+0.139747437 container init ddfa5f7f1895f4df4a922a8053c79c82ef052c3f0b48d69db116bda1695bf64a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:04:21 compute-0 podman[286425]: 2025-11-22 04:04:21.166481373 +0000 UTC m=+0.148971429 container start ddfa5f7f1895f4df4a922a8053c79c82ef052c3f0b48d69db116bda1695bf64a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:04:21 compute-0 nova_compute[253461]: 2025-11-22 04:04:21.165 253465 DEBUG oslo_concurrency.processutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:21 compute-0 podman[286425]: 2025-11-22 04:04:21.17479017 +0000 UTC m=+0.157280266 container attach ddfa5f7f1895f4df4a922a8053c79c82ef052c3f0b48d69db116bda1695bf64a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:04:21 compute-0 epic_easley[286441]: 167 167
Nov 22 04:04:21 compute-0 systemd[1]: libpod-ddfa5f7f1895f4df4a922a8053c79c82ef052c3f0b48d69db116bda1695bf64a.scope: Deactivated successfully.
Nov 22 04:04:21 compute-0 conmon[286441]: conmon ddfa5f7f1895f4df4a92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ddfa5f7f1895f4df4a922a8053c79c82ef052c3f0b48d69db116bda1695bf64a.scope/container/memory.events
Nov 22 04:04:21 compute-0 podman[286425]: 2025-11-22 04:04:21.179873168 +0000 UTC m=+0.162363234 container died ddfa5f7f1895f4df4a922a8053c79c82ef052c3f0b48d69db116bda1695bf64a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:04:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a7f9d26474e08fd431b45722bc9272874f9f592b616881b85615f7bae79c354-merged.mount: Deactivated successfully.
Nov 22 04:04:21 compute-0 podman[286425]: 2025-11-22 04:04:21.254347928 +0000 UTC m=+0.236837984 container remove ddfa5f7f1895f4df4a922a8053c79c82ef052c3f0b48d69db116bda1695bf64a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:04:21 compute-0 systemd[1]: libpod-conmon-ddfa5f7f1895f4df4a922a8053c79c82ef052c3f0b48d69db116bda1695bf64a.scope: Deactivated successfully.
Nov 22 04:04:21 compute-0 podman[286487]: 2025-11-22 04:04:21.526682916 +0000 UTC m=+0.076885092 container create 6011742535d2b721111df79c10f5a877afa783f34bf9e43fbd006f7062799c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:04:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Nov 22 04:04:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Nov 22 04:04:21 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Nov 22 04:04:21 compute-0 systemd[1]: Started libpod-conmon-6011742535d2b721111df79c10f5a877afa783f34bf9e43fbd006f7062799c09.scope.
Nov 22 04:04:21 compute-0 podman[286487]: 2025-11-22 04:04:21.494691103 +0000 UTC m=+0.044893329 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:04:21 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d118d17fbd842cba303fbd3036fd75adc4e25bb3858f45cb441ea1822cdbcf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d118d17fbd842cba303fbd3036fd75adc4e25bb3858f45cb441ea1822cdbcf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d118d17fbd842cba303fbd3036fd75adc4e25bb3858f45cb441ea1822cdbcf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d118d17fbd842cba303fbd3036fd75adc4e25bb3858f45cb441ea1822cdbcf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d118d17fbd842cba303fbd3036fd75adc4e25bb3858f45cb441ea1822cdbcf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:21 compute-0 podman[286487]: 2025-11-22 04:04:21.637379292 +0000 UTC m=+0.187581528 container init 6011742535d2b721111df79c10f5a877afa783f34bf9e43fbd006f7062799c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 04:04:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:04:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1612902221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:21 compute-0 podman[286487]: 2025-11-22 04:04:21.652600062 +0000 UTC m=+0.202802239 container start 6011742535d2b721111df79c10f5a877afa783f34bf9e43fbd006f7062799c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:04:21 compute-0 podman[286487]: 2025-11-22 04:04:21.657622935 +0000 UTC m=+0.207825121 container attach 6011742535d2b721111df79c10f5a877afa783f34bf9e43fbd006f7062799c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 04:04:21 compute-0 nova_compute[253461]: 2025-11-22 04:04:21.658 253465 DEBUG oslo_concurrency.processutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:21 compute-0 nova_compute[253461]: 2025-11-22 04:04:21.670 253465 DEBUG nova.compute.provider_tree [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:04:21 compute-0 nova_compute[253461]: 2025-11-22 04:04:21.694 253465 DEBUG nova.scheduler.client.report [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
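[editor: the inventory dict above is what the resource tracker reports to placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio. A quick check against the logged values:]

    # values copied from the inventory line above
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2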
Nov 22 04:04:21 compute-0 nova_compute[253461]: 2025-11-22 04:04:21.717 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:21 compute-0 nova_compute[253461]: 2025-11-22 04:04:21.718 253465 DEBUG nova.compute.manager [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:04:21 compute-0 nova_compute[253461]: 2025-11-22 04:04:21.774 253465 DEBUG nova.compute.manager [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:04:21 compute-0 nova_compute[253461]: 2025-11-22 04:04:21.774 253465 DEBUG nova.network.neutron [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:04:21 compute-0 nova_compute[253461]: 2025-11-22 04:04:21.878 253465 INFO nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.027 253465 DEBUG nova.compute.manager [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.069 253465 INFO nova.virt.block_device [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Booting with volume 48cf9701-6d3b-4714-8ead-2dd4dcc19ce1 at /dev/vda
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.127 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.254 253465 DEBUG os_brick.utils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.255 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.268 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.269 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[51710fab-fdef-49f2-b49e-2c57348d1892]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.269 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.276 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.277 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[b490eb0a-40cb-4eee-bebe-a0f50741e91d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.278 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.287 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.288 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[c9bd9d10-54ed-4e10-a201-31dcaa7c1633]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.290 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5bba71-bf92-4360-867d-37d1d989179b]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
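The "privsep: reply[...]" lines above are the unprivileged process receiving results from the privileged helper daemon. A hedged sketch of how such an entrypoint is declared with oslo.privsep; the context and function names here are illustrative, not os-brick's:

    from oslo_privsep import capabilities, priv_context

    default = priv_context.PrivContext(
        'example',                      # illustrative prefix
        cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )

    @default.entrypoint
    def read_initiator_name():
        # Executes inside the privileged daemon; the return value travels
        # back over the channel and is what daemon.py logs as the reply.
        with open('/etc/iscsi/initiatorname.iscsi') as f:
            return f.read()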
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.290 253465 DEBUG oslo_concurrency.processutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.315 253465 DEBUG oslo_concurrency.processutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.319 253465 DEBUG os_brick.initiator.connectors.lightos [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.320 253465 DEBUG os_brick.initiator.connectors.lightos [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.320 253465 DEBUG os_brick.initiator.connectors.lightos [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.321 253465 DEBUG os_brick.utils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
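The ==>/<== pair above is os-brick's trace wrapper around get_connector_properties. The call nova makes is equivalent to the following sketch, with arguments mirroring the logged call:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper='sudo nova-rootwrap /etc/nova/rootwrap.conf',
        my_ip='192.168.122.100',
        multipath=True,
        enforce_multipath=True,
        host='compute-0.ctlplane.example.com',
    )
    # props bundles the iSCSI IQN, NVMe host NQN/ID and system uuid seen
    # in the return value above; nova hands it to cinder when updating
    # the volume attachment record.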
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.322 253465 DEBUG nova.virt.block_device [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Updating existing volume attachment record: fad7fbff-7a1f-43c9-9e17-72550add7991 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:04:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 984 KiB/s rd, 2.2 MiB/s wr, 131 op/s
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.426 253465 DEBUG nova.policy [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '45ccef35c0c843a59c9dfd0eb67190a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '83cc5de7368b40b984b51f781e85343c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
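The failed network:attach_external_network check is routine for a non-admin caller with only the reader/member roles: under the default rule it simply means nova will not attach external networks for this request. A hedged oslo.policy sketch of the same kind of check; the enforcer setup is illustrative:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    allowed = enforcer.enforce(
        'network:attach_external_network',
        {},                                  # target (illustrative, empty)
        {'roles': ['reader', 'member'],
         'project_id': '83cc5de7368b40b984b51f781e85343c'},
    )
    # allowed comes back False for a non-admin caller, which nova logs at
    # DEBUG and treats as "skip external networks", not as an error.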
Nov 22 04:04:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Nov 22 04:04:22 compute-0 ceph-mon[75011]: osdmap e382: 3 total, 3 up, 3 in
Nov 22 04:04:22 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1612902221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:22 compute-0 ceph-mon[75011]: pgmap v1565: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 984 KiB/s rd, 2.2 MiB/s wr, 131 op/s
Nov 22 04:04:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Nov 22 04:04:22 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Nov 22 04:04:22 compute-0 nova_compute[253461]: 2025-11-22 04:04:22.833 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:22 compute-0 angry_euler[286504]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:04:22 compute-0 angry_euler[286504]: --> relative data size: 1.0
Nov 22 04:04:22 compute-0 angry_euler[286504]: --> All data devices are unavailable
Nov 22 04:04:22 compute-0 systemd[1]: libpod-6011742535d2b721111df79c10f5a877afa783f34bf9e43fbd006f7062799c09.scope: Deactivated successfully.
Nov 22 04:04:22 compute-0 podman[286487]: 2025-11-22 04:04:22.910082044 +0000 UTC m=+1.460284190 container died 6011742535d2b721111df79c10f5a877afa783f34bf9e43fbd006f7062799c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:04:22 compute-0 systemd[1]: libpod-6011742535d2b721111df79c10f5a877afa783f34bf9e43fbd006f7062799c09.scope: Consumed 1.144s CPU time.
Nov 22 04:04:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-24d118d17fbd842cba303fbd3036fd75adc4e25bb3858f45cb441ea1822cdbcf-merged.mount: Deactivated successfully.
Nov 22 04:04:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:04:22 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3636093866' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
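The mon_command dispatches above ({"prefix": "df"} and {"prefix": "mon dump"}, both with format json) are the wire form of the plain CLI commands below. Issuing them via subprocess is only an illustration; the OpenStack services use the librados bindings instead:

    import json
    import subprocess

    df = json.loads(subprocess.check_output(
        ['ceph', 'df', '--format', 'json']))
    mons = json.loads(subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format', 'json']))
    print(df['stats']['total_bytes'], len(mons['mons']))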
Nov 22 04:04:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:23.015 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:23.016 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:23.017 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:23 compute-0 podman[286487]: 2025-11-22 04:04:23.020338495 +0000 UTC m=+1.570540671 container remove 6011742535d2b721111df79c10f5a877afa783f34bf9e43fbd006f7062799c09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_euler, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 04:04:23 compute-0 systemd[1]: libpod-conmon-6011742535d2b721111df79c10f5a877afa783f34bf9e43fbd006f7062799c09.scope: Deactivated successfully.
Nov 22 04:04:23 compute-0 sudo[286359]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:23 compute-0 sudo[286556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:04:23 compute-0 sudo[286556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:23 compute-0 sudo[286556]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:23 compute-0 sudo[286581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:04:23 compute-0 sudo[286581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:23 compute-0 sudo[286581]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:23 compute-0 nova_compute[253461]: 2025-11-22 04:04:23.328 253465 DEBUG nova.compute.manager [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:04:23 compute-0 nova_compute[253461]: 2025-11-22 04:04:23.331 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:04:23 compute-0 nova_compute[253461]: 2025-11-22 04:04:23.331 253465 INFO nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Creating image(s)
Nov 22 04:04:23 compute-0 nova_compute[253461]: 2025-11-22 04:04:23.332 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:04:23 compute-0 nova_compute[253461]: 2025-11-22 04:04:23.333 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Ensure instance console log exists: /var/lib/nova/instances/0a4b1bbf-edde-478c-91f0-40e5825475fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:04:23 compute-0 nova_compute[253461]: 2025-11-22 04:04:23.334 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:23 compute-0 nova_compute[253461]: 2025-11-22 04:04:23.334 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:23 compute-0 nova_compute[253461]: 2025-11-22 04:04:23.335 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:23 compute-0 sudo[286606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:04:23 compute-0 sudo[286606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:23 compute-0 sudo[286606]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:23 compute-0 nova_compute[253461]: 2025-11-22 04:04:23.416 253465 DEBUG nova.network.neutron [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Successfully created port: 49fa071c-33be-4850-a57f-35030627daa3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:04:23 compute-0 sudo[286631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:04:23 compute-0 sudo[286631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:23 compute-0 ceph-mon[75011]: osdmap e383: 3 total, 3 up, 3 in
Nov 22 04:04:23 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3636093866' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:04:23 compute-0 podman[286696]: 2025-11-22 04:04:23.91576841 +0000 UTC m=+0.061479493 container create 08860784f2c3ae27d9475b96ba26363b8df84ff02d0c3d8b9be45982e7fa0cc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:04:23 compute-0 systemd[1]: Started libpod-conmon-08860784f2c3ae27d9475b96ba26363b8df84ff02d0c3d8b9be45982e7fa0cc3.scope.
Nov 22 04:04:23 compute-0 podman[286696]: 2025-11-22 04:04:23.892388661 +0000 UTC m=+0.038099744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:04:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:04:24 compute-0 podman[286696]: 2025-11-22 04:04:24.018801903 +0000 UTC m=+0.164512986 container init 08860784f2c3ae27d9475b96ba26363b8df84ff02d0c3d8b9be45982e7fa0cc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chebyshev, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:04:24 compute-0 podman[286696]: 2025-11-22 04:04:24.025329264 +0000 UTC m=+0.171040327 container start 08860784f2c3ae27d9475b96ba26363b8df84ff02d0c3d8b9be45982e7fa0cc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chebyshev, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:04:24 compute-0 podman[286696]: 2025-11-22 04:04:24.02908758 +0000 UTC m=+0.174798713 container attach 08860784f2c3ae27d9475b96ba26363b8df84ff02d0c3d8b9be45982e7fa0cc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chebyshev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 04:04:24 compute-0 friendly_chebyshev[286713]: 167 167
Nov 22 04:04:24 compute-0 systemd[1]: libpod-08860784f2c3ae27d9475b96ba26363b8df84ff02d0c3d8b9be45982e7fa0cc3.scope: Deactivated successfully.
Nov 22 04:04:24 compute-0 podman[286696]: 2025-11-22 04:04:24.030947033 +0000 UTC m=+0.176658117 container died 08860784f2c3ae27d9475b96ba26363b8df84ff02d0c3d8b9be45982e7fa0cc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 04:04:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-46e9560a202a1a3b23d96c7be312c9619a36679b16069d6922de628ed8826dda-merged.mount: Deactivated successfully.
Nov 22 04:04:24 compute-0 podman[286696]: 2025-11-22 04:04:24.084022081 +0000 UTC m=+0.229733124 container remove 08860784f2c3ae27d9475b96ba26363b8df84ff02d0c3d8b9be45982e7fa0cc3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_chebyshev, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:04:24 compute-0 systemd[1]: libpod-conmon-08860784f2c3ae27d9475b96ba26363b8df84ff02d0c3d8b9be45982e7fa0cc3.scope: Deactivated successfully.
Nov 22 04:04:24 compute-0 podman[286737]: 2025-11-22 04:04:24.250858862 +0000 UTC m=+0.036675497 container create df20cc16774b750f68e9d1c5736dcc957010e44949f60c3f8160ce8bd875d0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:04:24 compute-0 nova_compute[253461]: 2025-11-22 04:04:24.265 253465 DEBUG nova.network.neutron [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Successfully updated port: 49fa071c-33be-4850-a57f-35030627daa3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:04:24 compute-0 nova_compute[253461]: 2025-11-22 04:04:24.287 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "refresh_cache-0a4b1bbf-edde-478c-91f0-40e5825475fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:04:24 compute-0 nova_compute[253461]: 2025-11-22 04:04:24.288 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquired lock "refresh_cache-0a4b1bbf-edde-478c-91f0-40e5825475fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:04:24 compute-0 nova_compute[253461]: 2025-11-22 04:04:24.288 253465 DEBUG nova.network.neutron [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:04:24 compute-0 systemd[1]: Started libpod-conmon-df20cc16774b750f68e9d1c5736dcc957010e44949f60c3f8160ce8bd875d0e6.scope.
Nov 22 04:04:24 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe7505ef2d46fc9d2d8c918bf7ed1366ba00dd0e9dcb64355dc9801116b1617/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe7505ef2d46fc9d2d8c918bf7ed1366ba00dd0e9dcb64355dc9801116b1617/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe7505ef2d46fc9d2d8c918bf7ed1366ba00dd0e9dcb64355dc9801116b1617/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fe7505ef2d46fc9d2d8c918bf7ed1366ba00dd0e9dcb64355dc9801116b1617/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:24 compute-0 podman[286737]: 2025-11-22 04:04:24.235942411 +0000 UTC m=+0.021759056 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:04:24 compute-0 podman[286737]: 2025-11-22 04:04:24.338322613 +0000 UTC m=+0.124139248 container init df20cc16774b750f68e9d1c5736dcc957010e44949f60c3f8160ce8bd875d0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kowalevski, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:04:24 compute-0 nova_compute[253461]: 2025-11-22 04:04:24.349 253465 DEBUG nova.compute.manager [req-dd57bf81-207a-40e0-9369-fa0323e63b38 req-0d15a621-b709-4b07-a18f-63c6f3b14f4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Received event network-changed-49fa071c-33be-4850-a57f-35030627daa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:24 compute-0 nova_compute[253461]: 2025-11-22 04:04:24.350 253465 DEBUG nova.compute.manager [req-dd57bf81-207a-40e0-9369-fa0323e63b38 req-0d15a621-b709-4b07-a18f-63c6f3b14f4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Refreshing instance network info cache due to event network-changed-49fa071c-33be-4850-a57f-35030627daa3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:04:24 compute-0 nova_compute[253461]: 2025-11-22 04:04:24.350 253465 DEBUG oslo_concurrency.lockutils [req-dd57bf81-207a-40e0-9369-fa0323e63b38 req-0d15a621-b709-4b07-a18f-63c6f3b14f4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-0a4b1bbf-edde-478c-91f0-40e5825475fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:04:24 compute-0 podman[286737]: 2025-11-22 04:04:24.353643119 +0000 UTC m=+0.139459755 container start df20cc16774b750f68e9d1c5736dcc957010e44949f60c3f8160ce8bd875d0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kowalevski, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 04:04:24 compute-0 podman[286737]: 2025-11-22 04:04:24.357913626 +0000 UTC m=+0.143730282 container attach df20cc16774b750f68e9d1c5736dcc957010e44949f60c3f8160ce8bd875d0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kowalevski, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:04:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1567: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 185 KiB/s rd, 2.7 MiB/s wr, 146 op/s
Nov 22 04:04:24 compute-0 nova_compute[253461]: 2025-11-22 04:04:24.417 253465 DEBUG nova.network.neutron [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:04:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Nov 22 04:04:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Nov 22 04:04:24 compute-0 ceph-mon[75011]: pgmap v1567: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 185 KiB/s rd, 2.7 MiB/s wr, 146 op/s
Nov 22 04:04:24 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]: {
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:     "0": [
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:         {
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "devices": [
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "/dev/loop3"
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             ],
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_name": "ceph_lv0",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_size": "21470642176",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "name": "ceph_lv0",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "tags": {
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.cluster_name": "ceph",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.crush_device_class": "",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.encrypted": "0",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.osd_id": "0",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.type": "block",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.vdo": "0"
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             },
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "type": "block",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "vg_name": "ceph_vg0"
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:         }
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:     ],
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:     "1": [
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:         {
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "devices": [
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "/dev/loop4"
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             ],
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_name": "ceph_lv1",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_size": "21470642176",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "name": "ceph_lv1",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "tags": {
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.cluster_name": "ceph",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.crush_device_class": "",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.encrypted": "0",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.osd_id": "1",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.type": "block",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.vdo": "0"
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             },
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "type": "block",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "vg_name": "ceph_vg1"
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:         }
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:     ],
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:     "2": [
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:         {
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "devices": [
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "/dev/loop5"
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             ],
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_name": "ceph_lv2",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_size": "21470642176",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "name": "ceph_lv2",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "tags": {
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.cluster_name": "ceph",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.crush_device_class": "",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.encrypted": "0",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.osd_id": "2",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.type": "block",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:                 "ceph.vdo": "0"
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             },
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "type": "block",
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:             "vg_name": "ceph_vg2"
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:         }
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]:     ]
Nov 22 04:04:25 compute-0 funny_kowalevski[286754]: }
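The JSON that funny_kowalevski just printed is the ceph-volume lvm list payload cephadm requested at 04:04:23. A small sketch that reduces it to an OSD-to-device table; the literal below is trimmed to one OSD purely for illustration:

    import json

    raw_json = '''{
        "0": [{
            "devices": ["/dev/loop3"],
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "lv_size": "21470642176"
        }]
    }'''

    for osd_id, lvs in sorted(json.loads(raw_json).items(),
                              key=lambda kv: int(kv[0])):
        for lv in lvs:
            size_gib = int(lv['lv_size']) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"({size_gib:.1f} GiB on {lv['devices'][0]})")
    # osd.0: /dev/ceph_vg0/ceph_lv0 (20.0 GiB on /dev/loop3)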
Nov 22 04:04:25 compute-0 systemd[1]: libpod-df20cc16774b750f68e9d1c5736dcc957010e44949f60c3f8160ce8bd875d0e6.scope: Deactivated successfully.
Nov 22 04:04:25 compute-0 podman[286737]: 2025-11-22 04:04:25.128498694 +0000 UTC m=+0.914315329 container died df20cc16774b750f68e9d1c5736dcc957010e44949f60c3f8160ce8bd875d0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kowalevski, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 04:04:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fe7505ef2d46fc9d2d8c918bf7ed1366ba00dd0e9dcb64355dc9801116b1617-merged.mount: Deactivated successfully.
Nov 22 04:04:25 compute-0 podman[286737]: 2025-11-22 04:04:25.202562803 +0000 UTC m=+0.988379438 container remove df20cc16774b750f68e9d1c5736dcc957010e44949f60c3f8160ce8bd875d0e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 04:04:25 compute-0 systemd[1]: libpod-conmon-df20cc16774b750f68e9d1c5736dcc957010e44949f60c3f8160ce8bd875d0e6.scope: Deactivated successfully.
Nov 22 04:04:25 compute-0 sudo[286631]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.251 253465 DEBUG oslo_concurrency.lockutils [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Acquiring lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.252 253465 DEBUG oslo_concurrency.lockutils [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.253 253465 DEBUG oslo_concurrency.lockutils [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Acquiring lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.253 253465 DEBUG oslo_concurrency.lockutils [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.254 253465 DEBUG oslo_concurrency.lockutils [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.256 253465 INFO nova.compute.manager [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Terminating instance
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.258 253465 DEBUG nova.compute.manager [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:04:25 compute-0 kernel: tapbf5ae7d5-ec (unregistering): left promiscuous mode
Nov 22 04:04:25 compute-0 NetworkManager[48916]: <info>  [1763784265.3321] device (tapbf5ae7d5-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:04:25 compute-0 sudo[286774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:04:25 compute-0 ovn_controller[152691]: 2025-11-22T04:04:25Z|00215|binding|INFO|Releasing lport bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b from this chassis (sb_readonly=0)
Nov 22 04:04:25 compute-0 ovn_controller[152691]: 2025-11-22T04:04:25Z|00216|binding|INFO|Setting lport bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b down in Southbound
Nov 22 04:04:25 compute-0 ovn_controller[152691]: 2025-11-22T04:04:25Z|00217|binding|INFO|Removing iface tapbf5ae7d5-ec ovn-installed in OVS
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.343 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:25 compute-0 sudo[286774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:25 compute-0 sudo[286774]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.354 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2f:be:44 10.100.0.13'], port_security=['fa:16:3e:2f:be:44 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-776bc55d-f481-40b2-b547-d9145682b578', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6c34534e935e44e883b5f01b09c03631', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03dd4866-f386-47ce-ae14-e17586ea2c60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.200'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=aed504bc-f70d-4127-9419-e4d4c6cd3ca6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.356 162689 INFO neutron.agent.ovn.metadata.agent [-] Port bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b in datapath 776bc55d-f481-40b2-b547-d9145682b578 unbound from our chassis
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.360 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 776bc55d-f481-40b2-b547-d9145682b578, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.362 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[dec04c73-48a1-4a20-9780-539560bac45b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.363 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-776bc55d-f481-40b2-b547-d9145682b578 namespace which is not needed anymore
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.371 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:25 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Deactivated successfully.
Nov 22 04:04:25 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Consumed 14.755s CPU time.
Nov 22 04:04:25 compute-0 systemd-machined[215728]: Machine qemu-20-instance-00000014 terminated.
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.438 253465 DEBUG nova.network.neutron [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Updating instance_info_cache with network_info: [{"id": "49fa071c-33be-4850-a57f-35030627daa3", "address": "fa:16:3e:a1:24:58", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49fa071c-33", "ovs_interfaceid": "49fa071c-33be-4850-a57f-35030627daa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:04:25 compute-0 sudo[286801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:04:25 compute-0 sudo[286801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:25 compute-0 sudo[286801]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.475 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Releasing lock "refresh_cache-0a4b1bbf-edde-478c-91f0-40e5825475fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.475 253465 DEBUG nova.compute.manager [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Instance network_info: |[{"id": "49fa071c-33be-4850-a57f-35030627daa3", "address": "fa:16:3e:a1:24:58", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49fa071c-33", "ovs_interfaceid": "49fa071c-33be-4850-a57f-35030627daa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.475 253465 DEBUG oslo_concurrency.lockutils [req-dd57bf81-207a-40e0-9369-fa0323e63b38 req-0d15a621-b709-4b07-a18f-63c6f3b14f4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-0a4b1bbf-edde-478c-91f0-40e5825475fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.475 253465 DEBUG nova.network.neutron [req-dd57bf81-207a-40e0-9369-fa0323e63b38 req-0d15a621-b709-4b07-a18f-63c6f3b14f4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Refreshing network info cache for port 49fa071c-33be-4850-a57f-35030627daa3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.479 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Start _get_guest_xml network_info=[{"id": "49fa071c-33be-4850-a57f-35030627daa3", "address": "fa:16:3e:a1:24:58", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49fa071c-33", "ovs_interfaceid": "49fa071c-33be-4850-a57f-35030627daa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': 'fad7fbff-7a1f-43c9-9e17-72550add7991', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-48cf9701-6d3b-4714-8ead-2dd4dcc19ce1', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '48cf9701-6d3b-4714-8ead-2dd4dcc19ce1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '0a4b1bbf-edde-478c-91f0-40e5825475fd', 'attached_at': '', 'detached_at': '', 'volume_id': '48cf9701-6d3b-4714-8ead-2dd4dcc19ce1', 'serial': '48cf9701-6d3b-4714-8ead-2dd4dcc19ce1'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.482 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.489 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.505 253465 WARNING nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.509 253465 INFO nova.virt.libvirt.driver [-] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Instance destroyed successfully.
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.510 253465 DEBUG nova.objects.instance [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lazy-loading 'resources' on Instance uuid ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.517 253465 DEBUG nova.virt.libvirt.host [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.517 253465 DEBUG nova.virt.libvirt.host [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:04:25 compute-0 neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578[285831]: [NOTICE]   (285835) : haproxy version is 2.8.14-c23fe91
Nov 22 04:04:25 compute-0 neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578[285831]: [NOTICE]   (285835) : path to executable is /usr/sbin/haproxy
Nov 22 04:04:25 compute-0 neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578[285831]: [WARNING]  (285835) : Exiting Master process...
Nov 22 04:04:25 compute-0 neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578[285831]: [WARNING]  (285835) : Exiting Master process...
Nov 22 04:04:25 compute-0 neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578[285831]: [ALERT]    (285835) : Current worker (285837) exited with code 143 (Terminated)
Nov 22 04:04:25 compute-0 neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578[285831]: [WARNING]  (285835) : All workers exited. Exiting... (0)
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.522 253465 DEBUG nova.virt.libvirt.vif [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:03:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-instance-18199089',display_name='tempest-instance-18199089',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-18199089',id=20,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD7WVaUzte4eck/32cI5/D3b6IWBj9NHa/z9P6h607tL1j9YRB0NqLr79roduhGB1Q1SIokAGX+Z/nYy/K43tUvXF/SwdNwwDMJD28IZ0C/bzNSB6t/xWJEDxdGM3unfsw==',key_name='tempest-keypair-679965368',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:03:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6c34534e935e44e883b5f01b09c03631',ramdisk_id='',reservation_id='r-2pk1sn2l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-769621883',owner_user_name='tempest-VolumesBackupsTest-769621883-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:03:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='26cfaadc9db64dde98981b57d48fd714',uuid=ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "address": "fa:16:3e:2f:be:44", "network": {"id": "776bc55d-f481-40b2-b547-d9145682b578", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1618808475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c34534e935e44e883b5f01b09c03631", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5ae7d5-ec", "ovs_interfaceid": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.522 253465 DEBUG nova.network.os_vif_util [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Converting VIF {"id": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "address": "fa:16:3e:2f:be:44", "network": {"id": "776bc55d-f481-40b2-b547-d9145682b578", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1618808475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6c34534e935e44e883b5f01b09c03631", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf5ae7d5-ec", "ovs_interfaceid": "bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.523 253465 DEBUG nova.network.os_vif_util [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2f:be:44,bridge_name='br-int',has_traffic_filtering=True,id=bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b,network=Network(776bc55d-f481-40b2-b547-d9145682b578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5ae7d5-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:04:25 compute-0 systemd[1]: libpod-50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c.scope: Deactivated successfully.
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.523 253465 DEBUG os_vif [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:be:44,bridge_name='br-int',has_traffic_filtering=True,id=bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b,network=Network(776bc55d-f481-40b2-b547-d9145682b578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5ae7d5-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.528 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.528 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbf5ae7d5-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.530 253465 DEBUG nova.virt.libvirt.host [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.530 253465 DEBUG nova.virt.libvirt.host [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.531 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.531 253465 DEBUG nova.virt.hardware [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.531 253465 DEBUG nova.virt.hardware [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:04:25 compute-0 podman[286847]: 2025-11-22 04:04:25.531893955 +0000 UTC m=+0.067983281 container died 50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.531 253465 DEBUG nova.virt.hardware [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.532 253465 DEBUG nova.virt.hardware [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.532 253465 DEBUG nova.virt.hardware [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.532 253465 DEBUG nova.virt.hardware [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.532 253465 DEBUG nova.virt.hardware [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.532 253465 DEBUG nova.virt.hardware [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.533 253465 DEBUG nova.virt.hardware [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.533 253465 DEBUG nova.virt.hardware [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.533 253465 DEBUG nova.virt.hardware [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:04:25 compute-0 sudo[286860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:04:25 compute-0 sudo[286860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:25 compute-0 sudo[286860]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c-userdata-shm.mount: Deactivated successfully.
Nov 22 04:04:25 compute-0 podman[286846]: 2025-11-22 04:04:25.574592759 +0000 UTC m=+0.086678123 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 04:04:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aff45aef3ccdb78687e1c0aa2ad7767959907ab1afffef6bf93f9c2d3bebb39-merged.mount: Deactivated successfully.
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.580 253465 DEBUG nova.storage.rbd_utils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 0a4b1bbf-edde-478c-91f0-40e5825475fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.587 253465 DEBUG oslo_concurrency.processutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:25 compute-0 podman[286847]: 2025-11-22 04:04:25.587563541 +0000 UTC m=+0.123652857 container cleanup 50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:04:25 compute-0 systemd[1]: libpod-conmon-50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c.scope: Deactivated successfully.
Nov 22 04:04:25 compute-0 ceph-mon[75011]: osdmap e384: 3 total, 3 up, 3 in
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.626 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.630 253465 DEBUG nova.compute.manager [req-8e5dc371-32a7-4063-a4bf-ced5dd9cbbc8 req-b14aeb49-2a07-4d0c-ac5c-c6b32a4b8052 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Received event network-vif-unplugged-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.631 253465 DEBUG oslo_concurrency.lockutils [req-8e5dc371-32a7-4063-a4bf-ced5dd9cbbc8 req-b14aeb49-2a07-4d0c-ac5c-c6b32a4b8052 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.631 253465 DEBUG oslo_concurrency.lockutils [req-8e5dc371-32a7-4063-a4bf-ced5dd9cbbc8 req-b14aeb49-2a07-4d0c-ac5c-c6b32a4b8052 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.632 253465 DEBUG oslo_concurrency.lockutils [req-8e5dc371-32a7-4063-a4bf-ced5dd9cbbc8 req-b14aeb49-2a07-4d0c-ac5c-c6b32a4b8052 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.632 253465 DEBUG nova.compute.manager [req-8e5dc371-32a7-4063-a4bf-ced5dd9cbbc8 req-b14aeb49-2a07-4d0c-ac5c-c6b32a4b8052 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] No waiting events found dispatching network-vif-unplugged-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.632 253465 DEBUG nova.compute.manager [req-8e5dc371-32a7-4063-a4bf-ced5dd9cbbc8 req-b14aeb49-2a07-4d0c-ac5c-c6b32a4b8052 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Received event network-vif-unplugged-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.633 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:04:25 compute-0 sudo[286951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.636 253465 INFO os_vif [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2f:be:44,bridge_name='br-int',has_traffic_filtering=True,id=bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b,network=Network(776bc55d-f481-40b2-b547-d9145682b578),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbf5ae7d5-ec')
Nov 22 04:04:25 compute-0 sudo[286951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:25 compute-0 podman[286905]: 2025-11-22 04:04:25.65586872 +0000 UTC m=+0.110011726 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS)
Nov 22 04:04:25 compute-0 podman[286967]: 2025-11-22 04:04:25.68247988 +0000 UTC m=+0.069485734 container remove 50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.691 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[16f41190-41dd-42c1-83b6-67cf25b9c58f]: (4, ('Sat Nov 22 04:04:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578 (50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c)\n50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c\nSat Nov 22 04:04:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-776bc55d-f481-40b2-b547-d9145682b578 (50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c)\n50afb204fbba70ed18d31a404967adff0e1d481a2818d9b519910a9858f5954c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.694 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[516554a1-57bd-4648-99a1-8de56986d345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.695 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap776bc55d-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.696 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:25 compute-0 kernel: tap776bc55d-f0: left promiscuous mode
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.698 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.703 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[26fdcdad-82f8-4dec-9161-709869334007]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.717 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8b06548c-ec2e-4b16-bc5d-d64511cc9e8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:25 compute-0 nova_compute[253461]: 2025-11-22 04:04:25.718 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.719 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d535e3ae-5f40-4bca-8ec9-6cf9b38d314f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.734 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3af60d7f-a24f-402c-9ce9-de7dc1d7fe61]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448263, 'reachable_time': 22839, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287026, 'error': None, 'target': 'ovnmeta-776bc55d-f481-40b2-b547-d9145682b578', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.738 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-776bc55d-f481-40b2-b547-d9145682b578 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:04:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:25.738 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[b4352e65-9adf-4eb3-9543-608db946d7b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d776bc55d\x2df481\x2d40b2\x2db547\x2dd9145682b578.mount: Deactivated successfully.
Nov 22 04:04:26 compute-0 podman[287086]: 2025-11-22 04:04:26.011557949 +0000 UTC m=+0.064429097 container create 967671a37ea67af4fe4b0dc82d1f82de42c38cf792b3134152fa6d05a85a0443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:04:26 compute-0 podman[287086]: 2025-11-22 04:04:25.981580068 +0000 UTC m=+0.034451247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:04:26 compute-0 systemd[1]: Started libpod-conmon-967671a37ea67af4fe4b0dc82d1f82de42c38cf792b3134152fa6d05a85a0443.scope.
Nov 22 04:04:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:04:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3485643974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:04:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:04:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e384 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.128 253465 DEBUG oslo_concurrency.processutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:26 compute-0 podman[287086]: 2025-11-22 04:04:26.14256994 +0000 UTC m=+0.195441068 container init 967671a37ea67af4fe4b0dc82d1f82de42c38cf792b3134152fa6d05a85a0443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 04:04:26 compute-0 podman[287086]: 2025-11-22 04:04:26.150488835 +0000 UTC m=+0.203359963 container start 967671a37ea67af4fe4b0dc82d1f82de42c38cf792b3134152fa6d05a85a0443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:04:26 compute-0 quirky_shaw[287103]: 167 167
Nov 22 04:04:26 compute-0 systemd[1]: libpod-967671a37ea67af4fe4b0dc82d1f82de42c38cf792b3134152fa6d05a85a0443.scope: Deactivated successfully.
Nov 22 04:04:26 compute-0 conmon[287103]: conmon 967671a37ea67af4fe4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-967671a37ea67af4fe4b0dc82d1f82de42c38cf792b3134152fa6d05a85a0443.scope/container/memory.events
Nov 22 04:04:26 compute-0 podman[287086]: 2025-11-22 04:04:26.157470455 +0000 UTC m=+0.210341653 container attach 967671a37ea67af4fe4b0dc82d1f82de42c38cf792b3134152fa6d05a85a0443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.157 253465 DEBUG nova.virt.libvirt.vif [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:04:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1435026467',display_name='tempest-TestVolumeBootPattern-server-1435026467',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1435026467',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL00ntGaoBOOeBs6+y4FUIy/lgKnZ24cCu86O0xSJiDYa9NepVO6DpAMiaAdnoZSl8JwTuHPIlPQIHrkP9B6Kyjt/oOfo9cDi3Gw7Ruq0v506sUUdjxtfkDfzDyLVnMg5A==',key_name='tempest-TestVolumeBootPattern-21755227',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-i3dcxenv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:04:22Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=0a4b1bbf-edde-478c-91f0-40e5825475fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "49fa071c-33be-4850-a57f-35030627daa3", "address": "fa:16:3e:a1:24:58", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49fa071c-33", "ovs_interfaceid": "49fa071c-33be-4850-a57f-35030627daa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:04:26 compute-0 podman[287086]: 2025-11-22 04:04:26.158111407 +0000 UTC m=+0.210982515 container died 967671a37ea67af4fe4b0dc82d1f82de42c38cf792b3134152fa6d05a85a0443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.157 253465 DEBUG nova.network.os_vif_util [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "49fa071c-33be-4850-a57f-35030627daa3", "address": "fa:16:3e:a1:24:58", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49fa071c-33", "ovs_interfaceid": "49fa071c-33be-4850-a57f-35030627daa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.158 253465 DEBUG nova.network.os_vif_util [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:24:58,bridge_name='br-int',has_traffic_filtering=True,id=49fa071c-33be-4850-a57f-35030627daa3,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49fa071c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.160 253465 DEBUG nova.objects.instance [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'pci_devices' on Instance uuid 0a4b1bbf-edde-478c-91f0-40e5825475fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.184 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:04:26 compute-0 nova_compute[253461]:   <uuid>0a4b1bbf-edde-478c-91f0-40e5825475fd</uuid>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   <name>instance-00000015</name>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <nova:name>tempest-TestVolumeBootPattern-server-1435026467</nova:name>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:04:25</nova:creationTime>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:04:26 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:04:26 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:04:26 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:04:26 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:04:26 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:04:26 compute-0 nova_compute[253461]:         <nova:user uuid="45ccef35c0c843a59c9dfd0eb67190a6">tempest-TestVolumeBootPattern-1584219565-project-member</nova:user>
Nov 22 04:04:26 compute-0 nova_compute[253461]:         <nova:project uuid="83cc5de7368b40b984b51f781e85343c">tempest-TestVolumeBootPattern-1584219565</nova:project>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:04:26 compute-0 nova_compute[253461]:         <nova:port uuid="49fa071c-33be-4850-a57f-35030627daa3">
Nov 22 04:04:26 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <system>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <entry name="serial">0a4b1bbf-edde-478c-91f0-40e5825475fd</entry>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <entry name="uuid">0a4b1bbf-edde-478c-91f0-40e5825475fd</entry>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     </system>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   <os>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   </os>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   <features>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   </features>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/0a4b1bbf-edde-478c-91f0-40e5825475fd_disk.config">
Nov 22 04:04:26 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       </source>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:04:26 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-48cf9701-6d3b-4714-8ead-2dd4dcc19ce1">
Nov 22 04:04:26 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       </source>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:04:26 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <serial>48cf9701-6d3b-4714-8ead-2dd4dcc19ce1</serial>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:a1:24:58"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <target dev="tap49fa071c-33"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/0a4b1bbf-edde-478c-91f0-40e5825475fd/console.log" append="off"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <video>
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     </video>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:04:26 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:04:26 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:04:26 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:04:26 compute-0 nova_compute[253461]: </domain>
Nov 22 04:04:26 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.186 253465 DEBUG nova.compute.manager [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Preparing to wait for external event network-vif-plugged-49fa071c-33be-4850-a57f-35030627daa3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.186 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.186 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.186 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.188 253465 DEBUG nova.virt.libvirt.vif [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:04:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1435026467',display_name='tempest-TestVolumeBootPattern-server-1435026467',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1435026467',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL00ntGaoBOOeBs6+y4FUIy/lgKnZ24cCu86O0xSJiDYa9NepVO6DpAMiaAdnoZSl8JwTuHPIlPQIHrkP9B6Kyjt/oOfo9cDi3Gw7Ruq0v506sUUdjxtfkDfzDyLVnMg5A==',key_name='tempest-TestVolumeBootPattern-21755227',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-i3dcxenv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:04:22Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=0a4b1bbf-edde-478c-91f0-40e5825475fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "49fa071c-33be-4850-a57f-35030627daa3", "address": "fa:16:3e:a1:24:58", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49fa071c-33", "ovs_interfaceid": "49fa071c-33be-4850-a57f-35030627daa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.188 253465 DEBUG nova.network.os_vif_util [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "49fa071c-33be-4850-a57f-35030627daa3", "address": "fa:16:3e:a1:24:58", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49fa071c-33", "ovs_interfaceid": "49fa071c-33be-4850-a57f-35030627daa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.189 253465 DEBUG nova.network.os_vif_util [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:24:58,bridge_name='br-int',has_traffic_filtering=True,id=49fa071c-33be-4850-a57f-35030627daa3,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49fa071c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.190 253465 DEBUG os_vif [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:24:58,bridge_name='br-int',has_traffic_filtering=True,id=49fa071c-33be-4850-a57f-35030627daa3,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49fa071c-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.191 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.192 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.192 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:04:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-239eebca34f35f53db65e054c92b99c829cc81e5dd62a19d62eb6c6a668825fa-merged.mount: Deactivated successfully.
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.205 253465 INFO nova.virt.libvirt.driver [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Deleting instance files /var/lib/nova/instances/ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_del
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.206 253465 INFO nova.virt.libvirt.driver [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Deletion of /var/lib/nova/instances/ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb_del complete
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.210 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.211 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49fa071c-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.211 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap49fa071c-33, col_values=(('external_ids', {'iface-id': '49fa071c-33be-4850-a57f-35030627daa3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:24:58', 'vm-uuid': '0a4b1bbf-edde-478c-91f0-40e5825475fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:04:26 compute-0 podman[287086]: 2025-11-22 04:04:26.214444667 +0000 UTC m=+0.267315785 container remove 967671a37ea67af4fe4b0dc82d1f82de42c38cf792b3134152fa6d05a85a0443 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.214 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:26 compute-0 NetworkManager[48916]: <info>  [1763784266.2156] manager: (tap49fa071c-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.217 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.225 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.227 253465 INFO os_vif [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:24:58,bridge_name='br-int',has_traffic_filtering=True,id=49fa071c-33be-4850-a57f-35030627daa3,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49fa071c-33')
Nov 22 04:04:26 compute-0 systemd[1]: libpod-conmon-967671a37ea67af4fe4b0dc82d1f82de42c38cf792b3134152fa6d05a85a0443.scope: Deactivated successfully.
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.277 253465 INFO nova.compute.manager [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Took 1.02 seconds to destroy the instance on the hypervisor.
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.278 253465 DEBUG oslo.service.loopingcall [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.279 253465 DEBUG nova.compute.manager [-] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.279 253465 DEBUG nova.network.neutron [-] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.300 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.300 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.301 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No VIF found with MAC fa:16:3e:a1:24:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.301 253465 INFO nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Using config drive
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.324 253465 DEBUG nova.storage.rbd_utils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 0a4b1bbf-edde-478c-91f0-40e5825475fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:04:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 94 KiB/s rd, 8.0 KiB/s wr, 125 op/s
Nov 22 04:04:26 compute-0 podman[287148]: 2025-11-22 04:04:26.48667113 +0000 UTC m=+0.090484758 container create 70cba9c81c812944b5b461bc1972cbaa33da6d3badbdc0a507a085bf22eacaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_euler, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 04:04:26 compute-0 podman[287148]: 2025-11-22 04:04:26.439469682 +0000 UTC m=+0.043283351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:04:26 compute-0 systemd[1]: Started libpod-conmon-70cba9c81c812944b5b461bc1972cbaa33da6d3badbdc0a507a085bf22eacaba.scope.
Nov 22 04:04:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:04:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1727192226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:04:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1727192226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:26 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b4d0c31b6fd2e320367d2e25ddd59634feaa0d367f87f145f443e647b9c11d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b4d0c31b6fd2e320367d2e25ddd59634feaa0d367f87f145f443e647b9c11d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b4d0c31b6fd2e320367d2e25ddd59634feaa0d367f87f145f443e647b9c11d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53b4d0c31b6fd2e320367d2e25ddd59634feaa0d367f87f145f443e647b9c11d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3485643974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:04:26 compute-0 ceph-mon[75011]: pgmap v1569: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 94 KiB/s rd, 8.0 KiB/s wr, 125 op/s
Nov 22 04:04:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1727192226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1727192226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:26 compute-0 podman[287148]: 2025-11-22 04:04:26.623747704 +0000 UTC m=+0.227561312 container init 70cba9c81c812944b5b461bc1972cbaa33da6d3badbdc0a507a085bf22eacaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 04:04:26 compute-0 podman[287148]: 2025-11-22 04:04:26.634455257 +0000 UTC m=+0.238268845 container start 70cba9c81c812944b5b461bc1972cbaa33da6d3badbdc0a507a085bf22eacaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_euler, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:04:26 compute-0 podman[287148]: 2025-11-22 04:04:26.638122086 +0000 UTC m=+0.241935674 container attach 70cba9c81c812944b5b461bc1972cbaa33da6d3badbdc0a507a085bf22eacaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.741 253465 DEBUG nova.network.neutron [req-dd57bf81-207a-40e0-9369-fa0323e63b38 req-0d15a621-b709-4b07-a18f-63c6f3b14f4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Updated VIF entry in instance network info cache for port 49fa071c-33be-4850-a57f-35030627daa3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.743 253465 DEBUG nova.network.neutron [req-dd57bf81-207a-40e0-9369-fa0323e63b38 req-0d15a621-b709-4b07-a18f-63c6f3b14f4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Updating instance_info_cache with network_info: [{"id": "49fa071c-33be-4850-a57f-35030627daa3", "address": "fa:16:3e:a1:24:58", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49fa071c-33", "ovs_interfaceid": "49fa071c-33be-4850-a57f-35030627daa3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.749 253465 INFO nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Creating config drive at /var/lib/nova/instances/0a4b1bbf-edde-478c-91f0-40e5825475fd/disk.config
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.759 253465 DEBUG oslo_concurrency.processutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0a4b1bbf-edde-478c-91f0-40e5825475fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfnadqaj1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.790 253465 DEBUG oslo_concurrency.lockutils [req-dd57bf81-207a-40e0-9369-fa0323e63b38 req-0d15a621-b709-4b07-a18f-63c6f3b14f4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-0a4b1bbf-edde-478c-91f0-40e5825475fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.897 253465 DEBUG oslo_concurrency.processutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0a4b1bbf-edde-478c-91f0-40e5825475fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfnadqaj1" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.936 253465 DEBUG nova.storage.rbd_utils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 0a4b1bbf-edde-478c-91f0-40e5825475fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:04:26 compute-0 nova_compute[253461]: 2025-11-22 04:04:26.940 253465 DEBUG oslo_concurrency.processutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0a4b1bbf-edde-478c-91f0-40e5825475fd/disk.config 0a4b1bbf-edde-478c-91f0-40e5825475fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.113 253465 DEBUG oslo_concurrency.processutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0a4b1bbf-edde-478c-91f0-40e5825475fd/disk.config 0a4b1bbf-edde-478c-91f0-40e5825475fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.115 253465 INFO nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Deleting local config drive /var/lib/nova/instances/0a4b1bbf-edde-478c-91f0-40e5825475fd/disk.config because it was imported into RBD.
Nov 22 04:04:27 compute-0 kernel: tap49fa071c-33: entered promiscuous mode
Nov 22 04:04:27 compute-0 ovn_controller[152691]: 2025-11-22T04:04:27Z|00218|binding|INFO|Claiming lport 49fa071c-33be-4850-a57f-35030627daa3 for this chassis.
Nov 22 04:04:27 compute-0 ovn_controller[152691]: 2025-11-22T04:04:27Z|00219|binding|INFO|49fa071c-33be-4850-a57f-35030627daa3: Claiming fa:16:3e:a1:24:58 10.100.0.9
Nov 22 04:04:27 compute-0 NetworkManager[48916]: <info>  [1763784267.1901] manager: (tap49fa071c-33): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Nov 22 04:04:27 compute-0 systemd-udevd[286807]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.192 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.198 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:24:58 10.100.0.9'], port_security=['fa:16:3e:a1:24:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0a4b1bbf-edde-478c-91f0-40e5825475fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '374de8a5-1500-46fb-adf8-2bb87fa0ef15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=49fa071c-33be-4850-a57f-35030627daa3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.199 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 49fa071c-33be-4850-a57f-35030627daa3 in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 bound to our chassis
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.201 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:04:27 compute-0 ovn_controller[152691]: 2025-11-22T04:04:27Z|00220|binding|INFO|Setting lport 49fa071c-33be-4850-a57f-35030627daa3 ovn-installed in OVS
Nov 22 04:04:27 compute-0 ovn_controller[152691]: 2025-11-22T04:04:27Z|00221|binding|INFO|Setting lport 49fa071c-33be-4850-a57f-35030627daa3 up in Southbound
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.210 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 NetworkManager[48916]: <info>  [1763784267.2121] device (tap49fa071c-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:04:27 compute-0 NetworkManager[48916]: <info>  [1763784267.2138] device (tap49fa071c-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.227 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[eba2dae6-257b-40ed-8553-93fa51adb233]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.228 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4670b112-91 in ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.231 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4670b112-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.231 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8f91b97c-37bb-45ca-8f93-f71cb2f9dfcd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.232 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[bd27f204-201a-418a-b6e0-e5d0290de06c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 systemd-machined[215728]: New machine qemu-21-instance-00000015.
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.250 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7c7d5c-1539-4ba2-9c49-62e98f4255df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.280 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[43225bd9-186d-40a8-992f-899256932842]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.326 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[f14c4609-9cf8-40b9-81ca-9d446281c8d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 NetworkManager[48916]: <info>  [1763784267.3366] manager: (tap4670b112-90): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.335 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f56736a4-2d1c-4235-ab3a-a48dfc0e06f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.391 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[6e949344-38b9-4120-b377-63ef9c163d59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.396 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[302fbe54-ef10-4049-856a-b3a86065b563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 NetworkManager[48916]: <info>  [1763784267.4358] device (tap4670b112-90): carrier: link connected
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.452 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[a6196aed-70f1-4915-ba0a-ad556555678d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.480 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8ee75601-651e-45ac-82e6-6628822d5079]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452003, 'reachable_time': 17272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287267, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.502 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6f3f0ac0-7b65-4adb-88f9-cf4591a11a3e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:43a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 452003, 'tstamp': 452003}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287271, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.527 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2af01491-358d-481a-bb15-2f9457be9cd2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 452003, 'reachable_time': 17272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287274, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.580 253465 DEBUG oslo_concurrency.lockutils [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.581 253465 DEBUG oslo_concurrency.lockutils [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.581 253465 DEBUG oslo_concurrency.lockutils [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.582 253465 DEBUG oslo_concurrency.lockutils [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.582 253465 DEBUG oslo_concurrency.lockutils [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.585 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e904e321-5778-41c0-b694-8db732d346d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.586 253465 INFO nova.compute.manager [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Terminating instance
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.588 253465 DEBUG nova.compute.manager [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:04:27 compute-0 kernel: tap905dd436-f2 (unregistering): left promiscuous mode
Nov 22 04:04:27 compute-0 NetworkManager[48916]: <info>  [1763784267.6326] device (tap905dd436-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.640 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 ovn_controller[152691]: 2025-11-22T04:04:27Z|00222|binding|INFO|Releasing lport 905dd436-f21d-4498-9bc7-a159e966bc32 from this chassis (sb_readonly=0)
Nov 22 04:04:27 compute-0 ovn_controller[152691]: 2025-11-22T04:04:27Z|00223|binding|INFO|Setting lport 905dd436-f21d-4498-9bc7-a159e966bc32 down in Southbound
Nov 22 04:04:27 compute-0 ovn_controller[152691]: 2025-11-22T04:04:27Z|00224|binding|INFO|Removing iface tap905dd436-f2 ovn-installed in OVS
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.662 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 serene_euler[287165]: {
Nov 22 04:04:27 compute-0 serene_euler[287165]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "osd_id": 1,
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "type": "bluestore"
Nov 22 04:04:27 compute-0 serene_euler[287165]:     },
Nov 22 04:04:27 compute-0 serene_euler[287165]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "osd_id": 0,
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "type": "bluestore"
Nov 22 04:04:27 compute-0 serene_euler[287165]:     },
Nov 22 04:04:27 compute-0 serene_euler[287165]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "osd_id": 2,
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:04:27 compute-0 serene_euler[287165]:         "type": "bluestore"
Nov 22 04:04:27 compute-0 serene_euler[287165]:     }
Nov 22 04:04:27 compute-0 serene_euler[287165]: }
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.683 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:9b:8d 10.100.0.11'], port_security=['fa:16:3e:67:9b:8d 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f916655a-aa1c-4071-b05b-7bd2a8661ce0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-20419c46-b854-4274-a893-985996c423ff', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ed2837d5c0344b88b5ba7799c801241', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0d5a7c53-cf71-48a7-9702-55f86ae6b22a d4a0b50e-e25d-45d7-8066-0806f23d5429', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=dc364e99-58eb-4fc0-816d-2e7face6b382, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=905dd436-f21d-4498-9bc7-a159e966bc32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:04:27 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Nov 22 04:04:27 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 22.026s CPU time.
Nov 22 04:04:27 compute-0 systemd-machined[215728]: Machine qemu-18-instance-00000012 terminated.
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.686 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5c563513-2163-474e-aea5-c7d4e47149ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.700 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.701 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.701 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4670b112-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.702 253465 DEBUG nova.compute.manager [req-5b1be440-499c-4db9-9dd9-a27c86149edb req-c2e07438-2519-4ddc-84db-8647df95b807 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Received event network-vif-plugged-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.702 253465 DEBUG oslo_concurrency.lockutils [req-5b1be440-499c-4db9-9dd9-a27c86149edb req-c2e07438-2519-4ddc-84db-8647df95b807 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.703 253465 DEBUG oslo_concurrency.lockutils [req-5b1be440-499c-4db9-9dd9-a27c86149edb req-c2e07438-2519-4ddc-84db-8647df95b807 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.703 253465 DEBUG oslo_concurrency.lockutils [req-5b1be440-499c-4db9-9dd9-a27c86149edb req-c2e07438-2519-4ddc-84db-8647df95b807 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.703 253465 DEBUG nova.compute.manager [req-5b1be440-499c-4db9-9dd9-a27c86149edb req-c2e07438-2519-4ddc-84db-8647df95b807 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] No waiting events found dispatching network-vif-plugged-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.703 253465 WARNING nova.compute.manager [req-5b1be440-499c-4db9-9dd9-a27c86149edb req-c2e07438-2519-4ddc-84db-8647df95b807 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Received unexpected event network-vif-plugged-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b for instance with vm_state active and task_state deleting.
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.705 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 NetworkManager[48916]: <info>  [1763784267.7060] manager: (tap4670b112-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Nov 22 04:04:27 compute-0 kernel: tap4670b112-90: entered promiscuous mode
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.718 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.720 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4670b112-90, col_values=(('external_ids', {'iface-id': 'e72a94a7-9aac-4cfd-886c-1e1e93834214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.721 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 ovn_controller[152691]: 2025-11-22T04:04:27Z|00225|binding|INFO|Releasing lport e72a94a7-9aac-4cfd-886c-1e1e93834214 from this chassis (sb_readonly=0)
Nov 22 04:04:27 compute-0 systemd[1]: libpod-70cba9c81c812944b5b461bc1972cbaa33da6d3badbdc0a507a085bf22eacaba.scope: Deactivated successfully.
Nov 22 04:04:27 compute-0 systemd[1]: libpod-70cba9c81c812944b5b461bc1972cbaa33da6d3badbdc0a507a085bf22eacaba.scope: Consumed 1.047s CPU time.
Nov 22 04:04:27 compute-0 podman[287148]: 2025-11-22 04:04:27.734842454 +0000 UTC m=+1.338656062 container died 70cba9c81c812944b5b461bc1972cbaa33da6d3badbdc0a507a085bf22eacaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_euler, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.739 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.747 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.752 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.753 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6b9c0840-b764-47c6-bbdc-2177368262c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.754 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:04:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:27.755 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'env', 'PROCESS_TAG=haproxy-4670b112-9f63-4a03-8d79-91f581c69c03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4670b112-9f63-4a03-8d79-91f581c69c03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 04:04:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-53b4d0c31b6fd2e320367d2e25ddd59634feaa0d367f87f145f443e647b9c11d-merged.mount: Deactivated successfully.
Nov 22 04:04:27 compute-0 NetworkManager[48916]: <info>  [1763784267.8098] manager: (tap905dd436-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/116)
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.813 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.818 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 podman[287148]: 2025-11-22 04:04:27.819704661 +0000 UTC m=+1.423518269 container remove 70cba9c81c812944b5b461bc1972cbaa33da6d3badbdc0a507a085bf22eacaba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.830 253465 INFO nova.virt.libvirt.driver [-] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Instance destroyed successfully.
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.831 253465 DEBUG nova.objects.instance [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lazy-loading 'resources' on Instance uuid f916655a-aa1c-4071-b05b-7bd2a8661ce0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:04:27 compute-0 systemd[1]: libpod-conmon-70cba9c81c812944b5b461bc1972cbaa33da6d3badbdc0a507a085bf22eacaba.scope: Deactivated successfully.
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.835 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.852 253465 DEBUG nova.virt.libvirt.vif [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-448994149',display_name='tempest-SnapshotDataIntegrityTests-server-448994149',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-448994149',id=18,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMdHPvqdgfYANQmokSC7429xVzDKgMwZE4oiZaPOpXH5lgO3KNV4xaut64/pEBvzIQTnQWSGFpIS7A+K3rfQ+++WPw0I8OMiD86CFB9DXTD6TBgfwIpCH8imYNPR9HbvfQ==',key_name='tempest-keypair-1058175192',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:02:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ed2837d5c0344b88b5ba7799c801241',ramdisk_id='',reservation_id='r-3wk7snjy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SnapshotDataIntegrityTests-58717939',owner_user_name='tempest-SnapshotDataIntegrityTests-58717939-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:02:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a2ea2fdf84c34961a57ed463c6daa7ba',uuid=f916655a-aa1c-4071-b05b-7bd2a8661ce0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "905dd436-f21d-4498-9bc7-a159e966bc32", "address": "fa:16:3e:67:9b:8d", "network": {"id": "20419c46-b854-4274-a893-985996c423ff", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-67120831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ed2837d5c0344b88b5ba7799c801241", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap905dd436-f2", "ovs_interfaceid": "905dd436-f21d-4498-9bc7-a159e966bc32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.853 253465 DEBUG nova.network.os_vif_util [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Converting VIF {"id": "905dd436-f21d-4498-9bc7-a159e966bc32", "address": "fa:16:3e:67:9b:8d", "network": {"id": "20419c46-b854-4274-a893-985996c423ff", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-67120831-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ed2837d5c0344b88b5ba7799c801241", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap905dd436-f2", "ovs_interfaceid": "905dd436-f21d-4498-9bc7-a159e966bc32", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.854 253465 DEBUG nova.network.os_vif_util [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:67:9b:8d,bridge_name='br-int',has_traffic_filtering=True,id=905dd436-f21d-4498-9bc7-a159e966bc32,network=Network(20419c46-b854-4274-a893-985996c423ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap905dd436-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.855 253465 DEBUG os_vif [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:9b:8d,bridge_name='br-int',has_traffic_filtering=True,id=905dd436-f21d-4498-9bc7-a159e966bc32,network=Network(20419c46-b854-4274-a893-985996c423ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap905dd436-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:04:27 compute-0 sudo[286951]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.858 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.861 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap905dd436-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.862 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.864 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:27 compute-0 nova_compute[253461]: 2025-11-22 04:04:27.867 253465 INFO os_vif [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:67:9b:8d,bridge_name='br-int',has_traffic_filtering=True,id=905dd436-f21d-4498-9bc7-a159e966bc32,network=Network(20419c46-b854-4274-a893-985996c423ff),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap905dd436-f2')
Nov 22 04:04:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:04:27 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:04:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:04:27 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:04:27 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 7506a998-e04d-465a-983c-dfd967f82693 does not exist
Nov 22 04:04:27 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev abd903a6-d262-4d05-bfa0-1fe5366fc476 does not exist
Nov 22 04:04:27 compute-0 sudo[287373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:04:27 compute-0 sudo[287373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:27 compute-0 sudo[287373]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.007 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784268.0070796, 0a4b1bbf-edde-478c-91f0-40e5825475fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.008 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] VM Started (Lifecycle Event)
Nov 22 04:04:28 compute-0 sudo[287408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:04:28 compute-0 sudo[287408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:04:28 compute-0 sudo[287408]: pam_unix(sudo:session): session closed for user root
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.033 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.038 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784268.0071814, 0a4b1bbf-edde-478c-91f0-40e5825475fd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.039 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] VM Paused (Lifecycle Event)
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.055 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.058 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.076 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:04:28 compute-0 podman[287454]: 2025-11-22 04:04:28.195408221 +0000 UTC m=+0.060919455 container create 6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 04:04:28 compute-0 systemd[1]: Started libpod-conmon-6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203.scope.
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.239 253465 DEBUG nova.network.neutron [-] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:04:28 compute-0 podman[287454]: 2025-11-22 04:04:28.167616466 +0000 UTC m=+0.033127719 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:04:28 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.265 253465 INFO nova.compute.manager [-] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Took 1.99 seconds to deallocate network for instance.
Nov 22 04:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bba8c504d4e77b41e0f45e2edaf538b7f391e16fff53a1547e82600d332186ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:04:28 compute-0 podman[287454]: 2025-11-22 04:04:28.292141638 +0000 UTC m=+0.157652872 container init 6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 04:04:28 compute-0 podman[287454]: 2025-11-22 04:04:28.30260596 +0000 UTC m=+0.168117193 container start 6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.318 253465 INFO nova.virt.libvirt.driver [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Deleting instance files /var/lib/nova/instances/f916655a-aa1c-4071-b05b-7bd2a8661ce0_del
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.319 253465 INFO nova.virt.libvirt.driver [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Deletion of /var/lib/nova/instances/f916655a-aa1c-4071-b05b-7bd2a8661ce0_del complete
Nov 22 04:04:28 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[287470]: [NOTICE]   (287474) : New worker (287476) forked
Nov 22 04:04:28 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[287470]: [NOTICE]   (287474) : Loading success.
Nov 22 04:04:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:28.359 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 905dd436-f21d-4498-9bc7-a159e966bc32 in datapath 20419c46-b854-4274-a893-985996c423ff unbound from our chassis
Nov 22 04:04:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:28.362 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 20419c46-b854-4274-a893-985996c423ff, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:04:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:28.363 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e26cfb73-c6c7-4f25-9607-fafe186eca7b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:28.364 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-20419c46-b854-4274-a893-985996c423ff namespace which is not needed anymore
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.381 253465 INFO nova.compute.manager [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Took 0.79 seconds to destroy the instance on the hypervisor.
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.381 253465 DEBUG oslo.service.loopingcall [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.382 253465 DEBUG nova.compute.manager [-] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.382 253465 DEBUG nova.network.neutron [-] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:04:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 90 KiB/s rd, 8.6 KiB/s wr, 126 op/s
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.500 253465 INFO nova.compute.manager [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Took 0.23 seconds to detach 1 volumes for instance.
Nov 22 04:04:28 compute-0 neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff[283960]: [NOTICE]   (283964) : haproxy version is 2.8.14-c23fe91
Nov 22 04:04:28 compute-0 neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff[283960]: [NOTICE]   (283964) : path to executable is /usr/sbin/haproxy
Nov 22 04:04:28 compute-0 neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff[283960]: [WARNING]  (283964) : Exiting Master process...
Nov 22 04:04:28 compute-0 neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff[283960]: [ALERT]    (283964) : Current worker (283966) exited with code 143 (Terminated)
Nov 22 04:04:28 compute-0 neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff[283960]: [WARNING]  (283964) : All workers exited. Exiting... (0)
Nov 22 04:04:28 compute-0 systemd[1]: libpod-0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6.scope: Deactivated successfully.
Nov 22 04:04:28 compute-0 podman[287502]: 2025-11-22 04:04:28.515140482 +0000 UTC m=+0.055489019 container died 0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6-userdata-shm.mount: Deactivated successfully.
Nov 22 04:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ec5dd98917944b48bfdf0bd09091d5989677044325f1b3fe05517be241af43a-merged.mount: Deactivated successfully.
Nov 22 04:04:28 compute-0 podman[287502]: 2025-11-22 04:04:28.55961527 +0000 UTC m=+0.099963838 container cleanup 0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:04:28 compute-0 systemd[1]: libpod-conmon-0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6.scope: Deactivated successfully.
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.585 253465 DEBUG oslo_concurrency.lockutils [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.586 253465 DEBUG oslo_concurrency.lockutils [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.601 253465 DEBUG nova.compute.manager [req-a42a11d0-f13d-47db-bec4-2dcc0caf7198 req-1d69274a-a30e-46f3-93f1-9e4d8c1a0fff f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Received event network-vif-deleted-bf5ae7d5-ec0c-4807-ae2f-17f838ffc40b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:28 compute-0 podman[287528]: 2025-11-22 04:04:28.643566363 +0000 UTC m=+0.059989958 container remove 0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 04:04:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:28.653 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb60538-868c-4600-8ffc-b2d2f39cfd45]: (4, ('Sat Nov 22 04:04:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff (0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6)\n0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6\nSat Nov 22 04:04:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-20419c46-b854-4274-a893-985996c423ff (0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6)\n0562053ced1820bc2a04ff2039474afd519adbdfef910f7b305b8ca73ca67fd6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:28.656 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e0dcddf5-b20e-4cbf-a3da-c27686c2d8ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:28.657 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap20419c46-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.660 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:28 compute-0 kernel: tap20419c46-b0: left promiscuous mode
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.693 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:28.699 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d1babf1f-f545-4bf5-91c4-21890d855703]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:28 compute-0 nova_compute[253461]: 2025-11-22 04:04:28.713 253465 DEBUG oslo_concurrency.processutils [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:28.716 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e59cf2bc-1b6f-4934-9d1b-63468a178399]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:28.718 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[64a64f91-1512-4d22-934a-44cfa0e0b7dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:28.745 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf77093-034f-424f-a944-d19c5c418ebf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440591, 'reachable_time': 37778, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287541, 'error': None, 'target': 'ovnmeta-20419c46-b854-4274-a893-985996c423ff', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:28.749 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-20419c46-b854-4274-a893-985996c423ff deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:04:28 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:28.749 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[80c297d1-622a-4f95-9537-95356078c9f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d20419c46\x2db854\x2d4274\x2da893\x2d985996c423ff.mount: Deactivated successfully.
Nov 22 04:04:28 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:04:28 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:04:28 compute-0 ceph-mon[75011]: pgmap v1570: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 90 KiB/s rd, 8.6 KiB/s wr, 126 op/s
Nov 22 04:04:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:04:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1053597634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.150 253465 DEBUG oslo_concurrency.processutils [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.158 253465 DEBUG nova.compute.provider_tree [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.184 253465 DEBUG nova.scheduler.client.report [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.212 253465 DEBUG oslo_concurrency.lockutils [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.216 253465 DEBUG nova.network.neutron [-] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.240 253465 INFO nova.compute.manager [-] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Took 0.86 seconds to deallocate network for instance.
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.250 253465 INFO nova.scheduler.client.report [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Deleted allocations for instance ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.299 253465 DEBUG oslo_concurrency.lockutils [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.300 253465 DEBUG oslo_concurrency.lockutils [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.327 253465 DEBUG oslo_concurrency.lockutils [None req-e437d057-efab-4135-9573-8b13eca65860 26cfaadc9db64dde98981b57d48fd714 6c34534e935e44e883b5f01b09c03631 - - default default] Lock "ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.376 253465 DEBUG oslo_concurrency.processutils [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:04:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:04:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3759695831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:04:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:04:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1644753581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.815 253465 DEBUG oslo_concurrency.processutils [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.821 253465 DEBUG nova.compute.provider_tree [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.837 253465 DEBUG nova.scheduler.client.report [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.868 253465 DEBUG oslo_concurrency.lockutils [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Nov 22 04:04:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.893 253465 INFO nova.scheduler.client.report [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Deleted allocations for instance f916655a-aa1c-4071-b05b-7bd2a8661ce0
Nov 22 04:04:29 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Nov 22 04:04:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1053597634' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3759695831' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:04:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1644753581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.909 253465 DEBUG nova.compute.manager [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Received event network-vif-plugged-49fa071c-33be-4850-a57f-35030627daa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.910 253465 DEBUG oslo_concurrency.lockutils [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.912 253465 DEBUG oslo_concurrency.lockutils [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.912 253465 DEBUG oslo_concurrency.lockutils [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.912 253465 DEBUG nova.compute.manager [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Processing event network-vif-plugged-49fa071c-33be-4850-a57f-35030627daa3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.912 253465 DEBUG nova.compute.manager [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Received event network-vif-plugged-49fa071c-33be-4850-a57f-35030627daa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.912 253465 DEBUG oslo_concurrency.lockutils [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.912 253465 DEBUG oslo_concurrency.lockutils [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.913 253465 DEBUG oslo_concurrency.lockutils [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.913 253465 DEBUG nova.compute.manager [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] No waiting events found dispatching network-vif-plugged-49fa071c-33be-4850-a57f-35030627daa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.913 253465 WARNING nova.compute.manager [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Received unexpected event network-vif-plugged-49fa071c-33be-4850-a57f-35030627daa3 for instance with vm_state building and task_state spawning.
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.913 253465 DEBUG nova.compute.manager [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Received event network-vif-unplugged-905dd436-f21d-4498-9bc7-a159e966bc32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.913 253465 DEBUG oslo_concurrency.lockutils [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.913 253465 DEBUG oslo_concurrency.lockutils [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.914 253465 DEBUG oslo_concurrency.lockutils [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.914 253465 DEBUG nova.compute.manager [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] No waiting events found dispatching network-vif-unplugged-905dd436-f21d-4498-9bc7-a159e966bc32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.914 253465 WARNING nova.compute.manager [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Received unexpected event network-vif-unplugged-905dd436-f21d-4498-9bc7-a159e966bc32 for instance with vm_state deleted and task_state None.
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.914 253465 DEBUG nova.compute.manager [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Received event network-vif-plugged-905dd436-f21d-4498-9bc7-a159e966bc32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.914 253465 DEBUG oslo_concurrency.lockutils [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.914 253465 DEBUG oslo_concurrency.lockutils [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.915 253465 DEBUG oslo_concurrency.lockutils [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.915 253465 DEBUG nova.compute.manager [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] No waiting events found dispatching network-vif-plugged-905dd436-f21d-4498-9bc7-a159e966bc32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.915 253465 WARNING nova.compute.manager [req-bf2e8de2-07e5-4e2b-b4cf-354f22ca831f req-dcb5c1aa-e5ed-46cf-a642-557dcc0a7fb7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Received unexpected event network-vif-plugged-905dd436-f21d-4498-9bc7-a159e966bc32 for instance with vm_state deleted and task_state None.
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.918 253465 DEBUG nova.compute.manager [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.922 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.924 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784269.9237828, 0a4b1bbf-edde-478c-91f0-40e5825475fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.924 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] VM Resumed (Lifecycle Event)
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.928 253465 INFO nova.virt.libvirt.driver [-] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Instance spawned successfully.
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.928 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.956 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.967 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.974 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.975 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.975 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.976 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.977 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.977 253465 DEBUG nova.virt.libvirt.driver [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:04:29 compute-0 nova_compute[253461]: 2025-11-22 04:04:29.986 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:04:30 compute-0 nova_compute[253461]: 2025-11-22 04:04:30.006 253465 DEBUG oslo_concurrency.lockutils [None req-79dbecd8-0192-417a-9d1a-9828a9f48d61 a2ea2fdf84c34961a57ed463c6daa7ba 2ed2837d5c0344b88b5ba7799c801241 - - default default] Lock "f916655a-aa1c-4071-b05b-7bd2a8661ce0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:30 compute-0 nova_compute[253461]: 2025-11-22 04:04:30.044 253465 INFO nova.compute.manager [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Took 6.72 seconds to spawn the instance on the hypervisor.
Nov 22 04:04:30 compute-0 nova_compute[253461]: 2025-11-22 04:04:30.045 253465 DEBUG nova.compute.manager [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:04:30 compute-0 nova_compute[253461]: 2025-11-22 04:04:30.136 253465 INFO nova.compute.manager [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Took 9.13 seconds to build instance.
Nov 22 04:04:30 compute-0 nova_compute[253461]: 2025-11-22 04:04:30.166 253465 DEBUG oslo_concurrency.lockutils [None req-73b696ec-e8b5-45d4-ad47-187cf39f0ceb 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 117 KiB/s rd, 9.7 KiB/s wr, 163 op/s
Nov 22 04:04:30 compute-0 nova_compute[253461]: 2025-11-22 04:04:30.744 253465 DEBUG nova.compute.manager [req-7ee845ed-d120-44f5-89f2-de477330255a req-0995ffb9-885d-4c83-a49d-8b9ab44e69a5 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Received event network-vif-deleted-905dd436-f21d-4498-9bc7-a159e966bc32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Nov 22 04:04:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Nov 22 04:04:30 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Nov 22 04:04:30 compute-0 ceph-mon[75011]: osdmap e385: 3 total, 3 up, 3 in
Nov 22 04:04:30 compute-0 ceph-mon[75011]: pgmap v1572: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 117 KiB/s rd, 9.7 KiB/s wr, 163 op/s
Nov 22 04:04:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e386 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Nov 22 04:04:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Nov 22 04:04:31 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Nov 22 04:04:31 compute-0 ceph-mon[75011]: osdmap e386: 3 total, 3 up, 3 in
Nov 22 04:04:31 compute-0 ceph-mon[75011]: osdmap e387: 3 total, 3 up, 3 in
Nov 22 04:04:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.2 MiB/s wr, 232 op/s
Nov 22 04:04:32 compute-0 nova_compute[253461]: 2025-11-22 04:04:32.838 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:32 compute-0 nova_compute[253461]: 2025-11-22 04:04:32.864 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Nov 22 04:04:32 compute-0 ceph-mon[75011]: pgmap v1575: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.2 MiB/s wr, 232 op/s
Nov 22 04:04:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Nov 22 04:04:32 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Nov 22 04:04:33 compute-0 ceph-mon[75011]: osdmap e388: 3 total, 3 up, 3 in
Nov 22 04:04:34 compute-0 nova_compute[253461]: 2025-11-22 04:04:34.030 253465 DEBUG nova.compute.manager [req-a86affa6-59e1-40dc-8d06-3a811f25c6e6 req-fe2e915d-894d-44be-89ed-a39ab35ce8ca f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Received event network-changed-49fa071c-33be-4850-a57f-35030627daa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:34 compute-0 nova_compute[253461]: 2025-11-22 04:04:34.031 253465 DEBUG nova.compute.manager [req-a86affa6-59e1-40dc-8d06-3a811f25c6e6 req-fe2e915d-894d-44be-89ed-a39ab35ce8ca f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Refreshing instance network info cache due to event network-changed-49fa071c-33be-4850-a57f-35030627daa3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:04:34 compute-0 nova_compute[253461]: 2025-11-22 04:04:34.031 253465 DEBUG oslo_concurrency.lockutils [req-a86affa6-59e1-40dc-8d06-3a811f25c6e6 req-fe2e915d-894d-44be-89ed-a39ab35ce8ca f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-0a4b1bbf-edde-478c-91f0-40e5825475fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:04:34 compute-0 nova_compute[253461]: 2025-11-22 04:04:34.032 253465 DEBUG oslo_concurrency.lockutils [req-a86affa6-59e1-40dc-8d06-3a811f25c6e6 req-fe2e915d-894d-44be-89ed-a39ab35ce8ca f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-0a4b1bbf-edde-478c-91f0-40e5825475fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:04:34 compute-0 nova_compute[253461]: 2025-11-22 04:04:34.032 253465 DEBUG nova.network.neutron [req-a86affa6-59e1-40dc-8d06-3a811f25c6e6 req-fe2e915d-894d-44be-89ed-a39ab35ce8ca f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Refreshing network info cache for port 49fa071c-33be-4850-a57f-35030627daa3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:04:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 9.0 MiB/s rd, 4.1 MiB/s wr, 236 op/s
Nov 22 04:04:34 compute-0 ceph-mon[75011]: pgmap v1577: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 9.0 MiB/s rd, 4.1 MiB/s wr, 236 op/s
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:34.968861) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784274968900, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2576, "num_deletes": 521, "total_data_size": 3404219, "memory_usage": 3468424, "flush_reason": "Manual Compaction"}
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784274992738, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3339621, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29225, "largest_seqno": 31800, "table_properties": {"data_size": 3328263, "index_size": 6892, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 27327, "raw_average_key_size": 20, "raw_value_size": 3303167, "raw_average_value_size": 2457, "num_data_blocks": 300, "num_entries": 1344, "num_filter_entries": 1344, "num_deletions": 521, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763784095, "oldest_key_time": 1763784095, "file_creation_time": 1763784274, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 23943 microseconds, and 12025 cpu microseconds.
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:34.992800) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3339621 bytes OK
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:34.992825) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:34.997394) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:34.997446) EVENT_LOG_v1 {"time_micros": 1763784274997411, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:34.997470) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3392146, prev total WAL file size 3392146, number of live WAL files 2.
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:34.999000) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3261KB)], [62(9232KB)]
Nov 22 04:04:34 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784274999049, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12793593, "oldest_snapshot_seqno": -1}
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6108 keys, 10877928 bytes, temperature: kUnknown
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784275103602, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10877928, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10830395, "index_size": 31167, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 153799, "raw_average_key_size": 25, "raw_value_size": 10713843, "raw_average_value_size": 1754, "num_data_blocks": 1255, "num_entries": 6108, "num_filter_entries": 6108, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763784274, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:35.103923) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10877928 bytes
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:35.105892) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.2 rd, 103.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.0 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(7.1) write-amplify(3.3) OK, records in: 7162, records dropped: 1054 output_compression: NoCompression
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:35.105927) EVENT_LOG_v1 {"time_micros": 1763784275105911, "job": 34, "event": "compaction_finished", "compaction_time_micros": 104667, "compaction_time_cpu_micros": 47384, "output_level": 6, "num_output_files": 1, "total_output_size": 10877928, "num_input_records": 7162, "num_output_records": 6108, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784275107264, "job": 34, "event": "table_file_deletion", "file_number": 64}
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784275110049, "job": 34, "event": "table_file_deletion", "file_number": 62}
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:34.998844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:35.110180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:35.110188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:35.110192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:35.110195) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:04:35 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:04:35.110199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:04:35 compute-0 ovn_controller[152691]: 2025-11-22T04:04:35Z|00226|binding|INFO|Releasing lport e72a94a7-9aac-4cfd-886c-1e1e93834214 from this chassis (sb_readonly=0)
Nov 22 04:04:35 compute-0 nova_compute[253461]: 2025-11-22 04:04:35.829 253465 DEBUG nova.network.neutron [req-a86affa6-59e1-40dc-8d06-3a811f25c6e6 req-fe2e915d-894d-44be-89ed-a39ab35ce8ca f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Updated VIF entry in instance network info cache for port 49fa071c-33be-4850-a57f-35030627daa3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:04:35 compute-0 nova_compute[253461]: 2025-11-22 04:04:35.830 253465 DEBUG nova.network.neutron [req-a86affa6-59e1-40dc-8d06-3a811f25c6e6 req-fe2e915d-894d-44be-89ed-a39ab35ce8ca f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Updating instance_info_cache with network_info: [{"id": "49fa071c-33be-4850-a57f-35030627daa3", "address": "fa:16:3e:a1:24:58", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49fa071c-33", "ovs_interfaceid": "49fa071c-33be-4850-a57f-35030627daa3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:04:35 compute-0 nova_compute[253461]: 2025-11-22 04:04:35.853 253465 DEBUG oslo_concurrency.lockutils [req-a86affa6-59e1-40dc-8d06-3a811f25c6e6 req-fe2e915d-894d-44be-89ed-a39ab35ce8ca f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-0a4b1bbf-edde-478c-91f0-40e5825475fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:04:35 compute-0 nova_compute[253461]: 2025-11-22 04:04:35.854 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Nov 22 04:04:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Nov 22 04:04:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Nov 22 04:04:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:04:36
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.log', 'volumes', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'backups', 'images', 'cephfs.cephfs.data']
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 8.8 MiB/s rd, 4.6 MiB/s wr, 309 op/s
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:04:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:04:36 compute-0 ceph-mon[75011]: osdmap e389: 3 total, 3 up, 3 in
Nov 22 04:04:36 compute-0 ceph-mon[75011]: pgmap v1579: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 8.8 MiB/s rd, 4.6 MiB/s wr, 309 op/s
Nov 22 04:04:37 compute-0 nova_compute[253461]: 2025-11-22 04:04:37.843 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:37 compute-0 nova_compute[253461]: 2025-11-22 04:04:37.866 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 6.7 MiB/s rd, 3.5 MiB/s wr, 238 op/s
Nov 22 04:04:38 compute-0 ceph-mon[75011]: pgmap v1580: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 6.7 MiB/s rd, 3.5 MiB/s wr, 238 op/s
Nov 22 04:04:39 compute-0 ovn_controller[152691]: 2025-11-22T04:04:39Z|00227|binding|INFO|Releasing lport e72a94a7-9aac-4cfd-886c-1e1e93834214 from this chassis (sb_readonly=0)
Nov 22 04:04:39 compute-0 nova_compute[253461]: 2025-11-22 04:04:39.353 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:39 compute-0 nova_compute[253461]: 2025-11-22 04:04:39.442 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:04:39 compute-0 nova_compute[253461]: 2025-11-22 04:04:39.443 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 04:04:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Nov 22 04:04:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Nov 22 04:04:39 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Nov 22 04:04:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.5 MiB/s wr, 186 op/s
Nov 22 04:04:40 compute-0 podman[287589]: 2025-11-22 04:04:40.42319527 +0000 UTC m=+0.093460184 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:04:40 compute-0 nova_compute[253461]: 2025-11-22 04:04:40.445 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:04:40 compute-0 nova_compute[253461]: 2025-11-22 04:04:40.445 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:04:40 compute-0 nova_compute[253461]: 2025-11-22 04:04:40.446 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:04:40 compute-0 nova_compute[253461]: 2025-11-22 04:04:40.508 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763784265.498194, ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:04:40 compute-0 nova_compute[253461]: 2025-11-22 04:04:40.509 253465 INFO nova.compute.manager [-] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] VM Stopped (Lifecycle Event)
Nov 22 04:04:40 compute-0 nova_compute[253461]: 2025-11-22 04:04:40.529 253465 DEBUG nova.compute.manager [None req-6d8c5b62-5a91-42eb-9dbf-1d7df2c1ec39 - - - - - -] [instance: ff34c2ac-f9cc-4e82-9c13-9833fd9c6dfb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:04:40 compute-0 ceph-mon[75011]: osdmap e390: 3 total, 3 up, 3 in
Nov 22 04:04:40 compute-0 ceph-mon[75011]: pgmap v1582: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 4.3 MiB/s rd, 2.5 MiB/s wr, 186 op/s
Nov 22 04:04:40 compute-0 nova_compute[253461]: 2025-11-22 04:04:40.804 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "refresh_cache-0a4b1bbf-edde-478c-91f0-40e5825475fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:04:40 compute-0 nova_compute[253461]: 2025-11-22 04:04:40.804 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquired lock "refresh_cache-0a4b1bbf-edde-478c-91f0-40e5825475fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:04:40 compute-0 nova_compute[253461]: 2025-11-22 04:04:40.805 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 04:04:40 compute-0 nova_compute[253461]: 2025-11-22 04:04:40.805 253465 DEBUG nova.objects.instance [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0a4b1bbf-edde-478c-91f0-40e5825475fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:04:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Nov 22 04:04:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Nov 22 04:04:41 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Nov 22 04:04:42 compute-0 ceph-mon[75011]: osdmap e391: 3 total, 3 up, 3 in
Nov 22 04:04:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 204 KiB/s rd, 1.7 MiB/s wr, 88 op/s
Nov 22 04:04:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:04:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2175524805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:04:42 compute-0 nova_compute[253461]: 2025-11-22 04:04:42.826 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763784267.8244538, f916655a-aa1c-4071-b05b-7bd2a8661ce0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:04:42 compute-0 nova_compute[253461]: 2025-11-22 04:04:42.827 253465 INFO nova.compute.manager [-] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] VM Stopped (Lifecycle Event)
Nov 22 04:04:42 compute-0 nova_compute[253461]: 2025-11-22 04:04:42.880 253465 DEBUG nova.compute.manager [None req-bc8021dd-f929-485a-b082-2ce066e69d9b - - - - - -] [instance: f916655a-aa1c-4071-b05b-7bd2a8661ce0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:04:42 compute-0 nova_compute[253461]: 2025-11-22 04:04:42.881 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:43 compute-0 ceph-mon[75011]: pgmap v1584: 305 pgs: 305 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 204 KiB/s rd, 1.7 MiB/s wr, 88 op/s
Nov 22 04:04:43 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2175524805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:04:43 compute-0 ovn_controller[152691]: 2025-11-22T04:04:43Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a1:24:58 10.100.0.9
Nov 22 04:04:43 compute-0 ovn_controller[152691]: 2025-11-22T04:04:43Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a1:24:58 10.100.0.9
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.092 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Updating instance_info_cache with network_info: [{"id": "49fa071c-33be-4850-a57f-35030627daa3", "address": "fa:16:3e:a1:24:58", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49fa071c-33", "ovs_interfaceid": "49fa071c-33be-4850-a57f-35030627daa3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
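
The info-cache payload above is plain JSON: a list of VIFs, each carrying a network -> subnets -> ips tree in which floating IPs hang off the fixed IP they map to. A minimal sketch of walking that structure (the literal below is the logged entry trimmed to just the fields used):

    # Walk a Nova network_info entry (shape as logged above, trimmed)
    # and pair each fixed IP with its floating IPs.
    network_info = [{
        "address": "fa:16:3e:a1:24:58",
        "network": {"subnets": [{
            "ips": [{
                "address": "10.100.0.9",
                "floating_ips": [{"address": "192.168.122.233"}],
            }],
        }]},
    }]
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                for fip in ip.get("floating_ips", []):
                    print(ip["address"], "->", fip["address"])
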
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.118 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Releasing lock "refresh_cache-0a4b1bbf-edde-478c-91f0-40e5825475fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.118 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.119 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.119 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.120 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.161 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.161 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.161 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
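
The three lockutils lines above are oslo.concurrency's standard pattern: acquire a named lock, run the guarded function, release, logging the wait and hold times along the way. A minimal sketch of the same pattern with the public lockutils API (the guarded body is illustrative, not Nova's code):

    # Guard a critical section with the same named lock seen in the log.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        pass  # only one thread runs this at a time

    # Equivalent context-manager form:
    with lockutils.lock('compute_resources'):
        pass
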
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.162 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.162 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 442 KiB/s rd, 9.1 MiB/s wr, 156 op/s
Nov 22 04:04:44 compute-0 ceph-mon[75011]: pgmap v1585: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 442 KiB/s rd, 9.1 MiB/s wr, 156 op/s
Nov 22 04:04:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:04:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1231389695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.708 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
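
For RBD-backed storage the resource tracker learns capacity by shelling out to ceph df (0.546 s here) and parsing the JSON it returns. A rough sketch of the same call; the field names under "stats" are an assumption about the ceph df JSON layout, not something this log confirms:

    # Run the command logged above and read the cluster-wide totals.
    # Assumed JSON fields: top-level "stats" with "total_avail_bytes".
    import json
    import subprocess

    out = subprocess.check_output([
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ])
    stats = json.loads(out)['stats']
    print('avail GiB:', stats['total_avail_bytes'] / 2**30)
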
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.812 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:04:44 compute-0 nova_compute[253461]: 2025-11-22 04:04:44.812 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:04:45 compute-0 nova_compute[253461]: 2025-11-22 04:04:45.122 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:04:45 compute-0 nova_compute[253461]: 2025-11-22 04:04:45.124 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4261MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:04:45 compute-0 nova_compute[253461]: 2025-11-22 04:04:45.125 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:45 compute-0 nova_compute[253461]: 2025-11-22 04:04:45.125 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:45 compute-0 nova_compute[253461]: 2025-11-22 04:04:45.366 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 0a4b1bbf-edde-478c-91f0-40e5825475fd actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:04:45 compute-0 nova_compute[253461]: 2025-11-22 04:04:45.367 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:04:45 compute-0 nova_compute[253461]: 2025-11-22 04:04:45.367 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:04:45 compute-0 nova_compute[253461]: 2025-11-22 04:04:45.578 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:45 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1231389695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:46 compute-0 sshd-session[287633]: Invalid user admin from 27.79.43.64 port 53684
Nov 22 04:04:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:04:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2806794727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:46 compute-0 nova_compute[253461]: 2025-11-22 04:04:46.158 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:46 compute-0 nova_compute[253461]: 2025-11-22 04:04:46.166 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:04:46 compute-0 nova_compute[253461]: 2025-11-22 04:04:46.189 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
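
The inventory record spells out how schedulable capacity is derived per resource class: capacity = (total - reserved) * allocation_ratio. With the values logged above that gives VCPU (8 - 0) * 4.0 = 32, MEMORY_MB (7679 - 512) * 1.0 = 7167, and DISK_GB (59 - 1) * 0.9 = 52.2:

    # Schedulable capacity from the inventory values in the log line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
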
Nov 22 04:04:46 compute-0 nova_compute[253461]: 2025-11-22 04:04:46.225 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:04:46 compute-0 nova_compute[253461]: 2025-11-22 04:04:46.226 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:46 compute-0 sshd-session[287633]: Connection closed by invalid user admin 27.79.43.64 port 53684 [preauth]
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 482 KiB/s rd, 18 MiB/s wr, 186 op/s
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0351989829556554 of space, bias 1.0, pg target 10.55969488669662 quantized to 32 (current 32)
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0021489843800872564 of space, bias 1.0, pg target 0.6232054702253044 quantized to 32 (current 32)
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.19319111398710687 quantized to 32 (current 32)
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0005901217685745913 quantized to 16 (current 16)
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.376522107182392e-05 quantized to 32 (current 32)
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006270043791105033 quantized to 32 (current 32)
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:04:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
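
Each pg_autoscaler line above follows the same sizing rule: a pool's raw PG target is its share of cluster capacity times its bias times a cluster-wide PG budget, and the result is then quantized (the pools here all stay at their current pg_num, which suggests the computed targets are too small to trigger a change). Reading the budget off the 'volumes' line: 10.5597 / 0.0352 = 300, consistent with a default of 100 PGs per OSD on this 3-OSD cluster; that default is an assumption, the ratio is taken from the log:

    # Reproduce the 'volumes' raw PG target from the autoscaler line above.
    capacity_ratio = 0.0351989829556554   # "using ... of space" from the log
    bias = 1.0
    pg_budget = 100 * 3                   # assumed: mon_target_pg_per_osd * OSDs
    print(capacity_ratio * bias * pg_budget)   # ~10.56, quantized to 32
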
Nov 22 04:04:46 compute-0 nova_compute[253461]: 2025-11-22 04:04:46.538 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:04:46 compute-0 nova_compute[253461]: 2025-11-22 04:04:46.539 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:04:46 compute-0 nova_compute[253461]: 2025-11-22 04:04:46.539 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:04:46 compute-0 nova_compute[253461]: 2025-11-22 04:04:46.539 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:04:46 compute-0 nova_compute[253461]: 2025-11-22 04:04:46.539 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:04:46 compute-0 nova_compute[253461]: 2025-11-22 04:04:46.540 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:04:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2806794727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:46 compute-0 ceph-mon[75011]: pgmap v1586: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 482 KiB/s rd, 18 MiB/s wr, 186 op/s
Nov 22 04:04:47 compute-0 nova_compute[253461]: 2025-11-22 04:04:47.881 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:47 compute-0 nova_compute[253461]: 2025-11-22 04:04:47.884 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 508 KiB/s rd, 36 MiB/s wr, 189 op/s
Nov 22 04:04:48 compute-0 ceph-mon[75011]: pgmap v1587: 305 pgs: 305 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 508 KiB/s rd, 36 MiB/s wr, 189 op/s
Nov 22 04:04:49 compute-0 nova_compute[253461]: 2025-11-22 04:04:49.425 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:04:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 2.9 GiB data, 3.1 GiB used, 57 GiB / 60 GiB avail; 501 KiB/s rd, 67 MiB/s wr, 262 op/s
Nov 22 04:04:50 compute-0 ceph-mon[75011]: pgmap v1588: 305 pgs: 305 active+clean; 2.9 GiB data, 3.1 GiB used, 57 GiB / 60 GiB avail; 501 KiB/s rd, 67 MiB/s wr, 262 op/s
Nov 22 04:04:50 compute-0 nova_compute[253461]: 2025-11-22 04:04:50.574 253465 DEBUG oslo_concurrency.lockutils [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "0a4b1bbf-edde-478c-91f0-40e5825475fd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:50 compute-0 nova_compute[253461]: 2025-11-22 04:04:50.574 253465 DEBUG oslo_concurrency.lockutils [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:50 compute-0 nova_compute[253461]: 2025-11-22 04:04:50.575 253465 DEBUG oslo_concurrency.lockutils [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:50 compute-0 nova_compute[253461]: 2025-11-22 04:04:50.575 253465 DEBUG oslo_concurrency.lockutils [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:50 compute-0 nova_compute[253461]: 2025-11-22 04:04:50.575 253465 DEBUG oslo_concurrency.lockutils [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:50 compute-0 nova_compute[253461]: 2025-11-22 04:04:50.576 253465 INFO nova.compute.manager [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Terminating instance
Nov 22 04:04:50 compute-0 nova_compute[253461]: 2025-11-22 04:04:50.577 253465 DEBUG nova.compute.manager [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:04:50 compute-0 kernel: tap49fa071c-33 (unregistering): left promiscuous mode
Nov 22 04:04:50 compute-0 NetworkManager[48916]: <info>  [1763784290.8909] device (tap49fa071c-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:04:50 compute-0 ovn_controller[152691]: 2025-11-22T04:04:50Z|00228|binding|INFO|Releasing lport 49fa071c-33be-4850-a57f-35030627daa3 from this chassis (sb_readonly=0)
Nov 22 04:04:50 compute-0 ovn_controller[152691]: 2025-11-22T04:04:50Z|00229|binding|INFO|Setting lport 49fa071c-33be-4850-a57f-35030627daa3 down in Southbound
Nov 22 04:04:50 compute-0 nova_compute[253461]: 2025-11-22 04:04:50.899 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:50 compute-0 ovn_controller[152691]: 2025-11-22T04:04:50Z|00230|binding|INFO|Removing iface tap49fa071c-33 ovn-installed in OVS
Nov 22 04:04:50 compute-0 nova_compute[253461]: 2025-11-22 04:04:50.901 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:50 compute-0 nova_compute[253461]: 2025-11-22 04:04:50.904 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:50.921 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:24:58 10.100.0.9'], port_security=['fa:16:3e:a1:24:58 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '0a4b1bbf-edde-478c-91f0-40e5825475fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '374de8a5-1500-46fb-adf8-2bb87fa0ef15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=49fa071c-33be-4850-a57f-35030627daa3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
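
The constructor repr in the matched-event line above shows ovsdbapp's row-event shape directly: events=('update',), table='Port_Binding', conditions=None. A minimal sketch of such a subscription (the run() body is illustrative, not the agent's actual handler):

    # Minimal ovsdbapp row event matching the repr logged above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Watch UPDATEs on the southbound Port_Binding table.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Called with the new row plus the old column values; the
            # agent uses these to detect ports leaving this chassis.
            print('port binding changed:', row.logical_port)
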
Nov 22 04:04:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:50.924 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 49fa071c-33be-4850-a57f-35030627daa3 in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 unbound from our chassis
Nov 22 04:04:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:50.926 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4670b112-9f63-4a03-8d79-91f581c69c03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:04:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:50.928 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[13b8e16f-ec83-40f4-8de9-953eff62100b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:50.929 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 namespace which is not needed anymore
Nov 22 04:04:50 compute-0 nova_compute[253461]: 2025-11-22 04:04:50.942 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:50 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Nov 22 04:04:50 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 13.529s CPU time.
Nov 22 04:04:50 compute-0 systemd-machined[215728]: Machine qemu-21-instance-00000015 terminated.
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.016 253465 INFO nova.virt.libvirt.driver [-] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Instance destroyed successfully.
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.017 253465 DEBUG nova.objects.instance [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'resources' on Instance uuid 0a4b1bbf-edde-478c-91f0-40e5825475fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.037 253465 DEBUG nova.virt.libvirt.vif [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:04:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1435026467',display_name='tempest-TestVolumeBootPattern-server-1435026467',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1435026467',id=21,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL00ntGaoBOOeBs6+y4FUIy/lgKnZ24cCu86O0xSJiDYa9NepVO6DpAMiaAdnoZSl8JwTuHPIlPQIHrkP9B6Kyjt/oOfo9cDi3Gw7Ruq0v506sUUdjxtfkDfzDyLVnMg5A==',key_name='tempest-TestVolumeBootPattern-21755227',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:04:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-i3dcxenv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:04:30Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=0a4b1bbf-edde-478c-91f0-40e5825475fd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "49fa071c-33be-4850-a57f-35030627daa3", "address": "fa:16:3e:a1:24:58", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49fa071c-33", "ovs_interfaceid": "49fa071c-33be-4850-a57f-35030627daa3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.039 253465 DEBUG nova.network.os_vif_util [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "49fa071c-33be-4850-a57f-35030627daa3", "address": "fa:16:3e:a1:24:58", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49fa071c-33", "ovs_interfaceid": "49fa071c-33be-4850-a57f-35030627daa3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.040 253465 DEBUG nova.network.os_vif_util [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a1:24:58,bridge_name='br-int',has_traffic_filtering=True,id=49fa071c-33be-4850-a57f-35030627daa3,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49fa071c-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.041 253465 DEBUG os_vif [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:24:58,bridge_name='br-int',has_traffic_filtering=True,id=49fa071c-33be-4850-a57f-35030627daa3,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49fa071c-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.043 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.043 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49fa071c-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.049 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.054 253465 INFO os_vif [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a1:24:58,bridge_name='br-int',has_traffic_filtering=True,id=49fa071c-33be-4850-a57f-35030627daa3,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49fa071c-33')
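
Unplugging the VIF reduces to the single idempotent OVSDB operation in the transaction above: delete tap49fa071c-33 from br-int if it exists. The command-line equivalent, wrapped in Python to keep one language throughout:

    # CLI equivalent of DelPortCommand(port=..., bridge=br-int, if_exists=True).
    import subprocess

    subprocess.check_call([
        'ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tap49fa071c-33',
    ])
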
Nov 22 04:04:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e391 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:51 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[287470]: [NOTICE]   (287474) : haproxy version is 2.8.14-c23fe91
Nov 22 04:04:51 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[287470]: [NOTICE]   (287474) : path to executable is /usr/sbin/haproxy
Nov 22 04:04:51 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[287470]: [WARNING]  (287474) : Exiting Master process...
Nov 22 04:04:51 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[287470]: [ALERT]    (287474) : Current worker (287476) exited with code 143 (Terminated)
Nov 22 04:04:51 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[287470]: [WARNING]  (287474) : All workers exited. Exiting... (0)
Nov 22 04:04:51 compute-0 systemd[1]: libpod-6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203.scope: Deactivated successfully.
Nov 22 04:04:51 compute-0 podman[287691]: 2025-11-22 04:04:51.172547616 +0000 UTC m=+0.121131887 container died 6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-bba8c504d4e77b41e0f45e2edaf538b7f391e16fff53a1547e82600d332186ad-merged.mount: Deactivated successfully.
Nov 22 04:04:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203-userdata-shm.mount: Deactivated successfully.
Nov 22 04:04:51 compute-0 podman[287691]: 2025-11-22 04:04:51.397380987 +0000 UTC m=+0.345965268 container cleanup 6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:04:51 compute-0 systemd[1]: libpod-conmon-6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203.scope: Deactivated successfully.
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.670 253465 DEBUG nova.compute.manager [req-dc985611-71d2-4d2e-a975-8dcb591b9139 req-1af7ed27-ca64-427f-b446-d2df3cc94412 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Received event network-vif-unplugged-49fa071c-33be-4850-a57f-35030627daa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.671 253465 DEBUG oslo_concurrency.lockutils [req-dc985611-71d2-4d2e-a975-8dcb591b9139 req-1af7ed27-ca64-427f-b446-d2df3cc94412 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.672 253465 DEBUG oslo_concurrency.lockutils [req-dc985611-71d2-4d2e-a975-8dcb591b9139 req-1af7ed27-ca64-427f-b446-d2df3cc94412 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.672 253465 DEBUG oslo_concurrency.lockutils [req-dc985611-71d2-4d2e-a975-8dcb591b9139 req-1af7ed27-ca64-427f-b446-d2df3cc94412 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.672 253465 DEBUG nova.compute.manager [req-dc985611-71d2-4d2e-a975-8dcb591b9139 req-1af7ed27-ca64-427f-b446-d2df3cc94412 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] No waiting events found dispatching network-vif-unplugged-49fa071c-33be-4850-a57f-35030627daa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.672 253465 DEBUG nova.compute.manager [req-dc985611-71d2-4d2e-a975-8dcb591b9139 req-1af7ed27-ca64-427f-b446-d2df3cc94412 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Received event network-vif-unplugged-49fa071c-33be-4850-a57f-35030627daa3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:04:51 compute-0 podman[287738]: 2025-11-22 04:04:51.673275792 +0000 UTC m=+0.239801548 container remove 6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:04:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:51.680 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[83d76905-f368-435c-854e-ffec868bdd8f]: (4, ('Sat Nov 22 04:04:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 (6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203)\n6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203\nSat Nov 22 04:04:51 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 (6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203)\n6ed337c9c54794f1552687aadb47fcfe9c9018ce6803599967c227abdb4d7203\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:51.682 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[7d63cb0b-238a-4d3d-9cf9-eb0871e94b7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:51.683 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.685 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:51 compute-0 kernel: tap4670b112-90: left promiscuous mode
Nov 22 04:04:51 compute-0 nova_compute[253461]: 2025-11-22 04:04:51.699 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:51.702 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[60eda4b2-440f-4219-8f69-0d44db1aeb36]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:51.715 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0d6b26-57a4-4c70-ad8f-da376318af1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:51.718 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[dba19c34-34d6-49f3-9f0c-01c078d41633]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:51.740 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[a27e0263-2a39-4678-956a-01486bfb4b2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451991, 'reachable_time': 42252, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287754, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:51.743 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:04:51 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:51.743 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[2bee7c82-eeab-47bb-b412-2887729953de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d4670b112\x2d9f63\x2d4a03\x2d8d79\x2d91f581c69c03.mount: Deactivated successfully.
Nov 22 04:04:52 compute-0 nova_compute[253461]: 2025-11-22 04:04:52.211 253465 INFO nova.virt.libvirt.driver [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Deleting instance files /var/lib/nova/instances/0a4b1bbf-edde-478c-91f0-40e5825475fd_del
Nov 22 04:04:52 compute-0 nova_compute[253461]: 2025-11-22 04:04:52.212 253465 INFO nova.virt.libvirt.driver [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Deletion of /var/lib/nova/instances/0a4b1bbf-edde-478c-91f0-40e5825475fd_del complete
Nov 22 04:04:52 compute-0 nova_compute[253461]: 2025-11-22 04:04:52.257 253465 INFO nova.compute.manager [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Took 1.68 seconds to destroy the instance on the hypervisor.
Nov 22 04:04:52 compute-0 nova_compute[253461]: 2025-11-22 04:04:52.257 253465 DEBUG oslo.service.loopingcall [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:04:52 compute-0 nova_compute[253461]: 2025-11-22 04:04:52.258 253465 DEBUG nova.compute.manager [-] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:04:52 compute-0 nova_compute[253461]: 2025-11-22 04:04:52.258 253465 DEBUG nova.network.neutron [-] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:04:52 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:52.398 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:04:52 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:52.399 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:04:52 compute-0 nova_compute[253461]: 2025-11-22 04:04:52.399 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:52 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:04:52.400 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:04:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 3.1 GiB data, 3.3 GiB used, 57 GiB / 60 GiB avail; 445 KiB/s rd, 76 MiB/s wr, 238 op/s
Nov 22 04:04:52 compute-0 ceph-mon[75011]: pgmap v1589: 305 pgs: 305 active+clean; 3.1 GiB data, 3.3 GiB used, 57 GiB / 60 GiB avail; 445 KiB/s rd, 76 MiB/s wr, 238 op/s
Nov 22 04:04:52 compute-0 nova_compute[253461]: 2025-11-22 04:04:52.885 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:52 compute-0 nova_compute[253461]: 2025-11-22 04:04:52.968 253465 DEBUG nova.network.neutron [-] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:04:52 compute-0 nova_compute[253461]: 2025-11-22 04:04:52.987 253465 INFO nova.compute.manager [-] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Took 0.73 seconds to deallocate network for instance.
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.038 253465 DEBUG nova.compute.manager [req-e2078e1d-07be-48ac-af3c-36d755113f32 req-b0aad609-178f-48e3-976a-3fd53da099ff f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Received event network-vif-deleted-49fa071c-33be-4850-a57f-35030627daa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.160 253465 INFO nova.compute.manager [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Took 0.17 seconds to detach 1 volumes for instance.
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.230 253465 DEBUG oslo_concurrency.lockutils [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.231 253465 DEBUG oslo_concurrency.lockutils [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.299 253465 DEBUG oslo_concurrency.processutils [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:04:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/107650102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.762 253465 DEBUG oslo_concurrency.processutils [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.769 253465 DEBUG nova.compute.manager [req-3e61b912-bafa-40c1-baef-5ae093f02670 req-e9aa0006-cf68-4a1d-aa56-779b8521d3c8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Received event network-vif-plugged-49fa071c-33be-4850-a57f-35030627daa3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.769 253465 DEBUG oslo_concurrency.lockutils [req-3e61b912-bafa-40c1-baef-5ae093f02670 req-e9aa0006-cf68-4a1d-aa56-779b8521d3c8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.769 253465 DEBUG oslo_concurrency.lockutils [req-3e61b912-bafa-40c1-baef-5ae093f02670 req-e9aa0006-cf68-4a1d-aa56-779b8521d3c8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.770 253465 DEBUG oslo_concurrency.lockutils [req-3e61b912-bafa-40c1-baef-5ae093f02670 req-e9aa0006-cf68-4a1d-aa56-779b8521d3c8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.770 253465 DEBUG nova.compute.manager [req-3e61b912-bafa-40c1-baef-5ae093f02670 req-e9aa0006-cf68-4a1d-aa56-779b8521d3c8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] No waiting events found dispatching network-vif-plugged-49fa071c-33be-4850-a57f-35030627daa3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.770 253465 WARNING nova.compute.manager [req-3e61b912-bafa-40c1-baef-5ae093f02670 req-e9aa0006-cf68-4a1d-aa56-779b8521d3c8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Received unexpected event network-vif-plugged-49fa071c-33be-4850-a57f-35030627daa3 for instance with vm_state deleted and task_state None.
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.772 253465 DEBUG nova.compute.provider_tree [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.795 253465 DEBUG nova.scheduler.client.report [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:04:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/107650102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.824 253465 DEBUG oslo_concurrency.lockutils [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.864 253465 INFO nova.scheduler.client.report [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Deleted allocations for instance 0a4b1bbf-edde-478c-91f0-40e5825475fd
Nov 22 04:04:53 compute-0 nova_compute[253461]: 2025-11-22 04:04:53.950 253465 DEBUG oslo_concurrency.lockutils [None req-5a14a06b-a18a-4828-b03d-0638b608c35f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "0a4b1bbf-edde-478c-91f0-40e5825475fd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.376s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 3.2 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 368 KiB/s rd, 81 MiB/s wr, 270 op/s
Nov 22 04:04:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Nov 22 04:04:54 compute-0 ceph-mon[75011]: pgmap v1590: 305 pgs: 305 active+clean; 3.2 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 368 KiB/s rd, 81 MiB/s wr, 270 op/s
Nov 22 04:04:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Nov 22 04:04:54 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Nov 22 04:04:55 compute-0 ceph-mon[75011]: osdmap e392: 3 total, 3 up, 3 in
Nov 22 04:04:56 compute-0 nova_compute[253461]: 2025-11-22 04:04:56.046 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e392 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:04:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:04:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2721973897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:04:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2721973897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:56 compute-0 podman[287778]: 2025-11-22 04:04:56.416098055 +0000 UTC m=+0.091046812 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:04:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 2.9 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 196 KiB/s rd, 90 MiB/s wr, 250 op/s
Nov 22 04:04:56 compute-0 podman[287779]: 2025-11-22 04:04:56.46889231 +0000 UTC m=+0.138553248 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:04:56 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2721973897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:56 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2721973897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:56 compute-0 ceph-mon[75011]: pgmap v1592: 305 pgs: 305 active+clean; 2.9 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 196 KiB/s rd, 90 MiB/s wr, 250 op/s
Nov 22 04:04:56 compute-0 nova_compute[253461]: 2025-11-22 04:04:56.895 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:56 compute-0 nova_compute[253461]: 2025-11-22 04:04:56.896 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:56 compute-0 nova_compute[253461]: 2025-11-22 04:04:56.926 253465 DEBUG nova.compute.manager [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.040 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.040 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.049 253465 DEBUG nova.virt.hardware [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.049 253465 INFO nova.compute.claims [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.171 253465 DEBUG oslo_concurrency.processutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:04:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1191982514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.656 253465 DEBUG oslo_concurrency.processutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.663 253465 DEBUG nova.compute.provider_tree [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.679 253465 DEBUG nova.scheduler.client.report [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.714 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.715 253465 DEBUG nova.compute.manager [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.762 253465 DEBUG nova.compute.manager [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.763 253465 DEBUG nova.network.neutron [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.786 253465 INFO nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.807 253465 DEBUG nova.compute.manager [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:04:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Nov 22 04:04:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1191982514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:04:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Nov 22 04:04:57 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.856 253465 INFO nova.virt.block_device [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Booting with volume 48cf9701-6d3b-4714-8ead-2dd4dcc19ce1 at /dev/vda
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.912 253465 DEBUG nova.policy [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '45ccef35c0c843a59c9dfd0eb67190a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '83cc5de7368b40b984b51f781e85343c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.921 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.988 253465 DEBUG os_brick.utils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.989 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.998 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.998 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[4f4f90cd-b549-4fda-b5f2-373adf5b5b3d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:57 compute-0 nova_compute[253461]: 2025-11-22 04:04:57.999 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.008 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.008 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa9acb5-bf44-432d-9239-ca13f963b75e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.009 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.018 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.019 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[ad1eacaf-c447-4322-ba23-9ee324ed4787]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.020 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d2221e-97be-403d-830b-a7e1c2eb43a9]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.020 253465 DEBUG oslo_concurrency.processutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.039 253465 DEBUG oslo_concurrency.processutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.048 253465 DEBUG os_brick.initiator.connectors.lightos [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.049 253465 DEBUG os_brick.initiator.connectors.lightos [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.049 253465 DEBUG os_brick.initiator.connectors.lightos [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.050 253465 DEBUG os_brick.utils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.051 253465 DEBUG nova.virt.block_device [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Updating existing volume attachment record: f9b9b6a2-c8ab-4adf-a73d-4678daeac383 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:04:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 2.6 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 111 KiB/s rd, 47 MiB/s wr, 195 op/s
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.446 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 04:04:58 compute-0 nova_compute[253461]: 2025-11-22 04:04:58.561 253465 DEBUG nova.network.neutron [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Successfully created port: e0a979c4-306d-47e7-a853-95a815ae464f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:04:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:04:58 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3437588662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:04:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Nov 22 04:04:58 compute-0 ceph-mon[75011]: osdmap e393: 3 total, 3 up, 3 in
Nov 22 04:04:58 compute-0 ceph-mon[75011]: pgmap v1594: 305 pgs: 305 active+clean; 2.6 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 111 KiB/s rd, 47 MiB/s wr, 195 op/s
Nov 22 04:04:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3437588662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:04:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Nov 22 04:04:58 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.075 253465 DEBUG nova.compute.manager [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.077 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.078 253465 INFO nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Creating image(s)
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.078 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.078 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Ensure instance console log exists: /var/lib/nova/instances/ab8b13c6-9785-42c2-a54c-61aa3a7ae664/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.079 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.079 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.080 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.381 253465 DEBUG nova.network.neutron [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Successfully updated port: e0a979c4-306d-47e7-a853-95a815ae464f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.420 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.421 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquired lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.421 253465 DEBUG nova.network.neutron [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.466 253465 DEBUG nova.compute.manager [req-e5bc271a-4ffd-4d2b-b851-5701b3eab95c req-37fcbdf9-c096-4649-a01c-71afc3bfb27a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Received event network-changed-e0a979c4-306d-47e7-a853-95a815ae464f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.466 253465 DEBUG nova.compute.manager [req-e5bc271a-4ffd-4d2b-b851-5701b3eab95c req-37fcbdf9-c096-4649-a01c-71afc3bfb27a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Refreshing instance network info cache due to event network-changed-e0a979c4-306d-47e7-a853-95a815ae464f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.466 253465 DEBUG oslo_concurrency.lockutils [req-e5bc271a-4ffd-4d2b-b851-5701b3eab95c req-37fcbdf9-c096-4649-a01c-71afc3bfb27a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:04:59 compute-0 nova_compute[253461]: 2025-11-22 04:04:59.553 253465 DEBUG nova.network.neutron [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:04:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:04:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/495013989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:04:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/495013989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:04:59 compute-0 ceph-mon[75011]: osdmap e394: 3 total, 3 up, 3 in
Nov 22 04:04:59 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/495013989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:04:59 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/495013989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.132 253465 DEBUG nova.network.neutron [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Updating instance_info_cache with network_info: [{"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.147 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Releasing lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.148 253465 DEBUG nova.compute.manager [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Instance network_info: |[{"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.148 253465 DEBUG oslo_concurrency.lockutils [req-e5bc271a-4ffd-4d2b-b851-5701b3eab95c req-37fcbdf9-c096-4649-a01c-71afc3bfb27a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.149 253465 DEBUG nova.network.neutron [req-e5bc271a-4ffd-4d2b-b851-5701b3eab95c req-37fcbdf9-c096-4649-a01c-71afc3bfb27a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Refreshing network info cache for port e0a979c4-306d-47e7-a853-95a815ae464f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.152 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Start _get_guest_xml network_info=[{"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': 'f9b9b6a2-c8ab-4adf-a73d-4678daeac383', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-48cf9701-6d3b-4714-8ead-2dd4dcc19ce1', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '48cf9701-6d3b-4714-8ead-2dd4dcc19ce1', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'ab8b13c6-9785-42c2-a54c-61aa3a7ae664', 'attached_at': '', 'detached_at': '', 'volume_id': '48cf9701-6d3b-4714-8ead-2dd4dcc19ce1', 'serial': '48cf9701-6d3b-4714-8ead-2dd4dcc19ce1'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.157 253465 WARNING nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.164 253465 DEBUG nova.virt.libvirt.host [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.164 253465 DEBUG nova.virt.libvirt.host [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.173 253465 DEBUG nova.virt.libvirt.host [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.173 253465 DEBUG nova.virt.libvirt.host [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.174 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.174 253465 DEBUG nova.virt.hardware [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.174 253465 DEBUG nova.virt.hardware [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.174 253465 DEBUG nova.virt.hardware [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.175 253465 DEBUG nova.virt.hardware [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.175 253465 DEBUG nova.virt.hardware [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.175 253465 DEBUG nova.virt.hardware [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.175 253465 DEBUG nova.virt.hardware [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.175 253465 DEBUG nova.virt.hardware [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.176 253465 DEBUG nova.virt.hardware [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.176 253465 DEBUG nova.virt.hardware [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.176 253465 DEBUG nova.virt.hardware [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
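[Editor's note] The hardware.py lines above trace Nova's CPU-topology selection: with no flavor or image constraints (limits and preferences all 0:0:0), each dimension defaults to a cap of 65536, and the only factorization of 1 vCPU is sockets=1, cores=1, threads=1. A minimal sketch of that enumeration, assuming the same per-dimension caps (an illustration, not Nova's actual _get_possible_cpu_topologies):

```python
# Sketch: list every (sockets, cores, threads) triple whose product equals
# the flavor's vCPU count, within the per-dimension caps seen in the log.
from collections import namedtuple

VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
    found = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        for cores in range(1, min(vcpus // sockets, max_cores) + 1):
            if (vcpus // sockets) % cores:
                continue
            threads = vcpus // (sockets * cores)
            if threads <= max_threads:
                found.append(VirtCPUTopology(sockets, cores, threads))
    return found

# The 1-vCPU m1.nano flavor yields exactly one result, matching the
# "Got 1 possible topologies" line above:
# [VirtCPUTopology(sockets=1, cores=1, threads=1)]
print(possible_topologies(1))
```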
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.196 253465 DEBUG nova.storage.rbd_utils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image ab8b13c6-9785-42c2-a54c-61aa3a7ae664_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.199 253465 DEBUG oslo_concurrency.processutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:05:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1838971941' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:05:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1838971941' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 4 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 289 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 103 KiB/s rd, 11 MiB/s wr, 181 op/s
Nov 22 04:05:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:05:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2864224433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.644 253465 DEBUG oslo_concurrency.processutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
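[Editor's note] The processutils pair above shows the RBD driver shelling out to `ceph mon dump --format=json` (0.445s) to discover monitor addresses before building the disk XML. A hedged equivalent of that call, assuming the client id (`openstack`) and conf path from the log, with the ceph CLI and a matching keyring present on the host:

```python
# Run "ceph mon dump --format=json" as the logged command does and parse the
# monitor list. The JSON document arrives on stdout; ceph prints its
# "dumped monmap epoch N" banner on stderr.
import json
import subprocess

def monitor_addresses(client_id="openstack", conf="/etc/ceph/ceph.conf"):
    result = subprocess.run(
        ["ceph", "mon", "dump", "--format=json", "--id", client_id, "--conf", conf],
        check=True, capture_output=True, text=True,
    )
    dump = json.loads(result.stdout)
    return [mon["addr"] for mon in dump.get("mons", [])]
```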
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.719 253465 DEBUG nova.virt.libvirt.vif [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:04:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-616121886',display_name='tempest-TestVolumeBootPattern-server-616121886',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-616121886',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL00ntGaoBOOeBs6+y4FUIy/lgKnZ24cCu86O0xSJiDYa9NepVO6DpAMiaAdnoZSl8JwTuHPIlPQIHrkP9B6Kyjt/oOfo9cDi3Gw7Ruq0v506sUUdjxtfkDfzDyLVnMg5A==',key_name='tempest-TestVolumeBootPattern-21755227',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-anmeurag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:04:57Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=ab8b13c6-9785-42c2-a54c-61aa3a7ae664,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.719 253465 DEBUG nova.network.os_vif_util [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.720 253465 DEBUG nova.network.os_vif_util [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:00:7d,bridge_name='br-int',has_traffic_filtering=True,id=e0a979c4-306d-47e7-a853-95a815ae464f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0a979c4-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.721 253465 DEBUG nova.objects.instance [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'pci_devices' on Instance uuid ab8b13c6-9785-42c2-a54c-61aa3a7ae664 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.736 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:05:00 compute-0 nova_compute[253461]:   <uuid>ab8b13c6-9785-42c2-a54c-61aa3a7ae664</uuid>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   <name>instance-00000016</name>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <nova:name>tempest-TestVolumeBootPattern-server-616121886</nova:name>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:05:00</nova:creationTime>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:05:00 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:05:00 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:05:00 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:05:00 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:05:00 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:05:00 compute-0 nova_compute[253461]:         <nova:user uuid="45ccef35c0c843a59c9dfd0eb67190a6">tempest-TestVolumeBootPattern-1584219565-project-member</nova:user>
Nov 22 04:05:00 compute-0 nova_compute[253461]:         <nova:project uuid="83cc5de7368b40b984b51f781e85343c">tempest-TestVolumeBootPattern-1584219565</nova:project>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:05:00 compute-0 nova_compute[253461]:         <nova:port uuid="e0a979c4-306d-47e7-a853-95a815ae464f">
Nov 22 04:05:00 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <system>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <entry name="serial">ab8b13c6-9785-42c2-a54c-61aa3a7ae664</entry>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <entry name="uuid">ab8b13c6-9785-42c2-a54c-61aa3a7ae664</entry>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     </system>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   <os>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   </os>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   <features>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   </features>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/ab8b13c6-9785-42c2-a54c-61aa3a7ae664_disk.config">
Nov 22 04:05:00 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       </source>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:05:00 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-48cf9701-6d3b-4714-8ead-2dd4dcc19ce1">
Nov 22 04:05:00 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       </source>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:05:00 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <serial>48cf9701-6d3b-4714-8ead-2dd4dcc19ce1</serial>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:b7:00:7d"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <target dev="tape0a979c4-30"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/ab8b13c6-9785-42c2-a54c-61aa3a7ae664/console.log" append="off"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <video>
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     </video>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:05:00 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:05:00 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:05:00 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:05:00 compute-0 nova_compute[253461]: </domain>
Nov 22 04:05:00 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
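[Editor's note] The domain XML logged by _get_guest_xml above is plain libvirt XML, so it can be summarized with the standard library. An illustrative sketch, assuming `domain_xml` holds the `<domain>...</domain>` text exactly as logged:

```python
# Pull the RBD disk sources and the interface MAC out of the logged domain.
import xml.etree.ElementTree as ET

def summarize_domain(domain_xml):
    root = ET.fromstring(domain_xml)
    disks = [
        (disk.get("device"), disk.find("source").get("name"))
        for disk in root.findall("./devices/disk")
        if disk.find("source") is not None
    ]
    macs = [iface.find("mac").get("address")
            for iface in root.findall("./devices/interface")]
    return disks, macs

# For the guest above this returns the two RBD sources (the config-drive
# cdrom in the "vms" pool and volume-48cf9701-... in "volumes" on vda)
# plus the single MAC fa:16:3e:b7:00:7d.
```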
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.738 253465 DEBUG nova.compute.manager [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Preparing to wait for external event network-vif-plugged-e0a979c4-306d-47e7-a853-95a815ae464f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.738 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.739 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.739 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.740 253465 DEBUG nova.virt.libvirt.vif [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:04:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-616121886',display_name='tempest-TestVolumeBootPattern-server-616121886',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-616121886',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL00ntGaoBOOeBs6+y4FUIy/lgKnZ24cCu86O0xSJiDYa9NepVO6DpAMiaAdnoZSl8JwTuHPIlPQIHrkP9B6Kyjt/oOfo9cDi3Gw7Ruq0v506sUUdjxtfkDfzDyLVnMg5A==',key_name='tempest-TestVolumeBootPattern-21755227',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-anmeurag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:04:57Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=ab8b13c6-9785-42c2-a54c-61aa3a7ae664,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.740 253465 DEBUG nova.network.os_vif_util [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.741 253465 DEBUG nova.network.os_vif_util [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:00:7d,bridge_name='br-int',has_traffic_filtering=True,id=e0a979c4-306d-47e7-a853-95a815ae464f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0a979c4-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.741 253465 DEBUG os_vif [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:00:7d,bridge_name='br-int',has_traffic_filtering=True,id=e0a979c4-306d-47e7-a853-95a815ae464f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0a979c4-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.742 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.742 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.742 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.746 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.746 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0a979c4-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.747 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape0a979c4-30, col_values=(('external_ids', {'iface-id': 'e0a979c4-306d-47e7-a853-95a815ae464f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:00:7d', 'vm-uuid': 'ab8b13c6-9785-42c2-a54c-61aa3a7ae664'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.748 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:00 compute-0 NetworkManager[48916]: <info>  [1763784300.7503] manager: (tape0a979c4-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/117)
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.751 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.757 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.759 253465 INFO os_vif [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:00:7d,bridge_name='br-int',has_traffic_filtering=True,id=e0a979c4-306d-47e7-a853-95a815ae464f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0a979c4-30')
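[Editor's note] The two ovsdbapp transactions above (AddBridgeCommand, then AddPortCommand plus DbSetCommand on the Interface row) are what os-vif commits over the OVSDB IDL before logging "Successfully plugged vif". A rough CLI-level equivalent via ovs-vsctl, sketched with values from the log (not os-vif's actual code path):

```python
# Ensure br-int exists with a system datapath, add the tap port, and set the
# external_ids that let ovn-controller match the logical port to this
# interface (the "Claiming lport" lines that follow in the log).
import subprocess

def plug_vif(bridge, dev, iface_id, mac, vm_uuid):
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-br", bridge,
         "--", "set", "Bridge", bridge, "datapath_type=system"],
        check=True)
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", bridge, dev,
         "--", "set", "Interface", dev,
         "external_ids:iface-id=" + iface_id,
         "external_ids:iface-status=active",
         "external_ids:attached-mac=" + mac,
         "external_ids:vm-uuid=" + vm_uuid],
        check=True)

# Values from the log:
# plug_vif("br-int", "tape0a979c4-30",
#          "e0a979c4-306d-47e7-a853-95a815ae464f",
#          "fa:16:3e:b7:00:7d", "ab8b13c6-9785-42c2-a54c-61aa3a7ae664")
```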
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.843 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.844 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.844 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No VIF found with MAC fa:16:3e:b7:00:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.845 253465 INFO nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Using config drive
Nov 22 04:05:00 compute-0 nova_compute[253461]: 2025-11-22 04:05:00.877 253465 DEBUG nova.storage.rbd_utils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image ab8b13c6-9785-42c2-a54c-61aa3a7ae664_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1838971941' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1838971941' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:00 compute-0 ceph-mon[75011]: pgmap v1596: 305 pgs: 4 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 289 active+clean; 2.2 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 103 KiB/s rd, 11 MiB/s wr, 181 op/s
Nov 22 04:05:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2864224433' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:05:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Nov 22 04:05:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Nov 22 04:05:01 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Nov 22 04:05:01 compute-0 nova_compute[253461]: 2025-11-22 04:05:01.507 253465 INFO nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Creating config drive at /var/lib/nova/instances/ab8b13c6-9785-42c2-a54c-61aa3a7ae664/disk.config
Nov 22 04:05:01 compute-0 nova_compute[253461]: 2025-11-22 04:05:01.513 253465 DEBUG oslo_concurrency.processutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ab8b13c6-9785-42c2-a54c-61aa3a7ae664/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpignate4e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:01 compute-0 nova_compute[253461]: 2025-11-22 04:05:01.649 253465 DEBUG oslo_concurrency.processutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ab8b13c6-9785-42c2-a54c-61aa3a7ae664/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpignate4e" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:01 compute-0 nova_compute[253461]: 2025-11-22 04:05:01.687 253465 DEBUG nova.storage.rbd_utils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image ab8b13c6-9785-42c2-a54c-61aa3a7ae664_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:01 compute-0 nova_compute[253461]: 2025-11-22 04:05:01.692 253465 DEBUG oslo_concurrency.processutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ab8b13c6-9785-42c2-a54c-61aa3a7ae664/disk.config ab8b13c6-9785-42c2-a54c-61aa3a7ae664_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:01 compute-0 nova_compute[253461]: 2025-11-22 04:05:01.855 253465 DEBUG oslo_concurrency.processutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ab8b13c6-9785-42c2-a54c-61aa3a7ae664/disk.config ab8b13c6-9785-42c2-a54c-61aa3a7ae664_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:01 compute-0 nova_compute[253461]: 2025-11-22 04:05:01.857 253465 INFO nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Deleting local config drive /var/lib/nova/instances/ab8b13c6-9785-42c2-a54c-61aa3a7ae664/disk.config because it was imported into RBD.
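[Editor's note] The lines above show the config-drive round trip: mkisofs builds an ISO9660 volume labelled config-2 from a temp directory, `rbd import` copies it into the `vms` pool as `<instance-uuid>_disk.config`, and the local file is deleted once the RBD copy is authoritative. A condensed sketch of that flow, with `staging_dir` as a hypothetical stand-in for the temp directory Nova populates (flags and paths taken from the two CMD lines):

```python
import os
import subprocess

def make_config_drive(instance_uuid, staging_dir, pool="vms",
                      client_id="openstack", conf="/etc/ceph/ceph.conf"):
    iso = "/var/lib/nova/instances/%s/disk.config" % instance_uuid
    # Build the config-2 ISO from the staged metadata directory.
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
         "-allow-multidot", "-l", "-quiet", "-J", "-r", "-V", "config-2",
         staging_dir], check=True)
    # Import it into the vms pool, mirroring the logged rbd invocation.
    subprocess.run(
        ["rbd", "import", "--pool", pool, iso,
         "%s_disk.config" % instance_uuid, "--image-format=2",
         "--id", client_id, "--conf", conf], check=True)
    os.unlink(iso)  # local copy removed after import, as the log notes
```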
Nov 22 04:05:01 compute-0 kernel: tape0a979c4-30: entered promiscuous mode
Nov 22 04:05:01 compute-0 NetworkManager[48916]: <info>  [1763784301.9184] manager: (tape0a979c4-30): new Tun device (/org/freedesktop/NetworkManager/Devices/118)
Nov 22 04:05:01 compute-0 ovn_controller[152691]: 2025-11-22T04:05:01Z|00231|binding|INFO|Claiming lport e0a979c4-306d-47e7-a853-95a815ae464f for this chassis.
Nov 22 04:05:01 compute-0 ovn_controller[152691]: 2025-11-22T04:05:01Z|00232|binding|INFO|e0a979c4-306d-47e7-a853-95a815ae464f: Claiming fa:16:3e:b7:00:7d 10.100.0.3
Nov 22 04:05:01 compute-0 nova_compute[253461]: 2025-11-22 04:05:01.921 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:01.937 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:00:7d 10.100.0.3'], port_security=['fa:16:3e:b7:00:7d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ab8b13c6-9785-42c2-a54c-61aa3a7ae664', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '374de8a5-1500-46fb-adf8-2bb87fa0ef15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=e0a979c4-306d-47e7-a853-95a815ae464f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:05:01 compute-0 ovn_controller[152691]: 2025-11-22T04:05:01Z|00233|binding|INFO|Setting lport e0a979c4-306d-47e7-a853-95a815ae464f ovn-installed in OVS
Nov 22 04:05:01 compute-0 ovn_controller[152691]: 2025-11-22T04:05:01Z|00234|binding|INFO|Setting lport e0a979c4-306d-47e7-a853-95a815ae464f up in Southbound
Nov 22 04:05:01 compute-0 nova_compute[253461]: 2025-11-22 04:05:01.939 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:01.940 162689 INFO neutron.agent.ovn.metadata.agent [-] Port e0a979c4-306d-47e7-a853-95a815ae464f in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 bound to our chassis
Nov 22 04:05:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:01.970 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:05:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:01.988 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad7bdee-3f07-4185-949c-9a441467de52]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:01.988 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4670b112-91 in ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:05:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:01.991 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4670b112-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:05:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:01.991 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[0697f82a-576f-48b5-9085-ff8af9f41d6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:01.993 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4f77dd0f-4f12-4392-9734-69f8614df514]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
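[Editor's note] The metadata agent is provisioning the `ovnmeta-<network-id>` namespace here: it creates a VETH pair (tap4670b112-90 left outside, tap4670b112-91 moved inside) and, a few lines further down, plugs the outer end into br-int. A generic iproute2 sketch of that plumbing, assuming the namespace does not yet exist (names from the log; not Neutron's privsep implementation):

```python
import subprocess

def provision_metadata_veth(ns, outer="tap4670b112-90", inner="tap4670b112-91"):
    def run(*cmd):
        subprocess.run(cmd, check=True)
    run("ip", "netns", "add", ns)                                  # create namespace
    run("ip", "link", "add", outer, "type", "veth", "peer", "name", inner)
    run("ip", "link", "set", inner, "netns", ns)                   # move inner end
    run("ip", "link", "set", outer, "up")
    run("ip", "netns", "exec", ns, "ip", "link", "set", inner, "up")

# provision_metadata_veth("ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03")
```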
Nov 22 04:05:01 compute-0 systemd-machined[215728]: New machine qemu-22-instance-00000016.
Nov 22 04:05:01 compute-0 systemd-udevd[287967]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.006 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4f5d03-4101-4808-8cd8-ee086c964701]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:02 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000016.
Nov 22 04:05:02 compute-0 NetworkManager[48916]: <info>  [1763784302.0113] device (tape0a979c4-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:05:02 compute-0 NetworkManager[48916]: <info>  [1763784302.0121] device (tape0a979c4-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.035 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9457c2be-8a18-4daf-869b-93af079812fb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.064 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[73f36ca0-01be-41ed-879b-842fe54f7dc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.069 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c0553a93-a6a5-44a8-9b59-80fa9f4b66de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:02 compute-0 NetworkManager[48916]: <info>  [1763784302.0705] manager: (tap4670b112-90): new Veth device (/org/freedesktop/NetworkManager/Devices/119)
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.111 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[68d30d72-2954-4a45-b9fc-367a70e1e4c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.114 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[893815e6-ff78-4ecd-819e-41662008b5aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:02 compute-0 NetworkManager[48916]: <info>  [1763784302.1392] device (tap4670b112-90): carrier: link connected
Nov 22 04:05:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Nov 22 04:05:02 compute-0 ceph-mon[75011]: osdmap e395: 3 total, 3 up, 3 in
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.147 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[6d588f26-575a-40fa-a21c-d17972d8a031]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.166 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[456bcbb2-6c7f-4551-ad6c-f1ce9cdf31fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455473, 'reachable_time': 38477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287999, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:02 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.187 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3faaaa7e-06ce-4236-b804-c6d5b7316cee]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe58:43a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455473, 'tstamp': 455473}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288000, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.214 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[af3d908f-6a90-41a3-b7e5-c19293cbd56a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455473, 'reachable_time': 38477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288001, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.256 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5cf72220-f82d-4d4c-9e76-11ba8f6c0e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.320 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4f798a07-e626-42d9-8f69-f0ef0465ed68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.321 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.322 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.322 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4670b112-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:05:02 compute-0 NetworkManager[48916]: <info>  [1763784302.3591] manager: (tap4670b112-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Nov 22 04:05:02 compute-0 kernel: tap4670b112-90: entered promiscuous mode
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.358 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.361 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4670b112-90, col_values=(('external_ids', {'iface-id': 'e72a94a7-9aac-4cfd-886c-1e1e93834214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:05:02 compute-0 ovn_controller[152691]: 2025-11-22T04:05:02Z|00235|binding|INFO|Releasing lport e72a94a7-9aac-4cfd-886c-1e1e93834214 from this chassis (sb_readonly=0)
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.363 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.385 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.386 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.387 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[454aff0a-7f28-4e3f-bbbb-ee174a405950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.388 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/4670b112-9f63-4a03-8d79-91f581c69c03.pid.haproxy
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:05:02 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:02.389 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'env', 'PROCESS_TAG=haproxy-4670b112-9f63-4a03-8d79-91f581c69c03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4670b112-9f63-4a03-8d79-91f581c69c03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 04:05:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 4 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 289 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 75 KiB/s rd, 6.6 KiB/s wr, 124 op/s
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.492 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784302.4919777, ab8b13c6-9785-42c2-a54c-61aa3a7ae664 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.492 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] VM Started (Lifecycle Event)
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.519 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.524 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784302.4944518, ab8b13c6-9785-42c2-a54c-61aa3a7ae664 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.524 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] VM Paused (Lifecycle Event)
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.552 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.556 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.589 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.622 253465 DEBUG nova.network.neutron [req-e5bc271a-4ffd-4d2b-b851-5701b3eab95c req-37fcbdf9-c096-4649-a01c-71afc3bfb27a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Updated VIF entry in instance network info cache for port e0a979c4-306d-47e7-a853-95a815ae464f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.623 253465 DEBUG nova.network.neutron [req-e5bc271a-4ffd-4d2b-b851-5701b3eab95c req-37fcbdf9-c096-4649-a01c-71afc3bfb27a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Updating instance_info_cache with network_info: [{"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.642 253465 DEBUG oslo_concurrency.lockutils [req-e5bc271a-4ffd-4d2b-b851-5701b3eab95c req-37fcbdf9-c096-4649-a01c-71afc3bfb27a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.828 253465 DEBUG nova.compute.manager [req-ee35a3a0-c80a-483f-9e37-22aed41a79bf req-dd63dca5-30b0-413f-8975-2f0f26b165b7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Received event network-vif-plugged-e0a979c4-306d-47e7-a853-95a815ae464f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.829 253465 DEBUG oslo_concurrency.lockutils [req-ee35a3a0-c80a-483f-9e37-22aed41a79bf req-dd63dca5-30b0-413f-8975-2f0f26b165b7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.829 253465 DEBUG oslo_concurrency.lockutils [req-ee35a3a0-c80a-483f-9e37-22aed41a79bf req-dd63dca5-30b0-413f-8975-2f0f26b165b7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.829 253465 DEBUG oslo_concurrency.lockutils [req-ee35a3a0-c80a-483f-9e37-22aed41a79bf req-dd63dca5-30b0-413f-8975-2f0f26b165b7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.830 253465 DEBUG nova.compute.manager [req-ee35a3a0-c80a-483f-9e37-22aed41a79bf req-dd63dca5-30b0-413f-8975-2f0f26b165b7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Processing event network-vif-plugged-e0a979c4-306d-47e7-a853-95a815ae464f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.831 253465 DEBUG nova.compute.manager [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.836 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.836 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784302.83566, ab8b13c6-9785-42c2-a54c-61aa3a7ae664 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.836 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] VM Resumed (Lifecycle Event)
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.840 253465 INFO nova.virt.libvirt.driver [-] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Instance spawned successfully.
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.841 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.856 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.862 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.865 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.866 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.866 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.866 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.867 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.867 253465 DEBUG nova.virt.libvirt.driver [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.879 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:05:02 compute-0 podman[288075]: 2025-11-22 04:05:02.884963651 +0000 UTC m=+0.094015612 container create 72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:05:02 compute-0 podman[288075]: 2025-11-22 04:05:02.818323705 +0000 UTC m=+0.027375696 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.916 253465 INFO nova.compute.manager [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Took 3.84 seconds to spawn the instance on the hypervisor.
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.917 253465 DEBUG nova.compute.manager [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.925 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:02 compute-0 systemd[1]: Started libpod-conmon-72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7.scope.
Nov 22 04:05:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:05:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1159b98ff66a857e34f24fbe77fc41eea8be4519a609fe574c6a3b98558bc5ed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:02 compute-0 podman[288075]: 2025-11-22 04:05:02.975790836 +0000 UTC m=+0.184842817 container init 72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:05:02 compute-0 podman[288075]: 2025-11-22 04:05:02.986753575 +0000 UTC m=+0.195805536 container start 72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 04:05:02 compute-0 nova_compute[253461]: 2025-11-22 04:05:02.998 253465 INFO nova.compute.manager [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Took 5.99 seconds to build instance.
Nov 22 04:05:03 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[288089]: [NOTICE]   (288093) : New worker (288095) forked
Nov 22 04:05:03 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[288089]: [NOTICE]   (288093) : Loading success.
Nov 22 04:05:03 compute-0 nova_compute[253461]: 2025-11-22 04:05:03.016 253465 DEBUG oslo_concurrency.lockutils [None req-88595e91-b1f4-493f-9c01-dad4c61b059d 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:05:03 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/821400915' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:05:03 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/821400915' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:03 compute-0 ceph-mon[75011]: osdmap e396: 3 total, 3 up, 3 in
Nov 22 04:05:03 compute-0 ceph-mon[75011]: pgmap v1599: 305 pgs: 4 active+clean+snaptrim, 12 active+clean+snaptrim_wait, 289 active+clean; 2.2 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 75 KiB/s rd, 6.6 KiB/s wr, 124 op/s
Nov 22 04:05:03 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/821400915' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:03 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/821400915' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 1.8 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 716 KiB/s rd, 7.0 KiB/s wr, 181 op/s
Nov 22 04:05:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Nov 22 04:05:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Nov 22 04:05:04 compute-0 ceph-mon[75011]: pgmap v1600: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 1.8 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 716 KiB/s rd, 7.0 KiB/s wr, 181 op/s
Nov 22 04:05:04 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Nov 22 04:05:05 compute-0 nova_compute[253461]: 2025-11-22 04:05:05.309 253465 DEBUG nova.compute.manager [req-ae55365d-1547-4bfa-9288-e6bd6b8b5193 req-f8532a50-69fd-4a63-a57f-bffebb96a7f9 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Received event network-vif-plugged-e0a979c4-306d-47e7-a853-95a815ae464f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:05:05 compute-0 nova_compute[253461]: 2025-11-22 04:05:05.311 253465 DEBUG oslo_concurrency.lockutils [req-ae55365d-1547-4bfa-9288-e6bd6b8b5193 req-f8532a50-69fd-4a63-a57f-bffebb96a7f9 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:05 compute-0 nova_compute[253461]: 2025-11-22 04:05:05.311 253465 DEBUG oslo_concurrency.lockutils [req-ae55365d-1547-4bfa-9288-e6bd6b8b5193 req-f8532a50-69fd-4a63-a57f-bffebb96a7f9 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:05 compute-0 nova_compute[253461]: 2025-11-22 04:05:05.312 253465 DEBUG oslo_concurrency.lockutils [req-ae55365d-1547-4bfa-9288-e6bd6b8b5193 req-f8532a50-69fd-4a63-a57f-bffebb96a7f9 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:05 compute-0 nova_compute[253461]: 2025-11-22 04:05:05.312 253465 DEBUG nova.compute.manager [req-ae55365d-1547-4bfa-9288-e6bd6b8b5193 req-f8532a50-69fd-4a63-a57f-bffebb96a7f9 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] No waiting events found dispatching network-vif-plugged-e0a979c4-306d-47e7-a853-95a815ae464f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:05:05 compute-0 nova_compute[253461]: 2025-11-22 04:05:05.313 253465 WARNING nova.compute.manager [req-ae55365d-1547-4bfa-9288-e6bd6b8b5193 req-f8532a50-69fd-4a63-a57f-bffebb96a7f9 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Received unexpected event network-vif-plugged-e0a979c4-306d-47e7-a853-95a815ae464f for instance with vm_state active and task_state None.
Nov 22 04:05:05 compute-0 ceph-mon[75011]: osdmap e397: 3 total, 3 up, 3 in
Nov 22 04:05:05 compute-0 nova_compute[253461]: 2025-11-22 04:05:05.805 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:06 compute-0 nova_compute[253461]: 2025-11-22 04:05:06.014 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763784291.0134697, 0a4b1bbf-edde-478c-91f0-40e5825475fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:05:06 compute-0 nova_compute[253461]: 2025-11-22 04:05:06.015 253465 INFO nova.compute.manager [-] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] VM Stopped (Lifecycle Event)
Nov 22 04:05:06 compute-0 nova_compute[253461]: 2025-11-22 04:05:06.049 253465 DEBUG nova.compute.manager [None req-95336159-77a4-410c-9b3b-7c422ac253ae - - - - - -] [instance: 0a4b1bbf-edde-478c-91f0-40e5825475fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:05:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Nov 22 04:05:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Nov 22 04:05:06 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Nov 22 04:05:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:05:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:05:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:05:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:05:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:05:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:05:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:05:06 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2986499122' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:05:06 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2986499122' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 1.5 GiB data, 1.8 GiB used, 58 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.9 KiB/s wr, 181 op/s
Nov 22 04:05:07 compute-0 ceph-mon[75011]: osdmap e398: 3 total, 3 up, 3 in
Nov 22 04:05:07 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2986499122' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:07 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2986499122' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:07 compute-0 ceph-mon[75011]: pgmap v1603: 305 pgs: 305 active+clean; 1.5 GiB data, 1.8 GiB used, 58 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.9 KiB/s wr, 181 op/s
Nov 22 04:05:07 compute-0 nova_compute[253461]: 2025-11-22 04:05:07.976 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 863 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 27 KiB/s wr, 251 op/s
Nov 22 04:05:08 compute-0 ceph-mon[75011]: pgmap v1604: 305 pgs: 305 active+clean; 863 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 27 KiB/s wr, 251 op/s
Nov 22 04:05:09 compute-0 nova_compute[253461]: 2025-11-22 04:05:09.762 253465 DEBUG nova.compute.manager [req-21090b08-80a8-46f9-966a-ccc63a4d4327 req-aa2bcbb2-4805-462e-8e1f-80962eb16875 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Received event network-changed-e0a979c4-306d-47e7-a853-95a815ae464f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:05:09 compute-0 nova_compute[253461]: 2025-11-22 04:05:09.762 253465 DEBUG nova.compute.manager [req-21090b08-80a8-46f9-966a-ccc63a4d4327 req-aa2bcbb2-4805-462e-8e1f-80962eb16875 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Refreshing instance network info cache due to event network-changed-e0a979c4-306d-47e7-a853-95a815ae464f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:05:09 compute-0 nova_compute[253461]: 2025-11-22 04:05:09.763 253465 DEBUG oslo_concurrency.lockutils [req-21090b08-80a8-46f9-966a-ccc63a4d4327 req-aa2bcbb2-4805-462e-8e1f-80962eb16875 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:05:09 compute-0 nova_compute[253461]: 2025-11-22 04:05:09.763 253465 DEBUG oslo_concurrency.lockutils [req-21090b08-80a8-46f9-966a-ccc63a4d4327 req-aa2bcbb2-4805-462e-8e1f-80962eb16875 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:05:09 compute-0 nova_compute[253461]: 2025-11-22 04:05:09.763 253465 DEBUG nova.network.neutron [req-21090b08-80a8-46f9-966a-ccc63a4d4327 req-aa2bcbb2-4805-462e-8e1f-80962eb16875 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Refreshing network info cache for port e0a979c4-306d-47e7-a853-95a815ae464f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:05:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 305 active+clean; 167 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 248 op/s
Nov 22 04:05:10 compute-0 nova_compute[253461]: 2025-11-22 04:05:10.807 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:10 compute-0 ceph-mon[75011]: pgmap v1605: 305 pgs: 305 active+clean; 167 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 21 KiB/s wr, 248 op/s
Nov 22 04:05:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Nov 22 04:05:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Nov 22 04:05:11 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Nov 22 04:05:11 compute-0 podman[288104]: 2025-11-22 04:05:11.408820789 +0000 UTC m=+0.084650316 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:05:12 compute-0 ceph-mon[75011]: osdmap e399: 3 total, 3 up, 3 in
Nov 22 04:05:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1607: 305 pgs: 305 active+clean; 167 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 20 KiB/s wr, 185 op/s
Nov 22 04:05:13 compute-0 nova_compute[253461]: 2025-11-22 04:05:12.997 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:13 compute-0 ceph-mon[75011]: pgmap v1607: 305 pgs: 305 active+clean; 167 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 20 KiB/s wr, 185 op/s
Nov 22 04:05:14 compute-0 ovn_controller[152691]: 2025-11-22T04:05:14Z|00236|binding|INFO|Releasing lport e72a94a7-9aac-4cfd-886c-1e1e93834214 from this chassis (sb_readonly=0)
Nov 22 04:05:14 compute-0 nova_compute[253461]: 2025-11-22 04:05:14.208 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:14 compute-0 nova_compute[253461]: 2025-11-22 04:05:14.234 253465 DEBUG nova.network.neutron [req-21090b08-80a8-46f9-966a-ccc63a4d4327 req-aa2bcbb2-4805-462e-8e1f-80962eb16875 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Updated VIF entry in instance network info cache for port e0a979c4-306d-47e7-a853-95a815ae464f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:05:14 compute-0 nova_compute[253461]: 2025-11-22 04:05:14.234 253465 DEBUG nova.network.neutron [req-21090b08-80a8-46f9-966a-ccc63a4d4327 req-aa2bcbb2-4805-462e-8e1f-80962eb16875 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Updating instance_info_cache with network_info: [{"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:05:14 compute-0 nova_compute[253461]: 2025-11-22 04:05:14.260 253465 DEBUG oslo_concurrency.lockutils [req-21090b08-80a8-46f9-966a-ccc63a4d4327 req-aa2bcbb2-4805-462e-8e1f-80962eb16875 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:05:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 305 active+clean; 167 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 19 KiB/s wr, 139 op/s
Nov 22 04:05:14 compute-0 ceph-mon[75011]: pgmap v1608: 305 pgs: 305 active+clean; 167 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 19 KiB/s wr, 139 op/s
Nov 22 04:05:15 compute-0 ovn_controller[152691]: 2025-11-22T04:05:15Z|00042|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.3
Nov 22 04:05:15 compute-0 ovn_controller[152691]: 2025-11-22T04:05:15Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b7:00:7d 10.100.0.3
Nov 22 04:05:15 compute-0 nova_compute[253461]: 2025-11-22 04:05:15.811 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 167 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 18 KiB/s wr, 124 op/s
Nov 22 04:05:16 compute-0 ceph-mon[75011]: pgmap v1609: 305 pgs: 305 active+clean; 167 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 18 KiB/s wr, 124 op/s
Nov 22 04:05:18 compute-0 nova_compute[253461]: 2025-11-22 04:05:18.000 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 167 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 636 KiB/s rd, 14 KiB/s wr, 76 op/s
Nov 22 04:05:18 compute-0 ceph-mon[75011]: pgmap v1610: 305 pgs: 305 active+clean; 167 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 636 KiB/s rd, 14 KiB/s wr, 76 op/s
Nov 22 04:05:18 compute-0 ovn_controller[152691]: 2025-11-22T04:05:18Z|00044|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.9 does not match offer 10.100.0.3
Nov 22 04:05:18 compute-0 ovn_controller[152691]: 2025-11-22T04:05:18Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b7:00:7d 10.100.0.3
Nov 22 04:05:20 compute-0 ovn_controller[152691]: 2025-11-22T04:05:20Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:00:7d 10.100.0.3
Nov 22 04:05:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1611: 305 pgs: 305 active+clean; 167 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 638 KiB/s rd, 15 KiB/s wr, 54 op/s
Nov 22 04:05:20 compute-0 ovn_controller[152691]: 2025-11-22T04:05:20Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:00:7d 10.100.0.3
Nov 22 04:05:20 compute-0 ceph-mon[75011]: pgmap v1611: 305 pgs: 305 active+clean; 167 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 638 KiB/s rd, 15 KiB/s wr, 54 op/s
Nov 22 04:05:20 compute-0 nova_compute[253461]: 2025-11-22 04:05:20.813 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Nov 22 04:05:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Nov 22 04:05:21 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Nov 22 04:05:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1613: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 638 KiB/s rd, 20 KiB/s wr, 54 op/s
Nov 22 04:05:22 compute-0 ceph-mon[75011]: osdmap e400: 3 total, 3 up, 3 in
Nov 22 04:05:22 compute-0 ceph-mon[75011]: pgmap v1613: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 638 KiB/s rd, 20 KiB/s wr, 54 op/s
Nov 22 04:05:22 compute-0 sshd-session[288126]: Invalid user user from 27.79.43.64 port 52130
Nov 22 04:05:23 compute-0 nova_compute[253461]: 2025-11-22 04:05:23.003 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:23.016 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:23.017 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:23.017 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:23 compute-0 sshd-session[288126]: Connection closed by invalid user user 27.79.43.64 port 52130 [preauth]
Nov 22 04:05:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Nov 22 04:05:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Nov 22 04:05:23 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Nov 22 04:05:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 734 KiB/s rd, 25 KiB/s wr, 98 op/s
Nov 22 04:05:24 compute-0 ceph-mon[75011]: osdmap e401: 3 total, 3 up, 3 in
Nov 22 04:05:24 compute-0 ceph-mon[75011]: pgmap v1615: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 734 KiB/s rd, 25 KiB/s wr, 98 op/s
Nov 22 04:05:25 compute-0 nova_compute[253461]: 2025-11-22 04:05:25.815 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 487 KiB/s rd, 25 KiB/s wr, 68 op/s
Nov 22 04:05:26 compute-0 ceph-mon[75011]: pgmap v1616: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 487 KiB/s rd, 25 KiB/s wr, 68 op/s
Nov 22 04:05:27 compute-0 podman[288128]: 2025-11-22 04:05:27.400231368 +0000 UTC m=+0.077825692 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 22 04:05:27 compute-0 podman[288129]: 2025-11-22 04:05:27.403321535 +0000 UTC m=+0.078812250 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:05:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Nov 22 04:05:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Nov 22 04:05:27 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Nov 22 04:05:28 compute-0 nova_compute[253461]: 2025-11-22 04:05:28.029 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:28 compute-0 sudo[288169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:05:28 compute-0 sudo[288169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:28 compute-0 sudo[288169]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:28 compute-0 sudo[288194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:05:28 compute-0 sudo[288194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:28 compute-0 sudo[288194]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:28 compute-0 sudo[288219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:05:28 compute-0 sudo[288219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:28 compute-0 sudo[288219]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:28 compute-0 sudo[288244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:05:28 compute-0 sudo[288244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 28 KiB/s wr, 51 op/s
Nov 22 04:05:28 compute-0 ceph-mon[75011]: osdmap e402: 3 total, 3 up, 3 in
Nov 22 04:05:28 compute-0 ceph-mon[75011]: pgmap v1618: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 28 KiB/s wr, 51 op/s
Nov 22 04:05:28 compute-0 sudo[288244]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:05:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:05:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:05:28 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:05:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:05:28 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:05:28 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ebd86d52-ac07-43fd-9010-10effac29174 does not exist
Nov 22 04:05:28 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 3ab90d3b-3f73-4caf-a449-bf9c2ac985d0 does not exist
Nov 22 04:05:28 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 1b795c0b-8395-4655-8cb2-3b2744a75ff2 does not exist
Nov 22 04:05:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:05:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:05:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:05:28 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:05:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:05:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:05:29 compute-0 sudo[288300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:05:29 compute-0 sudo[288300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:29 compute-0 sudo[288300]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:29 compute-0 sudo[288325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:05:29 compute-0 sudo[288325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:29 compute-0 sudo[288325]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:29 compute-0 sudo[288350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:05:29 compute-0 sudo[288350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:29 compute-0 sudo[288350]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:29 compute-0 sudo[288375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:05:29 compute-0 sudo[288375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:05:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:05:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:05:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:05:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:05:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:05:29 compute-0 podman[288439]: 2025-11-22 04:05:29.817807585 +0000 UTC m=+0.077611312 container create fa48b6aa17e5a5d108b7d77bdfdd946ac11da2157f40750ef67b4ea3a34dd0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mayer, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:05:29 compute-0 systemd[1]: Started libpod-conmon-fa48b6aa17e5a5d108b7d77bdfdd946ac11da2157f40750ef67b4ea3a34dd0e5.scope.
Nov 22 04:05:29 compute-0 podman[288439]: 2025-11-22 04:05:29.784108581 +0000 UTC m=+0.043912368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:29 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:05:29 compute-0 podman[288439]: 2025-11-22 04:05:29.930930623 +0000 UTC m=+0.190734410 container init fa48b6aa17e5a5d108b7d77bdfdd946ac11da2157f40750ef67b4ea3a34dd0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mayer, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 04:05:29 compute-0 podman[288439]: 2025-11-22 04:05:29.945043157 +0000 UTC m=+0.204846884 container start fa48b6aa17e5a5d108b7d77bdfdd946ac11da2157f40750ef67b4ea3a34dd0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Nov 22 04:05:29 compute-0 podman[288439]: 2025-11-22 04:05:29.949126755 +0000 UTC m=+0.208930492 container attach fa48b6aa17e5a5d108b7d77bdfdd946ac11da2157f40750ef67b4ea3a34dd0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 04:05:29 compute-0 vigorous_mayer[288456]: 167 167
Nov 22 04:05:29 compute-0 systemd[1]: libpod-fa48b6aa17e5a5d108b7d77bdfdd946ac11da2157f40750ef67b4ea3a34dd0e5.scope: Deactivated successfully.
Nov 22 04:05:29 compute-0 podman[288439]: 2025-11-22 04:05:29.955366072 +0000 UTC m=+0.215169839 container died fa48b6aa17e5a5d108b7d77bdfdd946ac11da2157f40750ef67b4ea3a34dd0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mayer, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:05:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-18e021ecd35be61787197a2b3db0ad32bc68d9ac2b0f81db24b2682eb090d4ba-merged.mount: Deactivated successfully.
Nov 22 04:05:30 compute-0 podman[288439]: 2025-11-22 04:05:30.008972955 +0000 UTC m=+0.268776682 container remove fa48b6aa17e5a5d108b7d77bdfdd946ac11da2157f40750ef67b4ea3a34dd0e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 04:05:30 compute-0 systemd[1]: libpod-conmon-fa48b6aa17e5a5d108b7d77bdfdd946ac11da2157f40750ef67b4ea3a34dd0e5.scope: Deactivated successfully.
Nov 22 04:05:30 compute-0 podman[288479]: 2025-11-22 04:05:30.243831181 +0000 UTC m=+0.068543474 container create e2bc119cef5491cec42cef22d174fa4ab04739c1b9147871509ab296c51b9c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:05:30 compute-0 systemd[1]: Started libpod-conmon-e2bc119cef5491cec42cef22d174fa4ab04739c1b9147871509ab296c51b9c15.scope.
Nov 22 04:05:30 compute-0 podman[288479]: 2025-11-22 04:05:30.218532947 +0000 UTC m=+0.043245310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1985995286ac8fae9b3e4d714b8d4f57f238302418ff544fdeba4dabb0f0bc8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1985995286ac8fae9b3e4d714b8d4f57f238302418ff544fdeba4dabb0f0bc8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1985995286ac8fae9b3e4d714b8d4f57f238302418ff544fdeba4dabb0f0bc8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1985995286ac8fae9b3e4d714b8d4f57f238302418ff544fdeba4dabb0f0bc8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1985995286ac8fae9b3e4d714b8d4f57f238302418ff544fdeba4dabb0f0bc8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:30 compute-0 podman[288479]: 2025-11-22 04:05:30.381065826 +0000 UTC m=+0.205778140 container init e2bc119cef5491cec42cef22d174fa4ab04739c1b9147871509ab296c51b9c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 04:05:30 compute-0 podman[288479]: 2025-11-22 04:05:30.395564889 +0000 UTC m=+0.220277212 container start e2bc119cef5491cec42cef22d174fa4ab04739c1b9147871509ab296c51b9c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 04:05:30 compute-0 podman[288479]: 2025-11-22 04:05:30.399969516 +0000 UTC m=+0.224681809 container attach e2bc119cef5491cec42cef22d174fa4ab04739c1b9147871509ab296c51b9c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:05:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 31 KiB/s wr, 120 op/s
Nov 22 04:05:30 compute-0 ceph-mon[75011]: pgmap v1619: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 31 KiB/s wr, 120 op/s
Nov 22 04:05:30 compute-0 nova_compute[253461]: 2025-11-22 04:05:30.817 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e402 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:31 compute-0 admiring_panini[288495]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:05:31 compute-0 admiring_panini[288495]: --> relative data size: 1.0
Nov 22 04:05:31 compute-0 admiring_panini[288495]: --> All data devices are unavailable
Nov 22 04:05:31 compute-0 systemd[1]: libpod-e2bc119cef5491cec42cef22d174fa4ab04739c1b9147871509ab296c51b9c15.scope: Deactivated successfully.
Nov 22 04:05:31 compute-0 podman[288479]: 2025-11-22 04:05:31.610743883 +0000 UTC m=+1.435456176 container died e2bc119cef5491cec42cef22d174fa4ab04739c1b9147871509ab296c51b9c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 04:05:31 compute-0 systemd[1]: libpod-e2bc119cef5491cec42cef22d174fa4ab04739c1b9147871509ab296c51b9c15.scope: Consumed 1.120s CPU time.
Nov 22 04:05:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1985995286ac8fae9b3e4d714b8d4f57f238302418ff544fdeba4dabb0f0bc8-merged.mount: Deactivated successfully.
Nov 22 04:05:31 compute-0 podman[288479]: 2025-11-22 04:05:31.682030312 +0000 UTC m=+1.506742625 container remove e2bc119cef5491cec42cef22d174fa4ab04739c1b9147871509ab296c51b9c15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:05:31 compute-0 systemd[1]: libpod-conmon-e2bc119cef5491cec42cef22d174fa4ab04739c1b9147871509ab296c51b9c15.scope: Deactivated successfully.
Nov 22 04:05:31 compute-0 sudo[288375]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:31 compute-0 sudo[288537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:05:31 compute-0 sudo[288537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:31 compute-0 sudo[288537]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:31 compute-0 sudo[288562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:05:31 compute-0 sudo[288562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:31 compute-0 sudo[288562]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:31 compute-0 sudo[288587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:05:31 compute-0 sudo[288587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:31 compute-0 sudo[288587]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:32 compute-0 sudo[288612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:05:32 compute-0 sudo[288612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:05:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1886371424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:05:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1886371424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:32 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1886371424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:32 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1886371424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1620: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 35 KiB/s wr, 115 op/s
Nov 22 04:05:32 compute-0 podman[288678]: 2025-11-22 04:05:32.514326071 +0000 UTC m=+0.052286078 container create f1733b3f600670aaec4bb3196c0c02ba95a43766660b27570864ed7eb4fcd132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sanderson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:05:32 compute-0 systemd[1]: Started libpod-conmon-f1733b3f600670aaec4bb3196c0c02ba95a43766660b27570864ed7eb4fcd132.scope.
Nov 22 04:05:32 compute-0 podman[288678]: 2025-11-22 04:05:32.487844796 +0000 UTC m=+0.025804853 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:05:32 compute-0 podman[288678]: 2025-11-22 04:05:32.619033908 +0000 UTC m=+0.156993985 container init f1733b3f600670aaec4bb3196c0c02ba95a43766660b27570864ed7eb4fcd132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:05:32 compute-0 podman[288678]: 2025-11-22 04:05:32.630582718 +0000 UTC m=+0.168542705 container start f1733b3f600670aaec4bb3196c0c02ba95a43766660b27570864ed7eb4fcd132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:05:32 compute-0 podman[288678]: 2025-11-22 04:05:32.635689711 +0000 UTC m=+0.173649738 container attach f1733b3f600670aaec4bb3196c0c02ba95a43766660b27570864ed7eb4fcd132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:05:32 compute-0 systemd[1]: libpod-f1733b3f600670aaec4bb3196c0c02ba95a43766660b27570864ed7eb4fcd132.scope: Deactivated successfully.
Nov 22 04:05:32 compute-0 distracted_sanderson[288694]: 167 167
Nov 22 04:05:32 compute-0 conmon[288694]: conmon f1733b3f600670aaec4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f1733b3f600670aaec4bb3196c0c02ba95a43766660b27570864ed7eb4fcd132.scope/container/memory.events
Nov 22 04:05:32 compute-0 podman[288678]: 2025-11-22 04:05:32.641349755 +0000 UTC m=+0.179309762 container died f1733b3f600670aaec4bb3196c0c02ba95a43766660b27570864ed7eb4fcd132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sanderson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 04:05:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-66c05f106ae83d2e45343482d5e88d93457f102d8e9eb8e9d3f2eb8e1150abd4-merged.mount: Deactivated successfully.
Nov 22 04:05:32 compute-0 podman[288678]: 2025-11-22 04:05:32.700772281 +0000 UTC m=+0.238732298 container remove f1733b3f600670aaec4bb3196c0c02ba95a43766660b27570864ed7eb4fcd132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_sanderson, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:05:32 compute-0 systemd[1]: libpod-conmon-f1733b3f600670aaec4bb3196c0c02ba95a43766660b27570864ed7eb4fcd132.scope: Deactivated successfully.
Nov 22 04:05:32 compute-0 podman[288716]: 2025-11-22 04:05:32.928460343 +0000 UTC m=+0.054756963 container create e5a84cc1d63dc4957262f589792699fa78ac1edc92b4c96a8e0b5625a7cb4a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 04:05:32 compute-0 systemd[1]: Started libpod-conmon-e5a84cc1d63dc4957262f589792699fa78ac1edc92b4c96a8e0b5625a7cb4a54.scope.
Nov 22 04:05:32 compute-0 podman[288716]: 2025-11-22 04:05:32.903840175 +0000 UTC m=+0.030136855 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:33 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4e84199187cd4d6cca19d2aa7e4a64456f3c89c9d60dcb764a31b217011482/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4e84199187cd4d6cca19d2aa7e4a64456f3c89c9d60dcb764a31b217011482/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4e84199187cd4d6cca19d2aa7e4a64456f3c89c9d60dcb764a31b217011482/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a4e84199187cd4d6cca19d2aa7e4a64456f3c89c9d60dcb764a31b217011482/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:33 compute-0 nova_compute[253461]: 2025-11-22 04:05:33.060 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:33 compute-0 podman[288716]: 2025-11-22 04:05:33.068747307 +0000 UTC m=+0.195043957 container init e5a84cc1d63dc4957262f589792699fa78ac1edc92b4c96a8e0b5625a7cb4a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 04:05:33 compute-0 podman[288716]: 2025-11-22 04:05:33.076657842 +0000 UTC m=+0.202954472 container start e5a84cc1d63dc4957262f589792699fa78ac1edc92b4c96a8e0b5625a7cb4a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:05:33 compute-0 podman[288716]: 2025-11-22 04:05:33.082320569 +0000 UTC m=+0.208617199 container attach e5a84cc1d63dc4957262f589792699fa78ac1edc92b4c96a8e0b5625a7cb4a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:05:33 compute-0 ceph-mon[75011]: pgmap v1620: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 35 KiB/s wr, 115 op/s
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]: {
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:     "0": [
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:         {
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "devices": [
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "/dev/loop3"
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             ],
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_name": "ceph_lv0",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_size": "21470642176",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "name": "ceph_lv0",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "tags": {
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.cluster_name": "ceph",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.crush_device_class": "",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.encrypted": "0",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.osd_id": "0",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.type": "block",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.vdo": "0"
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             },
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "type": "block",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "vg_name": "ceph_vg0"
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:         }
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:     ],
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:     "1": [
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:         {
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "devices": [
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "/dev/loop4"
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             ],
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_name": "ceph_lv1",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_size": "21470642176",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "name": "ceph_lv1",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "tags": {
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.cluster_name": "ceph",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.crush_device_class": "",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.encrypted": "0",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.osd_id": "1",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.type": "block",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.vdo": "0"
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             },
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "type": "block",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "vg_name": "ceph_vg1"
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:         }
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:     ],
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:     "2": [
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:         {
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "devices": [
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "/dev/loop5"
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             ],
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_name": "ceph_lv2",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_size": "21470642176",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "name": "ceph_lv2",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "tags": {
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.cluster_name": "ceph",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.crush_device_class": "",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.encrypted": "0",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.osd_id": "2",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.type": "block",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:                 "ceph.vdo": "0"
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             },
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "type": "block",
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:             "vg_name": "ceph_vg2"
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:         }
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]:     ]
Nov 22 04:05:33 compute-0 flamboyant_noether[288733]: }
Nov 22 04:05:33 compute-0 systemd[1]: libpod-e5a84cc1d63dc4957262f589792699fa78ac1edc92b4c96a8e0b5625a7cb4a54.scope: Deactivated successfully.
Nov 22 04:05:33 compute-0 podman[288716]: 2025-11-22 04:05:33.86376339 +0000 UTC m=+0.990060060 container died e5a84cc1d63dc4957262f589792699fa78ac1edc92b4c96a8e0b5625a7cb4a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_noether, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:05:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a4e84199187cd4d6cca19d2aa7e4a64456f3c89c9d60dcb764a31b217011482-merged.mount: Deactivated successfully.
Nov 22 04:05:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Nov 22 04:05:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Nov 22 04:05:34 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Nov 22 04:05:34 compute-0 podman[288716]: 2025-11-22 04:05:34.20825894 +0000 UTC m=+1.334555600 container remove e5a84cc1d63dc4957262f589792699fa78ac1edc92b4c96a8e0b5625a7cb4a54 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_noether, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:05:34 compute-0 sudo[288612]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:34 compute-0 systemd[1]: libpod-conmon-e5a84cc1d63dc4957262f589792699fa78ac1edc92b4c96a8e0b5625a7cb4a54.scope: Deactivated successfully.
Nov 22 04:05:34 compute-0 sudo[288755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:05:34 compute-0 sudo[288755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:34 compute-0 sudo[288755]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1622: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 24 KiB/s wr, 173 op/s
Nov 22 04:05:34 compute-0 sudo[288780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:05:34 compute-0 sudo[288780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:34 compute-0 sudo[288780]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:34 compute-0 sudo[288805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:05:34 compute-0 sudo[288805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:34 compute-0 sudo[288805]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:34 compute-0 sudo[288830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:05:34 compute-0 sudo[288830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:05:34 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3419316583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:05:34 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3419316583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:35 compute-0 podman[288894]: 2025-11-22 04:05:35.039475124 +0000 UTC m=+0.065174700 container create e2c96b2059057bbfd485e43c701a71754a09d03f94043982db0a47644d5d1c87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shamir, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:05:35 compute-0 podman[288894]: 2025-11-22 04:05:34.997244051 +0000 UTC m=+0.022943607 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:35 compute-0 systemd[1]: Started libpod-conmon-e2c96b2059057bbfd485e43c701a71754a09d03f94043982db0a47644d5d1c87.scope.
Nov 22 04:05:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:05:35 compute-0 podman[288894]: 2025-11-22 04:05:35.16794807 +0000 UTC m=+0.193647696 container init e2c96b2059057bbfd485e43c701a71754a09d03f94043982db0a47644d5d1c87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shamir, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:05:35 compute-0 podman[288894]: 2025-11-22 04:05:35.179211147 +0000 UTC m=+0.204910713 container start e2c96b2059057bbfd485e43c701a71754a09d03f94043982db0a47644d5d1c87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shamir, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:05:35 compute-0 ceph-mon[75011]: osdmap e403: 3 total, 3 up, 3 in
Nov 22 04:05:35 compute-0 podman[288894]: 2025-11-22 04:05:35.184177424 +0000 UTC m=+0.209877000 container attach e2c96b2059057bbfd485e43c701a71754a09d03f94043982db0a47644d5d1c87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:05:35 compute-0 ceph-mon[75011]: pgmap v1622: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 24 KiB/s wr, 173 op/s
Nov 22 04:05:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3419316583' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3419316583' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:35 compute-0 silly_shamir[288910]: 167 167
Nov 22 04:05:35 compute-0 systemd[1]: libpod-e2c96b2059057bbfd485e43c701a71754a09d03f94043982db0a47644d5d1c87.scope: Deactivated successfully.
Nov 22 04:05:35 compute-0 podman[288894]: 2025-11-22 04:05:35.186811921 +0000 UTC m=+0.212511487 container died e2c96b2059057bbfd485e43c701a71754a09d03f94043982db0a47644d5d1c87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shamir, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:05:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-090c24c667d9522e6673cab259e33c72643589c07220ff7544db522a5a135507-merged.mount: Deactivated successfully.
Nov 22 04:05:35 compute-0 podman[288894]: 2025-11-22 04:05:35.246993172 +0000 UTC m=+0.272692748 container remove e2c96b2059057bbfd485e43c701a71754a09d03f94043982db0a47644d5d1c87 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_shamir, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 04:05:35 compute-0 systemd[1]: libpod-conmon-e2c96b2059057bbfd485e43c701a71754a09d03f94043982db0a47644d5d1c87.scope: Deactivated successfully.
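
[note] The lines above trace one complete short-lived podman container (silly_shamir): start, attach, its one line of output ("167 167"), died, remove, then teardown of the libpod and conmon scopes. These transitions can also be watched live via podman's event stream; a minimal sketch, assuming `podman events --format json` emits one JSON object per line with Type/Status/Name/ID fields (true for recent podman, but treat the field names as an assumption, not a stable contract):

    import json
    import subprocess

    # Stream podman lifecycle events as JSON, one object per line.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") == "container":
            # Mirrors the start -> attach -> died -> remove sequence above.
            print(ev.get("Status"), ev.get("Name"), str(ev.get("ID", ""))[:12])
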
Nov 22 04:05:35 compute-0 podman[288936]: 2025-11-22 04:05:35.516022373 +0000 UTC m=+0.084017544 container create 875f1eb41c4d8b0b3286bc9f20a90ca5e8b1a8d9328b4099a9b693325b925d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 04:05:35 compute-0 podman[288936]: 2025-11-22 04:05:35.480579166 +0000 UTC m=+0.048574397 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:05:35 compute-0 systemd[1]: Started libpod-conmon-875f1eb41c4d8b0b3286bc9f20a90ca5e8b1a8d9328b4099a9b693325b925d79.scope.
Nov 22 04:05:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5c06a6f25caae25a02735783bd5d94b0041e899346540f36296a4ecc681502/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5c06a6f25caae25a02735783bd5d94b0041e899346540f36296a4ecc681502/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5c06a6f25caae25a02735783bd5d94b0041e899346540f36296a4ecc681502/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac5c06a6f25caae25a02735783bd5d94b0041e899346540f36296a4ecc681502/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
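
[note] These four kernel warnings mean the overlay mounts sit on xfs without the bigtime feature, so inode timestamps saturate at 0x7fffffff, the largest signed 32-bit epoch second. Decoding that constant confirms the 2038 cutoff the kernel prints:

    from datetime import datetime, timezone

    # xfs without bigtime stores timestamps as signed 32-bit epoch seconds.
    limit = 0x7FFFFFFF
    print(hex(limit), "->", datetime.fromtimestamp(limit, tz=timezone.utc))
    # 0x7fffffff -> 2038-01-19 03:14:07+00:00
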
Nov 22 04:05:35 compute-0 podman[288936]: 2025-11-22 04:05:35.64602437 +0000 UTC m=+0.214019561 container init 875f1eb41c4d8b0b3286bc9f20a90ca5e8b1a8d9328b4099a9b693325b925d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:05:35 compute-0 podman[288936]: 2025-11-22 04:05:35.660658303 +0000 UTC m=+0.228653454 container start 875f1eb41c4d8b0b3286bc9f20a90ca5e8b1a8d9328b4099a9b693325b925d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 04:05:35 compute-0 podman[288936]: 2025-11-22 04:05:35.668899174 +0000 UTC m=+0.236894415 container attach 875f1eb41c4d8b0b3286bc9f20a90ca5e8b1a8d9328b4099a9b693325b925d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Nov 22 04:05:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:05:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 24K writes, 95K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 24K writes, 8556 syncs, 2.82 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 13K writes, 54K keys, 13K commit groups, 1.0 writes per commit group, ingest: 32.29 MB, 0.05 MB/s
                                           Interval WAL: 13K writes, 5655 syncs, 2.43 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
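
[note] The derived figures in this RocksDB dump follow directly from the raw counters: writes-per-sync is WAL writes divided by syncs, and the MB/s columns are ingest divided by uptime. A quick re-derivation with the (rounded) numbers printed above:

    # The log rounds the write counter to "24K", which is why it shows
    # 2.82 while the rounded inputs here give ~2.81.
    cum_wal_writes = 24_000   # "24K writes" (rounded)
    cum_wal_syncs = 8_556
    uptime_s = 2400.1
    ingest_gb = 0.06

    print(f"writes per sync: {cum_wal_writes / cum_wal_syncs:.2f}")  # ~2.81
    print(f"ingest rate: {ingest_gb * 1024 / uptime_s:.2f} MB/s")    # ~0.03
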
Nov 22 04:05:35 compute-0 nova_compute[253461]: 2025-11-22 04:05:35.823 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Nov 22 04:05:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:05:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:05:36
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['backups', 'vms', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.control', '.rgw.root', 'images']
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
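
[note] This is one pass of the mgr balancer in upmap mode: it walks the pool list and may propose up to 10 pg-upmap changes per cycle, so "prepared 0/10" means the PG distribution already needs no moves. The same state can be queried out-of-band; a sketch assuming the standard `ceph balancer status` mgr command honors `--format json` as most ceph commands do:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))   # e.g. "upmap" True
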
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 141 KiB/s rd, 25 KiB/s wr, 178 op/s
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
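
[note] The rbd_support module here reloads its trash-purge and mirror-snapshot schedules per pool (vms, volumes, backups, images; each pool appears once per handler). Any configured schedules are inspectable from the CLI; a sketch assuming the documented `rbd trash purge schedule ls --pool <pool>` subcommand:

    import subprocess

    # Pools named in the load_schedules lines above.
    for pool in ["vms", "volumes", "backups", "images"]:
        out = subprocess.run(
            ["rbd", "trash", "purge", "schedule", "ls", "--pool", pool],
            capture_output=True, text=True,
        )
        print(pool, "->", out.stdout.strip() or "(no schedule)")
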
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]: {
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "osd_id": 1,
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "type": "bluestore"
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:     },
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "osd_id": 0,
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "type": "bluestore"
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:     },
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "osd_id": 2,
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:         "type": "bluestore"
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]:     }
Nov 22 04:05:36 compute-0 nostalgic_franklin[288954]: }
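
[note] The JSON this exec container printed maps each osd_uuid to its cluster fsid, backing LV, OSD id, and store type, a shape consistent with `ceph-volume raw list` output (an inference; the log does not show the command line). Parsing it is plain json work; one entry abbreviated from the blob above:

    import json

    # One entry abbreviated from the container output; the real blob
    # holds three OSDs keyed by osd_uuid.
    raw_json = """{
      "8bea6992-7a26-4e04-a61e-1d348ad79289": {
        "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
        "type": "bluestore"
      }
    }"""

    inventory = json.loads(raw_json)
    for osd in sorted(inventory.values(), key=lambda o: o["osd_id"]):
        print(f"osd.{osd['osd_id']} -> {osd['device']} ({osd['type']})")
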
Nov 22 04:05:36 compute-0 systemd[1]: libpod-875f1eb41c4d8b0b3286bc9f20a90ca5e8b1a8d9328b4099a9b693325b925d79.scope: Deactivated successfully.
Nov 22 04:05:36 compute-0 podman[288936]: 2025-11-22 04:05:36.742079146 +0000 UTC m=+1.310074377 container died 875f1eb41c4d8b0b3286bc9f20a90ca5e8b1a8d9328b4099a9b693325b925d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:05:36 compute-0 systemd[1]: libpod-875f1eb41c4d8b0b3286bc9f20a90ca5e8b1a8d9328b4099a9b693325b925d79.scope: Consumed 1.084s CPU time.
Nov 22 04:05:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:05:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/197260966' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:05:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/197260966' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
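
[note] These paired audit entries show a remote OpenStack client (likely a Cinder capacity poll, given the df plus per-pool get-quota pattern against 'volumes') dispatching two mon commands. The same pair can be reproduced with the ceph CLI under the client.openstack identity; a sketch, assuming the openstack keyring is readable on the host:

    import json
    import subprocess

    def mon_json(*args):
        # Same identity the audit lines show ('client.openstack').
        out = subprocess.run(
            ["ceph", "--id", "openstack", "--format", "json", *args],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    df = mon_json("df")                                    # {"prefix":"df"}
    quota = mon_json("osd", "pool", "get-quota", "volumes")
    print(df["stats"]["total_avail_bytes"], quota)
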
Nov 22 04:05:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac5c06a6f25caae25a02735783bd5d94b0041e899346540f36296a4ecc681502-merged.mount: Deactivated successfully.
Nov 22 04:05:36 compute-0 podman[288936]: 2025-11-22 04:05:36.824571821 +0000 UTC m=+1.392566982 container remove 875f1eb41c4d8b0b3286bc9f20a90ca5e8b1a8d9328b4099a9b693325b925d79 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_franklin, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Nov 22 04:05:36 compute-0 systemd[1]: libpod-conmon-875f1eb41c4d8b0b3286bc9f20a90ca5e8b1a8d9328b4099a9b693325b925d79.scope: Deactivated successfully.
Nov 22 04:05:36 compute-0 sudo[288830]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:05:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:05:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:05:36 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev b9a39b08-a2da-4134-9e09-746f7576907b does not exist
Nov 22 04:05:36 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 41da6dc5-2746-4786-9f08-744db8dceaed does not exist
Nov 22 04:05:36 compute-0 sudo[289002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:05:36 compute-0 sudo[289002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:36 compute-0 sudo[289002]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:37 compute-0 sudo[289027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:05:37 compute-0 sudo[289027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:05:37 compute-0 sudo[289027]: pam_unix(sudo:session): session closed for user root
Nov 22 04:05:37 compute-0 ceph-mon[75011]: osdmap e404: 3 total, 3 up, 3 in
Nov 22 04:05:37 compute-0 ceph-mon[75011]: pgmap v1624: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 141 KiB/s rd, 25 KiB/s wr, 178 op/s
Nov 22 04:05:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/197260966' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/197260966' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:05:37 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:05:38 compute-0 nova_compute[253461]: 2025-11-22 04:05:38.062 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 12 KiB/s wr, 105 op/s
Nov 22 04:05:38 compute-0 ceph-mon[75011]: pgmap v1625: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 12 KiB/s wr, 105 op/s
Nov 22 04:05:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:05:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3108051321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:05:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3108051321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3108051321' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3108051321' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:39 compute-0 nova_compute[253461]: 2025-11-22 04:05:39.786 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:05:39 compute-0 nova_compute[253461]: 2025-11-22 04:05:39.805 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Triggering sync for uuid ab8b13c6-9785-42c2-a54c-61aa3a7ae664 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 22 04:05:39 compute-0 nova_compute[253461]: 2025-11-22 04:05:39.805 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:39 compute-0 nova_compute[253461]: 2025-11-22 04:05:39.806 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:39 compute-0 nova_compute[253461]: 2025-11-22 04:05:39.832 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
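
[note] The acquire/release bracket around _sync_power_states comes from oslo.concurrency's named-lock wrapper, which logs how long the caller waited and how long the lock was held. A minimal sketch of that pattern, assuming oslo.concurrency is installed (nova wraps this same primitive):

    from oslo_concurrency import lockutils

    # Serialize per-instance work on a named lock; lockutils emits the
    # "acquired ... waited" / "released ... held" debug lines seen above.
    @lockutils.synchronized("ab8b13c6-9785-42c2-a54c-61aa3a7ae664")
    def query_driver_power_state_and_sync():
        ...  # compare the driver's power state with the DB record

    query_driver_power_state_and_sync()
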
Nov 22 04:05:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 7.6 KiB/s wr, 190 op/s
Nov 22 04:05:40 compute-0 nova_compute[253461]: 2025-11-22 04:05:40.450 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:05:40 compute-0 nova_compute[253461]: 2025-11-22 04:05:40.450 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:05:40 compute-0 nova_compute[253461]: 2025-11-22 04:05:40.451 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:05:40 compute-0 ceph-mon[75011]: pgmap v1626: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 150 KiB/s rd, 7.6 KiB/s wr, 190 op/s
Nov 22 04:05:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:05:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 24K writes, 102K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 24K writes, 8410 syncs, 2.93 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 13K writes, 56K keys, 13K commit groups, 1.0 writes per commit group, ingest: 36.91 MB, 0.06 MB/s
                                           Interval WAL: 13K writes, 5413 syncs, 2.47 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:05:40 compute-0 nova_compute[253461]: 2025-11-22 04:05:40.867 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Nov 22 04:05:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Nov 22 04:05:41 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Nov 22 04:05:41 compute-0 nova_compute[253461]: 2025-11-22 04:05:41.353 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:05:41 compute-0 nova_compute[253461]: 2025-11-22 04:05:41.353 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquired lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:05:41 compute-0 nova_compute[253461]: 2025-11-22 04:05:41.354 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 04:05:41 compute-0 nova_compute[253461]: 2025-11-22 04:05:41.354 253465 DEBUG nova.objects.instance [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lazy-loading 'info_cache' on Instance uuid ab8b13c6-9785-42c2-a54c-61aa3a7ae664 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:05:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Nov 22 04:05:42 compute-0 ceph-mon[75011]: osdmap e405: 3 total, 3 up, 3 in
Nov 22 04:05:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Nov 22 04:05:42 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Nov 22 04:05:42 compute-0 podman[289052]: 2025-11-22 04:05:42.389468927 +0000 UTC m=+0.066744760 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 04:05:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 3.5 KiB/s wr, 130 op/s
Nov 22 04:05:42 compute-0 nova_compute[253461]: 2025-11-22 04:05:42.625 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Updating instance_info_cache with network_info: [{"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:05:42 compute-0 nova_compute[253461]: 2025-11-22 04:05:42.644 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Releasing lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:05:42 compute-0 nova_compute[253461]: 2025-11-22 04:05:42.645 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
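
[note] The info-cache refresh above stored one VIF whose addressing is nested network -> subnets -> ips, with floating IPs hanging off each fixed IP. Pulling the addresses out is a straightforward walk; the structure below is trimmed to just the fields used:

    # Trimmed from the network_info logged above.
    network_info = [{
        "id": "e0a979c4-306d-47e7-a853-95a815ae464f",
        "network": {"subnets": [{
            "ips": [{
                "address": "10.100.0.3",
                "floating_ips": [{"address": "192.168.122.246"}],
            }],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(ip["address"], "->", floats)
    # 10.100.0.3 -> ['192.168.122.246']
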
Nov 22 04:05:42 compute-0 nova_compute[253461]: 2025-11-22 04:05:42.645 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:05:42 compute-0 nova_compute[253461]: 2025-11-22 04:05:42.645 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:05:42 compute-0 nova_compute[253461]: 2025-11-22 04:05:42.645 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:05:42 compute-0 nova_compute[253461]: 2025-11-22 04:05:42.667 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:42 compute-0 nova_compute[253461]: 2025-11-22 04:05:42.668 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:42 compute-0 nova_compute[253461]: 2025-11-22 04:05:42.668 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:42 compute-0 nova_compute[253461]: 2025-11-22 04:05:42.668 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:05:42 compute-0 nova_compute[253461]: 2025-11-22 04:05:42.668 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.067 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:05:43 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/592088557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.140 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:43 compute-0 ceph-mon[75011]: osdmap e406: 3 total, 3 up, 3 in
Nov 22 04:05:43 compute-0 ceph-mon[75011]: pgmap v1629: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 3.5 KiB/s wr, 130 op/s
Nov 22 04:05:43 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/592088557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.224 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.224 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.425 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.427 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4249MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.427 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.428 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.525 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance ab8b13c6-9785-42c2-a54c-61aa3a7ae664 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.525 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.526 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.555 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing inventories for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.574 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating ProviderTree inventory for provider 62e18608-eaef-4f09-8e92-06d41e51d580 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.574 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating inventory in ProviderTree for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.592 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing aggregate associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.625 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing trait associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 04:05:43 compute-0 nova_compute[253461]: 2025-11-22 04:05:43.672 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:05:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3813787023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:44 compute-0 nova_compute[253461]: 2025-11-22 04:05:44.164 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:44 compute-0 nova_compute[253461]: 2025-11-22 04:05:44.173 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:05:44 compute-0 nova_compute[253461]: 2025-11-22 04:05:44.197 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
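
[note] Placement turns each inventory record above into effective capacity as (total - reserved) x allocation_ratio, so this provider exposes 32 schedulable VCPUs, 7167 MB of RAM, and 52.2 GB of disk:

    # Effective capacity per resource class, from the inventory above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
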
Nov 22 04:05:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Nov 22 04:05:44 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3813787023' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Nov 22 04:05:44 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Nov 22 04:05:44 compute-0 nova_compute[253461]: 2025-11-22 04:05:44.231 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:05:44 compute-0 nova_compute[253461]: 2025-11-22 04:05:44.232 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 425 KiB/s rd, 4.2 KiB/s wr, 162 op/s
Nov 22 04:05:44 compute-0 ovn_controller[152691]: 2025-11-22T04:05:44Z|00237|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Nov 22 04:05:45 compute-0 ceph-mon[75011]: osdmap e407: 3 total, 3 up, 3 in
Nov 22 04:05:45 compute-0 ceph-mon[75011]: pgmap v1631: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 425 KiB/s rd, 4.2 KiB/s wr, 162 op/s
Nov 22 04:05:45 compute-0 nova_compute[253461]: 2025-11-22 04:05:45.869 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:46 compute-0 nova_compute[253461]: 2025-11-22 04:05:46.017 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:05:46 compute-0 nova_compute[253461]: 2025-11-22 04:05:46.018 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:05:46 compute-0 nova_compute[253461]: 2025-11-22 04:05:46.018 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:05:46 compute-0 nova_compute[253461]: 2025-11-22 04:05:46.019 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:05:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:05:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 18K writes, 80K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 18K writes, 6334 syncs, 2.98 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9397 writes, 42K keys, 9397 commit groups, 1.0 writes per commit group, ingest: 24.85 MB, 0.04 MB/s
                                           Interval WAL: 9397 writes, 3817 syncs, 2.46 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:05:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:46 compute-0 nova_compute[253461]: 2025-11-22 04:05:46.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1632: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 487 KiB/s rd, 5.5 KiB/s wr, 98 op/s
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0011061603625382563 of space, bias 1.0, pg target 0.33184810876147686 quantized to 32 (current 32)
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:05:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
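
[note] Each pg_autoscaler line above computes pg target = usage_fraction x (target PGs per OSD x OSD count) x bias, then quantizes to an acceptable PG count. With this cluster's 3 OSDs and mon_target_pg_per_osd = 100 (the default, assumed here), the logged targets reproduce exactly:

    TARGET_PG_PER_OSD = 100   # assumed default mon_target_pg_per_osd
    NUM_OSDS = 3

    pools = {                  # usage fraction, bias (from the lines above)
        ".mgr":               (7.185749983720779e-06, 1.0),
        "volumes":            (0.0011061603625382563, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (usage, bias) in pools.items():
        target = usage * TARGET_PG_PER_OSD * NUM_OSDS * bias
        print(f"{name}: pg target {target:.16g}")
    # .mgr: 0.002155724995116234, volumes: 0.3318481087614769, ...
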
Nov 22 04:05:46 compute-0 ceph-mon[75011]: pgmap v1632: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 487 KiB/s rd, 5.5 KiB/s wr, 98 op/s
Nov 22 04:05:47 compute-0 nova_compute[253461]: 2025-11-22 04:05:47.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:05:48 compute-0 nova_compute[253461]: 2025-11-22 04:05:48.067 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:48 compute-0 nova_compute[253461]: 2025-11-22 04:05:48.440 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "13d70bea-d9e6-4e8d-9824-beae38fa6143" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:48 compute-0 nova_compute[253461]: 2025-11-22 04:05:48.441 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 408 KiB/s rd, 5.8 KiB/s wr, 92 op/s
Nov 22 04:05:48 compute-0 nova_compute[253461]: 2025-11-22 04:05:48.460 253465 DEBUG nova.compute.manager [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:05:48 compute-0 nova_compute[253461]: 2025-11-22 04:05:48.543 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:48 compute-0 nova_compute[253461]: 2025-11-22 04:05:48.544 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:48 compute-0 nova_compute[253461]: 2025-11-22 04:05:48.552 253465 DEBUG nova.virt.hardware [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:05:48 compute-0 nova_compute[253461]: 2025-11-22 04:05:48.553 253465 INFO nova.compute.claims [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:05:48 compute-0 nova_compute[253461]: 2025-11-22 04:05:48.732 253465 DEBUG oslo_concurrency.processutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:48 compute-0 ceph-mon[75011]: pgmap v1633: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 408 KiB/s rd, 5.8 KiB/s wr, 92 op/s
Nov 22 04:05:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:05:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3043834830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.169 253465 DEBUG oslo_concurrency.processutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.177 253465 DEBUG nova.compute.provider_tree [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.206 253465 DEBUG nova.scheduler.client.report [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.243 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.244 253465 DEBUG nova.compute.manager [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.306 253465 DEBUG nova.compute.manager [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.307 253465 DEBUG nova.network.neutron [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.326 253465 INFO nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.344 253465 DEBUG nova.compute.manager [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.413 253465 INFO nova.virt.block_device [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Booting with volume 27525764-6209-40cb-957b-821214eedf42 at /dev/vda
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.559 253465 DEBUG os_brick.utils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.560 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.578 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.579 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0198c0-bd53-4238-858b-89b63c9ac378]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.580 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.584 253465 DEBUG nova.policy [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '45ccef35c0c843a59c9dfd0eb67190a6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '83cc5de7368b40b984b51f781e85343c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.593 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.593 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[25a227ac-cd17-4c2c-a323-f8cb5d183282]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.596 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.610 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.611 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[f0380dff-29dd-4d05-b690-6ec5aea61125]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.612 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[4a06e695-72ef-4d5d-8b81-0f5c54d4c87b]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.614 253465 DEBUG oslo_concurrency.processutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.651 253465 DEBUG oslo_concurrency.processutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.654 253465 DEBUG os_brick.initiator.connectors.lightos [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.655 253465 DEBUG os_brick.initiator.connectors.lightos [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.655 253465 DEBUG os_brick.initiator.connectors.lightos [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.656 253465 DEBUG os_brick.utils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] <== get_connector_properties: return (96ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 04:05:49 compute-0 nova_compute[253461]: 2025-11-22 04:05:49.656 253465 DEBUG nova.virt.block_device [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Updating existing volume attachment record: 8edc5ba5-6db0-434a-98a8-67d62329fbae _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:05:49 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3043834830' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.183 253465 DEBUG nova.network.neutron [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Successfully created port: 9d19a24c-87e3-48b2-9c7a-f0795362a1a1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:05:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:05:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1359563047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:05:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 193 KiB/s rd, 8.8 KiB/s wr, 132 op/s
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.664 253465 DEBUG nova.compute.manager [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.667 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.668 253465 INFO nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Creating image(s)
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.669 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.669 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Ensure instance console log exists: /var/lib/nova/instances/13d70bea-d9e6-4e8d-9824-beae38fa6143/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.670 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.670 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.671 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Nov 22 04:05:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1359563047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:05:50 compute-0 ceph-mon[75011]: pgmap v1634: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 193 KiB/s rd, 8.8 KiB/s wr, 132 op/s
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.873 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Nov 22 04:05:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.890 253465 DEBUG nova.network.neutron [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Successfully updated port: 9d19a24c-87e3-48b2-9c7a-f0795362a1a1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.918 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "refresh_cache-13d70bea-d9e6-4e8d-9824-beae38fa6143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.919 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquired lock "refresh_cache-13d70bea-d9e6-4e8d-9824-beae38fa6143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.919 253465 DEBUG nova.network.neutron [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.998 253465 DEBUG nova.compute.manager [req-8aac9840-8b00-4b8e-b7d3-7a1db1124d3e req-98bd32be-692f-4cd3-adc4-38ca23b8552e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Received event network-changed-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.998 253465 DEBUG nova.compute.manager [req-8aac9840-8b00-4b8e-b7d3-7a1db1124d3e req-98bd32be-692f-4cd3-adc4-38ca23b8552e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Refreshing instance network info cache due to event network-changed-9d19a24c-87e3-48b2-9c7a-f0795362a1a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:05:50 compute-0 nova_compute[253461]: 2025-11-22 04:05:50.998 253465 DEBUG oslo_concurrency.lockutils [req-8aac9840-8b00-4b8e-b7d3-7a1db1124d3e req-98bd32be-692f-4cd3-adc4-38ca23b8552e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-13d70bea-d9e6-4e8d-9824-beae38fa6143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:05:51 compute-0 nova_compute[253461]: 2025-11-22 04:05:51.079 253465 DEBUG nova.network.neutron [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:05:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e408 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:05:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/43020736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:05:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/43020736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:51 compute-0 ceph-mgr[75294]: [devicehealth INFO root] Check health
Nov 22 04:05:51 compute-0 ceph-mon[75011]: osdmap e408: 3 total, 3 up, 3 in
Nov 22 04:05:51 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/43020736' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:51 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/43020736' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.403 253465 DEBUG nova.network.neutron [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Updating instance_info_cache with network_info: [{"id": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "address": "fa:16:3e:ac:23:4c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d19a24c-87", "ovs_interfaceid": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.424 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Releasing lock "refresh_cache-13d70bea-d9e6-4e8d-9824-beae38fa6143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.425 253465 DEBUG nova.compute.manager [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Instance network_info: |[{"id": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "address": "fa:16:3e:ac:23:4c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d19a24c-87", "ovs_interfaceid": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.425 253465 DEBUG oslo_concurrency.lockutils [req-8aac9840-8b00-4b8e-b7d3-7a1db1124d3e req-98bd32be-692f-4cd3-adc4-38ca23b8552e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-13d70bea-d9e6-4e8d-9824-beae38fa6143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.426 253465 DEBUG nova.network.neutron [req-8aac9840-8b00-4b8e-b7d3-7a1db1124d3e req-98bd32be-692f-4cd3-adc4-38ca23b8552e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Refreshing network info cache for port 9d19a24c-87e3-48b2-9c7a-f0795362a1a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.431 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Start _get_guest_xml network_info=[{"id": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "address": "fa:16:3e:ac:23:4c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d19a24c-87", "ovs_interfaceid": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': '8edc5ba5-6db0-434a-98a8-67d62329fbae', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-27525764-6209-40cb-957b-821214eedf42', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '27525764-6209-40cb-957b-821214eedf42', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '13d70bea-d9e6-4e8d-9824-beae38fa6143', 'attached_at': '', 'detached_at': '', 'volume_id': '27525764-6209-40cb-957b-821214eedf42', 'serial': '27525764-6209-40cb-957b-821214eedf42'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.438 253465 WARNING nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.447 253465 DEBUG nova.virt.libvirt.host [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.448 253465 DEBUG nova.virt.libvirt.host [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:05:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1636: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 169 KiB/s rd, 8.0 KiB/s wr, 113 op/s
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.452 253465 DEBUG nova.virt.libvirt.host [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.453 253465 DEBUG nova.virt.libvirt.host [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.454 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.454 253465 DEBUG nova.virt.hardware [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.455 253465 DEBUG nova.virt.hardware [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.455 253465 DEBUG nova.virt.hardware [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.456 253465 DEBUG nova.virt.hardware [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.456 253465 DEBUG nova.virt.hardware [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.456 253465 DEBUG nova.virt.hardware [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.457 253465 DEBUG nova.virt.hardware [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.457 253465 DEBUG nova.virt.hardware [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.457 253465 DEBUG nova.virt.hardware [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.458 253465 DEBUG nova.virt.hardware [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.458 253465 DEBUG nova.virt.hardware [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.495 253465 DEBUG nova.storage.rbd_utils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 13d70bea-d9e6-4e8d-9824-beae38fa6143_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:52 compute-0 nova_compute[253461]: 2025-11-22 04:05:52.499 253465 DEBUG oslo_concurrency.processutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Nov 22 04:05:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Nov 22 04:05:52 compute-0 ceph-mon[75011]: pgmap v1636: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 169 KiB/s rd, 8.0 KiB/s wr, 113 op/s
Nov 22 04:05:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Nov 22 04:05:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:05:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1774096491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.011 253465 DEBUG oslo_concurrency.processutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.043 253465 DEBUG nova.virt.libvirt.vif [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:05:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1270756583',display_name='tempest-TestVolumeBootPattern-server-1270756583',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1270756583',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL00ntGaoBOOeBs6+y4FUIy/lgKnZ24cCu86O0xSJiDYa9NepVO6DpAMiaAdnoZSl8JwTuHPIlPQIHrkP9B6Kyjt/oOfo9cDi3Gw7Ruq0v506sUUdjxtfkDfzDyLVnMg5A==',key_name='tempest-TestVolumeBootPattern-21755227',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-ixn8dsdr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:05:49Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=13d70bea-d9e6-4e8d-9824-beae38fa6143,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "address": "fa:16:3e:ac:23:4c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d19a24c-87", "ovs_interfaceid": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.043 253465 DEBUG nova.network.os_vif_util [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "address": "fa:16:3e:ac:23:4c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d19a24c-87", "ovs_interfaceid": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.044 253465 DEBUG nova.network.os_vif_util [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:23:4c,bridge_name='br-int',has_traffic_filtering=True,id=9d19a24c-87e3-48b2-9c7a-f0795362a1a1,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d19a24c-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.045 253465 DEBUG nova.objects.instance [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'pci_devices' on Instance uuid 13d70bea-d9e6-4e8d-9824-beae38fa6143 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.056 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:05:53 compute-0 nova_compute[253461]:   <uuid>13d70bea-d9e6-4e8d-9824-beae38fa6143</uuid>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   <name>instance-00000017</name>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <nova:name>tempest-TestVolumeBootPattern-server-1270756583</nova:name>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:05:52</nova:creationTime>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:05:53 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:05:53 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:05:53 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:05:53 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:05:53 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:05:53 compute-0 nova_compute[253461]:         <nova:user uuid="45ccef35c0c843a59c9dfd0eb67190a6">tempest-TestVolumeBootPattern-1584219565-project-member</nova:user>
Nov 22 04:05:53 compute-0 nova_compute[253461]:         <nova:project uuid="83cc5de7368b40b984b51f781e85343c">tempest-TestVolumeBootPattern-1584219565</nova:project>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:05:53 compute-0 nova_compute[253461]:         <nova:port uuid="9d19a24c-87e3-48b2-9c7a-f0795362a1a1">
Nov 22 04:05:53 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <system>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <entry name="serial">13d70bea-d9e6-4e8d-9824-beae38fa6143</entry>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <entry name="uuid">13d70bea-d9e6-4e8d-9824-beae38fa6143</entry>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     </system>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   <os>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   </os>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   <features>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   </features>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/13d70bea-d9e6-4e8d-9824-beae38fa6143_disk.config">
Nov 22 04:05:53 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       </source>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:05:53 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-27525764-6209-40cb-957b-821214eedf42">
Nov 22 04:05:53 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       </source>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:05:53 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <serial>27525764-6209-40cb-957b-821214eedf42</serial>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:ac:23:4c"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <target dev="tap9d19a24c-87"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/13d70bea-d9e6-4e8d-9824-beae38fa6143/console.log" append="off"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <video>
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     </video>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:05:53 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:05:53 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:05:53 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:05:53 compute-0 nova_compute[253461]: </domain>
Nov 22 04:05:53 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.056 253465 DEBUG nova.compute.manager [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Preparing to wait for external event network-vif-plugged-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.057 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.057 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.057 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.058 253465 DEBUG nova.virt.libvirt.vif [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:05:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1270756583',display_name='tempest-TestVolumeBootPattern-server-1270756583',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1270756583',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL00ntGaoBOOeBs6+y4FUIy/lgKnZ24cCu86O0xSJiDYa9NepVO6DpAMiaAdnoZSl8JwTuHPIlPQIHrkP9B6Kyjt/oOfo9cDi3Gw7Ruq0v506sUUdjxtfkDfzDyLVnMg5A==',key_name='tempest-TestVolumeBootPattern-21755227',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-ixn8dsdr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:05:49Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=13d70bea-d9e6-4e8d-9824-beae38fa6143,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "address": "fa:16:3e:ac:23:4c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d19a24c-87", "ovs_interfaceid": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.058 253465 DEBUG nova.network.os_vif_util [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "address": "fa:16:3e:ac:23:4c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d19a24c-87", "ovs_interfaceid": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.059 253465 DEBUG nova.network.os_vif_util [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ac:23:4c,bridge_name='br-int',has_traffic_filtering=True,id=9d19a24c-87e3-48b2-9c7a-f0795362a1a1,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d19a24c-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.059 253465 DEBUG os_vif [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:23:4c,bridge_name='br-int',has_traffic_filtering=True,id=9d19a24c-87e3-48b2-9c7a-f0795362a1a1,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d19a24c-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.059 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.060 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.060 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.064 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.064 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d19a24c-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.065 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d19a24c-87, col_values=(('external_ids', {'iface-id': '9d19a24c-87e3-48b2-9c7a-f0795362a1a1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ac:23:4c', 'vm-uuid': '13d70bea-d9e6-4e8d-9824-beae38fa6143'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.066 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:53 compute-0 NetworkManager[48916]: <info>  [1763784353.0678] manager: (tap9d19a24c-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/121)
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.068 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.074 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.077 253465 INFO os_vif [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ac:23:4c,bridge_name='br-int',has_traffic_filtering=True,id=9d19a24c-87e3-48b2-9c7a-f0795362a1a1,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d19a24c-87')
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.144 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.145 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.145 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] No VIF found with MAC fa:16:3e:ac:23:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.146 253465 INFO nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Using config drive
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.178 253465 DEBUG nova.storage.rbd_utils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 13d70bea-d9e6-4e8d-9824-beae38fa6143_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:05:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1346450085' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:05:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1346450085' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.622 253465 INFO nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Creating config drive at /var/lib/nova/instances/13d70bea-d9e6-4e8d-9824-beae38fa6143/disk.config
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.631 253465 DEBUG oslo_concurrency.processutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/13d70bea-d9e6-4e8d-9824-beae38fa6143/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp15zciy1p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.777 253465 DEBUG oslo_concurrency.processutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/13d70bea-d9e6-4e8d-9824-beae38fa6143/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp15zciy1p" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.820 253465 DEBUG nova.storage.rbd_utils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] rbd image 13d70bea-d9e6-4e8d-9824-beae38fa6143_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:05:53 compute-0 nova_compute[253461]: 2025-11-22 04:05:53.824 253465 DEBUG oslo_concurrency.processutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/13d70bea-d9e6-4e8d-9824-beae38fa6143/disk.config 13d70bea-d9e6-4e8d-9824-beae38fa6143_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:05:53 compute-0 ceph-mon[75011]: osdmap e409: 3 total, 3 up, 3 in
Nov 22 04:05:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1774096491' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:05:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1346450085' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1346450085' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.015 253465 DEBUG oslo_concurrency.processutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/13d70bea-d9e6-4e8d-9824-beae38fa6143/disk.config 13d70bea-d9e6-4e8d-9824-beae38fa6143_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.016 253465 INFO nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Deleting local config drive /var/lib/nova/instances/13d70bea-d9e6-4e8d-9824-beae38fa6143/disk.config because it was imported into RBD.
Nov 22 04:05:54 compute-0 kernel: tap9d19a24c-87: entered promiscuous mode
Nov 22 04:05:54 compute-0 NetworkManager[48916]: <info>  [1763784354.0793] manager: (tap9d19a24c-87): new Tun device (/org/freedesktop/NetworkManager/Devices/122)
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.080 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:54 compute-0 ovn_controller[152691]: 2025-11-22T04:05:54Z|00238|binding|INFO|Claiming lport 9d19a24c-87e3-48b2-9c7a-f0795362a1a1 for this chassis.
Nov 22 04:05:54 compute-0 ovn_controller[152691]: 2025-11-22T04:05:54Z|00239|binding|INFO|9d19a24c-87e3-48b2-9c7a-f0795362a1a1: Claiming fa:16:3e:ac:23:4c 10.100.0.13
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.088 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:23:4c 10.100.0.13'], port_security=['fa:16:3e:ac:23:4c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '13d70bea-d9e6-4e8d-9824-beae38fa6143', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '374de8a5-1500-46fb-adf8-2bb87fa0ef15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=9d19a24c-87e3-48b2-9c7a-f0795362a1a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.089 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 9d19a24c-87e3-48b2-9c7a-f0795362a1a1 in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 bound to our chassis
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.091 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:05:54 compute-0 ovn_controller[152691]: 2025-11-22T04:05:54Z|00240|binding|INFO|Setting lport 9d19a24c-87e3-48b2-9c7a-f0795362a1a1 ovn-installed in OVS
Nov 22 04:05:54 compute-0 ovn_controller[152691]: 2025-11-22T04:05:54Z|00241|binding|INFO|Setting lport 9d19a24c-87e3-48b2-9c7a-f0795362a1a1 up in Southbound
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.101 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.109 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1b81155d-52a9-4482-a501-a4c4585dc889]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:54 compute-0 systemd-udevd[289262]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:05:54 compute-0 systemd-machined[215728]: New machine qemu-23-instance-00000017.
Nov 22 04:05:54 compute-0 NetworkManager[48916]: <info>  [1763784354.1333] device (tap9d19a24c-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:05:54 compute-0 NetworkManager[48916]: <info>  [1763784354.1342] device (tap9d19a24c-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:05:54 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000017.
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.145 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[71f6988b-f639-4969-82ad-8a5a4be31a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.148 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[d67887cf-b4b1-4be8-8e41-ccba5be8ce14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.192 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4ff1c7-9def-4dce-989c-1b34a27db6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.224 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6d06c3ca-994b-49c2-8a5f-a831d82df431]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455473, 'reachable_time': 38477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289271, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.248 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ed25ff81-c646-4edb-8147-ffe756124c29]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4670b112-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455488, 'tstamp': 455488}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289274, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4670b112-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455491, 'tstamp': 455491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289274, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.251 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.253 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.254 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.255 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4670b112-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.255 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.256 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4670b112-90, col_values=(('external_ids', {'iface-id': 'e72a94a7-9aac-4cfd-886c-1e1e93834214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.257 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:05:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 5.9 KiB/s wr, 87 op/s
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.520 253465 DEBUG nova.compute.manager [req-09e0729a-44a4-4736-9a03-c8a8393d30e5 req-603f5091-634f-4058-9e8f-bc0c1a2edd82 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Received event network-vif-plugged-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.521 253465 DEBUG oslo_concurrency.lockutils [req-09e0729a-44a4-4736-9a03-c8a8393d30e5 req-603f5091-634f-4058-9e8f-bc0c1a2edd82 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.521 253465 DEBUG oslo_concurrency.lockutils [req-09e0729a-44a4-4736-9a03-c8a8393d30e5 req-603f5091-634f-4058-9e8f-bc0c1a2edd82 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.521 253465 DEBUG oslo_concurrency.lockutils [req-09e0729a-44a4-4736-9a03-c8a8393d30e5 req-603f5091-634f-4058-9e8f-bc0c1a2edd82 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.521 253465 DEBUG nova.compute.manager [req-09e0729a-44a4-4736-9a03-c8a8393d30e5 req-603f5091-634f-4058-9e8f-bc0c1a2edd82 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Processing event network-vif-plugged-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.601 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:05:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:54.602 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.602 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.642 253465 DEBUG nova.network.neutron [req-8aac9840-8b00-4b8e-b7d3-7a1db1124d3e req-98bd32be-692f-4cd3-adc4-38ca23b8552e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Updated VIF entry in instance network info cache for port 9d19a24c-87e3-48b2-9c7a-f0795362a1a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.643 253465 DEBUG nova.network.neutron [req-8aac9840-8b00-4b8e-b7d3-7a1db1124d3e req-98bd32be-692f-4cd3-adc4-38ca23b8552e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Updating instance_info_cache with network_info: [{"id": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "address": "fa:16:3e:ac:23:4c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d19a24c-87", "ovs_interfaceid": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.666 253465 DEBUG oslo_concurrency.lockutils [req-8aac9840-8b00-4b8e-b7d3-7a1db1124d3e req-98bd32be-692f-4cd3-adc4-38ca23b8552e f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-13d70bea-d9e6-4e8d-9824-beae38fa6143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:05:54 compute-0 ceph-mon[75011]: pgmap v1638: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 5.9 KiB/s wr, 87 op/s
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.926 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784354.926225, 13d70bea-d9e6-4e8d-9824-beae38fa6143 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.927 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] VM Started (Lifecycle Event)
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.929 253465 DEBUG nova.compute.manager [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.933 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.937 253465 INFO nova.virt.libvirt.driver [-] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Instance spawned successfully.
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.937 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.957 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.961 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.981 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.981 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.982 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.982 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.983 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:05:54 compute-0 nova_compute[253461]: 2025-11-22 04:05:54.983 253465 DEBUG nova.virt.libvirt.driver [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:05:55 compute-0 nova_compute[253461]: 2025-11-22 04:05:55.063 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:05:55 compute-0 nova_compute[253461]: 2025-11-22 04:05:55.063 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784354.926374, 13d70bea-d9e6-4e8d-9824-beae38fa6143 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:05:55 compute-0 nova_compute[253461]: 2025-11-22 04:05:55.064 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] VM Paused (Lifecycle Event)
Nov 22 04:05:55 compute-0 nova_compute[253461]: 2025-11-22 04:05:55.095 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:05:55 compute-0 nova_compute[253461]: 2025-11-22 04:05:55.099 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784354.9319363, 13d70bea-d9e6-4e8d-9824-beae38fa6143 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:05:55 compute-0 nova_compute[253461]: 2025-11-22 04:05:55.100 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] VM Resumed (Lifecycle Event)
Nov 22 04:05:55 compute-0 nova_compute[253461]: 2025-11-22 04:05:55.112 253465 INFO nova.compute.manager [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Took 4.45 seconds to spawn the instance on the hypervisor.
Nov 22 04:05:55 compute-0 nova_compute[253461]: 2025-11-22 04:05:55.112 253465 DEBUG nova.compute.manager [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:05:55 compute-0 nova_compute[253461]: 2025-11-22 04:05:55.125 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:05:55 compute-0 nova_compute[253461]: 2025-11-22 04:05:55.129 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:05:55 compute-0 nova_compute[253461]: 2025-11-22 04:05:55.175 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:05:55 compute-0 nova_compute[253461]: 2025-11-22 04:05:55.204 253465 INFO nova.compute.manager [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Took 6.69 seconds to build instance.
Nov 22 04:05:55 compute-0 nova_compute[253461]: 2025-11-22 04:05:55.221 253465 DEBUG oslo_concurrency.lockutils [None req-f14b2a02-20da-4109-a178-73d3834663d2 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:05:55 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2363427708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:05:55 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2363427708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:55 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2363427708' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:05:55 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2363427708' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:05:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e409 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:05:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 5.7 KiB/s wr, 150 op/s
Nov 22 04:05:56 compute-0 nova_compute[253461]: 2025-11-22 04:05:56.610 253465 DEBUG nova.compute.manager [req-10d344a7-c2ff-45b4-880e-8ef5be0dad1f req-d12b6dad-a6ba-463e-bd5f-ee04cb140abb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Received event network-vif-plugged-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:05:56 compute-0 nova_compute[253461]: 2025-11-22 04:05:56.610 253465 DEBUG oslo_concurrency.lockutils [req-10d344a7-c2ff-45b4-880e-8ef5be0dad1f req-d12b6dad-a6ba-463e-bd5f-ee04cb140abb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:05:56 compute-0 nova_compute[253461]: 2025-11-22 04:05:56.611 253465 DEBUG oslo_concurrency.lockutils [req-10d344a7-c2ff-45b4-880e-8ef5be0dad1f req-d12b6dad-a6ba-463e-bd5f-ee04cb140abb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:05:56 compute-0 nova_compute[253461]: 2025-11-22 04:05:56.611 253465 DEBUG oslo_concurrency.lockutils [req-10d344a7-c2ff-45b4-880e-8ef5be0dad1f req-d12b6dad-a6ba-463e-bd5f-ee04cb140abb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:05:56 compute-0 nova_compute[253461]: 2025-11-22 04:05:56.611 253465 DEBUG nova.compute.manager [req-10d344a7-c2ff-45b4-880e-8ef5be0dad1f req-d12b6dad-a6ba-463e-bd5f-ee04cb140abb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] No waiting events found dispatching network-vif-plugged-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:05:56 compute-0 nova_compute[253461]: 2025-11-22 04:05:56.612 253465 WARNING nova.compute.manager [req-10d344a7-c2ff-45b4-880e-8ef5be0dad1f req-d12b6dad-a6ba-463e-bd5f-ee04cb140abb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Received unexpected event network-vif-plugged-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 for instance with vm_state active and task_state None.
Nov 22 04:05:57 compute-0 ceph-mon[75011]: pgmap v1639: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 5.7 KiB/s wr, 150 op/s
Nov 22 04:05:57 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:05:57.604 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:05:58 compute-0 nova_compute[253461]: 2025-11-22 04:05:58.068 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:58 compute-0 nova_compute[253461]: 2025-11-22 04:05:58.077 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:05:58 compute-0 podman[289318]: 2025-11-22 04:05:58.375170823 +0000 UTC m=+0.054223295 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:05:58 compute-0 podman[289319]: 2025-11-22 04:05:58.448203036 +0000 UTC m=+0.117868501 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:05:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 177 op/s
Nov 22 04:05:58 compute-0 ceph-mon[75011]: pgmap v1640: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 21 KiB/s wr, 177 op/s
Nov 22 04:05:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Nov 22 04:05:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Nov 22 04:05:59 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Nov 22 04:06:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:06:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1046825848' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:06:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1046825848' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 21 KiB/s wr, 233 op/s
Nov 22 04:06:00 compute-0 ceph-mon[75011]: osdmap e410: 3 total, 3 up, 3 in
Nov 22 04:06:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1046825848' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1046825848' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:00 compute-0 ceph-mon[75011]: pgmap v1642: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 21 KiB/s wr, 233 op/s
Nov 22 04:06:00 compute-0 nova_compute[253461]: 2025-11-22 04:06:00.579 253465 DEBUG nova.compute.manager [req-d1c8b9fb-e93b-4d3f-a0fd-77ffb0540013 req-fcb40a8a-481f-47cd-8b27-a6f88266b8de f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Received event network-changed-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:06:00 compute-0 nova_compute[253461]: 2025-11-22 04:06:00.580 253465 DEBUG nova.compute.manager [req-d1c8b9fb-e93b-4d3f-a0fd-77ffb0540013 req-fcb40a8a-481f-47cd-8b27-a6f88266b8de f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Refreshing instance network info cache due to event network-changed-9d19a24c-87e3-48b2-9c7a-f0795362a1a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:06:00 compute-0 nova_compute[253461]: 2025-11-22 04:06:00.581 253465 DEBUG oslo_concurrency.lockutils [req-d1c8b9fb-e93b-4d3f-a0fd-77ffb0540013 req-fcb40a8a-481f-47cd-8b27-a6f88266b8de f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-13d70bea-d9e6-4e8d-9824-beae38fa6143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:06:00 compute-0 nova_compute[253461]: 2025-11-22 04:06:00.582 253465 DEBUG oslo_concurrency.lockutils [req-d1c8b9fb-e93b-4d3f-a0fd-77ffb0540013 req-fcb40a8a-481f-47cd-8b27-a6f88266b8de f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-13d70bea-d9e6-4e8d-9824-beae38fa6143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:06:00 compute-0 nova_compute[253461]: 2025-11-22 04:06:00.583 253465 DEBUG nova.network.neutron [req-d1c8b9fb-e93b-4d3f-a0fd-77ffb0540013 req-fcb40a8a-481f-47cd-8b27-a6f88266b8de f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Refreshing network info cache for port 9d19a24c-87e3-48b2-9c7a-f0795362a1a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:06:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e410 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Nov 22 04:06:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Nov 22 04:06:01 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Nov 22 04:06:01 compute-0 nova_compute[253461]: 2025-11-22 04:06:01.949 253465 DEBUG nova.network.neutron [req-d1c8b9fb-e93b-4d3f-a0fd-77ffb0540013 req-fcb40a8a-481f-47cd-8b27-a6f88266b8de f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Updated VIF entry in instance network info cache for port 9d19a24c-87e3-48b2-9c7a-f0795362a1a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:06:01 compute-0 nova_compute[253461]: 2025-11-22 04:06:01.950 253465 DEBUG nova.network.neutron [req-d1c8b9fb-e93b-4d3f-a0fd-77ffb0540013 req-fcb40a8a-481f-47cd-8b27-a6f88266b8de f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Updating instance_info_cache with network_info: [{"id": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "address": "fa:16:3e:ac:23:4c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d19a24c-87", "ovs_interfaceid": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:06:02 compute-0 nova_compute[253461]: 2025-11-22 04:06:02.126 253465 DEBUG oslo_concurrency.lockutils [req-d1c8b9fb-e93b-4d3f-a0fd-77ffb0540013 req-fcb40a8a-481f-47cd-8b27-a6f88266b8de f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-13d70bea-d9e6-4e8d-9824-beae38fa6143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:06:02 compute-0 ceph-mon[75011]: osdmap e411: 3 total, 3 up, 3 in
Nov 22 04:06:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 22 KiB/s wr, 240 op/s
Nov 22 04:06:03 compute-0 nova_compute[253461]: 2025-11-22 04:06:03.070 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:03 compute-0 nova_compute[253461]: 2025-11-22 04:06:03.078 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:03 compute-0 ceph-mon[75011]: pgmap v1644: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 22 KiB/s wr, 240 op/s
Nov 22 04:06:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Nov 22 04:06:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Nov 22 04:06:04 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Nov 22 04:06:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 KiB/s wr, 125 op/s
Nov 22 04:06:05 compute-0 ceph-mon[75011]: osdmap e412: 3 total, 3 up, 3 in
Nov 22 04:06:05 compute-0 ceph-mon[75011]: pgmap v1646: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 KiB/s wr, 125 op/s
Nov 22 04:06:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:06:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3294070538' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:06:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3294070538' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e412 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:06:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:06:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:06:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:06:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:06:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:06:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3294070538' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3294070538' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 721 KiB/s rd, 2.2 KiB/s wr, 66 op/s
Nov 22 04:06:07 compute-0 ceph-mon[75011]: pgmap v1647: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 721 KiB/s rd, 2.2 KiB/s wr, 66 op/s
Nov 22 04:06:08 compute-0 nova_compute[253461]: 2025-11-22 04:06:08.074 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:08 compute-0 nova_compute[253461]: 2025-11-22 04:06:08.080 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Nov 22 04:06:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.1 KiB/s wr, 87 op/s
Nov 22 04:06:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Nov 22 04:06:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Nov 22 04:06:08 compute-0 ceph-mon[75011]: pgmap v1648: 305 pgs: 305 active+clean; 169 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.1 KiB/s wr, 87 op/s
Nov 22 04:06:09 compute-0 ovn_controller[152691]: 2025-11-22T04:06:09Z|00048|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.13
Nov 22 04:06:09 compute-0 ovn_controller[152691]: 2025-11-22T04:06:09Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:ac:23:4c 10.100.0.13
Nov 22 04:06:09 compute-0 ceph-mon[75011]: osdmap e413: 3 total, 3 up, 3 in
Nov 22 04:06:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 174 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 133 KiB/s wr, 125 op/s
Nov 22 04:06:11 compute-0 ceph-mon[75011]: pgmap v1650: 305 pgs: 305 active+clean; 174 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 133 KiB/s wr, 125 op/s
Nov 22 04:06:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:06:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292849444' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:06:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3292849444' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 183 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 745 KiB/s wr, 159 op/s
Nov 22 04:06:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3292849444' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3292849444' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:13 compute-0 nova_compute[253461]: 2025-11-22 04:06:13.077 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:13 compute-0 nova_compute[253461]: 2025-11-22 04:06:13.083 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:13 compute-0 ovn_controller[152691]: 2025-11-22T04:06:13Z|00050|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.13
Nov 22 04:06:13 compute-0 ovn_controller[152691]: 2025-11-22T04:06:13Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:ac:23:4c 10.100.0.13
Nov 22 04:06:13 compute-0 podman[289364]: 2025-11-22 04:06:13.414517466 +0000 UTC m=+0.089537673 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:06:13 compute-0 ceph-mon[75011]: pgmap v1651: 305 pgs: 305 active+clean; 183 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 745 KiB/s wr, 159 op/s
Nov 22 04:06:14 compute-0 ovn_controller[152691]: 2025-11-22T04:06:14Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ac:23:4c 10.100.0.13
Nov 22 04:06:14 compute-0 ovn_controller[152691]: 2025-11-22T04:06:14Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ac:23:4c 10.100.0.13
Nov 22 04:06:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 183 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 614 KiB/s wr, 143 op/s
Nov 22 04:06:14 compute-0 ceph-mon[75011]: pgmap v1652: 305 pgs: 305 active+clean; 183 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 614 KiB/s wr, 143 op/s
Nov 22 04:06:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:06:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3523296025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:06:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3523296025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3523296025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3523296025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e413 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Nov 22 04:06:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Nov 22 04:06:16 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Nov 22 04:06:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 187 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 871 KiB/s wr, 131 op/s
Nov 22 04:06:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:06:16 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2031471518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:06:16 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2031471518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:17 compute-0 ceph-mon[75011]: osdmap e414: 3 total, 3 up, 3 in
Nov 22 04:06:17 compute-0 ceph-mon[75011]: pgmap v1654: 305 pgs: 305 active+clean; 187 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 871 KiB/s wr, 131 op/s
Nov 22 04:06:17 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2031471518' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:17 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2031471518' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:18 compute-0 nova_compute[253461]: 2025-11-22 04:06:18.081 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:18 compute-0 nova_compute[253461]: 2025-11-22 04:06:18.084 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 187 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 709 KiB/s wr, 123 op/s
Nov 22 04:06:18 compute-0 ceph-mon[75011]: pgmap v1655: 305 pgs: 305 active+clean; 187 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 709 KiB/s wr, 123 op/s
Nov 22 04:06:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 187 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 464 KiB/s rd, 595 KiB/s wr, 91 op/s
Nov 22 04:06:20 compute-0 ceph-mon[75011]: pgmap v1656: 305 pgs: 305 active+clean; 187 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 464 KiB/s rd, 595 KiB/s wr, 91 op/s
Nov 22 04:06:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 187 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 98 KiB/s wr, 50 op/s
Nov 22 04:06:22 compute-0 ceph-mon[75011]: pgmap v1657: 305 pgs: 305 active+clean; 187 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 117 KiB/s rd, 98 KiB/s wr, 50 op/s
Nov 22 04:06:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:23.016 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:23.017 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:23.018 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:23 compute-0 nova_compute[253461]: 2025-11-22 04:06:23.083 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:23 compute-0 nova_compute[253461]: 2025-11-22 04:06:23.087 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 187 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 94 KiB/s wr, 39 op/s
Nov 22 04:06:24 compute-0 ceph-mon[75011]: pgmap v1658: 305 pgs: 305 active+clean; 187 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 94 KiB/s wr, 39 op/s
Nov 22 04:06:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1659: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 54 KiB/s wr, 34 op/s
Nov 22 04:06:26 compute-0 ceph-mon[75011]: pgmap v1659: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 54 KiB/s wr, 34 op/s
Nov 22 04:06:28 compute-0 nova_compute[253461]: 2025-11-22 04:06:28.088 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:28 compute-0 nova_compute[253461]: 2025-11-22 04:06:28.090 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:06:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 51 KiB/s wr, 31 op/s
Nov 22 04:06:28 compute-0 ceph-mon[75011]: pgmap v1660: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 51 KiB/s wr, 31 op/s
Nov 22 04:06:29 compute-0 podman[289384]: 2025-11-22 04:06:29.381242078 +0000 UTC m=+0.053062238 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:06:29 compute-0 podman[289385]: 2025-11-22 04:06:29.417258974 +0000 UTC m=+0.082268491 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 04:06:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Nov 22 04:06:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Nov 22 04:06:29 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Nov 22 04:06:30 compute-0 ovn_controller[152691]: 2025-11-22T04:06:30Z|00242|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 22 04:06:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 62 KiB/s wr, 10 op/s
Nov 22 04:06:30 compute-0 ceph-mon[75011]: osdmap e415: 3 total, 3 up, 3 in
Nov 22 04:06:30 compute-0 ceph-mon[75011]: pgmap v1662: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 62 KiB/s wr, 10 op/s
Nov 22 04:06:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e415 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Nov 22 04:06:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Nov 22 04:06:32 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Nov 22 04:06:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 68 KiB/s wr, 11 op/s
Nov 22 04:06:33 compute-0 nova_compute[253461]: 2025-11-22 04:06:33.089 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:06:33 compute-0 nova_compute[253461]: 2025-11-22 04:06:33.092 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:06:33 compute-0 nova_compute[253461]: 2025-11-22 04:06:33.092 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 22 04:06:33 compute-0 nova_compute[253461]: 2025-11-22 04:06:33.092 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 22 04:06:33 compute-0 nova_compute[253461]: 2025-11-22 04:06:33.131 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:33 compute-0 nova_compute[253461]: 2025-11-22 04:06:33.132 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 22 04:06:33 compute-0 ceph-mon[75011]: osdmap e416: 3 total, 3 up, 3 in
Nov 22 04:06:33 compute-0 ceph-mon[75011]: pgmap v1664: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 68 KiB/s wr, 11 op/s
Nov 22 04:06:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1665: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 12 KiB/s wr, 14 op/s
Nov 22 04:06:34 compute-0 nova_compute[253461]: 2025-11-22 04:06:34.590 253465 DEBUG nova.compute.manager [req-4dee786c-eaf2-44da-a15a-240c83429b7e req-e3e56352-e5fc-4dc5-91be-52cf9ebf8cf0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Received event network-changed-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:06:34 compute-0 nova_compute[253461]: 2025-11-22 04:06:34.590 253465 DEBUG nova.compute.manager [req-4dee786c-eaf2-44da-a15a-240c83429b7e req-e3e56352-e5fc-4dc5-91be-52cf9ebf8cf0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Refreshing instance network info cache due to event network-changed-9d19a24c-87e3-48b2-9c7a-f0795362a1a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:06:34 compute-0 nova_compute[253461]: 2025-11-22 04:06:34.591 253465 DEBUG oslo_concurrency.lockutils [req-4dee786c-eaf2-44da-a15a-240c83429b7e req-e3e56352-e5fc-4dc5-91be-52cf9ebf8cf0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-13d70bea-d9e6-4e8d-9824-beae38fa6143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:06:34 compute-0 nova_compute[253461]: 2025-11-22 04:06:34.591 253465 DEBUG oslo_concurrency.lockutils [req-4dee786c-eaf2-44da-a15a-240c83429b7e req-e3e56352-e5fc-4dc5-91be-52cf9ebf8cf0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-13d70bea-d9e6-4e8d-9824-beae38fa6143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:06:34 compute-0 nova_compute[253461]: 2025-11-22 04:06:34.591 253465 DEBUG nova.network.neutron [req-4dee786c-eaf2-44da-a15a-240c83429b7e req-e3e56352-e5fc-4dc5-91be-52cf9ebf8cf0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Refreshing network info cache for port 9d19a24c-87e3-48b2-9c7a-f0795362a1a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:06:34 compute-0 ceph-mon[75011]: pgmap v1665: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 68 KiB/s rd, 12 KiB/s wr, 14 op/s
Nov 22 04:06:34 compute-0 nova_compute[253461]: 2025-11-22 04:06:34.943 253465 DEBUG oslo_concurrency.lockutils [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "13d70bea-d9e6-4e8d-9824-beae38fa6143" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:34 compute-0 nova_compute[253461]: 2025-11-22 04:06:34.944 253465 DEBUG oslo_concurrency.lockutils [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:34 compute-0 nova_compute[253461]: 2025-11-22 04:06:34.945 253465 DEBUG oslo_concurrency.lockutils [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:34 compute-0 nova_compute[253461]: 2025-11-22 04:06:34.945 253465 DEBUG oslo_concurrency.lockutils [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:34 compute-0 nova_compute[253461]: 2025-11-22 04:06:34.945 253465 DEBUG oslo_concurrency.lockutils [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:34 compute-0 nova_compute[253461]: 2025-11-22 04:06:34.946 253465 INFO nova.compute.manager [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Terminating instance
Nov 22 04:06:34 compute-0 nova_compute[253461]: 2025-11-22 04:06:34.948 253465 DEBUG nova.compute.manager [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:06:35 compute-0 kernel: tap9d19a24c-87 (unregistering): left promiscuous mode
Nov 22 04:06:35 compute-0 NetworkManager[48916]: <info>  [1763784395.4661] device (tap9d19a24c-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:06:35 compute-0 ovn_controller[152691]: 2025-11-22T04:06:35Z|00243|binding|INFO|Releasing lport 9d19a24c-87e3-48b2-9c7a-f0795362a1a1 from this chassis (sb_readonly=0)
Nov 22 04:06:35 compute-0 ovn_controller[152691]: 2025-11-22T04:06:35Z|00244|binding|INFO|Setting lport 9d19a24c-87e3-48b2-9c7a-f0795362a1a1 down in Southbound
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.484 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:35 compute-0 ovn_controller[152691]: 2025-11-22T04:06:35Z|00245|binding|INFO|Removing iface tap9d19a24c-87 ovn-installed in OVS
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.488 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.524 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.527 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:23:4c 10.100.0.13'], port_security=['fa:16:3e:ac:23:4c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '13d70bea-d9e6-4e8d-9824-beae38fa6143', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '374de8a5-1500-46fb-adf8-2bb87fa0ef15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=9d19a24c-87e3-48b2-9c7a-f0795362a1a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.528 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 9d19a24c-87e3-48b2-9c7a-f0795362a1a1 in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 unbound from our chassis
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.531 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4670b112-9f63-4a03-8d79-91f581c69c03
Nov 22 04:06:35 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Deactivated successfully.
Nov 22 04:06:35 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Consumed 16.192s CPU time.
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.554 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e33c912e-e0f9-44e4-b50e-6cfc4d24b5d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:06:35 compute-0 systemd-machined[215728]: Machine qemu-23-instance-00000017 terminated.
Nov 22 04:06:35 compute-0 kernel: tap9d19a24c-87: entered promiscuous mode
Nov 22 04:06:35 compute-0 kernel: tap9d19a24c-87 (unregistering): left promiscuous mode
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.590 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.601 253465 INFO nova.virt.libvirt.driver [-] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Instance destroyed successfully.
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.602 253465 DEBUG nova.objects.instance [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'resources' on Instance uuid 13d70bea-d9e6-4e8d-9824-beae38fa6143 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.602 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[51ddcb50-e2af-4f41-8848-40f6d0a6b3ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.605 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c06ddb-89d8-4bbd-a231-600c368d0bf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.644 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2e6b93-939d-485c-9ae5-df07a46fdaf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.664 253465 DEBUG nova.virt.libvirt.vif [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:05:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1270756583',display_name='tempest-TestVolumeBootPattern-server-1270756583',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1270756583',id=23,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL00ntGaoBOOeBs6+y4FUIy/lgKnZ24cCu86O0xSJiDYa9NepVO6DpAMiaAdnoZSl8JwTuHPIlPQIHrkP9B6Kyjt/oOfo9cDi3Gw7Ruq0v506sUUdjxtfkDfzDyLVnMg5A==',key_name='tempest-TestVolumeBootPattern-21755227',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:05:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-ixn8dsdr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:05:55Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=13d70bea-d9e6-4e8d-9824-beae38fa6143,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "address": "fa:16:3e:ac:23:4c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d19a24c-87", "ovs_interfaceid": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.664 253465 DEBUG nova.network.os_vif_util [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "address": "fa:16:3e:ac:23:4c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d19a24c-87", "ovs_interfaceid": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.665 253465 DEBUG nova.network.os_vif_util [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ac:23:4c,bridge_name='br-int',has_traffic_filtering=True,id=9d19a24c-87e3-48b2-9c7a-f0795362a1a1,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d19a24c-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.665 253465 DEBUG os_vif [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:23:4c,bridge_name='br-int',has_traffic_filtering=True,id=9d19a24c-87e3-48b2-9c7a-f0795362a1a1,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d19a24c-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.667 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.667 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d19a24c-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.669 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.670 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.670 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[53b7b7ac-d5b6-435a-8f76-a2cf62735cc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4670b112-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:58:43:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455473, 'reachable_time': 38477, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289446, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.673 253465 INFO os_vif [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ac:23:4c,bridge_name='br-int',has_traffic_filtering=True,id=9d19a24c-87e3-48b2-9c7a-f0795362a1a1,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d19a24c-87')
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.698 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c15091f0-371b-43fc-96e3-3ddc3d73251a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4670b112-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455488, 'tstamp': 455488}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289447, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4670b112-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 455491, 'tstamp': 455491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289447, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.700 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.702 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.704 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4670b112-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.704 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.705 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4670b112-90, col_values=(('external_ids', {'iface-id': 'e72a94a7-9aac-4cfd-886c-1e1e93834214'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:06:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:35.705 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.948 253465 DEBUG nova.compute.manager [req-6910ca48-8f28-4e41-afbd-5d9d40f815d6 req-08df5776-509a-4924-8821-e51c4abe2485 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Received event network-vif-unplugged-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.948 253465 DEBUG oslo_concurrency.lockutils [req-6910ca48-8f28-4e41-afbd-5d9d40f815d6 req-08df5776-509a-4924-8821-e51c4abe2485 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.949 253465 DEBUG oslo_concurrency.lockutils [req-6910ca48-8f28-4e41-afbd-5d9d40f815d6 req-08df5776-509a-4924-8821-e51c4abe2485 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.949 253465 DEBUG oslo_concurrency.lockutils [req-6910ca48-8f28-4e41-afbd-5d9d40f815d6 req-08df5776-509a-4924-8821-e51c4abe2485 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.949 253465 DEBUG nova.compute.manager [req-6910ca48-8f28-4e41-afbd-5d9d40f815d6 req-08df5776-509a-4924-8821-e51c4abe2485 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] No waiting events found dispatching network-vif-unplugged-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:06:35 compute-0 nova_compute[253461]: 2025-11-22 04:06:35.950 253465 DEBUG nova.compute.manager [req-6910ca48-8f28-4e41-afbd-5d9d40f815d6 req-08df5776-509a-4924-8821-e51c4abe2485 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Received event network-vif-unplugged-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:06:36 compute-0 nova_compute[253461]: 2025-11-22 04:06:36.080 253465 DEBUG nova.network.neutron [req-4dee786c-eaf2-44da-a15a-240c83429b7e req-e3e56352-e5fc-4dc5-91be-52cf9ebf8cf0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Updated VIF entry in instance network info cache for port 9d19a24c-87e3-48b2-9c7a-f0795362a1a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:06:36 compute-0 nova_compute[253461]: 2025-11-22 04:06:36.081 253465 DEBUG nova.network.neutron [req-4dee786c-eaf2-44da-a15a-240c83429b7e req-e3e56352-e5fc-4dc5-91be-52cf9ebf8cf0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Updating instance_info_cache with network_info: [{"id": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "address": "fa:16:3e:ac:23:4c", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d19a24c-87", "ovs_interfaceid": "9d19a24c-87e3-48b2-9c7a-f0795362a1a1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:06:36 compute-0 nova_compute[253461]: 2025-11-22 04:06:36.162 253465 DEBUG oslo_concurrency.lockutils [req-4dee786c-eaf2-44da-a15a-240c83429b7e req-e3e56352-e5fc-4dc5-91be-52cf9ebf8cf0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-13d70bea-d9e6-4e8d-9824-beae38fa6143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:06:36
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'images', '.mgr', 'vms', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes']
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:06:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 5.1 KiB/s wr, 36 op/s
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:06:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:06:36 compute-0 ceph-mon[75011]: pgmap v1666: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 5.1 KiB/s wr, 36 op/s
Nov 22 04:06:37 compute-0 sudo[289466]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:06:37 compute-0 sudo[289466]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:37 compute-0 sudo[289466]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:37 compute-0 sudo[289491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:06:37 compute-0 sudo[289491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:37 compute-0 sudo[289491]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:37 compute-0 sudo[289516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:06:37 compute-0 sudo[289516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:37 compute-0 sudo[289516]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:37 compute-0 sudo[289541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 04:06:37 compute-0 sudo[289541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:37 compute-0 sudo[289541]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:06:37 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:06:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:06:38 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:06:38 compute-0 sudo[289589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:06:38 compute-0 sudo[289589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:38 compute-0 sudo[289589]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:38 compute-0 nova_compute[253461]: 2025-11-22 04:06:38.134 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:38 compute-0 sudo[289614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:06:38 compute-0 sudo[289614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:38 compute-0 sudo[289614]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:38 compute-0 sudo[289639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:06:38 compute-0 sudo[289639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:38 compute-0 nova_compute[253461]: 2025-11-22 04:06:38.249 253465 DEBUG nova.compute.manager [req-463c90ca-cd48-4f3b-8047-9b14ed146f7f req-099073c4-a8a2-4ad6-8372-31e142bf2fcb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Received event network-vif-plugged-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:06:38 compute-0 nova_compute[253461]: 2025-11-22 04:06:38.249 253465 DEBUG oslo_concurrency.lockutils [req-463c90ca-cd48-4f3b-8047-9b14ed146f7f req-099073c4-a8a2-4ad6-8372-31e142bf2fcb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:38 compute-0 nova_compute[253461]: 2025-11-22 04:06:38.249 253465 DEBUG oslo_concurrency.lockutils [req-463c90ca-cd48-4f3b-8047-9b14ed146f7f req-099073c4-a8a2-4ad6-8372-31e142bf2fcb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:38 compute-0 nova_compute[253461]: 2025-11-22 04:06:38.249 253465 DEBUG oslo_concurrency.lockutils [req-463c90ca-cd48-4f3b-8047-9b14ed146f7f req-099073c4-a8a2-4ad6-8372-31e142bf2fcb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:38 compute-0 nova_compute[253461]: 2025-11-22 04:06:38.250 253465 DEBUG nova.compute.manager [req-463c90ca-cd48-4f3b-8047-9b14ed146f7f req-099073c4-a8a2-4ad6-8372-31e142bf2fcb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] No waiting events found dispatching network-vif-plugged-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:06:38 compute-0 nova_compute[253461]: 2025-11-22 04:06:38.250 253465 WARNING nova.compute.manager [req-463c90ca-cd48-4f3b-8047-9b14ed146f7f req-099073c4-a8a2-4ad6-8372-31e142bf2fcb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Received unexpected event network-vif-plugged-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 for instance with vm_state active and task_state deleting.
Nov 22 04:06:38 compute-0 sudo[289639]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:38 compute-0 sudo[289664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:06:38 compute-0 sudo[289664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 1.5 KiB/s wr, 40 op/s
Nov 22 04:06:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:06:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/393205240' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:06:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/393205240' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:38 compute-0 sudo[289664]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:06:38 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:06:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:06:38 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:06:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:06:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:06:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:06:39 compute-0 ceph-mon[75011]: pgmap v1667: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 317 KiB/s rd, 1.5 KiB/s wr, 40 op/s
Nov 22 04:06:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/393205240' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/393205240' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:06:39 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 24268f4d-bfa2-44d8-9f6d-7466a1b455d0 does not exist
Nov 22 04:06:39 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ae8f93c0-6480-4802-892d-2ac3007b3167 does not exist
Nov 22 04:06:39 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev e5567210-05d7-44ec-a547-749372bf0c0e does not exist
Nov 22 04:06:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:06:39 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:06:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:06:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:06:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:06:39 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:06:39 compute-0 sudo[289720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:06:39 compute-0 sudo[289720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:39 compute-0 sudo[289720]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:39 compute-0 sudo[289745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:06:39 compute-0 sudo[289745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:39 compute-0 sudo[289745]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:39 compute-0 sudo[289770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:06:39 compute-0 sudo[289770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:39 compute-0 sudo[289770]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:39 compute-0 sudo[289796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:06:39 compute-0 sudo[289796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:39 compute-0 nova_compute[253461]: 2025-11-22 04:06:39.758 253465 INFO nova.virt.libvirt.driver [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Deleting instance files /var/lib/nova/instances/13d70bea-d9e6-4e8d-9824-beae38fa6143_del
Nov 22 04:06:39 compute-0 nova_compute[253461]: 2025-11-22 04:06:39.759 253465 INFO nova.virt.libvirt.driver [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Deletion of /var/lib/nova/instances/13d70bea-d9e6-4e8d-9824-beae38fa6143_del complete
Nov 22 04:06:39 compute-0 nova_compute[253461]: 2025-11-22 04:06:39.988 253465 INFO nova.compute.manager [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Took 5.04 seconds to destroy the instance on the hypervisor.
Nov 22 04:06:39 compute-0 nova_compute[253461]: 2025-11-22 04:06:39.989 253465 DEBUG oslo.service.loopingcall [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:06:39 compute-0 nova_compute[253461]: 2025-11-22 04:06:39.990 253465 DEBUG nova.compute.manager [-] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:06:39 compute-0 nova_compute[253461]: 2025-11-22 04:06:39.990 253465 DEBUG nova.network.neutron [-] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:06:40 compute-0 podman[289861]: 2025-11-22 04:06:40.013984201 +0000 UTC m=+0.040293520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:06:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:06:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:06:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:06:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:06:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:06:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:06:40 compute-0 podman[289861]: 2025-11-22 04:06:40.172256012 +0000 UTC m=+0.198565271 container create 2ecad0bf1fa253ebcca5609427170f87f7f2f1b9cda61e46b9d2c20a2f1a6f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 04:06:40 compute-0 systemd[1]: Started libpod-conmon-2ecad0bf1fa253ebcca5609427170f87f7f2f1b9cda61e46b9d2c20a2f1a6f00.scope.
Nov 22 04:06:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:06:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 1.6 KiB/s wr, 51 op/s
Nov 22 04:06:40 compute-0 podman[289861]: 2025-11-22 04:06:40.521748408 +0000 UTC m=+0.548057707 container init 2ecad0bf1fa253ebcca5609427170f87f7f2f1b9cda61e46b9d2c20a2f1a6f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_vaughan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:06:40 compute-0 podman[289861]: 2025-11-22 04:06:40.530541305 +0000 UTC m=+0.556850554 container start 2ecad0bf1fa253ebcca5609427170f87f7f2f1b9cda61e46b9d2c20a2f1a6f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 04:06:40 compute-0 infallible_vaughan[289878]: 167 167
Nov 22 04:06:40 compute-0 systemd[1]: libpod-2ecad0bf1fa253ebcca5609427170f87f7f2f1b9cda61e46b9d2c20a2f1a6f00.scope: Deactivated successfully.
Nov 22 04:06:40 compute-0 podman[289861]: 2025-11-22 04:06:40.653797795 +0000 UTC m=+0.680107114 container attach 2ecad0bf1fa253ebcca5609427170f87f7f2f1b9cda61e46b9d2c20a2f1a6f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_vaughan, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:06:40 compute-0 podman[289861]: 2025-11-22 04:06:40.654601194 +0000 UTC m=+0.680910443 container died 2ecad0bf1fa253ebcca5609427170f87f7f2f1b9cda61e46b9d2c20a2f1a6f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 04:06:40 compute-0 nova_compute[253461]: 2025-11-22 04:06:40.672 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-581ab884dc3bc36d378e6839518a9f2a7804a46ad06a3c6cef81eb570d129d3b-merged.mount: Deactivated successfully.
Nov 22 04:06:41 compute-0 ceph-mon[75011]: pgmap v1668: 305 pgs: 305 active+clean; 191 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 1.6 KiB/s wr, 51 op/s
Nov 22 04:06:41 compute-0 nova_compute[253461]: 2025-11-22 04:06:41.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:06:41 compute-0 nova_compute[253461]: 2025-11-22 04:06:41.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:06:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e416 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Nov 22 04:06:41 compute-0 nova_compute[253461]: 2025-11-22 04:06:41.542 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:41 compute-0 nova_compute[253461]: 2025-11-22 04:06:41.543 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:41 compute-0 nova_compute[253461]: 2025-11-22 04:06:41.544 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:41 compute-0 nova_compute[253461]: 2025-11-22 04:06:41.544 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:06:41 compute-0 nova_compute[253461]: 2025-11-22 04:06:41.545 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.007 253465 DEBUG nova.compute.manager [req-a778f300-82cb-4da9-89b8-348e22c4a197 req-d40215f0-8dd0-4d7f-bfa7-051e90ee55a2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Received event network-vif-deleted-9d19a24c-87e3-48b2-9c7a-f0795362a1a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.008 253465 INFO nova.compute.manager [req-a778f300-82cb-4da9-89b8-348e22c4a197 req-d40215f0-8dd0-4d7f-bfa7-051e90ee55a2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Neutron deleted interface 9d19a24c-87e3-48b2-9c7a-f0795362a1a1; detaching it from the instance and deleting it from the info cache
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.008 253465 DEBUG nova.network.neutron [req-a778f300-82cb-4da9-89b8-348e22c4a197 req-d40215f0-8dd0-4d7f-bfa7-051e90ee55a2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:06:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Nov 22 04:06:42 compute-0 podman[289861]: 2025-11-22 04:06:42.056644586 +0000 UTC m=+2.082953815 container remove 2ecad0bf1fa253ebcca5609427170f87f7f2f1b9cda61e46b9d2c20a2f1a6f00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_vaughan, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.064 253465 DEBUG nova.network.neutron [-] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:06:42 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Nov 22 04:06:42 compute-0 systemd[1]: libpod-conmon-2ecad0bf1fa253ebcca5609427170f87f7f2f1b9cda61e46b9d2c20a2f1a6f00.scope: Deactivated successfully.
Nov 22 04:06:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:06:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2184920922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.278 253465 INFO nova.compute.manager [-] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Took 2.29 seconds to deallocate network for instance.
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.284 253465 DEBUG nova.compute.manager [req-a778f300-82cb-4da9-89b8-348e22c4a197 req-d40215f0-8dd0-4d7f-bfa7-051e90ee55a2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Detach interface failed, port_id=9d19a24c-87e3-48b2-9c7a-f0795362a1a1, reason: Instance 13d70bea-d9e6-4e8d-9824-beae38fa6143 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.309 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.764s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:42 compute-0 podman[289922]: 2025-11-22 04:06:42.266709213 +0000 UTC m=+0.032759290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:06:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 190 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 KiB/s wr, 59 op/s
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.644 253465 INFO nova.compute.manager [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Took 0.37 seconds to detach 1 volumes for instance.
Nov 22 04:06:42 compute-0 podman[289922]: 2025-11-22 04:06:42.645776307 +0000 UTC m=+0.411826324 container create 73564d3e203ce3556b4885cdab9ec1e13b785573edf37ad07f3ae8052a0c4596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_faraday, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.685 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.686 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.915 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.917 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4236MB free_disk=59.98798751831055GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.917 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:42 compute-0 nova_compute[253461]: 2025-11-22 04:06:42.918 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:43 compute-0 nova_compute[253461]: 2025-11-22 04:06:43.136 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:43 compute-0 systemd[1]: Started libpod-conmon-73564d3e203ce3556b4885cdab9ec1e13b785573edf37ad07f3ae8052a0c4596.scope.
Nov 22 04:06:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8166f48d5ac956b9a3061266ff56e8064ce368ef99b8aab209fa2c46c680585/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8166f48d5ac956b9a3061266ff56e8064ce368ef99b8aab209fa2c46c680585/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8166f48d5ac956b9a3061266ff56e8064ce368ef99b8aab209fa2c46c680585/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8166f48d5ac956b9a3061266ff56e8064ce368ef99b8aab209fa2c46c680585/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8166f48d5ac956b9a3061266ff56e8064ce368ef99b8aab209fa2c46c680585/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:43 compute-0 nova_compute[253461]: 2025-11-22 04:06:43.180 253465 DEBUG oslo_concurrency.lockutils [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:43 compute-0 ceph-mon[75011]: osdmap e417: 3 total, 3 up, 3 in
Nov 22 04:06:43 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2184920922' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:06:43 compute-0 ceph-mon[75011]: pgmap v1670: 305 pgs: 305 active+clean; 190 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 292 KiB/s rd, 2.1 KiB/s wr, 59 op/s
Nov 22 04:06:43 compute-0 podman[289922]: 2025-11-22 04:06:43.232638893 +0000 UTC m=+0.998688930 container init 73564d3e203ce3556b4885cdab9ec1e13b785573edf37ad07f3ae8052a0c4596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_faraday, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:06:43 compute-0 podman[289922]: 2025-11-22 04:06:43.244708419 +0000 UTC m=+1.010758436 container start 73564d3e203ce3556b4885cdab9ec1e13b785573edf37ad07f3ae8052a0c4596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:06:43 compute-0 nova_compute[253461]: 2025-11-22 04:06:43.276 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance ab8b13c6-9785-42c2-a54c-61aa3a7ae664 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:06:43 compute-0 nova_compute[253461]: 2025-11-22 04:06:43.277 253465 WARNING nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 13d70bea-d9e6-4e8d-9824-beae38fa6143 is not being actively managed by this compute host but has allocations referencing this compute host: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. Skipping heal of allocation because we do not know what to do.
Nov 22 04:06:43 compute-0 nova_compute[253461]: 2025-11-22 04:06:43.278 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:06:43 compute-0 nova_compute[253461]: 2025-11-22 04:06:43.278 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:06:43 compute-0 nova_compute[253461]: 2025-11-22 04:06:43.327 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:06:43 compute-0 podman[289922]: 2025-11-22 04:06:43.378836077 +0000 UTC m=+1.144886104 container attach 73564d3e203ce3556b4885cdab9ec1e13b785573edf37ad07f3ae8052a0c4596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:06:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:06:43 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463888210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:06:43 compute-0 nova_compute[253461]: 2025-11-22 04:06:43.797 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:06:43 compute-0 nova_compute[253461]: 2025-11-22 04:06:43.806 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:06:43 compute-0 nova_compute[253461]: 2025-11-22 04:06:43.891 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:06:44 compute-0 nova_compute[253461]: 2025-11-22 04:06:44.075 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:06:44 compute-0 nova_compute[253461]: 2025-11-22 04:06:44.076 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:44 compute-0 nova_compute[253461]: 2025-11-22 04:06:44.077 253465 DEBUG oslo_concurrency.lockutils [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:44 compute-0 nova_compute[253461]: 2025-11-22 04:06:44.083 253465 DEBUG oslo_concurrency.lockutils [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:44 compute-0 nova_compute[253461]: 2025-11-22 04:06:44.209 253465 INFO nova.scheduler.client.report [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Deleted allocations for instance 13d70bea-d9e6-4e8d-9824-beae38fa6143
Nov 22 04:06:44 compute-0 podman[289979]: 2025-11-22 04:06:44.431123677 +0000 UTC m=+0.097798578 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 04:06:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1671: 305 pgs: 305 active+clean; 190 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 KiB/s wr, 57 op/s
Nov 22 04:06:44 compute-0 compassionate_faraday[289940]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:06:44 compute-0 compassionate_faraday[289940]: --> relative data size: 1.0
Nov 22 04:06:44 compute-0 compassionate_faraday[289940]: --> All data devices are unavailable
Nov 22 04:06:44 compute-0 systemd[1]: libpod-73564d3e203ce3556b4885cdab9ec1e13b785573edf37ad07f3ae8052a0c4596.scope: Deactivated successfully.
Nov 22 04:06:44 compute-0 systemd[1]: libpod-73564d3e203ce3556b4885cdab9ec1e13b785573edf37ad07f3ae8052a0c4596.scope: Consumed 1.217s CPU time.
Nov 22 04:06:44 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3463888210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:06:44 compute-0 podman[290011]: 2025-11-22 04:06:44.868239823 +0000 UTC m=+0.047130498 container died 73564d3e203ce3556b4885cdab9ec1e13b785573edf37ad07f3ae8052a0c4596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 04:06:45 compute-0 nova_compute[253461]: 2025-11-22 04:06:45.078 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:06:45 compute-0 nova_compute[253461]: 2025-11-22 04:06:45.079 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:06:45 compute-0 nova_compute[253461]: 2025-11-22 04:06:45.079 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:06:45 compute-0 nova_compute[253461]: 2025-11-22 04:06:45.079 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:06:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:06:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1841151877' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:06:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1841151877' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8166f48d5ac956b9a3061266ff56e8064ce368ef99b8aab209fa2c46c680585-merged.mount: Deactivated successfully.
Nov 22 04:06:45 compute-0 nova_compute[253461]: 2025-11-22 04:06:45.577 253465 DEBUG oslo_concurrency.lockutils [None req-73e09493-17f1-4c55-8a69-750a11642f2f 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "13d70bea-d9e6-4e8d-9824-beae38fa6143" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:45 compute-0 nova_compute[253461]: 2025-11-22 04:06:45.655 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:06:45 compute-0 nova_compute[253461]: 2025-11-22 04:06:45.655 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquired lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:06:45 compute-0 nova_compute[253461]: 2025-11-22 04:06:45.656 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 04:06:45 compute-0 nova_compute[253461]: 2025-11-22 04:06:45.656 253465 DEBUG nova.objects.instance [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lazy-loading 'info_cache' on Instance uuid ab8b13c6-9785-42c2-a54c-61aa3a7ae664 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:06:45 compute-0 nova_compute[253461]: 2025-11-22 04:06:45.677 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:46 compute-0 ceph-mon[75011]: pgmap v1671: 305 pgs: 305 active+clean; 190 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 KiB/s wr, 57 op/s
Nov 22 04:06:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1841151877' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1841151877' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:46 compute-0 podman[290011]: 2025-11-22 04:06:46.124466146 +0000 UTC m=+1.303356771 container remove 73564d3e203ce3556b4885cdab9ec1e13b785573edf37ad07f3ae8052a0c4596 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_faraday, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:06:46 compute-0 systemd[1]: libpod-conmon-73564d3e203ce3556b4885cdab9ec1e13b785573edf37ad07f3ae8052a0c4596.scope: Deactivated successfully.
Nov 22 04:06:46 compute-0 sudo[289796]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:46 compute-0 sudo[290026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:06:46 compute-0 sudo[290026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:46 compute-0 sudo[290026]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:46 compute-0 sudo[290051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:06:46 compute-0 sudo[290051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:46 compute-0 sudo[290051]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:46 compute-0 sudo[290076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:06:46 compute-0 sudo[290076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:46 compute-0 sudo[290076]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 190 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 1.2 KiB/s wr, 48 op/s
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0012221498136029173 of space, bias 1.0, pg target 0.36664494408087517 quantized to 32 (current 32)
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:06:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:06:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:46 compute-0 sudo[290101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:06:46 compute-0 sudo[290101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:47 compute-0 podman[290167]: 2025-11-22 04:06:46.944208133 +0000 UTC m=+0.028328562 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:06:47 compute-0 podman[290167]: 2025-11-22 04:06:47.057716839 +0000 UTC m=+0.141837258 container create f008aa51914dd1a2a7d6abfd28ca8d7d45b7c44d0f95b9294f2098ef7675d33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 04:06:47 compute-0 ceph-mon[75011]: pgmap v1672: 305 pgs: 305 active+clean; 190 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 187 KiB/s rd, 1.2 KiB/s wr, 48 op/s
Nov 22 04:06:47 compute-0 systemd[1]: Started libpod-conmon-f008aa51914dd1a2a7d6abfd28ca8d7d45b7c44d0f95b9294f2098ef7675d33d.scope.
Nov 22 04:06:47 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:06:47 compute-0 nova_compute[253461]: 2025-11-22 04:06:47.484 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Updating instance_info_cache with network_info: [{"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:06:47 compute-0 nova_compute[253461]: 2025-11-22 04:06:47.538 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Releasing lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:06:47 compute-0 nova_compute[253461]: 2025-11-22 04:06:47.539 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:06:47 compute-0 nova_compute[253461]: 2025-11-22 04:06:47.539 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:06:47 compute-0 nova_compute[253461]: 2025-11-22 04:06:47.539 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:06:47 compute-0 nova_compute[253461]: 2025-11-22 04:06:47.540 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:06:47 compute-0 nova_compute[253461]: 2025-11-22 04:06:47.540 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:06:47 compute-0 nova_compute[253461]: 2025-11-22 04:06:47.540 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:06:47 compute-0 podman[290167]: 2025-11-22 04:06:47.980823831 +0000 UTC m=+1.064944250 container init f008aa51914dd1a2a7d6abfd28ca8d7d45b7c44d0f95b9294f2098ef7675d33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cohen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 04:06:47 compute-0 podman[290167]: 2025-11-22 04:06:47.994416852 +0000 UTC m=+1.078537301 container start f008aa51914dd1a2a7d6abfd28ca8d7d45b7c44d0f95b9294f2098ef7675d33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cohen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:06:48 compute-0 pensive_cohen[290184]: 167 167
Nov 22 04:06:48 compute-0 systemd[1]: libpod-f008aa51914dd1a2a7d6abfd28ca8d7d45b7c44d0f95b9294f2098ef7675d33d.scope: Deactivated successfully.
Nov 22 04:06:48 compute-0 podman[290167]: 2025-11-22 04:06:48.006601839 +0000 UTC m=+1.090722328 container attach f008aa51914dd1a2a7d6abfd28ca8d7d45b7c44d0f95b9294f2098ef7675d33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cohen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:06:48 compute-0 podman[290167]: 2025-11-22 04:06:48.007183049 +0000 UTC m=+1.091303498 container died f008aa51914dd1a2a7d6abfd28ca8d7d45b7c44d0f95b9294f2098ef7675d33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cded85d18f379eb96976749ee70078165885d6ad9557ae3e7ddf81f877f2d81-merged.mount: Deactivated successfully.
Nov 22 04:06:48 compute-0 podman[290167]: 2025-11-22 04:06:48.130355184 +0000 UTC m=+1.214475603 container remove f008aa51914dd1a2a7d6abfd28ca8d7d45b7c44d0f95b9294f2098ef7675d33d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 04:06:48 compute-0 nova_compute[253461]: 2025-11-22 04:06:48.138 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:48 compute-0 systemd[1]: libpod-conmon-f008aa51914dd1a2a7d6abfd28ca8d7d45b7c44d0f95b9294f2098ef7675d33d.scope: Deactivated successfully.
Nov 22 04:06:48 compute-0 podman[290208]: 2025-11-22 04:06:48.379784249 +0000 UTC m=+0.084923451 container create e10e3c472c8823c9b6e323751e3721344cdae087ffcc9ff134ff57f7d5ceedd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_yalow, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:06:48 compute-0 podman[290208]: 2025-11-22 04:06:48.321648332 +0000 UTC m=+0.026787564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:06:48 compute-0 nova_compute[253461]: 2025-11-22 04:06:48.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:06:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 190 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1023 B/s wr, 39 op/s
Nov 22 04:06:48 compute-0 ceph-mon[75011]: pgmap v1673: 305 pgs: 305 active+clean; 190 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1023 B/s wr, 39 op/s
Nov 22 04:06:48 compute-0 systemd[1]: Started libpod-conmon-e10e3c472c8823c9b6e323751e3721344cdae087ffcc9ff134ff57f7d5ceedd6.scope.
Nov 22 04:06:48 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abf6ed4a7310a20501d1461c37369b1b415d797be3191153a04b1b9d695d5f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abf6ed4a7310a20501d1461c37369b1b415d797be3191153a04b1b9d695d5f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abf6ed4a7310a20501d1461c37369b1b415d797be3191153a04b1b9d695d5f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6abf6ed4a7310a20501d1461c37369b1b415d797be3191153a04b1b9d695d5f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:48 compute-0 podman[290208]: 2025-11-22 04:06:48.704245538 +0000 UTC m=+0.409384770 container init e10e3c472c8823c9b6e323751e3721344cdae087ffcc9ff134ff57f7d5ceedd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_yalow, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:06:48 compute-0 podman[290208]: 2025-11-22 04:06:48.712334393 +0000 UTC m=+0.417473595 container start e10e3c472c8823c9b6e323751e3721344cdae087ffcc9ff134ff57f7d5ceedd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_yalow, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:06:48 compute-0 podman[290208]: 2025-11-22 04:06:48.766590508 +0000 UTC m=+0.471729720 container attach e10e3c472c8823c9b6e323751e3721344cdae087ffcc9ff134ff57f7d5ceedd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 04:06:49 compute-0 nova_compute[253461]: 2025-11-22 04:06:49.426 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:06:49 compute-0 confident_yalow[290224]: {
Nov 22 04:06:49 compute-0 confident_yalow[290224]:     "0": [
Nov 22 04:06:49 compute-0 confident_yalow[290224]:         {
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "devices": [
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "/dev/loop3"
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             ],
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_name": "ceph_lv0",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_size": "21470642176",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "name": "ceph_lv0",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "tags": {
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.cluster_name": "ceph",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.crush_device_class": "",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.encrypted": "0",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.osd_id": "0",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.type": "block",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.vdo": "0"
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             },
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "type": "block",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "vg_name": "ceph_vg0"
Nov 22 04:06:49 compute-0 confident_yalow[290224]:         }
Nov 22 04:06:49 compute-0 confident_yalow[290224]:     ],
Nov 22 04:06:49 compute-0 confident_yalow[290224]:     "1": [
Nov 22 04:06:49 compute-0 confident_yalow[290224]:         {
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "devices": [
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "/dev/loop4"
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             ],
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_name": "ceph_lv1",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_size": "21470642176",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "name": "ceph_lv1",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "tags": {
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.cluster_name": "ceph",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.crush_device_class": "",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.encrypted": "0",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.osd_id": "1",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.type": "block",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.vdo": "0"
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             },
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "type": "block",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "vg_name": "ceph_vg1"
Nov 22 04:06:49 compute-0 confident_yalow[290224]:         }
Nov 22 04:06:49 compute-0 confident_yalow[290224]:     ],
Nov 22 04:06:49 compute-0 confident_yalow[290224]:     "2": [
Nov 22 04:06:49 compute-0 confident_yalow[290224]:         {
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "devices": [
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "/dev/loop5"
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             ],
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_name": "ceph_lv2",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_size": "21470642176",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "name": "ceph_lv2",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "tags": {
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.cluster_name": "ceph",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.crush_device_class": "",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.encrypted": "0",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.osd_id": "2",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.type": "block",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:                 "ceph.vdo": "0"
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             },
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "type": "block",
Nov 22 04:06:49 compute-0 confident_yalow[290224]:             "vg_name": "ceph_vg2"
Nov 22 04:06:49 compute-0 confident_yalow[290224]:         }
Nov 22 04:06:49 compute-0 confident_yalow[290224]:     ]
Nov 22 04:06:49 compute-0 confident_yalow[290224]: }
Nov 22 04:06:49 compute-0 systemd[1]: libpod-e10e3c472c8823c9b6e323751e3721344cdae087ffcc9ff134ff57f7d5ceedd6.scope: Deactivated successfully.
Nov 22 04:06:49 compute-0 podman[290208]: 2025-11-22 04:06:49.798165549 +0000 UTC m=+1.503304761 container died e10e3c472c8823c9b6e323751e3721344cdae087ffcc9ff134ff57f7d5ceedd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_yalow, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-6abf6ed4a7310a20501d1461c37369b1b415d797be3191153a04b1b9d695d5f1-merged.mount: Deactivated successfully.
Nov 22 04:06:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 190 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 716 B/s wr, 51 op/s
Nov 22 04:06:50 compute-0 nova_compute[253461]: 2025-11-22 04:06:50.598 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763784395.5972211, 13d70bea-d9e6-4e8d-9824-beae38fa6143 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:06:50 compute-0 nova_compute[253461]: 2025-11-22 04:06:50.599 253465 INFO nova.compute.manager [-] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] VM Stopped (Lifecycle Event)
Nov 22 04:06:50 compute-0 podman[290208]: 2025-11-22 04:06:50.654932848 +0000 UTC m=+2.360072080 container remove e10e3c472c8823c9b6e323751e3721344cdae087ffcc9ff134ff57f7d5ceedd6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_yalow, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:06:50 compute-0 systemd[1]: libpod-conmon-e10e3c472c8823c9b6e323751e3721344cdae087ffcc9ff134ff57f7d5ceedd6.scope: Deactivated successfully.
Nov 22 04:06:50 compute-0 nova_compute[253461]: 2025-11-22 04:06:50.676 253465 DEBUG nova.compute.manager [None req-e161ffb9-acc5-457f-8722-3fc820e460d7 - - - - - -] [instance: 13d70bea-d9e6-4e8d-9824-beae38fa6143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:06:50 compute-0 nova_compute[253461]: 2025-11-22 04:06:50.681 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:50 compute-0 sudo[290101]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:50 compute-0 sudo[290247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:06:50 compute-0 sudo[290247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:50 compute-0 sudo[290247]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:50 compute-0 ceph-mon[75011]: pgmap v1674: 305 pgs: 305 active+clean; 190 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 716 B/s wr, 51 op/s
Nov 22 04:06:50 compute-0 sudo[290272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:06:50 compute-0 sudo[290272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:50 compute-0 sudo[290272]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:50 compute-0 sudo[290297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:06:50 compute-0 sudo[290297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:50 compute-0 sudo[290297]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:51 compute-0 sudo[290322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:06:51 compute-0 sudo[290322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:06:51 compute-0 podman[290387]: 2025-11-22 04:06:51.453685059 +0000 UTC m=+0.025747843 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:06:51 compute-0 podman[290387]: 2025-11-22 04:06:51.778147437 +0000 UTC m=+0.350210171 container create 4d50966a31b96c66394e9e2f9f5c4c768515b11d64a7ca8126adb252511f63de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:06:52 compute-0 systemd[1]: Started libpod-conmon-4d50966a31b96c66394e9e2f9f5c4c768515b11d64a7ca8126adb252511f63de.scope.
Nov 22 04:06:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:06:52 compute-0 podman[290387]: 2025-11-22 04:06:52.231257962 +0000 UTC m=+0.803320976 container init 4d50966a31b96c66394e9e2f9f5c4c768515b11d64a7ca8126adb252511f63de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:06:52 compute-0 podman[290387]: 2025-11-22 04:06:52.237520269 +0000 UTC m=+0.809583003 container start 4d50966a31b96c66394e9e2f9f5c4c768515b11d64a7ca8126adb252511f63de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 04:06:52 compute-0 trusting_galois[290404]: 167 167
Nov 22 04:06:52 compute-0 systemd[1]: libpod-4d50966a31b96c66394e9e2f9f5c4c768515b11d64a7ca8126adb252511f63de.scope: Deactivated successfully.
Nov 22 04:06:52 compute-0 podman[290387]: 2025-11-22 04:06:52.260691545 +0000 UTC m=+0.832754259 container attach 4d50966a31b96c66394e9e2f9f5c4c768515b11d64a7ca8126adb252511f63de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:06:52 compute-0 podman[290387]: 2025-11-22 04:06:52.261029713 +0000 UTC m=+0.833092427 container died 4d50966a31b96c66394e9e2f9f5c4c768515b11d64a7ca8126adb252511f63de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:06:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:06:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148024678' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:06:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1148024678' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-099fcc0fd291a33795dbd0e04cecae0d15eb52f02e2e70773d2d6c79ff2107e3-merged.mount: Deactivated successfully.
Nov 22 04:06:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1148024678' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:06:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1148024678' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:06:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 181 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 790 B/s wr, 51 op/s
Nov 22 04:06:52 compute-0 sshd-session[289587]: Connection closed by authenticating user root 27.79.43.64 port 45904 [preauth]
Nov 22 04:06:52 compute-0 podman[290387]: 2025-11-22 04:06:52.591561736 +0000 UTC m=+1.163624480 container remove 4d50966a31b96c66394e9e2f9f5c4c768515b11d64a7ca8126adb252511f63de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_galois, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:06:52 compute-0 systemd[1]: libpod-conmon-4d50966a31b96c66394e9e2f9f5c4c768515b11d64a7ca8126adb252511f63de.scope: Deactivated successfully.
Nov 22 04:06:52 compute-0 podman[290430]: 2025-11-22 04:06:52.771243146 +0000 UTC m=+0.032650362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:06:52 compute-0 podman[290430]: 2025-11-22 04:06:52.890244926 +0000 UTC m=+0.151652052 container create 991a0d835077925f98151f4705fbf0385012c19fd1e9134cd9aae64232e0afc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 04:06:53 compute-0 systemd[1]: Started libpod-conmon-991a0d835077925f98151f4705fbf0385012c19fd1e9134cd9aae64232e0afc8.scope.
Nov 22 04:06:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deabdfeb2a26dbc1e6c8415a72896d1dc42cb1688c4f9cedb413e2585bc4f2c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deabdfeb2a26dbc1e6c8415a72896d1dc42cb1688c4f9cedb413e2585bc4f2c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deabdfeb2a26dbc1e6c8415a72896d1dc42cb1688c4f9cedb413e2585bc4f2c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deabdfeb2a26dbc1e6c8415a72896d1dc42cb1688c4f9cedb413e2585bc4f2c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:06:53 compute-0 nova_compute[253461]: 2025-11-22 04:06:53.140 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:53 compute-0 podman[290430]: 2025-11-22 04:06:53.210063718 +0000 UTC m=+0.471470864 container init 991a0d835077925f98151f4705fbf0385012c19fd1e9134cd9aae64232e0afc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_liskov, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:06:53 compute-0 podman[290430]: 2025-11-22 04:06:53.224099006 +0000 UTC m=+0.485506122 container start 991a0d835077925f98151f4705fbf0385012c19fd1e9134cd9aae64232e0afc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_liskov, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:06:53 compute-0 podman[290430]: 2025-11-22 04:06:53.266875215 +0000 UTC m=+0.528282361 container attach 991a0d835077925f98151f4705fbf0385012c19fd1e9134cd9aae64232e0afc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_liskov, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Nov 22 04:06:53 compute-0 ceph-mon[75011]: pgmap v1675: 305 pgs: 305 active+clean; 181 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 790 B/s wr, 51 op/s
Nov 22 04:06:54 compute-0 objective_liskov[290446]: {
Nov 22 04:06:54 compute-0 objective_liskov[290446]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "osd_id": 1,
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "type": "bluestore"
Nov 22 04:06:54 compute-0 objective_liskov[290446]:     },
Nov 22 04:06:54 compute-0 objective_liskov[290446]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "osd_id": 0,
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "type": "bluestore"
Nov 22 04:06:54 compute-0 objective_liskov[290446]:     },
Nov 22 04:06:54 compute-0 objective_liskov[290446]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "osd_id": 2,
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:06:54 compute-0 objective_liskov[290446]:         "type": "bluestore"
Nov 22 04:06:54 compute-0 objective_liskov[290446]:     }
Nov 22 04:06:54 compute-0 objective_liskov[290446]: }
Nov 22 04:06:54 compute-0 systemd[1]: libpod-991a0d835077925f98151f4705fbf0385012c19fd1e9134cd9aae64232e0afc8.scope: Deactivated successfully.
Nov 22 04:06:54 compute-0 podman[290430]: 2025-11-22 04:06:54.245048372 +0000 UTC m=+1.506455488 container died 991a0d835077925f98151f4705fbf0385012c19fd1e9134cd9aae64232e0afc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 04:06:54 compute-0 systemd[1]: libpod-991a0d835077925f98151f4705fbf0385012c19fd1e9134cd9aae64232e0afc8.scope: Consumed 1.031s CPU time.
Nov 22 04:06:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1676: 305 pgs: 305 active+clean; 176 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.2 KiB/s wr, 56 op/s
Nov 22 04:06:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-deabdfeb2a26dbc1e6c8415a72896d1dc42cb1688c4f9cedb413e2585bc4f2c0-merged.mount: Deactivated successfully.
Nov 22 04:06:54 compute-0 podman[290430]: 2025-11-22 04:06:54.781644837 +0000 UTC m=+2.043052003 container remove 991a0d835077925f98151f4705fbf0385012c19fd1e9134cd9aae64232e0afc8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:06:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:06:54 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2430042763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:06:54 compute-0 sudo[290322]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:54 compute-0 systemd[1]: libpod-conmon-991a0d835077925f98151f4705fbf0385012c19fd1e9134cd9aae64232e0afc8.scope: Deactivated successfully.
Nov 22 04:06:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:06:54 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:06:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:06:54 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:06:54 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev d6061ca0-cb8f-43ca-9625-fce5eeacaf21 does not exist
Nov 22 04:06:54 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev f1b45a26-e897-425a-b0e8-628965daf8ff does not exist
Nov 22 04:06:55 compute-0 sudo[290491]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:06:55 compute-0 sudo[290491]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:55 compute-0 sudo[290491]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Nov 22 04:06:55 compute-0 ceph-mon[75011]: pgmap v1676: 305 pgs: 305 active+clean; 176 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.2 KiB/s wr, 56 op/s
Nov 22 04:06:55 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2430042763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:06:55 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:06:55 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:06:55 compute-0 sudo[290516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:06:55 compute-0 sudo[290516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:06:55 compute-0 sudo[290516]: pam_unix(sudo:session): session closed for user root
Nov 22 04:06:55 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Nov 22 04:06:55 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Nov 22 04:06:55 compute-0 nova_compute[253461]: 2025-11-22 04:06:55.683 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 169 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.5 KiB/s wr, 60 op/s
Nov 22 04:06:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Nov 22 04:06:56 compute-0 ceph-mon[75011]: osdmap e418: 3 total, 3 up, 3 in
Nov 22 04:06:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Nov 22 04:06:56 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Nov 22 04:06:57 compute-0 nova_compute[253461]: 2025-11-22 04:06:57.455 253465 DEBUG nova.compute.manager [req-6275276e-88ab-4ebc-b67e-6b7babbb672a req-7e17f36e-81ed-472a-bf3c-12ca8823f6ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Received event network-changed-e0a979c4-306d-47e7-a853-95a815ae464f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:06:57 compute-0 nova_compute[253461]: 2025-11-22 04:06:57.455 253465 DEBUG nova.compute.manager [req-6275276e-88ab-4ebc-b67e-6b7babbb672a req-7e17f36e-81ed-472a-bf3c-12ca8823f6ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Refreshing instance network info cache due to event network-changed-e0a979c4-306d-47e7-a853-95a815ae464f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:06:57 compute-0 nova_compute[253461]: 2025-11-22 04:06:57.456 253465 DEBUG oslo_concurrency.lockutils [req-6275276e-88ab-4ebc-b67e-6b7babbb672a req-7e17f36e-81ed-472a-bf3c-12ca8823f6ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:06:57 compute-0 nova_compute[253461]: 2025-11-22 04:06:57.456 253465 DEBUG oslo_concurrency.lockutils [req-6275276e-88ab-4ebc-b67e-6b7babbb672a req-7e17f36e-81ed-472a-bf3c-12ca8823f6ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:06:57 compute-0 nova_compute[253461]: 2025-11-22 04:06:57.457 253465 DEBUG nova.network.neutron [req-6275276e-88ab-4ebc-b67e-6b7babbb672a req-7e17f36e-81ed-472a-bf3c-12ca8823f6ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Refreshing network info cache for port e0a979c4-306d-47e7-a853-95a815ae464f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:06:57 compute-0 nova_compute[253461]: 2025-11-22 04:06:57.537 253465 DEBUG oslo_concurrency.lockutils [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:57 compute-0 nova_compute[253461]: 2025-11-22 04:06:57.537 253465 DEBUG oslo_concurrency.lockutils [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:57 compute-0 nova_compute[253461]: 2025-11-22 04:06:57.538 253465 DEBUG oslo_concurrency.lockutils [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:57 compute-0 nova_compute[253461]: 2025-11-22 04:06:57.538 253465 DEBUG oslo_concurrency.lockutils [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:57 compute-0 nova_compute[253461]: 2025-11-22 04:06:57.539 253465 DEBUG oslo_concurrency.lockutils [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:57 compute-0 nova_compute[253461]: 2025-11-22 04:06:57.540 253465 INFO nova.compute.manager [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Terminating instance
Nov 22 04:06:57 compute-0 nova_compute[253461]: 2025-11-22 04:06:57.542 253465 DEBUG nova.compute.manager [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:06:57 compute-0 ceph-mon[75011]: pgmap v1678: 305 pgs: 305 active+clean; 169 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.5 KiB/s wr, 60 op/s
Nov 22 04:06:57 compute-0 ceph-mon[75011]: osdmap e419: 3 total, 3 up, 3 in
Nov 22 04:06:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Nov 22 04:06:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Nov 22 04:06:57 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Nov 22 04:06:58 compute-0 kernel: tape0a979c4-30 (unregistering): left promiscuous mode
Nov 22 04:06:58 compute-0 NetworkManager[48916]: <info>  [1763784418.1175] device (tape0a979c4-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:06:58 compute-0 ovn_controller[152691]: 2025-11-22T04:06:58Z|00246|binding|INFO|Releasing lport e0a979c4-306d-47e7-a853-95a815ae464f from this chassis (sb_readonly=0)
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.129 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:58 compute-0 ovn_controller[152691]: 2025-11-22T04:06:58Z|00247|binding|INFO|Setting lport e0a979c4-306d-47e7-a853-95a815ae464f down in Southbound
Nov 22 04:06:58 compute-0 ovn_controller[152691]: 2025-11-22T04:06:58Z|00248|binding|INFO|Removing iface tape0a979c4-30 ovn-installed in OVS
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.132 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.170 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:58 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Deactivated successfully.
Nov 22 04:06:58 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Consumed 19.183s CPU time.
Nov 22 04:06:58 compute-0 systemd-machined[215728]: Machine qemu-22-instance-00000016 terminated.
Nov 22 04:06:58 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:58.213 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:00:7d 10.100.0.3'], port_security=['fa:16:3e:b7:00:7d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'ab8b13c6-9785-42c2-a54c-61aa3a7ae664', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4670b112-9f63-4a03-8d79-91f581c69c03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '83cc5de7368b40b984b51f781e85343c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '374de8a5-1500-46fb-adf8-2bb87fa0ef15', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3de1077d-f0a7-4322-aae7-65d3ef85ce44, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=e0a979c4-306d-47e7-a853-95a815ae464f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:06:58 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:58.215 162689 INFO neutron.agent.ovn.metadata.agent [-] Port e0a979c4-306d-47e7-a853-95a815ae464f in datapath 4670b112-9f63-4a03-8d79-91f581c69c03 unbound from our chassis
Nov 22 04:06:58 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:58.216 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4670b112-9f63-4a03-8d79-91f581c69c03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:06:58 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:58.218 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9c7a2d0d-6d67-4a03-9c04-4968d76e6e0b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:06:58 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:58.219 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 namespace which is not needed anymore
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.398 253465 INFO nova.virt.libvirt.driver [-] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Instance destroyed successfully.
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.398 253465 DEBUG nova.objects.instance [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lazy-loading 'resources' on Instance uuid ab8b13c6-9785-42c2-a54c-61aa3a7ae664 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:06:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 169 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 48 op/s
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.625 253465 DEBUG nova.virt.libvirt.vif [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:04:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-616121886',display_name='tempest-TestVolumeBootPattern-server-616121886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-616121886',id=22,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBL00ntGaoBOOeBs6+y4FUIy/lgKnZ24cCu86O0xSJiDYa9NepVO6DpAMiaAdnoZSl8JwTuHPIlPQIHrkP9B6Kyjt/oOfo9cDi3Gw7Ruq0v506sUUdjxtfkDfzDyLVnMg5A==',key_name='tempest-TestVolumeBootPattern-21755227',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:05:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='83cc5de7368b40b984b51f781e85343c',ramdisk_id='',reservation_id='r-anmeurag',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-1584219565',owner_user_name='tempest-TestVolumeBootPattern-1584219565-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:05:03Z,user_data=None,user_id='45ccef35c0c843a59c9dfd0eb67190a6',uuid=ab8b13c6-9785-42c2-a54c-61aa3a7ae664,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.625 253465 DEBUG nova.network.os_vif_util [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converting VIF {"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.626 253465 DEBUG nova.network.os_vif_util [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:00:7d,bridge_name='br-int',has_traffic_filtering=True,id=e0a979c4-306d-47e7-a853-95a815ae464f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0a979c4-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.627 253465 DEBUG os_vif [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:00:7d,bridge_name='br-int',has_traffic_filtering=True,id=e0a979c4-306d-47e7-a853-95a815ae464f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0a979c4-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.631 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.631 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0a979c4-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:06:58 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[288089]: [NOTICE]   (288093) : haproxy version is 2.8.14-c23fe91
Nov 22 04:06:58 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[288089]: [NOTICE]   (288093) : path to executable is /usr/sbin/haproxy
Nov 22 04:06:58 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[288089]: [WARNING]  (288093) : Exiting Master process...
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.635 253465 DEBUG nova.compute.manager [req-05cb5388-0349-4771-b6fb-3085247e6ec5 req-82ce8a5c-c2c9-45f0-bc95-bf7efe200317 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Received event network-vif-unplugged-e0a979c4-306d-47e7-a853-95a815ae464f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.635 253465 DEBUG oslo_concurrency.lockutils [req-05cb5388-0349-4771-b6fb-3085247e6ec5 req-82ce8a5c-c2c9-45f0-bc95-bf7efe200317 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.636 253465 DEBUG oslo_concurrency.lockutils [req-05cb5388-0349-4771-b6fb-3085247e6ec5 req-82ce8a5c-c2c9-45f0-bc95-bf7efe200317 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.636 253465 DEBUG oslo_concurrency.lockutils [req-05cb5388-0349-4771-b6fb-3085247e6ec5 req-82ce8a5c-c2c9-45f0-bc95-bf7efe200317 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.636 253465 DEBUG nova.compute.manager [req-05cb5388-0349-4771-b6fb-3085247e6ec5 req-82ce8a5c-c2c9-45f0-bc95-bf7efe200317 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] No waiting events found dispatching network-vif-unplugged-e0a979c4-306d-47e7-a853-95a815ae464f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:06:58 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[288089]: [ALERT]    (288093) : Current worker (288095) exited with code 143 (Terminated)
Nov 22 04:06:58 compute-0 neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03[288089]: [WARNING]  (288093) : All workers exited. Exiting... (0)
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.637 253465 DEBUG nova.compute.manager [req-05cb5388-0349-4771-b6fb-3085247e6ec5 req-82ce8a5c-c2c9-45f0-bc95-bf7efe200317 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Received event network-vif-unplugged-e0a979c4-306d-47e7-a853-95a815ae464f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.638 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.639 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:06:58 compute-0 systemd[1]: libpod-72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7.scope: Deactivated successfully.
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.643 253465 INFO os_vif [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:00:7d,bridge_name='br-int',has_traffic_filtering=True,id=e0a979c4-306d-47e7-a853-95a815ae464f,network=Network(4670b112-9f63-4a03-8d79-91f581c69c03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape0a979c4-30')
Nov 22 04:06:58 compute-0 podman[290566]: 2025-11-22 04:06:58.645838897 +0000 UTC m=+0.289024733 container died 72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.895 253465 DEBUG nova.network.neutron [req-6275276e-88ab-4ebc-b67e-6b7babbb672a req-7e17f36e-81ed-472a-bf3c-12ca8823f6ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Updated VIF entry in instance network info cache for port e0a979c4-306d-47e7-a853-95a815ae464f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:06:58 compute-0 nova_compute[253461]: 2025-11-22 04:06:58.896 253465 DEBUG nova.network.neutron [req-6275276e-88ab-4ebc-b67e-6b7babbb672a req-7e17f36e-81ed-472a-bf3c-12ca8823f6ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Updating instance_info_cache with network_info: [{"id": "e0a979c4-306d-47e7-a853-95a815ae464f", "address": "fa:16:3e:b7:00:7d", "network": {"id": "4670b112-9f63-4a03-8d79-91f581c69c03", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-51058466-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "83cc5de7368b40b984b51f781e85343c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape0a979c4-30", "ovs_interfaceid": "e0a979c4-306d-47e7-a853-95a815ae464f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:06:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:06:59.087 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:06:59 compute-0 nova_compute[253461]: 2025-11-22 04:06:59.088 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:06:59 compute-0 ceph-mon[75011]: osdmap e420: 3 total, 3 up, 3 in
Nov 22 04:06:59 compute-0 ceph-mon[75011]: pgmap v1681: 305 pgs: 305 active+clean; 169 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.7 KiB/s wr, 48 op/s
Nov 22 04:06:59 compute-0 nova_compute[253461]: 2025-11-22 04:06:59.403 253465 DEBUG oslo_concurrency.lockutils [req-6275276e-88ab-4ebc-b67e-6b7babbb672a req-7e17f36e-81ed-472a-bf3c-12ca8823f6ba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-ab8b13c6-9785-42c2-a54c-61aa3a7ae664" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:06:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7-userdata-shm.mount: Deactivated successfully.
Nov 22 04:06:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-1159b98ff66a857e34f24fbe77fc41eea8be4519a609fe574c6a3b98558bc5ed-merged.mount: Deactivated successfully.
Nov 22 04:06:59 compute-0 podman[290621]: 2025-11-22 04:06:59.616699923 +0000 UTC m=+0.163020258 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:06:59 compute-0 podman[290622]: 2025-11-22 04:06:59.682881558 +0000 UTC m=+0.223937048 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 04:06:59 compute-0 podman[290566]: 2025-11-22 04:06:59.916322007 +0000 UTC m=+1.559507873 container cleanup 72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:06:59 compute-0 systemd[1]: libpod-conmon-72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7.scope: Deactivated successfully.
Nov 22 04:07:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:07:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1733945243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:07:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:07:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1733945243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:07:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 169 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 58 op/s
Nov 22 04:07:00 compute-0 nova_compute[253461]: 2025-11-22 04:07:00.888 253465 DEBUG nova.compute.manager [req-a474c5b4-ec7d-48d2-889f-42df03aa0072 req-5b389c62-743b-4e8f-bb93-f63e7c92845a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Received event network-vif-plugged-e0a979c4-306d-47e7-a853-95a815ae464f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:07:00 compute-0 nova_compute[253461]: 2025-11-22 04:07:00.889 253465 DEBUG oslo_concurrency.lockutils [req-a474c5b4-ec7d-48d2-889f-42df03aa0072 req-5b389c62-743b-4e8f-bb93-f63e7c92845a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:00 compute-0 nova_compute[253461]: 2025-11-22 04:07:00.890 253465 DEBUG oslo_concurrency.lockutils [req-a474c5b4-ec7d-48d2-889f-42df03aa0072 req-5b389c62-743b-4e8f-bb93-f63e7c92845a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:00 compute-0 nova_compute[253461]: 2025-11-22 04:07:00.890 253465 DEBUG oslo_concurrency.lockutils [req-a474c5b4-ec7d-48d2-889f-42df03aa0072 req-5b389c62-743b-4e8f-bb93-f63e7c92845a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:00 compute-0 nova_compute[253461]: 2025-11-22 04:07:00.890 253465 DEBUG nova.compute.manager [req-a474c5b4-ec7d-48d2-889f-42df03aa0072 req-5b389c62-743b-4e8f-bb93-f63e7c92845a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] No waiting events found dispatching network-vif-plugged-e0a979c4-306d-47e7-a853-95a815ae464f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:07:00 compute-0 nova_compute[253461]: 2025-11-22 04:07:00.891 253465 WARNING nova.compute.manager [req-a474c5b4-ec7d-48d2-889f-42df03aa0072 req-5b389c62-743b-4e8f-bb93-f63e7c92845a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Received unexpected event network-vif-plugged-e0a979c4-306d-47e7-a853-95a815ae464f for instance with vm_state active and task_state deleting.
Nov 22 04:07:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1733945243' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:07:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1733945243' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:07:01 compute-0 ceph-mon[75011]: pgmap v1682: 305 pgs: 305 active+clean; 169 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 58 op/s
Nov 22 04:07:01 compute-0 podman[290669]: 2025-11-22 04:07:01.077467644 +0000 UTC m=+1.123717848 container remove 72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:07:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:01.087 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[df1b8328-58dd-45cb-b71e-63c9a92d5f74]: (4, ('Sat Nov 22 04:06:58 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 (72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7)\n72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7\nSat Nov 22 04:06:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 (72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7)\n72602fa8ae17b061a2371479e43bf71ada2d6a6f410d5ebd3402e8de1aa1d1e7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:01.089 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[cc490fe5-424f-4ea3-b02d-ddb75cef2315]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:01.091 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4670b112-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:07:01 compute-0 nova_compute[253461]: 2025-11-22 04:07:01.093 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:01 compute-0 kernel: tap4670b112-90: left promiscuous mode
Nov 22 04:07:01 compute-0 nova_compute[253461]: 2025-11-22 04:07:01.121 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:01.126 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[da1de83b-bfd3-46e8-b350-02bbc9e559ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:01.146 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[424e095d-21a1-4c03-bd8b-486d7e070a4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:01.148 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[77954ae6-7c69-4e25-9955-0db714e60cd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:01.173 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5f746890-1a7e-49aa-ac10-5abb74ff6b88]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 455465, 'reachable_time': 43554, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290683, 'error': None, 'target': 'ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:01 compute-0 systemd[1]: run-netns-ovnmeta\x2d4670b112\x2d9f63\x2d4a03\x2d8d79\x2d91f581c69c03.mount: Deactivated successfully.
Nov 22 04:07:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:01.178 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4670b112-9f63-4a03-8d79-91f581c69c03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:07:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:01.178 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[9530a018-99d0-4ca6-a229-10872596130e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:07:01 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:01.179 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:07:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Nov 22 04:07:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Nov 22 04:07:01 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Nov 22 04:07:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 169 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.7 KiB/s wr, 72 op/s
Nov 22 04:07:02 compute-0 ceph-mon[75011]: osdmap e421: 3 total, 3 up, 3 in
Nov 22 04:07:02 compute-0 ceph-mon[75011]: pgmap v1684: 305 pgs: 305 active+clean; 169 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.7 KiB/s wr, 72 op/s
Nov 22 04:07:03 compute-0 nova_compute[253461]: 2025-11-22 04:07:03.174 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:03.181 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:07:03 compute-0 nova_compute[253461]: 2025-11-22 04:07:03.634 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 169 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.7 KiB/s wr, 62 op/s
Nov 22 04:07:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2240947763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:05 compute-0 ceph-mon[75011]: pgmap v1685: 305 pgs: 305 active+clean; 169 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 2.7 KiB/s wr, 62 op/s
Nov 22 04:07:05 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2240947763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:07:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:07:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:07:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:07:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:07:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:07:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 169 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.1 KiB/s wr, 53 op/s
Nov 22 04:07:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:06 compute-0 nova_compute[253461]: 2025-11-22 04:07:06.992 253465 INFO nova.virt.libvirt.driver [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Deleting instance files /var/lib/nova/instances/ab8b13c6-9785-42c2-a54c-61aa3a7ae664_del
Nov 22 04:07:06 compute-0 nova_compute[253461]: 2025-11-22 04:07:06.993 253465 INFO nova.virt.libvirt.driver [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Deletion of /var/lib/nova/instances/ab8b13c6-9785-42c2-a54c-61aa3a7ae664_del complete
Nov 22 04:07:07 compute-0 ceph-mon[75011]: pgmap v1686: 305 pgs: 305 active+clean; 169 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 2.1 KiB/s wr, 53 op/s
Nov 22 04:07:07 compute-0 nova_compute[253461]: 2025-11-22 04:07:07.488 253465 INFO nova.compute.manager [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Took 9.95 seconds to destroy the instance on the hypervisor.
Nov 22 04:07:07 compute-0 nova_compute[253461]: 2025-11-22 04:07:07.489 253465 DEBUG oslo.service.loopingcall [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:07:07 compute-0 nova_compute[253461]: 2025-11-22 04:07:07.489 253465 DEBUG nova.compute.manager [-] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:07:07 compute-0 nova_compute[253461]: 2025-11-22 04:07:07.489 253465 DEBUG nova.network.neutron [-] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:07:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Nov 22 04:07:08 compute-0 nova_compute[253461]: 2025-11-22 04:07:08.206 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Nov 22 04:07:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Nov 22 04:07:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 169 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.5 KiB/s wr, 41 op/s
Nov 22 04:07:08 compute-0 nova_compute[253461]: 2025-11-22 04:07:08.502 253465 DEBUG nova.network.neutron [-] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:07:08 compute-0 nova_compute[253461]: 2025-11-22 04:07:08.580 253465 INFO nova.compute.manager [-] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Took 1.09 seconds to deallocate network for instance.
Nov 22 04:07:08 compute-0 nova_compute[253461]: 2025-11-22 04:07:08.635 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:08 compute-0 nova_compute[253461]: 2025-11-22 04:07:08.671 253465 DEBUG nova.compute.manager [req-6121c82a-be69-4e4d-baa3-62e0acdae965 req-f259d0f6-4d9a-4c72-ab56-c2573995eb70 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Received event network-vif-deleted-e0a979c4-306d-47e7-a853-95a815ae464f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:07:09 compute-0 nova_compute[253461]: 2025-11-22 04:07:09.011 253465 INFO nova.compute.manager [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Took 0.43 seconds to detach 1 volumes for instance.
Nov 22 04:07:09 compute-0 nova_compute[253461]: 2025-11-22 04:07:09.199 253465 DEBUG oslo_concurrency.lockutils [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:09 compute-0 nova_compute[253461]: 2025-11-22 04:07:09.200 253465 DEBUG oslo_concurrency.lockutils [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:09 compute-0 nova_compute[253461]: 2025-11-22 04:07:09.264 253465 DEBUG oslo_concurrency.processutils [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:07:09 compute-0 ceph-mon[75011]: osdmap e422: 3 total, 3 up, 3 in
Nov 22 04:07:09 compute-0 ceph-mon[75011]: pgmap v1688: 305 pgs: 305 active+clean; 169 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.5 KiB/s wr, 41 op/s
Nov 22 04:07:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/177357771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:09 compute-0 nova_compute[253461]: 2025-11-22 04:07:09.721 253465 DEBUG oslo_concurrency.processutils [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:07:09 compute-0 nova_compute[253461]: 2025-11-22 04:07:09.730 253465 DEBUG nova.compute.provider_tree [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:07:09 compute-0 nova_compute[253461]: 2025-11-22 04:07:09.760 253465 DEBUG nova.scheduler.client.report [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:07:09 compute-0 nova_compute[253461]: 2025-11-22 04:07:09.803 253465 DEBUG oslo_concurrency.lockutils [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:09 compute-0 nova_compute[253461]: 2025-11-22 04:07:09.887 253465 INFO nova.scheduler.client.report [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Deleted allocations for instance ab8b13c6-9785-42c2-a54c-61aa3a7ae664
Nov 22 04:07:10 compute-0 nova_compute[253461]: 2025-11-22 04:07:10.020 253465 DEBUG oslo_concurrency.lockutils [None req-5f8e72c6-314b-41a1-9983-a7d9584182f7 45ccef35c0c843a59c9dfd0eb67190a6 83cc5de7368b40b984b51f781e85343c - - default default] Lock "ab8b13c6-9785-42c2-a54c-61aa3a7ae664" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 12.483s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Nov 22 04:07:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/177357771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Nov 22 04:07:10 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Nov 22 04:07:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 169 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Nov 22 04:07:11 compute-0 ceph-mon[75011]: osdmap e423: 3 total, 3 up, 3 in
Nov 22 04:07:11 compute-0 ceph-mon[75011]: pgmap v1690: 305 pgs: 305 active+clean; 169 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.7 KiB/s wr, 39 op/s
Nov 22 04:07:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e423 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.688557) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784431688624, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2065, "num_deletes": 268, "total_data_size": 2978737, "memory_usage": 3026464, "flush_reason": "Manual Compaction"}
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Nov 22 04:07:11 compute-0 sshd-session[290711]: Invalid user installer from 27.79.43.64 port 37902
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784431714867, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 1972024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31801, "largest_seqno": 33865, "table_properties": {"data_size": 1964342, "index_size": 4308, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 19762, "raw_average_key_size": 21, "raw_value_size": 1947568, "raw_average_value_size": 2159, "num_data_blocks": 191, "num_entries": 902, "num_filter_entries": 902, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763784275, "oldest_key_time": 1763784275, "file_creation_time": 1763784431, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 26343 microseconds, and 4924 cpu microseconds.
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.714912) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 1972024 bytes OK
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.714933) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.717709) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.717727) EVENT_LOG_v1 {"time_micros": 1763784431717722, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.717746) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 2969762, prev total WAL file size 2969762, number of live WAL files 2.
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.718864) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(1925KB)], [65(10MB)]
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784431718928, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 12849952, "oldest_snapshot_seqno": -1}
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6542 keys, 10478982 bytes, temperature: kUnknown
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784431863856, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 10478982, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10430239, "index_size": 31287, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16389, "raw_key_size": 163426, "raw_average_key_size": 24, "raw_value_size": 10307790, "raw_average_value_size": 1575, "num_data_blocks": 1265, "num_entries": 6542, "num_filter_entries": 6542, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763784431, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.864194) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 10478982 bytes
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.866605) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 88.6 rd, 72.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 10.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(11.8) write-amplify(5.3) OK, records in: 7010, records dropped: 468 output_compression: NoCompression
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.866648) EVENT_LOG_v1 {"time_micros": 1763784431866630, "job": 36, "event": "compaction_finished", "compaction_time_micros": 145037, "compaction_time_cpu_micros": 47377, "output_level": 6, "num_output_files": 1, "total_output_size": 10478982, "num_input_records": 7010, "num_output_records": 6542, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784431867647, "job": 36, "event": "table_file_deletion", "file_number": 67}
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784431871558, "job": 36, "event": "table_file_deletion", "file_number": 65}
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.718707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.871721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.871728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.871730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.871732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:07:11 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:11.871734) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:07:11 compute-0 sshd-session[290711]: Connection closed by invalid user installer 27.79.43.64 port 37902 [preauth]
Nov 22 04:07:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:07:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2666072548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:07:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:07:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2666072548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:07:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 169 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.4 KiB/s wr, 53 op/s
Nov 22 04:07:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Nov 22 04:07:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Nov 22 04:07:12 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Nov 22 04:07:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2666072548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:07:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2666072548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:07:12 compute-0 ceph-mon[75011]: pgmap v1691: 305 pgs: 305 active+clean; 169 MiB data, 444 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.4 KiB/s wr, 53 op/s
Nov 22 04:07:13 compute-0 nova_compute[253461]: 2025-11-22 04:07:13.207 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:13 compute-0 nova_compute[253461]: 2025-11-22 04:07:13.391 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763784418.3886676, ab8b13c6-9785-42c2-a54c-61aa3a7ae664 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:07:13 compute-0 nova_compute[253461]: 2025-11-22 04:07:13.392 253465 INFO nova.compute.manager [-] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] VM Stopped (Lifecycle Event)
Nov 22 04:07:13 compute-0 nova_compute[253461]: 2025-11-22 04:07:13.426 253465 DEBUG nova.compute.manager [None req-b0b22fd1-2cda-44d7-a22d-0e32f2e88a8e - - - - - -] [instance: ab8b13c6-9785-42c2-a54c-61aa3a7ae664] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:07:13 compute-0 nova_compute[253461]: 2025-11-22 04:07:13.638 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:13 compute-0 ceph-mon[75011]: osdmap e424: 3 total, 3 up, 3 in
Nov 22 04:07:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1693: 305 pgs: 305 active+clean; 152 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.6 KiB/s wr, 77 op/s
Nov 22 04:07:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3582866953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:14 compute-0 nova_compute[253461]: 2025-11-22 04:07:14.713 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:14 compute-0 ceph-mon[75011]: pgmap v1693: 305 pgs: 305 active+clean; 152 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 1.6 KiB/s wr, 77 op/s
Nov 22 04:07:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3582866953' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:14 compute-0 nova_compute[253461]: 2025-11-22 04:07:14.852 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:15 compute-0 podman[290714]: 2025-11-22 04:07:15.632045124 +0000 UTC m=+0.301586435 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:07:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Nov 22 04:07:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Nov 22 04:07:15 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Nov 22 04:07:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 2.8 KiB/s wr, 109 op/s
Nov 22 04:07:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e425 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Nov 22 04:07:16 compute-0 ceph-mon[75011]: osdmap e425: 3 total, 3 up, 3 in
Nov 22 04:07:16 compute-0 ceph-mon[75011]: pgmap v1695: 305 pgs: 305 active+clean; 121 MiB data, 419 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 2.8 KiB/s wr, 109 op/s
Nov 22 04:07:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Nov 22 04:07:16 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Nov 22 04:07:17 compute-0 ceph-mon[75011]: osdmap e426: 3 total, 3 up, 3 in
Nov 22 04:07:18 compute-0 nova_compute[253461]: 2025-11-22 04:07:18.210 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 4.3 KiB/s wr, 102 op/s
Nov 22 04:07:18 compute-0 nova_compute[253461]: 2025-11-22 04:07:18.639 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Nov 22 04:07:19 compute-0 ceph-mon[75011]: pgmap v1697: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 4.3 KiB/s wr, 102 op/s
Nov 22 04:07:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Nov 22 04:07:19 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Nov 22 04:07:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:07:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/680816931' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:07:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:07:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/680816931' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:07:20 compute-0 ceph-mon[75011]: osdmap e427: 3 total, 3 up, 3 in
Nov 22 04:07:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/680816931' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:07:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/680816931' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:07:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 5.5 KiB/s wr, 116 op/s
Nov 22 04:07:21 compute-0 ceph-mon[75011]: pgmap v1699: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 5.5 KiB/s wr, 116 op/s
Nov 22 04:07:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Nov 22 04:07:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Nov 22 04:07:21 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Nov 22 04:07:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:22 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2318763782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 5.8 KiB/s wr, 104 op/s
Nov 22 04:07:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Nov 22 04:07:22 compute-0 ceph-mon[75011]: osdmap e428: 3 total, 3 up, 3 in
Nov 22 04:07:22 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2318763782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:22 compute-0 ceph-mon[75011]: pgmap v1701: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 5.8 KiB/s wr, 104 op/s
Nov 22 04:07:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Nov 22 04:07:22 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Nov 22 04:07:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:23.017 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:23.017 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:07:23.018 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:23 compute-0 nova_compute[253461]: 2025-11-22 04:07:23.256 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:23 compute-0 nova_compute[253461]: 2025-11-22 04:07:23.641 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Nov 22 04:07:23 compute-0 ceph-mon[75011]: osdmap e429: 3 total, 3 up, 3 in
Nov 22 04:07:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Nov 22 04:07:24 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Nov 22 04:07:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 4.5 KiB/s wr, 97 op/s
Nov 22 04:07:24 compute-0 ceph-mon[75011]: osdmap e430: 3 total, 3 up, 3 in
Nov 22 04:07:24 compute-0 ceph-mon[75011]: pgmap v1704: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 72 KiB/s rd, 4.5 KiB/s wr, 97 op/s
Nov 22 04:07:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 KiB/s wr, 48 op/s
Nov 22 04:07:26 compute-0 ceph-mon[75011]: pgmap v1705: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 KiB/s wr, 48 op/s
Nov 22 04:07:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Nov 22 04:07:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Nov 22 04:07:26 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Nov 22 04:07:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Nov 22 04:07:27 compute-0 ceph-mon[75011]: osdmap e431: 3 total, 3 up, 3 in
Nov 22 04:07:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Nov 22 04:07:27 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Nov 22 04:07:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2560144528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:28 compute-0 nova_compute[253461]: 2025-11-22 04:07:28.257 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.6 KiB/s wr, 45 op/s
Nov 22 04:07:28 compute-0 nova_compute[253461]: 2025-11-22 04:07:28.643 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:28 compute-0 ceph-mon[75011]: osdmap e432: 3 total, 3 up, 3 in
Nov 22 04:07:28 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2560144528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:28 compute-0 ceph-mon[75011]: pgmap v1708: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.6 KiB/s wr, 45 op/s
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.008626) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784449008686, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 512, "num_deletes": 252, "total_data_size": 416109, "memory_usage": 426728, "flush_reason": "Manual Compaction"}
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784449026679, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 410659, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33866, "largest_seqno": 34377, "table_properties": {"data_size": 407749, "index_size": 882, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7093, "raw_average_key_size": 19, "raw_value_size": 401887, "raw_average_value_size": 1107, "num_data_blocks": 39, "num_entries": 363, "num_filter_entries": 363, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763784432, "oldest_key_time": 1763784432, "file_creation_time": 1763784449, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 18114 microseconds, and 2119 cpu microseconds.
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.026737) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 410659 bytes OK
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.026764) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.037672) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.037720) EVENT_LOG_v1 {"time_micros": 1763784449037708, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.037746) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 413077, prev total WAL file size 413077, number of live WAL files 2.
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.038403) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(401KB)], [68(10233KB)]
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784449038468, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10889641, "oldest_snapshot_seqno": -1}
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6391 keys, 9109748 bytes, temperature: kUnknown
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784449181220, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9109748, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9063644, "index_size": 29034, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16005, "raw_key_size": 161145, "raw_average_key_size": 25, "raw_value_size": 8945370, "raw_average_value_size": 1399, "num_data_blocks": 1158, "num_entries": 6391, "num_filter_entries": 6391, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763784449, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.181530) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9109748 bytes
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.186284) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 76.2 rd, 63.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.0 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(48.7) write-amplify(22.2) OK, records in: 6905, records dropped: 514 output_compression: NoCompression
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.186320) EVENT_LOG_v1 {"time_micros": 1763784449186306, "job": 38, "event": "compaction_finished", "compaction_time_micros": 142833, "compaction_time_cpu_micros": 24623, "output_level": 6, "num_output_files": 1, "total_output_size": 9109748, "num_input_records": 6905, "num_output_records": 6391, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784449186593, "job": 38, "event": "table_file_deletion", "file_number": 70}
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784449188807, "job": 38, "event": "table_file_deletion", "file_number": 68}
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.038277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.188843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.188848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.188850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.188852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:07:29 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:07:29.188854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:07:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:07:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624901368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:07:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:07:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624901368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:07:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Nov 22 04:07:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Nov 22 04:07:29 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Nov 22 04:07:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/624901368' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:07:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/624901368' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:07:30 compute-0 ceph-mon[75011]: osdmap e433: 3 total, 3 up, 3 in
Nov 22 04:07:30 compute-0 podman[290736]: 2025-11-22 04:07:30.374149536 +0000 UTC m=+0.057523486 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 04:07:30 compute-0 podman[290737]: 2025-11-22 04:07:30.408638359 +0000 UTC m=+0.082864308 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 04:07:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.3 KiB/s wr, 52 op/s
Nov 22 04:07:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Nov 22 04:07:31 compute-0 ceph-mon[75011]: pgmap v1710: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.3 KiB/s wr, 52 op/s
Nov 22 04:07:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Nov 22 04:07:31 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Nov 22 04:07:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e434 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:32 compute-0 ceph-mon[75011]: osdmap e434: 3 total, 3 up, 3 in
Nov 22 04:07:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 4.1 KiB/s wr, 82 op/s
Nov 22 04:07:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1772709540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Nov 22 04:07:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Nov 22 04:07:33 compute-0 nova_compute[253461]: 2025-11-22 04:07:33.303 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:33 compute-0 ceph-mon[75011]: pgmap v1712: 305 pgs: 305 active+clean; 88 MiB data, 398 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 4.1 KiB/s wr, 82 op/s
Nov 22 04:07:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1772709540' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:33 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Nov 22 04:07:33 compute-0 nova_compute[253461]: 2025-11-22 04:07:33.645 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:34 compute-0 ceph-mon[75011]: osdmap e435: 3 total, 3 up, 3 in
Nov 22 04:07:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 88 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 4.5 KiB/s wr, 79 op/s
Nov 22 04:07:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Nov 22 04:07:35 compute-0 ceph-mon[75011]: pgmap v1714: 305 pgs: 305 active+clean; 88 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 58 KiB/s rd, 4.5 KiB/s wr, 79 op/s
Nov 22 04:07:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Nov 22 04:07:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:07:36
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.meta', 'vms', 'default.rgw.log']
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 88 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 4.3 KiB/s wr, 72 op/s
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:07:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:07:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Nov 22 04:07:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Nov 22 04:07:36 compute-0 ceph-mon[75011]: osdmap e436: 3 total, 3 up, 3 in
Nov 22 04:07:36 compute-0 ceph-mon[75011]: pgmap v1716: 305 pgs: 305 active+clean; 88 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 4.3 KiB/s wr, 72 op/s
Nov 22 04:07:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Nov 22 04:07:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e437 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Nov 22 04:07:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Nov 22 04:07:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Nov 22 04:07:37 compute-0 ceph-mon[75011]: osdmap e437: 3 total, 3 up, 3 in
Nov 22 04:07:37 compute-0 ceph-mon[75011]: osdmap e438: 3 total, 3 up, 3 in
Nov 22 04:07:38 compute-0 nova_compute[253461]: 2025-11-22 04:07:38.305 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 88 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 4.0 KiB/s wr, 62 op/s
Nov 22 04:07:38 compute-0 nova_compute[253461]: 2025-11-22 04:07:38.648 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:38 compute-0 ceph-mon[75011]: pgmap v1719: 305 pgs: 305 active+clean; 88 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 4.0 KiB/s wr, 62 op/s
Nov 22 04:07:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Nov 22 04:07:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Nov 22 04:07:40 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Nov 22 04:07:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/50001181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.5 KiB/s wr, 68 op/s
Nov 22 04:07:41 compute-0 ceph-mon[75011]: osdmap e439: 3 total, 3 up, 3 in
Nov 22 04:07:41 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/50001181' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:41 compute-0 ceph-mon[75011]: pgmap v1721: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 3.5 KiB/s wr, 68 op/s
Nov 22 04:07:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e439 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e439 do_prune osdmap full prune enabled
Nov 22 04:07:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e440 e440: 3 total, 3 up, 3 in
Nov 22 04:07:41 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e440: 3 total, 3 up, 3 in
Nov 22 04:07:42 compute-0 nova_compute[253461]: 2025-11-22 04:07:42.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:07:42 compute-0 nova_compute[253461]: 2025-11-22 04:07:42.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:07:42 compute-0 nova_compute[253461]: 2025-11-22 04:07:42.472 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:42 compute-0 nova_compute[253461]: 2025-11-22 04:07:42.473 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:42 compute-0 nova_compute[253461]: 2025-11-22 04:07:42.474 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:42 compute-0 nova_compute[253461]: 2025-11-22 04:07:42.474 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:07:42 compute-0 nova_compute[253461]: 2025-11-22 04:07:42.475 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:07:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.2 KiB/s wr, 66 op/s
Nov 22 04:07:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e440 do_prune osdmap full prune enabled
Nov 22 04:07:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3689970393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:43 compute-0 nova_compute[253461]: 2025-11-22 04:07:43.008 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:07:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e441 e441: 3 total, 3 up, 3 in
Nov 22 04:07:43 compute-0 nova_compute[253461]: 2025-11-22 04:07:43.271 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:07:43 compute-0 ceph-mon[75011]: osdmap e440: 3 total, 3 up, 3 in
Nov 22 04:07:43 compute-0 ceph-mon[75011]: pgmap v1723: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 4.2 KiB/s wr, 66 op/s
Nov 22 04:07:43 compute-0 nova_compute[253461]: 2025-11-22 04:07:43.273 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4443MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:07:43 compute-0 nova_compute[253461]: 2025-11-22 04:07:43.273 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:07:43 compute-0 nova_compute[253461]: 2025-11-22 04:07:43.273 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:07:43 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e441: 3 total, 3 up, 3 in
Nov 22 04:07:43 compute-0 nova_compute[253461]: 2025-11-22 04:07:43.307 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:43 compute-0 nova_compute[253461]: 2025-11-22 04:07:43.488 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:07:43 compute-0 nova_compute[253461]: 2025-11-22 04:07:43.489 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:07:43 compute-0 nova_compute[253461]: 2025-11-22 04:07:43.516 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:07:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:07:43 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1359128317' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:07:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:07:43 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1359128317' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:07:43 compute-0 nova_compute[253461]: 2025-11-22 04:07:43.650 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:07:43 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1948489804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:43 compute-0 nova_compute[253461]: 2025-11-22 04:07:43.965 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:07:43 compute-0 nova_compute[253461]: 2025-11-22 04:07:43.973 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:07:44 compute-0 nova_compute[253461]: 2025-11-22 04:07:44.041 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:07:44 compute-0 nova_compute[253461]: 2025-11-22 04:07:44.135 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:07:44 compute-0 nova_compute[253461]: 2025-11-22 04:07:44.136 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:07:44 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3689970393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:44 compute-0 ceph-mon[75011]: osdmap e441: 3 total, 3 up, 3 in
Nov 22 04:07:44 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1359128317' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:07:44 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1359128317' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:07:44 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1948489804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:07:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 4.3 KiB/s wr, 69 op/s
Nov 22 04:07:45 compute-0 nova_compute[253461]: 2025-11-22 04:07:45.137 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:07:45 compute-0 nova_compute[253461]: 2025-11-22 04:07:45.138 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:07:45 compute-0 nova_compute[253461]: 2025-11-22 04:07:45.138 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:07:45 compute-0 nova_compute[253461]: 2025-11-22 04:07:45.202 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:07:45 compute-0 nova_compute[253461]: 2025-11-22 04:07:45.203 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:07:45 compute-0 nova_compute[253461]: 2025-11-22 04:07:45.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:07:45 compute-0 ceph-mon[75011]: pgmap v1725: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 4.3 KiB/s wr, 69 op/s
Nov 22 04:07:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4048768837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:46 compute-0 podman[290824]: 2025-11-22 04:07:46.407342019 +0000 UTC m=+0.072698072 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.6 KiB/s wr, 45 op/s
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034784117177834214 of space, bias 1.0, pg target 0.10435235153350264 quantized to 32 (current 32)
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.0174513251286058e-06 of space, bias 1.0, pg target 0.00030523539753858175 quantized to 32 (current 32)
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:07:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:07:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e441 do_prune osdmap full prune enabled
Nov 22 04:07:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4048768837' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:46 compute-0 ceph-mon[75011]: pgmap v1726: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.6 KiB/s wr, 45 op/s
Nov 22 04:07:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e442 e442: 3 total, 3 up, 3 in
Nov 22 04:07:46 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e442: 3 total, 3 up, 3 in
Nov 22 04:07:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e442 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e442 do_prune osdmap full prune enabled
Nov 22 04:07:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e443 e443: 3 total, 3 up, 3 in
Nov 22 04:07:46 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e443: 3 total, 3 up, 3 in
Nov 22 04:07:47 compute-0 nova_compute[253461]: 2025-11-22 04:07:47.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:07:47 compute-0 nova_compute[253461]: 2025-11-22 04:07:47.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:07:47 compute-0 ceph-mon[75011]: osdmap e442: 3 total, 3 up, 3 in
Nov 22 04:07:47 compute-0 ceph-mon[75011]: osdmap e443: 3 total, 3 up, 3 in
Nov 22 04:07:48 compute-0 nova_compute[253461]: 2025-11-22 04:07:48.309 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:48 compute-0 nova_compute[253461]: 2025-11-22 04:07:48.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:07:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.5 KiB/s wr, 54 op/s
Nov 22 04:07:48 compute-0 nova_compute[253461]: 2025-11-22 04:07:48.653 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e443 do_prune osdmap full prune enabled
Nov 22 04:07:48 compute-0 ceph-mon[75011]: pgmap v1729: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.5 KiB/s wr, 54 op/s
Nov 22 04:07:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e444 e444: 3 total, 3 up, 3 in
Nov 22 04:07:48 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e444: 3 total, 3 up, 3 in
Nov 22 04:07:49 compute-0 nova_compute[253461]: 2025-11-22 04:07:49.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:07:50 compute-0 ceph-mon[75011]: osdmap e444: 3 total, 3 up, 3 in
Nov 22 04:07:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 3.3 KiB/s wr, 126 op/s
Nov 22 04:07:50 compute-0 ovn_controller[152691]: 2025-11-22T04:07:50Z|00249|memory_trim|INFO|Detected inactivity (last active 30021 ms ago): trimming memory
Nov 22 04:07:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:07:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/416277171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:07:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:07:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/416277171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:07:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/383100101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:51 compute-0 ceph-mon[75011]: pgmap v1731: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 97 KiB/s rd, 3.3 KiB/s wr, 126 op/s
Nov 22 04:07:51 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/416277171' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:07:51 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/416277171' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:07:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 4.3 KiB/s wr, 121 op/s
Nov 22 04:07:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/383100101' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e444 do_prune osdmap full prune enabled
Nov 22 04:07:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e445 e445: 3 total, 3 up, 3 in
Nov 22 04:07:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e445: 3 total, 3 up, 3 in
Nov 22 04:07:53 compute-0 nova_compute[253461]: 2025-11-22 04:07:53.313 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:53 compute-0 ceph-mon[75011]: pgmap v1732: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 4.3 KiB/s wr, 121 op/s
Nov 22 04:07:53 compute-0 ceph-mon[75011]: osdmap e445: 3 total, 3 up, 3 in
Nov 22 04:07:53 compute-0 nova_compute[253461]: 2025-11-22 04:07:53.655 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.0 KiB/s wr, 103 op/s
Nov 22 04:07:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e445 do_prune osdmap full prune enabled
Nov 22 04:07:54 compute-0 ceph-mon[75011]: pgmap v1734: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 3.0 KiB/s wr, 103 op/s
Nov 22 04:07:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e446 e446: 3 total, 3 up, 3 in
Nov 22 04:07:54 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e446: 3 total, 3 up, 3 in
Nov 22 04:07:55 compute-0 sudo[290845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:07:55 compute-0 sudo[290845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:07:55 compute-0 sudo[290845]: pam_unix(sudo:session): session closed for user root
Nov 22 04:07:55 compute-0 sudo[290870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:07:55 compute-0 sudo[290870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:07:55 compute-0 sudo[290870]: pam_unix(sudo:session): session closed for user root
Nov 22 04:07:55 compute-0 sudo[290895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:07:55 compute-0 sudo[290895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:07:55 compute-0 sudo[290895]: pam_unix(sudo:session): session closed for user root
Nov 22 04:07:55 compute-0 sudo[290920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:07:55 compute-0 sudo[290920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:07:55 compute-0 ceph-mon[75011]: osdmap e446: 3 total, 3 up, 3 in
Nov 22 04:07:56 compute-0 sudo[290920]: pam_unix(sudo:session): session closed for user root
Nov 22 04:07:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 04:07:56 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:07:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:07:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:07:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:07:56 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:07:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:07:56 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:07:56 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 12616fa1-39bd-4aa2-9795-9e12a1846604 does not exist
Nov 22 04:07:56 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev da69ffc6-4873-4453-8eba-473066412ef9 does not exist
Nov 22 04:07:56 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev fb1a3809-8f1f-4869-a1c1-4a562a67c0de does not exist
Nov 22 04:07:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:07:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:07:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:07:56 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:07:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:07:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:07:56 compute-0 sudo[290976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:07:56 compute-0 sudo[290976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:07:56 compute-0 sudo[290976]: pam_unix(sudo:session): session closed for user root
Nov 22 04:07:56 compute-0 sudo[291001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:07:56 compute-0 sudo[291001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:07:56 compute-0 sudo[291001]: pam_unix(sudo:session): session closed for user root
Nov 22 04:07:56 compute-0 sudo[291026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:07:56 compute-0 sudo[291026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:07:56 compute-0 sudo[291026]: pam_unix(sudo:session): session closed for user root
Nov 22 04:07:56 compute-0 sudo[291051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:07:56 compute-0 sudo[291051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:07:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 3.2 KiB/s wr, 106 op/s
Nov 22 04:07:56 compute-0 podman[291117]: 2025-11-22 04:07:56.853542977 +0000 UTC m=+0.118571325 container create b8a5a1935501741baa22dadab23b7d381912c41317c07a88dcca2a5588279886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_johnson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:07:56 compute-0 podman[291117]: 2025-11-22 04:07:56.759957977 +0000 UTC m=+0.024986315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:07:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e446 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:07:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e446 do_prune osdmap full prune enabled
Nov 22 04:07:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e447 e447: 3 total, 3 up, 3 in
Nov 22 04:07:57 compute-0 systemd[1]: Started libpod-conmon-b8a5a1935501741baa22dadab23b7d381912c41317c07a88dcca2a5588279886.scope.
Nov 22 04:07:57 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e447: 3 total, 3 up, 3 in
Nov 22 04:07:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:07:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:07:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:07:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:07:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:07:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:07:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:07:57 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:07:57 compute-0 ceph-mon[75011]: pgmap v1736: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 3.2 KiB/s wr, 106 op/s
Nov 22 04:07:57 compute-0 podman[291117]: 2025-11-22 04:07:57.089752725 +0000 UTC m=+0.354781033 container init b8a5a1935501741baa22dadab23b7d381912c41317c07a88dcca2a5588279886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_johnson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:07:57 compute-0 podman[291117]: 2025-11-22 04:07:57.101566605 +0000 UTC m=+0.366594963 container start b8a5a1935501741baa22dadab23b7d381912c41317c07a88dcca2a5588279886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 04:07:57 compute-0 heuristic_johnson[291134]: 167 167
Nov 22 04:07:57 compute-0 systemd[1]: libpod-b8a5a1935501741baa22dadab23b7d381912c41317c07a88dcca2a5588279886.scope: Deactivated successfully.
Nov 22 04:07:57 compute-0 podman[291117]: 2025-11-22 04:07:57.605130473 +0000 UTC m=+0.870158811 container attach b8a5a1935501741baa22dadab23b7d381912c41317c07a88dcca2a5588279886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_johnson, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:07:57 compute-0 podman[291117]: 2025-11-22 04:07:57.606224821 +0000 UTC m=+0.871253159 container died b8a5a1935501741baa22dadab23b7d381912c41317c07a88dcca2a5588279886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_johnson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:07:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2eb42bdf4274b4f26560dc2600dc77152ac0d2969ba5aa7867d758b40fec3e3-merged.mount: Deactivated successfully.
Nov 22 04:07:58 compute-0 ceph-mon[75011]: osdmap e447: 3 total, 3 up, 3 in
Nov 22 04:07:58 compute-0 nova_compute[253461]: 2025-11-22 04:07:58.315 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.0 KiB/s wr, 77 op/s
Nov 22 04:07:58 compute-0 podman[291117]: 2025-11-22 04:07:58.561072738 +0000 UTC m=+1.826101076 container remove b8a5a1935501741baa22dadab23b7d381912c41317c07a88dcca2a5588279886 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_johnson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:07:58 compute-0 systemd[1]: libpod-conmon-b8a5a1935501741baa22dadab23b7d381912c41317c07a88dcca2a5588279886.scope: Deactivated successfully.
Nov 22 04:07:58 compute-0 nova_compute[253461]: 2025-11-22 04:07:58.657 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:07:58 compute-0 podman[291158]: 2025-11-22 04:07:58.802757963 +0000 UTC m=+0.113702514 container create c4502275785fec0e90bb4aff56a0b204bb91ad1dadb481d48370626f5d6c9d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 04:07:58 compute-0 podman[291158]: 2025-11-22 04:07:58.71017393 +0000 UTC m=+0.021118501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:07:58 compute-0 systemd[1]: Started libpod-conmon-c4502275785fec0e90bb4aff56a0b204bb91ad1dadb481d48370626f5d6c9d23.scope.
Nov 22 04:07:58 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6832058c7268ec037dfe64e814173209b2f3857a32a1ac3ef43f50539bfa643/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6832058c7268ec037dfe64e814173209b2f3857a32a1ac3ef43f50539bfa643/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6832058c7268ec037dfe64e814173209b2f3857a32a1ac3ef43f50539bfa643/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6832058c7268ec037dfe64e814173209b2f3857a32a1ac3ef43f50539bfa643/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6832058c7268ec037dfe64e814173209b2f3857a32a1ac3ef43f50539bfa643/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:07:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:07:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1864043100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:07:59 compute-0 podman[291158]: 2025-11-22 04:07:59.050073125 +0000 UTC m=+0.361017696 container init c4502275785fec0e90bb4aff56a0b204bb91ad1dadb481d48370626f5d6c9d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goldstine, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:07:59 compute-0 podman[291158]: 2025-11-22 04:07:59.057645067 +0000 UTC m=+0.368589618 container start c4502275785fec0e90bb4aff56a0b204bb91ad1dadb481d48370626f5d6c9d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goldstine, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Nov 22 04:07:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e447 do_prune osdmap full prune enabled
Nov 22 04:07:59 compute-0 podman[291158]: 2025-11-22 04:07:59.253746712 +0000 UTC m=+0.564691303 container attach c4502275785fec0e90bb4aff56a0b204bb91ad1dadb481d48370626f5d6c9d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:07:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e448 e448: 3 total, 3 up, 3 in
Nov 22 04:07:59 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e448: 3 total, 3 up, 3 in
Nov 22 04:07:59 compute-0 ceph-mon[75011]: pgmap v1738: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.0 KiB/s wr, 77 op/s
Nov 22 04:07:59 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1864043100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:08:00 compute-0 admiring_goldstine[291174]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:08:00 compute-0 admiring_goldstine[291174]: --> relative data size: 1.0
Nov 22 04:08:00 compute-0 admiring_goldstine[291174]: --> All data devices are unavailable
Nov 22 04:08:00 compute-0 systemd[1]: libpod-c4502275785fec0e90bb4aff56a0b204bb91ad1dadb481d48370626f5d6c9d23.scope: Deactivated successfully.
Nov 22 04:08:00 compute-0 systemd[1]: libpod-c4502275785fec0e90bb4aff56a0b204bb91ad1dadb481d48370626f5d6c9d23.scope: Consumed 1.018s CPU time.
Nov 22 04:08:00 compute-0 podman[291158]: 2025-11-22 04:08:00.171532917 +0000 UTC m=+1.482477488 container died c4502275785fec0e90bb4aff56a0b204bb91ad1dadb481d48370626f5d6c9d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goldstine, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:08:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:08:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1893782984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:08:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:08:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1893782984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:08:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.2 KiB/s wr, 60 op/s
Nov 22 04:08:00 compute-0 ceph-mon[75011]: osdmap e448: 3 total, 3 up, 3 in
Nov 22 04:08:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1893782984' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:08:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1893782984' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:08:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6832058c7268ec037dfe64e814173209b2f3857a32a1ac3ef43f50539bfa643-merged.mount: Deactivated successfully.
Nov 22 04:08:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e448 do_prune osdmap full prune enabled
Nov 22 04:08:02 compute-0 ceph-mon[75011]: pgmap v1740: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.2 KiB/s wr, 60 op/s
Nov 22 04:08:02 compute-0 podman[291158]: 2025-11-22 04:08:02.183077041 +0000 UTC m=+3.494021622 container remove c4502275785fec0e90bb4aff56a0b204bb91ad1dadb481d48370626f5d6c9d23 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_goldstine, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 04:08:02 compute-0 sudo[291051]: pam_unix(sudo:session): session closed for user root
Nov 22 04:08:02 compute-0 systemd[1]: libpod-conmon-c4502275785fec0e90bb4aff56a0b204bb91ad1dadb481d48370626f5d6c9d23.scope: Deactivated successfully.
Nov 22 04:08:02 compute-0 podman[291212]: 2025-11-22 04:08:02.258700789 +0000 UTC m=+1.417162315 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:08:02 compute-0 podman[291213]: 2025-11-22 04:08:02.291359825 +0000 UTC m=+1.449769391 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 04:08:02 compute-0 sudo[291244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:08:02 compute-0 sudo[291244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:08:02 compute-0 sudo[291244]: pam_unix(sudo:session): session closed for user root
Nov 22 04:08:02 compute-0 sudo[291281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:08:02 compute-0 sudo[291281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:08:02 compute-0 sudo[291281]: pam_unix(sudo:session): session closed for user root
Nov 22 04:08:02 compute-0 sudo[291306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:08:02 compute-0 sudo[291306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:08:02 compute-0 sudo[291306]: pam_unix(sudo:session): session closed for user root
Nov 22 04:08:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.2 KiB/s wr, 59 op/s
Nov 22 04:08:02 compute-0 sudo[291331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:08:02 compute-0 sudo[291331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:08:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e449 e449: 3 total, 3 up, 3 in
Nov 22 04:08:02 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e449: 3 total, 3 up, 3 in
Nov 22 04:08:03 compute-0 podman[291397]: 2025-11-22 04:08:03.012108636 +0000 UTC m=+0.035205961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:08:03 compute-0 podman[291397]: 2025-11-22 04:08:03.336956937 +0000 UTC m=+0.360054252 container create 0c2b0a9b2721fcd929115b79b202e3ae71f33e5f9b5ce08094d0b6be1e38db56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:08:03 compute-0 nova_compute[253461]: 2025-11-22 04:08:03.349 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:03 compute-0 ceph-mon[75011]: pgmap v1741: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.2 KiB/s wr, 59 op/s
Nov 22 04:08:03 compute-0 ceph-mon[75011]: osdmap e449: 3 total, 3 up, 3 in
Nov 22 04:08:03 compute-0 nova_compute[253461]: 2025-11-22 04:08:03.660 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:03 compute-0 systemd[1]: Started libpod-conmon-0c2b0a9b2721fcd929115b79b202e3ae71f33e5f9b5ce08094d0b6be1e38db56.scope.
Nov 22 04:08:03 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:08:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e449 do_prune osdmap full prune enabled
Nov 22 04:08:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Nov 22 04:08:04 compute-0 podman[291397]: 2025-11-22 04:08:04.611536193 +0000 UTC m=+1.634633558 container init 0c2b0a9b2721fcd929115b79b202e3ae71f33e5f9b5ce08094d0b6be1e38db56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:08:04 compute-0 podman[291397]: 2025-11-22 04:08:04.620581647 +0000 UTC m=+1.643678972 container start 0c2b0a9b2721fcd929115b79b202e3ae71f33e5f9b5ce08094d0b6be1e38db56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:08:04 compute-0 eloquent_volhard[291414]: 167 167
Nov 22 04:08:04 compute-0 systemd[1]: libpod-0c2b0a9b2721fcd929115b79b202e3ae71f33e5f9b5ce08094d0b6be1e38db56.scope: Deactivated successfully.
Nov 22 04:08:04 compute-0 podman[291397]: 2025-11-22 04:08:04.87748222 +0000 UTC m=+1.900579585 container attach 0c2b0a9b2721fcd929115b79b202e3ae71f33e5f9b5ce08094d0b6be1e38db56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 04:08:04 compute-0 podman[291397]: 2025-11-22 04:08:04.878735926 +0000 UTC m=+1.901833241 container died 0c2b0a9b2721fcd929115b79b202e3ae71f33e5f9b5ce08094d0b6be1e38db56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:08:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e450 e450: 3 total, 3 up, 3 in
Nov 22 04:08:04 compute-0 ceph-mon[75011]: pgmap v1743: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Nov 22 04:08:05 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e450: 3 total, 3 up, 3 in
Nov 22 04:08:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c73bef51bdee8f7d94532d80e31319f1bc8b81878511a3ae2015232a51fe802-merged.mount: Deactivated successfully.
Nov 22 04:08:05 compute-0 podman[291397]: 2025-11-22 04:08:05.841684697 +0000 UTC m=+2.864781982 container remove 0c2b0a9b2721fcd929115b79b202e3ae71f33e5f9b5ce08094d0b6be1e38db56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_volhard, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 04:08:05 compute-0 systemd[1]: libpod-conmon-0c2b0a9b2721fcd929115b79b202e3ae71f33e5f9b5ce08094d0b6be1e38db56.scope: Deactivated successfully.
Nov 22 04:08:06 compute-0 ceph-mon[75011]: osdmap e450: 3 total, 3 up, 3 in
Nov 22 04:08:06 compute-0 podman[291439]: 2025-11-22 04:08:06.065060401 +0000 UTC m=+0.040963759 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:08:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:08:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:08:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:08:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:08:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:08:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:08:06 compute-0 podman[291439]: 2025-11-22 04:08:06.383612122 +0000 UTC m=+0.359515431 container create 4c54f4e3292181908ed52821c19e1a419b60abcac235b239376e81b03f547048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cori, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:08:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.5 KiB/s wr, 20 op/s
Nov 22 04:08:06 compute-0 systemd[1]: Started libpod-conmon-4c54f4e3292181908ed52821c19e1a419b60abcac235b239376e81b03f547048.scope.
Nov 22 04:08:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17859ec68246ceb1953019e38faafc83620c9b6622fc5eedd8772c47de3fc1bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17859ec68246ceb1953019e38faafc83620c9b6622fc5eedd8772c47de3fc1bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17859ec68246ceb1953019e38faafc83620c9b6622fc5eedd8772c47de3fc1bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:08:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17859ec68246ceb1953019e38faafc83620c9b6622fc5eedd8772c47de3fc1bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:08:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e450 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e450 do_prune osdmap full prune enabled
Nov 22 04:08:07 compute-0 podman[291439]: 2025-11-22 04:08:07.070413769 +0000 UTC m=+1.046317058 container init 4c54f4e3292181908ed52821c19e1a419b60abcac235b239376e81b03f547048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cori, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 04:08:07 compute-0 podman[291439]: 2025-11-22 04:08:07.078967111 +0000 UTC m=+1.054870390 container start 4c54f4e3292181908ed52821c19e1a419b60abcac235b239376e81b03f547048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cori, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 04:08:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e451 e451: 3 total, 3 up, 3 in
Nov 22 04:08:07 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e451: 3 total, 3 up, 3 in
Nov 22 04:08:07 compute-0 podman[291439]: 2025-11-22 04:08:07.347299293 +0000 UTC m=+1.323202592 container attach 4c54f4e3292181908ed52821c19e1a419b60abcac235b239376e81b03f547048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:08:07 compute-0 ceph-mon[75011]: pgmap v1745: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.5 KiB/s wr, 20 op/s
Nov 22 04:08:08 compute-0 interesting_cori[291455]: {
Nov 22 04:08:08 compute-0 interesting_cori[291455]:     "0": [
Nov 22 04:08:08 compute-0 interesting_cori[291455]:         {
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "devices": [
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "/dev/loop3"
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             ],
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_name": "ceph_lv0",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_size": "21470642176",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "name": "ceph_lv0",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "tags": {
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.cluster_name": "ceph",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.crush_device_class": "",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.encrypted": "0",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.osd_id": "0",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.type": "block",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.vdo": "0"
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             },
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "type": "block",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "vg_name": "ceph_vg0"
Nov 22 04:08:08 compute-0 interesting_cori[291455]:         }
Nov 22 04:08:08 compute-0 interesting_cori[291455]:     ],
Nov 22 04:08:08 compute-0 interesting_cori[291455]:     "1": [
Nov 22 04:08:08 compute-0 interesting_cori[291455]:         {
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "devices": [
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "/dev/loop4"
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             ],
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_name": "ceph_lv1",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_size": "21470642176",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "name": "ceph_lv1",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "tags": {
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.cluster_name": "ceph",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.crush_device_class": "",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.encrypted": "0",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.osd_id": "1",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.type": "block",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.vdo": "0"
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             },
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "type": "block",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "vg_name": "ceph_vg1"
Nov 22 04:08:08 compute-0 interesting_cori[291455]:         }
Nov 22 04:08:08 compute-0 interesting_cori[291455]:     ],
Nov 22 04:08:08 compute-0 interesting_cori[291455]:     "2": [
Nov 22 04:08:08 compute-0 interesting_cori[291455]:         {
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "devices": [
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "/dev/loop5"
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             ],
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_name": "ceph_lv2",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_size": "21470642176",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "name": "ceph_lv2",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "tags": {
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.cluster_name": "ceph",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.crush_device_class": "",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.encrypted": "0",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.osd_id": "2",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.type": "block",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:                 "ceph.vdo": "0"
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             },
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "type": "block",
Nov 22 04:08:08 compute-0 interesting_cori[291455]:             "vg_name": "ceph_vg2"
Nov 22 04:08:08 compute-0 interesting_cori[291455]:         }
Nov 22 04:08:08 compute-0 interesting_cori[291455]:     ]
Nov 22 04:08:08 compute-0 interesting_cori[291455]: }
Nov 22 04:08:08 compute-0 systemd[1]: libpod-4c54f4e3292181908ed52821c19e1a419b60abcac235b239376e81b03f547048.scope: Deactivated successfully.
Nov 22 04:08:08 compute-0 podman[291439]: 2025-11-22 04:08:08.045775988 +0000 UTC m=+2.021679267 container died 4c54f4e3292181908ed52821c19e1a419b60abcac235b239376e81b03f547048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cori, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:08:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e451 do_prune osdmap full prune enabled
Nov 22 04:08:08 compute-0 nova_compute[253461]: 2025-11-22 04:08:08.352 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1023 B/s wr, 23 op/s
Nov 22 04:08:08 compute-0 nova_compute[253461]: 2025-11-22 04:08:08.662 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e452 e452: 3 total, 3 up, 3 in
Nov 22 04:08:09 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e452: 3 total, 3 up, 3 in
Nov 22 04:08:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-17859ec68246ceb1953019e38faafc83620c9b6622fc5eedd8772c47de3fc1bd-merged.mount: Deactivated successfully.
Nov 22 04:08:09 compute-0 ceph-mon[75011]: osdmap e451: 3 total, 3 up, 3 in
Nov 22 04:08:09 compute-0 podman[291439]: 2025-11-22 04:08:09.975963059 +0000 UTC m=+3.951866357 container remove 4c54f4e3292181908ed52821c19e1a419b60abcac235b239376e81b03f547048 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_cori, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 04:08:10 compute-0 sudo[291331]: pam_unix(sudo:session): session closed for user root
Nov 22 04:08:10 compute-0 systemd[1]: libpod-conmon-4c54f4e3292181908ed52821c19e1a419b60abcac235b239376e81b03f547048.scope: Deactivated successfully.
Nov 22 04:08:10 compute-0 sudo[291475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:08:10 compute-0 sudo[291475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:08:10 compute-0 sudo[291475]: pam_unix(sudo:session): session closed for user root
Nov 22 04:08:10 compute-0 sudo[291500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:08:10 compute-0 sudo[291500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:08:10 compute-0 sudo[291500]: pam_unix(sudo:session): session closed for user root
Nov 22 04:08:10 compute-0 sudo[291525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:08:10 compute-0 sudo[291525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:08:10 compute-0 sudo[291525]: pam_unix(sudo:session): session closed for user root
Nov 22 04:08:10 compute-0 sudo[291550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:08:10 compute-0 sudo[291550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:08:10 compute-0 ceph-mon[75011]: pgmap v1747: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1023 B/s wr, 23 op/s
Nov 22 04:08:10 compute-0 ceph-mon[75011]: osdmap e452: 3 total, 3 up, 3 in
Nov 22 04:08:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1023 B/s wr, 25 op/s
Nov 22 04:08:10 compute-0 podman[291616]: 2025-11-22 04:08:10.733939757 +0000 UTC m=+0.021768469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:08:11 compute-0 podman[291616]: 2025-11-22 04:08:11.135171902 +0000 UTC m=+0.423000594 container create 6fb6da4dc4f9a3474330e3f1bcf6f12306780785c627ea9cbeebc716fb2a24da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:08:11 compute-0 systemd[1]: Started libpod-conmon-6fb6da4dc4f9a3474330e3f1bcf6f12306780785c627ea9cbeebc716fb2a24da.scope.
Nov 22 04:08:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:08:11 compute-0 podman[291616]: 2025-11-22 04:08:11.508119244 +0000 UTC m=+0.795948016 container init 6fb6da4dc4f9a3474330e3f1bcf6f12306780785c627ea9cbeebc716fb2a24da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:08:11 compute-0 ceph-mon[75011]: pgmap v1749: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1023 B/s wr, 25 op/s
Nov 22 04:08:11 compute-0 podman[291616]: 2025-11-22 04:08:11.518484378 +0000 UTC m=+0.806313100 container start 6fb6da4dc4f9a3474330e3f1bcf6f12306780785c627ea9cbeebc716fb2a24da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_engelbart, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:08:11 compute-0 priceless_engelbart[291632]: 167 167
Nov 22 04:08:11 compute-0 systemd[1]: libpod-6fb6da4dc4f9a3474330e3f1bcf6f12306780785c627ea9cbeebc716fb2a24da.scope: Deactivated successfully.
Nov 22 04:08:11 compute-0 podman[291616]: 2025-11-22 04:08:11.68993941 +0000 UTC m=+0.977768202 container attach 6fb6da4dc4f9a3474330e3f1bcf6f12306780785c627ea9cbeebc716fb2a24da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:08:11 compute-0 podman[291616]: 2025-11-22 04:08:11.691102284 +0000 UTC m=+0.978931076 container died 6fb6da4dc4f9a3474330e3f1bcf6f12306780785c627ea9cbeebc716fb2a24da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_engelbart, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 04:08:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aecfcaf13dbb39f2361fc3606f31cf1def9f8b383fa0c65e3936404cc2c47f6-merged.mount: Deactivated successfully.
Nov 22 04:08:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e452 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:12 compute-0 podman[291616]: 2025-11-22 04:08:12.505953859 +0000 UTC m=+1.793782551 container remove 6fb6da4dc4f9a3474330e3f1bcf6f12306780785c627ea9cbeebc716fb2a24da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 04:08:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 04:08:12 compute-0 systemd[1]: libpod-conmon-6fb6da4dc4f9a3474330e3f1bcf6f12306780785c627ea9cbeebc716fb2a24da.scope: Deactivated successfully.
Nov 22 04:08:12 compute-0 podman[291656]: 2025-11-22 04:08:12.709696061 +0000 UTC m=+0.046567631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:08:12 compute-0 podman[291656]: 2025-11-22 04:08:12.835127462 +0000 UTC m=+0.171998982 container create 0606adf70da7cf1324d2c4033e31feac47538d5186c36c15dedb7d0d654a09ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:08:12 compute-0 ceph-mon[75011]: pgmap v1750: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 04:08:13 compute-0 systemd[1]: Started libpod-conmon-0606adf70da7cf1324d2c4033e31feac47538d5186c36c15dedb7d0d654a09ea.scope.
Nov 22 04:08:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017ef05fd52953a1facd726c0628a53080e244bbdce0d8877292d4afd6ae505f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017ef05fd52953a1facd726c0628a53080e244bbdce0d8877292d4afd6ae505f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017ef05fd52953a1facd726c0628a53080e244bbdce0d8877292d4afd6ae505f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:08:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/017ef05fd52953a1facd726c0628a53080e244bbdce0d8877292d4afd6ae505f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:08:13 compute-0 podman[291656]: 2025-11-22 04:08:13.304146297 +0000 UTC m=+0.641017787 container init 0606adf70da7cf1324d2c4033e31feac47538d5186c36c15dedb7d0d654a09ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_northcutt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:08:13 compute-0 podman[291656]: 2025-11-22 04:08:13.31540508 +0000 UTC m=+0.652276610 container start 0606adf70da7cf1324d2c4033e31feac47538d5186c36c15dedb7d0d654a09ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_northcutt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 04:08:13 compute-0 nova_compute[253461]: 2025-11-22 04:08:13.354 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:13 compute-0 podman[291656]: 2025-11-22 04:08:13.517329402 +0000 UTC m=+0.854200932 container attach 0606adf70da7cf1324d2c4033e31feac47538d5186c36c15dedb7d0d654a09ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_northcutt, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Nov 22 04:08:13 compute-0 nova_compute[253461]: 2025-11-22 04:08:13.699 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e452 do_prune osdmap full prune enabled
Nov 22 04:08:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e453 e453: 3 total, 3 up, 3 in
Nov 22 04:08:14 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e453: 3 total, 3 up, 3 in
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]: {
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "osd_id": 1,
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "type": "bluestore"
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:     },
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "osd_id": 0,
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "type": "bluestore"
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:     },
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "osd_id": 2,
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:         "type": "bluestore"
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]:     }
Nov 22 04:08:14 compute-0 nifty_northcutt[291672]: }
Nov 22 04:08:14 compute-0 systemd[1]: libpod-0606adf70da7cf1324d2c4033e31feac47538d5186c36c15dedb7d0d654a09ea.scope: Deactivated successfully.
Nov 22 04:08:14 compute-0 podman[291656]: 2025-11-22 04:08:14.442638589 +0000 UTC m=+1.779510119 container died 0606adf70da7cf1324d2c4033e31feac47538d5186c36c15dedb7d0d654a09ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:08:14 compute-0 systemd[1]: libpod-0606adf70da7cf1324d2c4033e31feac47538d5186c36c15dedb7d0d654a09ea.scope: Consumed 1.129s CPU time.
Nov 22 04:08:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Nov 22 04:08:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-017ef05fd52953a1facd726c0628a53080e244bbdce0d8877292d4afd6ae505f-merged.mount: Deactivated successfully.
Nov 22 04:08:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:08:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3962400928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:08:15 compute-0 podman[291656]: 2025-11-22 04:08:15.321206361 +0000 UTC m=+2.658077861 container remove 0606adf70da7cf1324d2c4033e31feac47538d5186c36c15dedb7d0d654a09ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_northcutt, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Nov 22 04:08:15 compute-0 systemd[1]: libpod-conmon-0606adf70da7cf1324d2c4033e31feac47538d5186c36c15dedb7d0d654a09ea.scope: Deactivated successfully.
Nov 22 04:08:15 compute-0 ceph-mon[75011]: osdmap e453: 3 total, 3 up, 3 in
Nov 22 04:08:15 compute-0 ceph-mon[75011]: pgmap v1752: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Nov 22 04:08:15 compute-0 sudo[291550]: pam_unix(sudo:session): session closed for user root
Nov 22 04:08:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:08:15 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:08:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:08:15 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:08:15 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 6322a880-c681-421f-a235-05421c186d02 does not exist
Nov 22 04:08:15 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 983d8447-fa7e-4f64-a275-6122b790e1f7 does not exist
Nov 22 04:08:15 compute-0 sudo[291719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:08:15 compute-0 sudo[291719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:08:15 compute-0 sudo[291719]: pam_unix(sudo:session): session closed for user root
Nov 22 04:08:15 compute-0 sudo[291744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:08:15 compute-0 sudo[291744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:08:15 compute-0 sudo[291744]: pam_unix(sudo:session): session closed for user root
Nov 22 04:08:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3962400928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:08:16 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:08:16 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:08:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Nov 22 04:08:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e453 do_prune osdmap full prune enabled
Nov 22 04:08:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e454 e454: 3 total, 3 up, 3 in
Nov 22 04:08:16 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e454: 3 total, 3 up, 3 in
Nov 22 04:08:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e454 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:17 compute-0 podman[291769]: 2025-11-22 04:08:17.399330461 +0000 UTC m=+0.070248560 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 22 04:08:17 compute-0 ceph-mon[75011]: pgmap v1753: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Nov 22 04:08:17 compute-0 ceph-mon[75011]: osdmap e454: 3 total, 3 up, 3 in
Nov 22 04:08:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e454 do_prune osdmap full prune enabled
Nov 22 04:08:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e455 e455: 3 total, 3 up, 3 in
Nov 22 04:08:17 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e455: 3 total, 3 up, 3 in
Nov 22 04:08:18 compute-0 nova_compute[253461]: 2025-11-22 04:08:18.357 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 22 04:08:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e455 do_prune osdmap full prune enabled
Nov 22 04:08:18 compute-0 nova_compute[253461]: 2025-11-22 04:08:18.702 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e456 e456: 3 total, 3 up, 3 in
Nov 22 04:08:18 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e456: 3 total, 3 up, 3 in
Nov 22 04:08:18 compute-0 ceph-mon[75011]: osdmap e455: 3 total, 3 up, 3 in
Nov 22 04:08:18 compute-0 ceph-mon[75011]: pgmap v1756: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Nov 22 04:08:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e456 do_prune osdmap full prune enabled
Nov 22 04:08:20 compute-0 ceph-mon[75011]: osdmap e456: 3 total, 3 up, 3 in
Nov 22 04:08:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e457 e457: 3 total, 3 up, 3 in
Nov 22 04:08:20 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e457: 3 total, 3 up, 3 in
Nov 22 04:08:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:08:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1638042936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:08:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 1.7 KiB/s wr, 67 op/s
Nov 22 04:08:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e457 do_prune osdmap full prune enabled
Nov 22 04:08:21 compute-0 ceph-mon[75011]: osdmap e457: 3 total, 3 up, 3 in
Nov 22 04:08:21 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1638042936' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:08:21 compute-0 ceph-mon[75011]: pgmap v1759: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 1.7 KiB/s wr, 67 op/s
Nov 22 04:08:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e458 e458: 3 total, 3 up, 3 in
Nov 22 04:08:21 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e458: 3 total, 3 up, 3 in
Nov 22 04:08:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e458 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e458 do_prune osdmap full prune enabled
Nov 22 04:08:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e459 e459: 3 total, 3 up, 3 in
Nov 22 04:08:22 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e459: 3 total, 3 up, 3 in
Nov 22 04:08:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 5.0 KiB/s wr, 148 op/s
Nov 22 04:08:22 compute-0 ceph-mon[75011]: osdmap e458: 3 total, 3 up, 3 in
Nov 22 04:08:22 compute-0 ceph-mon[75011]: osdmap e459: 3 total, 3 up, 3 in
Nov 22 04:08:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:08:23.018 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:08:23.019 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:08:23.019 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:08:23 compute-0 nova_compute[253461]: 2025-11-22 04:08:23.359 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:23 compute-0 nova_compute[253461]: 2025-11-22 04:08:23.704 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:23 compute-0 ceph-mon[75011]: pgmap v1762: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 5.0 KiB/s wr, 148 op/s
Nov 22 04:08:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 3.5 KiB/s wr, 112 op/s
Nov 22 04:08:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e459 do_prune osdmap full prune enabled
Nov 22 04:08:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e460 e460: 3 total, 3 up, 3 in
Nov 22 04:08:25 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e460: 3 total, 3 up, 3 in
Nov 22 04:08:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:08:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1520208595' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:08:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:08:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1520208595' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:08:26 compute-0 ceph-mon[75011]: pgmap v1763: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 3.5 KiB/s wr, 112 op/s
Nov 22 04:08:26 compute-0 ceph-mon[75011]: osdmap e460: 3 total, 3 up, 3 in
Nov 22 04:08:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1520208595' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:08:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1520208595' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:08:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.5 KiB/s wr, 80 op/s
Nov 22 04:08:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e460 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:27 compute-0 nova_compute[253461]: 2025-11-22 04:08:27.611 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:08:27.611 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:08:27 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:08:27.612 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:08:28 compute-0 nova_compute[253461]: 2025-11-22 04:08:28.361 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:28 compute-0 ceph-mon[75011]: pgmap v1765: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.5 KiB/s wr, 80 op/s
Nov 22 04:08:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 3.0 KiB/s wr, 96 op/s
Nov 22 04:08:28 compute-0 nova_compute[253461]: 2025-11-22 04:08:28.707 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e460 do_prune osdmap full prune enabled
Nov 22 04:08:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e461 e461: 3 total, 3 up, 3 in
Nov 22 04:08:29 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e461: 3 total, 3 up, 3 in
Nov 22 04:08:30 compute-0 ceph-mon[75011]: pgmap v1766: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 73 KiB/s rd, 3.0 KiB/s wr, 96 op/s
Nov 22 04:08:30 compute-0 ceph-mon[75011]: osdmap e461: 3 total, 3 up, 3 in
Nov 22 04:08:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.1 KiB/s wr, 34 op/s
Nov 22 04:08:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:08:30.615 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:08:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e461 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e461 do_prune osdmap full prune enabled
Nov 22 04:08:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e462 e462: 3 total, 3 up, 3 in
Nov 22 04:08:32 compute-0 podman[291791]: 2025-11-22 04:08:32.381473998 +0000 UTC m=+0.060707114 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 22 04:08:32 compute-0 podman[291792]: 2025-11-22 04:08:32.41670345 +0000 UTC m=+0.092652985 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:08:32 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e462: 3 total, 3 up, 3 in
Nov 22 04:08:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.2 KiB/s wr, 50 op/s
Nov 22 04:08:32 compute-0 ceph-mon[75011]: pgmap v1768: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.1 KiB/s wr, 34 op/s
Nov 22 04:08:32 compute-0 ceph-mon[75011]: osdmap e462: 3 total, 3 up, 3 in
Nov 22 04:08:33 compute-0 nova_compute[253461]: 2025-11-22 04:08:33.397 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:33 compute-0 nova_compute[253461]: 2025-11-22 04:08:33.710 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.0 KiB/s wr, 46 op/s
Nov 22 04:08:34 compute-0 ceph-mon[75011]: pgmap v1770: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.2 KiB/s wr, 50 op/s
Nov 22 04:08:34 compute-0 sshd-session[291789]: Invalid user support from 27.79.43.64 port 51850
Nov 22 04:08:35 compute-0 sshd-session[291787]: Invalid user config from 27.79.43.64 port 51830
Nov 22 04:08:35 compute-0 sshd-session[291789]: Connection closed by invalid user support 27.79.43.64 port 51850 [preauth]
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:08:36
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'images', '.mgr']
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:08:36 compute-0 sshd-session[291787]: Connection closed by invalid user config 27.79.43.64 port 51830 [preauth]
Nov 22 04:08:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e462 do_prune osdmap full prune enabled
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Nov 22 04:08:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e463 e463: 3 total, 3 up, 3 in
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:08:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:08:36 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e463: 3 total, 3 up, 3 in
Nov 22 04:08:37 compute-0 ceph-mon[75011]: pgmap v1771: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 2.0 KiB/s wr, 46 op/s
Nov 22 04:08:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e463 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e463 do_prune osdmap full prune enabled
Nov 22 04:08:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e464 e464: 3 total, 3 up, 3 in
Nov 22 04:08:37 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e464: 3 total, 3 up, 3 in
Nov 22 04:08:38 compute-0 ceph-mon[75011]: pgmap v1772: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Nov 22 04:08:38 compute-0 ceph-mon[75011]: osdmap e463: 3 total, 3 up, 3 in
Nov 22 04:08:38 compute-0 ceph-mon[75011]: osdmap e464: 3 total, 3 up, 3 in
Nov 22 04:08:38 compute-0 nova_compute[253461]: 2025-11-22 04:08:38.398 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 675 B/s wr, 15 op/s
Nov 22 04:08:38 compute-0 nova_compute[253461]: 2025-11-22 04:08:38.711 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 16 op/s
Nov 22 04:08:40 compute-0 ceph-mon[75011]: pgmap v1775: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 675 B/s wr, 15 op/s
Nov 22 04:08:42 compute-0 ceph-mon[75011]: pgmap v1776: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 16 op/s
Nov 22 04:08:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1023 B/s wr, 32 op/s
Nov 22 04:08:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e464 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e464 do_prune osdmap full prune enabled
Nov 22 04:08:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e465 e465: 3 total, 3 up, 3 in
Nov 22 04:08:42 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e465: 3 total, 3 up, 3 in
Nov 22 04:08:43 compute-0 nova_compute[253461]: 2025-11-22 04:08:43.401 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:43 compute-0 nova_compute[253461]: 2025-11-22 04:08:43.440 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:08:43 compute-0 nova_compute[253461]: 2025-11-22 04:08:43.440 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:08:43 compute-0 nova_compute[253461]: 2025-11-22 04:08:43.608 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:43 compute-0 nova_compute[253461]: 2025-11-22 04:08:43.609 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:43 compute-0 nova_compute[253461]: 2025-11-22 04:08:43.609 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:08:43 compute-0 nova_compute[253461]: 2025-11-22 04:08:43.609 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:08:43 compute-0 nova_compute[253461]: 2025-11-22 04:08:43.610 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:08:43 compute-0 nova_compute[253461]: 2025-11-22 04:08:43.713 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:43 compute-0 ceph-mon[75011]: pgmap v1777: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1023 B/s wr, 32 op/s
Nov 22 04:08:43 compute-0 ceph-mon[75011]: osdmap e465: 3 total, 3 up, 3 in
Nov 22 04:08:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:08:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4211671018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:08:44 compute-0 nova_compute[253461]: 2025-11-22 04:08:44.127 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:08:44 compute-0 nova_compute[253461]: 2025-11-22 04:08:44.320 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:08:44 compute-0 nova_compute[253461]: 2025-11-22 04:08:44.322 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4450MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:08:44 compute-0 nova_compute[253461]: 2025-11-22 04:08:44.322 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:08:44 compute-0 nova_compute[253461]: 2025-11-22 04:08:44.322 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:08:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.0 KiB/s wr, 32 op/s
Nov 22 04:08:44 compute-0 nova_compute[253461]: 2025-11-22 04:08:44.922 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:08:44 compute-0 nova_compute[253461]: 2025-11-22 04:08:44.923 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:08:44 compute-0 nova_compute[253461]: 2025-11-22 04:08:44.948 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:08:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e465 do_prune osdmap full prune enabled
Nov 22 04:08:45 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4211671018' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:08:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:08:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1615686685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:08:45 compute-0 nova_compute[253461]: 2025-11-22 04:08:45.425 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:08:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e466 e466: 3 total, 3 up, 3 in
Nov 22 04:08:45 compute-0 nova_compute[253461]: 2025-11-22 04:08:45.433 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:08:45 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e466: 3 total, 3 up, 3 in
Nov 22 04:08:45 compute-0 nova_compute[253461]: 2025-11-22 04:08:45.644 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:08:45 compute-0 nova_compute[253461]: 2025-11-22 04:08:45.647 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:08:45 compute-0 nova_compute[253461]: 2025-11-22 04:08:45.648 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:08:46 compute-0 ceph-mon[75011]: pgmap v1779: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.0 KiB/s wr, 32 op/s
Nov 22 04:08:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1615686685' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:08:46 compute-0 ceph-mon[75011]: osdmap e466: 3 total, 3 up, 3 in
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 511 B/s wr, 22 op/s
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003479683531939832 of space, bias 1.0, pg target 0.10439050595819496 quantized to 32 (current 32)
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:08:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:08:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e466 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e466 do_prune osdmap full prune enabled
Nov 22 04:08:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e467 e467: 3 total, 3 up, 3 in
Nov 22 04:08:47 compute-0 nova_compute[253461]: 2025-11-22 04:08:47.637 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:08:47 compute-0 nova_compute[253461]: 2025-11-22 04:08:47.638 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:08:47 compute-0 nova_compute[253461]: 2025-11-22 04:08:47.638 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:08:47 compute-0 nova_compute[253461]: 2025-11-22 04:08:47.638 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:08:47 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e467: 3 total, 3 up, 3 in
Nov 22 04:08:47 compute-0 nova_compute[253461]: 2025-11-22 04:08:47.780 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:08:47 compute-0 nova_compute[253461]: 2025-11-22 04:08:47.781 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:08:47 compute-0 nova_compute[253461]: 2025-11-22 04:08:47.781 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:08:47 compute-0 nova_compute[253461]: 2025-11-22 04:08:47.782 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:08:47 compute-0 nova_compute[253461]: 2025-11-22 04:08:47.782 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:08:48 compute-0 ceph-mon[75011]: pgmap v1781: 305 pgs: 305 active+clean; 88 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 511 B/s wr, 22 op/s
Nov 22 04:08:48 compute-0 ceph-mon[75011]: osdmap e467: 3 total, 3 up, 3 in
Nov 22 04:08:48 compute-0 podman[291880]: 2025-11-22 04:08:48.398170506 +0000 UTC m=+0.066979780 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0)
Nov 22 04:08:48 compute-0 nova_compute[253461]: 2025-11-22 04:08:48.404 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 88 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Nov 22 04:08:48 compute-0 nova_compute[253461]: 2025-11-22 04:08:48.717 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e467 do_prune osdmap full prune enabled
Nov 22 04:08:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e468 e468: 3 total, 3 up, 3 in
Nov 22 04:08:49 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e468: 3 total, 3 up, 3 in
Nov 22 04:08:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e468 do_prune osdmap full prune enabled
Nov 22 04:08:50 compute-0 nova_compute[253461]: 2025-11-22 04:08:50.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:08:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 88 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Nov 22 04:08:50 compute-0 ceph-mon[75011]: pgmap v1783: 305 pgs: 305 active+clean; 88 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 34 op/s
Nov 22 04:08:50 compute-0 ceph-mon[75011]: osdmap e468: 3 total, 3 up, 3 in
Nov 22 04:08:50 compute-0 nova_compute[253461]: 2025-11-22 04:08:50.560 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:08:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e469 e469: 3 total, 3 up, 3 in
Nov 22 04:08:50 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e469: 3 total, 3 up, 3 in
Nov 22 04:08:51 compute-0 nova_compute[253461]: 2025-11-22 04:08:51.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:08:51 compute-0 ceph-mon[75011]: pgmap v1785: 305 pgs: 305 active+clean; 88 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Nov 22 04:08:51 compute-0 ceph-mon[75011]: osdmap e469: 3 total, 3 up, 3 in
Nov 22 04:08:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e469 do_prune osdmap full prune enabled
Nov 22 04:08:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e470 e470: 3 total, 3 up, 3 in
Nov 22 04:08:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e470: 3 total, 3 up, 3 in
Nov 22 04:08:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 88 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.3 KiB/s wr, 40 op/s
Nov 22 04:08:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e470 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e470 do_prune osdmap full prune enabled
Nov 22 04:08:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e471 e471: 3 total, 3 up, 3 in
Nov 22 04:08:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e471: 3 total, 3 up, 3 in
Nov 22 04:08:52 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 22 04:08:53 compute-0 nova_compute[253461]: 2025-11-22 04:08:53.405 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:53 compute-0 ceph-mon[75011]: osdmap e470: 3 total, 3 up, 3 in
Nov 22 04:08:53 compute-0 ceph-mon[75011]: osdmap e471: 3 total, 3 up, 3 in
Nov 22 04:08:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e471 do_prune osdmap full prune enabled
Nov 22 04:08:53 compute-0 nova_compute[253461]: 2025-11-22 04:08:53.718 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e472 e472: 3 total, 3 up, 3 in
Nov 22 04:08:53 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e472: 3 total, 3 up, 3 in
Nov 22 04:08:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.7 KiB/s wr, 49 op/s
Nov 22 04:08:54 compute-0 ceph-mon[75011]: pgmap v1788: 305 pgs: 305 active+clean; 88 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 2.3 KiB/s wr, 40 op/s
Nov 22 04:08:54 compute-0 ceph-mon[75011]: osdmap e472: 3 total, 3 up, 3 in
Nov 22 04:08:55 compute-0 ceph-mon[75011]: pgmap v1791: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.7 KiB/s wr, 49 op/s
Nov 22 04:08:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.6 KiB/s wr, 35 op/s
Nov 22 04:08:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:08:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096184661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:08:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:08:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2096184661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:08:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e472 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:08:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e472 do_prune osdmap full prune enabled
Nov 22 04:08:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e473 e473: 3 total, 3 up, 3 in
Nov 22 04:08:57 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e473: 3 total, 3 up, 3 in
Nov 22 04:08:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:08:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3750198333' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:08:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:08:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3750198333' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:08:57 compute-0 ceph-mon[75011]: pgmap v1792: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.6 KiB/s wr, 35 op/s
Nov 22 04:08:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2096184661' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:08:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2096184661' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:08:57 compute-0 ceph-mon[75011]: osdmap e473: 3 total, 3 up, 3 in
Nov 22 04:08:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3750198333' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:08:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3750198333' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:08:58 compute-0 nova_compute[253461]: 2025-11-22 04:08:58.408 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.0 KiB/s wr, 32 op/s
Nov 22 04:08:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e473 do_prune osdmap full prune enabled
Nov 22 04:08:58 compute-0 nova_compute[253461]: 2025-11-22 04:08:58.721 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:08:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e474 e474: 3 total, 3 up, 3 in
Nov 22 04:08:58 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e474: 3 total, 3 up, 3 in
Nov 22 04:09:00 compute-0 ceph-mon[75011]: pgmap v1794: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.0 KiB/s wr, 32 op/s
Nov 22 04:09:00 compute-0 ceph-mon[75011]: osdmap e474: 3 total, 3 up, 3 in
Nov 22 04:09:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/389468298' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/389468298' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.8 KiB/s wr, 57 op/s
Nov 22 04:09:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/389468298' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/389468298' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:02 compute-0 ceph-mon[75011]: pgmap v1796: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.8 KiB/s wr, 57 op/s
Nov 22 04:09:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.5 KiB/s wr, 74 op/s
Nov 22 04:09:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e474 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e474 do_prune osdmap full prune enabled
Nov 22 04:09:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e475 e475: 3 total, 3 up, 3 in
Nov 22 04:09:02 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e475: 3 total, 3 up, 3 in
Nov 22 04:09:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:03 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1975361724' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:03 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1975361724' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:03 compute-0 podman[291902]: 2025-11-22 04:09:03.406105528 +0000 UTC m=+0.080855502 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 04:09:03 compute-0 nova_compute[253461]: 2025-11-22 04:09:03.411 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:03 compute-0 podman[291903]: 2025-11-22 04:09:03.489283506 +0000 UTC m=+0.157448600 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 22 04:09:03 compute-0 nova_compute[253461]: 2025-11-22 04:09:03.724 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:04 compute-0 ceph-mon[75011]: pgmap v1797: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 2.5 KiB/s wr, 74 op/s
Nov 22 04:09:04 compute-0 ceph-mon[75011]: osdmap e475: 3 total, 3 up, 3 in
Nov 22 04:09:04 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1975361724' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:04 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1975361724' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 2.1 KiB/s wr, 70 op/s
Nov 22 04:09:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1780967666' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1780967666' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:09:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:09:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:09:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:09:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:09:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:09:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1.7 KiB/s wr, 59 op/s
Nov 22 04:09:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e475 do_prune osdmap full prune enabled
Nov 22 04:09:06 compute-0 ceph-mon[75011]: pgmap v1799: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 2.1 KiB/s wr, 70 op/s
Nov 22 04:09:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1780967666' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1780967666' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e476 e476: 3 total, 3 up, 3 in
Nov 22 04:09:07 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e476: 3 total, 3 up, 3 in
Nov 22 04:09:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e476 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e476 do_prune osdmap full prune enabled
Nov 22 04:09:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e477 e477: 3 total, 3 up, 3 in
Nov 22 04:09:07 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e477: 3 total, 3 up, 3 in
Nov 22 04:09:08 compute-0 ceph-mon[75011]: pgmap v1800: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1.7 KiB/s wr, 59 op/s
Nov 22 04:09:08 compute-0 ceph-mon[75011]: osdmap e476: 3 total, 3 up, 3 in
Nov 22 04:09:08 compute-0 ceph-mon[75011]: osdmap e477: 3 total, 3 up, 3 in
Nov 22 04:09:08 compute-0 nova_compute[253461]: 2025-11-22 04:09:08.413 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 852 B/s wr, 27 op/s
Nov 22 04:09:08 compute-0 nova_compute[253461]: 2025-11-22 04:09:08.725 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2761242039' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2761242039' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:10 compute-0 ceph-mon[75011]: pgmap v1803: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 852 B/s wr, 27 op/s
Nov 22 04:09:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2761242039' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2761242039' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.1 KiB/s wr, 46 op/s
Nov 22 04:09:12 compute-0 ceph-mon[75011]: pgmap v1804: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 1.1 KiB/s wr, 46 op/s
Nov 22 04:09:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Nov 22 04:09:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e477 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:13 compute-0 nova_compute[253461]: 2025-11-22 04:09:13.413 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:13 compute-0 nova_compute[253461]: 2025-11-22 04:09:13.727 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:14 compute-0 ceph-mon[75011]: pgmap v1805: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Nov 22 04:09:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Nov 22 04:09:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/189330011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/189330011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/189330011' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/189330011' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:15 compute-0 sudo[291947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:09:15 compute-0 sudo[291947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:15 compute-0 sudo[291947]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:15 compute-0 sudo[291972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:09:15 compute-0 sudo[291972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:15 compute-0 sudo[291972]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:15 compute-0 sudo[291997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:09:15 compute-0 sudo[291997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:15 compute-0 sudo[291997]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:16 compute-0 sudo[292022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 04:09:16 compute-0 sudo[292022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:16 compute-0 ceph-mon[75011]: pgmap v1806: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Nov 22 04:09:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 978 B/s wr, 23 op/s
Nov 22 04:09:16 compute-0 podman[292119]: 2025-11-22 04:09:16.810235697 +0000 UTC m=+0.346848497 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:09:16 compute-0 podman[292119]: 2025-11-22 04:09:16.966932869 +0000 UTC m=+0.503545699 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:09:17 compute-0 ceph-mon[75011]: pgmap v1807: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 978 B/s wr, 23 op/s
Nov 22 04:09:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e477 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e477 do_prune osdmap full prune enabled
Nov 22 04:09:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e478 e478: 3 total, 3 up, 3 in
Nov 22 04:09:17 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e478: 3 total, 3 up, 3 in
Nov 22 04:09:17 compute-0 sudo[292022]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:09:17 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:09:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:09:17 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:09:18 compute-0 sudo[292278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:09:18 compute-0 sudo[292278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:18 compute-0 sudo[292278]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:18 compute-0 sudo[292303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:09:18 compute-0 sudo[292303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:18 compute-0 sudo[292303]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:18 compute-0 sudo[292328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:09:18 compute-0 sudo[292328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:18 compute-0 sudo[292328]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:18 compute-0 sudo[292353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:09:18 compute-0 sudo[292353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:18 compute-0 nova_compute[253461]: 2025-11-22 04:09:18.415 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Nov 22 04:09:18 compute-0 nova_compute[253461]: 2025-11-22 04:09:18.730 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:18 compute-0 ceph-mon[75011]: osdmap e478: 3 total, 3 up, 3 in
Nov 22 04:09:18 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:09:18 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:09:18 compute-0 sudo[292353]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:09:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:09:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:09:18 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:09:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:09:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:09:19 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 268fead6-f808-43f7-83ee-101a3c3632eb does not exist
Nov 22 04:09:19 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 8233265f-b7e3-4767-ad7e-465c6f0151af does not exist
Nov 22 04:09:19 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 90d2c139-65ca-45c6-8667-4f8bcc57eacf does not exist
Nov 22 04:09:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:09:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:09:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:09:19 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:09:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:09:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:09:19 compute-0 sudo[292408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:09:19 compute-0 sudo[292408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:19 compute-0 sudo[292408]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:19 compute-0 sudo[292434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:09:19 compute-0 sudo[292434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:19 compute-0 sudo[292434]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:19 compute-0 podman[292432]: 2025-11-22 04:09:19.223684052 +0000 UTC m=+0.087094449 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd)
Nov 22 04:09:19 compute-0 sudo[292473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:09:19 compute-0 sudo[292473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:19 compute-0 sudo[292473]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:19 compute-0 sudo[292503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:09:19 compute-0 sudo[292503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:19 compute-0 podman[292568]: 2025-11-22 04:09:19.675771891 +0000 UTC m=+0.052983606 container create 4db061cbcc2d45a152c1709ca6d08f467f9cff051f27ba1bae4bedf9670af448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hellman, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:09:19 compute-0 podman[292568]: 2025-11-22 04:09:19.645716966 +0000 UTC m=+0.022928691 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:09:19 compute-0 systemd[1]: Started libpod-conmon-4db061cbcc2d45a152c1709ca6d08f467f9cff051f27ba1bae4bedf9670af448.scope.
Nov 22 04:09:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:09:19 compute-0 podman[292568]: 2025-11-22 04:09:19.81170092 +0000 UTC m=+0.188912635 container init 4db061cbcc2d45a152c1709ca6d08f467f9cff051f27ba1bae4bedf9670af448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:09:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e478 do_prune osdmap full prune enabled
Nov 22 04:09:19 compute-0 podman[292568]: 2025-11-22 04:09:19.823042823 +0000 UTC m=+0.200254528 container start 4db061cbcc2d45a152c1709ca6d08f467f9cff051f27ba1bae4bedf9670af448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hellman, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Nov 22 04:09:19 compute-0 festive_hellman[292585]: 167 167
Nov 22 04:09:19 compute-0 systemd[1]: libpod-4db061cbcc2d45a152c1709ca6d08f467f9cff051f27ba1bae4bedf9670af448.scope: Deactivated successfully.
Nov 22 04:09:19 compute-0 podman[292568]: 2025-11-22 04:09:19.838877051 +0000 UTC m=+0.216088726 container attach 4db061cbcc2d45a152c1709ca6d08f467f9cff051f27ba1bae4bedf9670af448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 04:09:19 compute-0 podman[292568]: 2025-11-22 04:09:19.840017794 +0000 UTC m=+0.217229469 container died 4db061cbcc2d45a152c1709ca6d08f467f9cff051f27ba1bae4bedf9670af448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hellman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:09:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e479 e479: 3 total, 3 up, 3 in
Nov 22 04:09:19 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e479: 3 total, 3 up, 3 in
Nov 22 04:09:19 compute-0 ceph-mon[75011]: pgmap v1809: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.4 KiB/s wr, 37 op/s
Nov 22 04:09:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:09:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:09:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:09:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:09:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:09:19 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:09:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-66c8fd466989398801ed33f44dd88a9ee82d2e568b441810240f9e78357de6ea-merged.mount: Deactivated successfully.
Nov 22 04:09:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2153087015' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2153087015' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:20 compute-0 podman[292568]: 2025-11-22 04:09:20.10886664 +0000 UTC m=+0.486078355 container remove 4db061cbcc2d45a152c1709ca6d08f467f9cff051f27ba1bae4bedf9670af448 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:09:20 compute-0 systemd[1]: libpod-conmon-4db061cbcc2d45a152c1709ca6d08f467f9cff051f27ba1bae4bedf9670af448.scope: Deactivated successfully.
Nov 22 04:09:20 compute-0 podman[292610]: 2025-11-22 04:09:20.379996013 +0000 UTC m=+0.093114564 container create f13829ab3ff05d3265082b98211ad7e7747554ed17fed1b3db5b163ca90f58ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:09:20 compute-0 podman[292610]: 2025-11-22 04:09:20.327036869 +0000 UTC m=+0.040155470 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:09:20 compute-0 systemd[1]: Started libpod-conmon-f13829ab3ff05d3265082b98211ad7e7747554ed17fed1b3db5b163ca90f58ae.scope.
Nov 22 04:09:20 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1c455735afe9aec09d65a140f8a3f4e7966bd4216c8290dc9136b51299c3af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1c455735afe9aec09d65a140f8a3f4e7966bd4216c8290dc9136b51299c3af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1c455735afe9aec09d65a140f8a3f4e7966bd4216c8290dc9136b51299c3af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1c455735afe9aec09d65a140f8a3f4e7966bd4216c8290dc9136b51299c3af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e1c455735afe9aec09d65a140f8a3f4e7966bd4216c8290dc9136b51299c3af/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:20 compute-0 podman[292610]: 2025-11-22 04:09:20.527779266 +0000 UTC m=+0.240897857 container init f13829ab3ff05d3265082b98211ad7e7747554ed17fed1b3db5b163ca90f58ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_benz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:09:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.6 KiB/s wr, 38 op/s
Nov 22 04:09:20 compute-0 podman[292610]: 2025-11-22 04:09:20.542060796 +0000 UTC m=+0.255179317 container start f13829ab3ff05d3265082b98211ad7e7747554ed17fed1b3db5b163ca90f58ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 04:09:20 compute-0 podman[292610]: 2025-11-22 04:09:20.568691612 +0000 UTC m=+0.281810123 container attach f13829ab3ff05d3265082b98211ad7e7747554ed17fed1b3db5b163ca90f58ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:09:20 compute-0 ceph-mon[75011]: osdmap e479: 3 total, 3 up, 3 in
Nov 22 04:09:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2153087015' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2153087015' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:21 compute-0 goofy_benz[292626]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:09:21 compute-0 goofy_benz[292626]: --> relative data size: 1.0
Nov 22 04:09:21 compute-0 goofy_benz[292626]: --> All data devices are unavailable
Nov 22 04:09:21 compute-0 systemd[1]: libpod-f13829ab3ff05d3265082b98211ad7e7747554ed17fed1b3db5b163ca90f58ae.scope: Deactivated successfully.
Nov 22 04:09:21 compute-0 systemd[1]: libpod-f13829ab3ff05d3265082b98211ad7e7747554ed17fed1b3db5b163ca90f58ae.scope: Consumed 1.053s CPU time.
Nov 22 04:09:21 compute-0 podman[292610]: 2025-11-22 04:09:21.634031389 +0000 UTC m=+1.347149900 container died f13829ab3ff05d3265082b98211ad7e7747554ed17fed1b3db5b163ca90f58ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:09:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e1c455735afe9aec09d65a140f8a3f4e7966bd4216c8290dc9136b51299c3af-merged.mount: Deactivated successfully.
Nov 22 04:09:21 compute-0 podman[292610]: 2025-11-22 04:09:21.920996609 +0000 UTC m=+1.634115120 container remove f13829ab3ff05d3265082b98211ad7e7747554ed17fed1b3db5b163ca90f58ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_benz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:09:21 compute-0 systemd[1]: libpod-conmon-f13829ab3ff05d3265082b98211ad7e7747554ed17fed1b3db5b163ca90f58ae.scope: Deactivated successfully.
Nov 22 04:09:21 compute-0 sudo[292503]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:22 compute-0 ceph-mon[75011]: pgmap v1811: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.6 KiB/s wr, 38 op/s
Nov 22 04:09:22 compute-0 sudo[292668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:09:22 compute-0 sudo[292668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:22 compute-0 sudo[292668]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:22 compute-0 sudo[292693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:09:22 compute-0 sudo[292693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:22 compute-0 sudo[292693]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:22 compute-0 sudo[292718]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:09:22 compute-0 sudo[292718]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:22 compute-0 sudo[292718]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:22 compute-0 sudo[292743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:09:22 compute-0 sudo[292743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
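[Note] The sudo sequence above is cephadm's usual host check-in pattern: a cheap `/bin/true` probe to confirm passwordless sudo works, a `which python3` probe to locate an interpreter, then the copied cephadm binary wrapping a containerized `ceph-volume ... lvm list --format json` inventory call. A minimal sketch of that invocation, assuming passwordless sudo for the ceph-admin account; `run_probe` and `run_cephadm_ceph_volume` are hypothetical helper names, while the paths, image digest, fsid, and timeout are copied from the log lines above:

    import json
    import subprocess

    def run_probe(argv):
        # Cheap privilege/interpreter probes, as logged:
        # `sudo /bin/true` and `sudo /bin/which python3`.
        return subprocess.run(["sudo", *argv], check=True,
                              capture_output=True, text=True).stdout.strip()

    def run_cephadm_ceph_volume(fsid, image, volume_args, timeout=895):
        # Mirrors the logged command: sudo /bin/python3 <copied cephadm>
        #   --image <image> --timeout 895 ceph-volume --fsid <fsid> -- <args>
        cephadm_bin = (f"/var/lib/ceph/{fsid}/cephadm."
                       "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
        cmd = ["sudo", "/bin/python3", cephadm_bin,
               "--image", image, "--timeout", str(timeout),
               "ceph-volume", "--fsid", fsid, "--", *volume_args]
        out = subprocess.run(cmd, check=True, capture_output=True,
                             text=True).stdout
        return json.loads(out)

    run_probe(["/bin/true"])
    run_probe(["/bin/which", "python3"])
    lvm = run_cephadm_ceph_volume(
        "7adcc38b-6484-5de6-b879-33a0309153df",
        "quay.io/ceph/ceph@sha256:"
        "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0",
        ["lvm", "list", "--format", "json"])

The JSON this call returns is the payload printed by the short-lived container (name=suspicious_tharp) further below.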
Nov 22 04:09:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.4 KiB/s wr, 54 op/s
Nov 22 04:09:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e479 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:22 compute-0 podman[292809]: 2025-11-22 04:09:22.690238473 +0000 UTC m=+0.045552193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:09:22 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Nov 22 04:09:22 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:22.891046) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:09:22 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Nov 22 04:09:22 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784562891121, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1815, "num_deletes": 280, "total_data_size": 2483394, "memory_usage": 2530272, "flush_reason": "Manual Compaction"}
Nov 22 04:09:22 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Nov 22 04:09:22 compute-0 podman[292809]: 2025-11-22 04:09:22.916525547 +0000 UTC m=+0.271839247 container create 64bb7d3cfe846bfbd42fc2cfc1a04d59ec6bdd7775fd4b209a52c42a589aa27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shirley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 04:09:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:09:23.020 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:09:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:09:23.022 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:09:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:09:23.022 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784563108117, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2438251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34378, "largest_seqno": 36192, "table_properties": {"data_size": 2429551, "index_size": 5386, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18800, "raw_average_key_size": 21, "raw_value_size": 2411991, "raw_average_value_size": 2734, "num_data_blocks": 234, "num_entries": 882, "num_filter_entries": 882, "num_deletions": 280, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763784449, "oldest_key_time": 1763784449, "file_creation_time": 1763784562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 217137 microseconds, and 10382 cpu microseconds.
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:09:23 compute-0 systemd[1]: Started libpod-conmon-64bb7d3cfe846bfbd42fc2cfc1a04d59ec6bdd7775fd4b209a52c42a589aa27a.scope.
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.108178) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2438251 bytes OK
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.108204) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.183906) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.183961) EVENT_LOG_v1 {"time_micros": 1763784563183948, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.183992) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2475149, prev total WAL file size 2501765, number of live WAL files 2.
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.185523) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323538' seq:0, type:0; will stop at (end)
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2381KB)], [71(8896KB)]
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784563185574, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 11547999, "oldest_snapshot_seqno": -1}
Nov 22 04:09:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6715 keys, 11398278 bytes, temperature: kUnknown
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784563312321, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 11398278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11346483, "index_size": 33891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16837, "raw_key_size": 169343, "raw_average_key_size": 25, "raw_value_size": 11219051, "raw_average_value_size": 1670, "num_data_blocks": 1358, "num_entries": 6715, "num_filter_entries": 6715, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763784563, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.312986) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 11398278 bytes
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.383954) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 91.1 rd, 89.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 8.7 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(9.4) write-amplify(4.7) OK, records in: 7273, records dropped: 558 output_compression: NoCompression
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.383987) EVENT_LOG_v1 {"time_micros": 1763784563383973, "job": 40, "event": "compaction_finished", "compaction_time_micros": 126818, "compaction_time_cpu_micros": 44947, "output_level": 6, "num_output_files": 1, "total_output_size": 11398278, "num_input_records": 7273, "num_output_records": 6715, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
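[Note] The flush (job 39) and manual compaction (job 40) above carry enough figures to reproduce the amplification numbers in RocksDB's "compacted to:" summary, assuming the reported ratios are output bytes over L0 input bytes, and total input plus output over L0 input; the byte counts below are copied from the EVENT_LOG_v1 entries and reproduce the logged 4.7 and 9.4 exactly:

    # Byte counts copied from the rocksdb EVENT_LOG_v1 entries above.
    flush_out   = 2_438_251    # table #73: L0 flush output (job 39)
    compact_in  = 11_547_999   # job 40 input_data_size (#73 at L0 + #71 at L6)
    compact_out = 11_398_278   # table #74: compaction output at L6 (job 40)

    write_amplify = compact_out / flush_out             # bytes written per L0 byte
    read_write_amplify = (compact_in + compact_out) / flush_out
    print(f"write-amplify={write_amplify:.1f}")           # 4.7, matching the log
    print(f"read-write-amplify={read_write_amplify:.1f}") # 9.4, matching the log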
Nov 22 04:09:23 compute-0 podman[292809]: 2025-11-22 04:09:23.384233522 +0000 UTC m=+0.739547282 container init 64bb7d3cfe846bfbd42fc2cfc1a04d59ec6bdd7775fd4b209a52c42a589aa27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shirley, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784563385158, "job": 40, "event": "table_file_deletion", "file_number": 73}
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784563388889, "job": 40, "event": "table_file_deletion", "file_number": 71}
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.185400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.388978) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.388985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.388988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.388991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:09:23 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:09:23.388994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:09:23 compute-0 podman[292809]: 2025-11-22 04:09:23.395937978 +0000 UTC m=+0.751251678 container start 64bb7d3cfe846bfbd42fc2cfc1a04d59ec6bdd7775fd4b209a52c42a589aa27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shirley, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 04:09:23 compute-0 fervent_shirley[292826]: 167 167
Nov 22 04:09:23 compute-0 systemd[1]: libpod-64bb7d3cfe846bfbd42fc2cfc1a04d59ec6bdd7775fd4b209a52c42a589aa27a.scope: Deactivated successfully.
Nov 22 04:09:23 compute-0 nova_compute[253461]: 2025-11-22 04:09:23.418 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:23 compute-0 podman[292809]: 2025-11-22 04:09:23.570261543 +0000 UTC m=+0.925575293 container attach 64bb7d3cfe846bfbd42fc2cfc1a04d59ec6bdd7775fd4b209a52c42a589aa27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shirley, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:09:23 compute-0 podman[292809]: 2025-11-22 04:09:23.570799867 +0000 UTC m=+0.926113557 container died 64bb7d3cfe846bfbd42fc2cfc1a04d59ec6bdd7775fd4b209a52c42a589aa27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:09:23 compute-0 nova_compute[253461]: 2025-11-22 04:09:23.732 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f21262972100df0ddbd60f64bef592018b46ff9d2bac12949ae39afe058fad46-merged.mount: Deactivated successfully.
Nov 22 04:09:24 compute-0 ceph-mon[75011]: pgmap v1812: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.4 KiB/s wr, 54 op/s
Nov 22 04:09:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 4.0 KiB/s wr, 76 op/s
Nov 22 04:09:24 compute-0 podman[292809]: 2025-11-22 04:09:24.696636517 +0000 UTC m=+2.051950207 container remove 64bb7d3cfe846bfbd42fc2cfc1a04d59ec6bdd7775fd4b209a52c42a589aa27a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 04:09:24 compute-0 systemd[1]: libpod-conmon-64bb7d3cfe846bfbd42fc2cfc1a04d59ec6bdd7775fd4b209a52c42a589aa27a.scope: Deactivated successfully.
Nov 22 04:09:24 compute-0 podman[292851]: 2025-11-22 04:09:24.945484869 +0000 UTC m=+0.089808022 container create ed77d3179223fc2c3008a878bee53d45fae9a1a6017e43a797538f6b3e0e46f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:09:24 compute-0 podman[292851]: 2025-11-22 04:09:24.891461928 +0000 UTC m=+0.035785091 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:09:25 compute-0 systemd[1]: Started libpod-conmon-ed77d3179223fc2c3008a878bee53d45fae9a1a6017e43a797538f6b3e0e46f3.scope.
Nov 22 04:09:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93563536242ae8122e24bfc35aa1a269aa6abcf6df828669a5475c20ed2860/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93563536242ae8122e24bfc35aa1a269aa6abcf6df828669a5475c20ed2860/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93563536242ae8122e24bfc35aa1a269aa6abcf6df828669a5475c20ed2860/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e93563536242ae8122e24bfc35aa1a269aa6abcf6df828669a5475c20ed2860/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
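[Note] The four kernel lines above are informational, not errors: each bind mount the container pulls in sits on an XFS filesystem created without the bigtime feature, so its inode timestamps saturate at 2038-01-19 (0x7fffffff seconds). A small sketch to check whether a given XFS mount has bigtime enabled, assuming an xfsprogs recent enough to report the flag in `xfs_info` output; the mountpoint argument is illustrative:

    import subprocess

    def xfs_has_bigtime(mountpoint: str) -> bool:
        # xfs_info prints filesystem geometry; newer xfsprogs include a
        # bigtime=0/1 field (bigtime=1 allows timestamps beyond 2038).
        info = subprocess.run(["xfs_info", mountpoint], check=True,
                              capture_output=True, text=True).stdout
        return "bigtime=1" in info

    # Illustrative path; xfs_info expects a mount point or device.
    print(xfs_has_bigtime("/var/lib/containers"))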
Nov 22 04:09:25 compute-0 podman[292851]: 2025-11-22 04:09:25.18592688 +0000 UTC m=+0.330250032 container init ed77d3179223fc2c3008a878bee53d45fae9a1a6017e43a797538f6b3e0e46f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_tharp, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:09:25 compute-0 podman[292851]: 2025-11-22 04:09:25.198295403 +0000 UTC m=+0.342618526 container start ed77d3179223fc2c3008a878bee53d45fae9a1a6017e43a797538f6b3e0e46f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:09:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3791554827' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3791554827' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:25 compute-0 podman[292851]: 2025-11-22 04:09:25.284795043 +0000 UTC m=+0.429118176 container attach ed77d3179223fc2c3008a878bee53d45fae9a1a6017e43a797538f6b3e0e46f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_tharp, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:09:25 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3791554827' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:25 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3791554827' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]: {
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:     "0": [
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:         {
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "devices": [
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "/dev/loop3"
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             ],
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_name": "ceph_lv0",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_size": "21470642176",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "name": "ceph_lv0",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "tags": {
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.cluster_name": "ceph",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.crush_device_class": "",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.encrypted": "0",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.osd_id": "0",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.type": "block",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.vdo": "0"
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             },
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "type": "block",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "vg_name": "ceph_vg0"
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:         }
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:     ],
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:     "1": [
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:         {
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "devices": [
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "/dev/loop4"
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             ],
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_name": "ceph_lv1",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_size": "21470642176",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "name": "ceph_lv1",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "tags": {
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.cluster_name": "ceph",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.crush_device_class": "",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.encrypted": "0",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.osd_id": "1",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.type": "block",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.vdo": "0"
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             },
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "type": "block",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "vg_name": "ceph_vg1"
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:         }
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:     ],
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:     "2": [
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:         {
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "devices": [
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "/dev/loop5"
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             ],
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_name": "ceph_lv2",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_size": "21470642176",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "name": "ceph_lv2",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "tags": {
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.cluster_name": "ceph",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.crush_device_class": "",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.encrypted": "0",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.osd_id": "2",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.type": "block",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:                 "ceph.vdo": "0"
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             },
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "type": "block",
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:             "vg_name": "ceph_vg2"
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:         }
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]:     ]
Nov 22 04:09:25 compute-0 suspicious_tharp[292868]: }
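[Note] The JSON above is the `ceph-volume lvm list --format json` inventory: top-level keys are OSD ids, each mapping to a list of LV records. All three OSDs sit on 21470642176-byte (about 20 GiB) logical volumes backed by loop devices /dev/loop3 to /dev/loop5, consistent with the 60 GiB total in the pgmap lines. A minimal parsing sketch; the input file name is a hypothetical capture of the payload above:

    import json

    with open("lvm_list.json") as fh:   # hypothetical capture of the JSON above
        lvm = json.load(fh)

    for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            size_gib = int(lv["lv_size"]) / 2**30
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"({size_gib:.0f} GiB, osd_fsid={tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']})")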
Nov 22 04:09:25 compute-0 systemd[1]: libpod-ed77d3179223fc2c3008a878bee53d45fae9a1a6017e43a797538f6b3e0e46f3.scope: Deactivated successfully.
Nov 22 04:09:25 compute-0 podman[292851]: 2025-11-22 04:09:25.974845215 +0000 UTC m=+1.119168368 container died ed77d3179223fc2c3008a878bee53d45fae9a1a6017e43a797538f6b3e0e46f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Nov 22 04:09:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-8e93563536242ae8122e24bfc35aa1a269aa6abcf6df828669a5475c20ed2860-merged.mount: Deactivated successfully.
Nov 22 04:09:26 compute-0 podman[292851]: 2025-11-22 04:09:26.310490674 +0000 UTC m=+1.454813817 container remove ed77d3179223fc2c3008a878bee53d45fae9a1a6017e43a797538f6b3e0e46f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_tharp, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Nov 22 04:09:26 compute-0 systemd[1]: libpod-conmon-ed77d3179223fc2c3008a878bee53d45fae9a1a6017e43a797538f6b3e0e46f3.scope: Deactivated successfully.
Nov 22 04:09:26 compute-0 sudo[292743]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:26 compute-0 sudo[292892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:09:26 compute-0 sudo[292892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:26 compute-0 sudo[292892]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:26 compute-0 sudo[292917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:09:26 compute-0 sudo[292917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:26 compute-0 sudo[292917]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.9 KiB/s wr, 63 op/s
Nov 22 04:09:26 compute-0 sudo[292942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:09:26 compute-0 sudo[292942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:26 compute-0 sudo[292942]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:26 compute-0 sudo[292967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:09:26 compute-0 sudo[292967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:26 compute-0 ceph-mon[75011]: pgmap v1813: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 4.0 KiB/s wr, 76 op/s
Nov 22 04:09:27 compute-0 podman[293032]: 2025-11-22 04:09:27.075771666 +0000 UTC m=+0.062753829 container create c361fe04ca9e4e802afc8647cfe1b39d7463220a1142390e2cdc9c13fd8e9bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:09:27 compute-0 systemd[1]: Started libpod-conmon-c361fe04ca9e4e802afc8647cfe1b39d7463220a1142390e2cdc9c13fd8e9bf7.scope.
Nov 22 04:09:27 compute-0 podman[293032]: 2025-11-22 04:09:27.044287527 +0000 UTC m=+0.031269690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:09:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:09:27 compute-0 podman[293032]: 2025-11-22 04:09:27.17714939 +0000 UTC m=+0.164131543 container init c361fe04ca9e4e802afc8647cfe1b39d7463220a1142390e2cdc9c13fd8e9bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 04:09:27 compute-0 podman[293032]: 2025-11-22 04:09:27.186630514 +0000 UTC m=+0.173612647 container start c361fe04ca9e4e802afc8647cfe1b39d7463220a1142390e2cdc9c13fd8e9bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 04:09:27 compute-0 distracted_khorana[293049]: 167 167
Nov 22 04:09:27 compute-0 systemd[1]: libpod-c361fe04ca9e4e802afc8647cfe1b39d7463220a1142390e2cdc9c13fd8e9bf7.scope: Deactivated successfully.
Nov 22 04:09:27 compute-0 podman[293032]: 2025-11-22 04:09:27.193206778 +0000 UTC m=+0.180189010 container attach c361fe04ca9e4e802afc8647cfe1b39d7463220a1142390e2cdc9c13fd8e9bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 04:09:27 compute-0 podman[293032]: 2025-11-22 04:09:27.193891594 +0000 UTC m=+0.180873747 container died c361fe04ca9e4e802afc8647cfe1b39d7463220a1142390e2cdc9c13fd8e9bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:09:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3eb1a48ecb57dfe6a1c17da76f778d37ea8cfa54d241ee8b790f9694fb13131d-merged.mount: Deactivated successfully.
Nov 22 04:09:27 compute-0 podman[293032]: 2025-11-22 04:09:27.259895737 +0000 UTC m=+0.246877860 container remove c361fe04ca9e4e802afc8647cfe1b39d7463220a1142390e2cdc9c13fd8e9bf7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khorana, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:09:27 compute-0 systemd[1]: libpod-conmon-c361fe04ca9e4e802afc8647cfe1b39d7463220a1142390e2cdc9c13fd8e9bf7.scope: Deactivated successfully.
Nov 22 04:09:27 compute-0 podman[293073]: 2025-11-22 04:09:27.464254555 +0000 UTC m=+0.029281118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:09:27 compute-0 podman[293073]: 2025-11-22 04:09:27.667472906 +0000 UTC m=+0.232499409 container create 5c4ef55c1b0788ac821830c8aa507c1246768e4350d993b2d9b9e3eab6d145bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:09:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e479 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:27 compute-0 systemd[1]: Started libpod-conmon-5c4ef55c1b0788ac821830c8aa507c1246768e4350d993b2d9b9e3eab6d145bc.scope.
Nov 22 04:09:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab184e5a0eff8e51feb44b6d67ebfee615a1dc50f95848cb01d54efc7dbab357/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab184e5a0eff8e51feb44b6d67ebfee615a1dc50f95848cb01d54efc7dbab357/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab184e5a0eff8e51feb44b6d67ebfee615a1dc50f95848cb01d54efc7dbab357/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab184e5a0eff8e51feb44b6d67ebfee615a1dc50f95848cb01d54efc7dbab357/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:09:27 compute-0 podman[293073]: 2025-11-22 04:09:27.980706784 +0000 UTC m=+0.545733307 container init 5c4ef55c1b0788ac821830c8aa507c1246768e4350d993b2d9b9e3eab6d145bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bohr, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:09:27 compute-0 ceph-mon[75011]: pgmap v1814: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.9 KiB/s wr, 63 op/s
Nov 22 04:09:27 compute-0 podman[293073]: 2025-11-22 04:09:27.992735167 +0000 UTC m=+0.557761660 container start 5c4ef55c1b0788ac821830c8aa507c1246768e4350d993b2d9b9e3eab6d145bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:09:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1618588860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:28 compute-0 podman[293073]: 2025-11-22 04:09:28.003455609 +0000 UTC m=+0.568482122 container attach 5c4ef55c1b0788ac821830c8aa507c1246768e4350d993b2d9b9e3eab6d145bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 04:09:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1618588860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
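[Note] The paired `df` / `osd pool get-quota` dispatches from entity='client.openstack' recur throughout this window; they look like a capacity poll against the volumes pool, the pattern OpenStack's RBD drivers use when reporting backend stats. A hedged sketch issuing the same two mon commands through the python3-rados bindings; the conffile path is illustrative, the client name and command bodies are taken from the audit lines:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    try:
        for cmd in ({"prefix": "df", "format": "json"},
                    {"prefix": "osd pool get-quota", "pool": "volumes",
                     "format": "json"}):
            # mon_command takes the command as a JSON string plus an input
            # buffer and returns (retcode, output buffer, error string).
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
            if ret == 0:
                print(cmd["prefix"], "->", json.loads(out))
            else:
                print(cmd["prefix"], "failed:", ret, errs)
    finally:
        cluster.shutdown()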
Nov 22 04:09:28 compute-0 nova_compute[253461]: 2025-11-22 04:09:28.421 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:28 compute-0 sshd-session[292841]: Invalid user ubnt from 27.79.43.64 port 40674
Nov 22 04:09:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.9 KiB/s wr, 71 op/s
Nov 22 04:09:28 compute-0 nova_compute[253461]: 2025-11-22 04:09:28.734 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:29 compute-0 pensive_bohr[293089]: {
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "osd_id": 1,
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "type": "bluestore"
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:     },
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "osd_id": 0,
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "type": "bluestore"
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:     },
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "osd_id": 2,
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:         "type": "bluestore"
Nov 22 04:09:29 compute-0 pensive_bohr[293089]:     }
Nov 22 04:09:29 compute-0 pensive_bohr[293089]: }
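[Note] This `ceph-volume raw list` payload describes the same three BlueStore OSDs as the earlier `lvm list`, keyed by osd_uuid instead of OSD id and naming each device through its /dev/mapper path rather than the /dev/<vg>/<lv> alias for the same LV. A small consistency-check sketch; both file names are hypothetical captures of the two payloads printed in this log:

    import json

    with open("raw_list.json") as fh:
        raw = json.load(fh)          # osd_uuid -> record (payload above)
    with open("lvm_list.json") as fh:
        lvm = json.load(fh)          # osd_id -> [LV records] (earlier payload)

    # Map osd_fsid (from the LV tags) back to the owning OSD id.
    lvm_by_fsid = {lv["tags"]["ceph.osd_fsid"]: int(osd_id)
                   for osd_id, lvs in lvm.items() for lv in lvs}

    for osd_uuid, rec in sorted(raw.items()):
        assert rec["type"] == "bluestore"
        assert lvm_by_fsid[osd_uuid] == rec["osd_id"], osd_uuid
        print(f"osd.{rec['osd_id']} consistent: {rec['device']}")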
Nov 22 04:09:29 compute-0 systemd[1]: libpod-5c4ef55c1b0788ac821830c8aa507c1246768e4350d993b2d9b9e3eab6d145bc.scope: Deactivated successfully.
Nov 22 04:09:29 compute-0 systemd[1]: libpod-5c4ef55c1b0788ac821830c8aa507c1246768e4350d993b2d9b9e3eab6d145bc.scope: Consumed 1.040s CPU time.
Nov 22 04:09:29 compute-0 podman[293073]: 2025-11-22 04:09:29.028215403 +0000 UTC m=+1.593241876 container died 5c4ef55c1b0788ac821830c8aa507c1246768e4350d993b2d9b9e3eab6d145bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 04:09:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1618588860' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1618588860' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e479 do_prune osdmap full prune enabled
Nov 22 04:09:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e480 e480: 3 total, 3 up, 3 in
Nov 22 04:09:29 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e480: 3 total, 3 up, 3 in
Nov 22 04:09:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab184e5a0eff8e51feb44b6d67ebfee615a1dc50f95848cb01d54efc7dbab357-merged.mount: Deactivated successfully.
Nov 22 04:09:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1852680293' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1852680293' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:29 compute-0 podman[293073]: 2025-11-22 04:09:29.561205815 +0000 UTC m=+2.126232318 container remove 5c4ef55c1b0788ac821830c8aa507c1246768e4350d993b2d9b9e3eab6d145bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:09:29 compute-0 systemd[1]: libpod-conmon-5c4ef55c1b0788ac821830c8aa507c1246768e4350d993b2d9b9e3eab6d145bc.scope: Deactivated successfully.
Nov 22 04:09:29 compute-0 sudo[292967]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:09:29 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:09:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:09:30 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:09:30 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 5f4377e2-fcaf-44ca-88ee-3b8cbe5f23ac does not exist
Nov 22 04:09:30 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 34ce8250-6d75-4134-9d28-dd70fbc9025b does not exist
Nov 22 04:09:30 compute-0 sudo[293136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:09:30 compute-0 sudo[293136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:30 compute-0 sudo[293136]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:30 compute-0 sudo[293161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:09:30 compute-0 sudo[293161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:09:30 compute-0 sudo[293161]: pam_unix(sudo:session): session closed for user root
Nov 22 04:09:30 compute-0 ceph-mon[75011]: pgmap v1815: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 3.9 KiB/s wr, 71 op/s
Nov 22 04:09:30 compute-0 ceph-mon[75011]: osdmap e480: 3 total, 3 up, 3 in
Nov 22 04:09:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1852680293' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1852680293' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:09:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:09:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 4.1 KiB/s wr, 94 op/s
Nov 22 04:09:31 compute-0 ceph-mon[75011]: pgmap v1817: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 4.1 KiB/s wr, 94 op/s
Nov 22 04:09:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:09:32.265 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:09:32 compute-0 nova_compute[253461]: 2025-11-22 04:09:32.266 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:32 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:09:32.267 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:09:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 4.3 KiB/s wr, 95 op/s
Nov 22 04:09:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:33 compute-0 nova_compute[253461]: 2025-11-22 04:09:33.423 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:33 compute-0 nova_compute[253461]: 2025-11-22 04:09:33.735 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:33 compute-0 ceph-mon[75011]: pgmap v1818: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 4.3 KiB/s wr, 95 op/s
Nov 22 04:09:34 compute-0 sshd-session[292841]: Connection closed by invalid user ubnt 27.79.43.64 port 40674 [preauth]
Nov 22 04:09:34 compute-0 podman[293186]: 2025-11-22 04:09:34.43310728 +0000 UTC m=+0.094961183 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 04:09:34 compute-0 podman[293187]: 2025-11-22 04:09:34.506955203 +0000 UTC m=+0.168235395 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:09:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.2 KiB/s wr, 91 op/s
Nov 22 04:09:35 compute-0 ceph-mon[75011]: pgmap v1819: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 4.2 KiB/s wr, 91 op/s
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:09:36
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'volumes', 'images', 'backups', 'vms', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root']
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:09:36 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:09:36.269 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.5 KiB/s wr, 83 op/s
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:09:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:09:37 compute-0 nova_compute[253461]: 2025-11-22 04:09:37.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:09:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e480 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e480 do_prune osdmap full prune enabled
Nov 22 04:09:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e481 e481: 3 total, 3 up, 3 in
Nov 22 04:09:37 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e481: 3 total, 3 up, 3 in
Nov 22 04:09:38 compute-0 ceph-mon[75011]: pgmap v1820: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.5 KiB/s wr, 83 op/s
Nov 22 04:09:38 compute-0 ceph-mon[75011]: osdmap e481: 3 total, 3 up, 3 in
Nov 22 04:09:38 compute-0 nova_compute[253461]: 2025-11-22 04:09:38.428 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 49 op/s
Nov 22 04:09:38 compute-0 nova_compute[253461]: 2025-11-22 04:09:38.777 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:40 compute-0 ceph-mon[75011]: pgmap v1822: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 49 op/s
Nov 22 04:09:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Nov 22 04:09:40 compute-0 nova_compute[253461]: 2025-11-22 04:09:40.574 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:09:40 compute-0 nova_compute[253461]: 2025-11-22 04:09:40.575 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 04:09:42 compute-0 ceph-mon[75011]: pgmap v1823: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 30 op/s
Nov 22 04:09:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 15 op/s
Nov 22 04:09:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e481 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:43 compute-0 nova_compute[253461]: 2025-11-22 04:09:43.430 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:43 compute-0 nova_compute[253461]: 2025-11-22 04:09:43.780 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:44 compute-0 nova_compute[253461]: 2025-11-22 04:09:44.515 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:09:44 compute-0 ceph-mon[75011]: pgmap v1824: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 15 op/s
Nov 22 04:09:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 B/s wr, 0 op/s
Nov 22 04:09:45 compute-0 nova_compute[253461]: 2025-11-22 04:09:45.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:09:45 compute-0 nova_compute[253461]: 2025-11-22 04:09:45.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:09:45 compute-0 nova_compute[253461]: 2025-11-22 04:09:45.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:09:45 compute-0 ceph-mon[75011]: pgmap v1825: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 B/s wr, 0 op/s
Nov 22 04:09:45 compute-0 nova_compute[253461]: 2025-11-22 04:09:45.543 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:09:45 compute-0 nova_compute[253461]: 2025-11-22 04:09:45.544 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:09:45 compute-0 nova_compute[253461]: 2025-11-22 04:09:45.571 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:09:45 compute-0 nova_compute[253461]: 2025-11-22 04:09:45.571 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:09:45 compute-0 nova_compute[253461]: 2025-11-22 04:09:45.572 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:09:45 compute-0 nova_compute[253461]: 2025-11-22 04:09:45.572 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:09:45 compute-0 nova_compute[253461]: 2025-11-22 04:09:45.573 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:09:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:09:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1611984270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:09:46 compute-0 nova_compute[253461]: 2025-11-22 04:09:46.101 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:09:46 compute-0 nova_compute[253461]: 2025-11-22 04:09:46.319 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:09:46 compute-0 nova_compute[253461]: 2025-11-22 04:09:46.321 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4422MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:09:46 compute-0 nova_compute[253461]: 2025-11-22 04:09:46.321 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:09:46 compute-0 nova_compute[253461]: 2025-11-22 04:09:46.322 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034720526470013676 of space, bias 1.0, pg target 0.10416157941004103 quantized to 32 (current 32)
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:09:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:09:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1611984270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:09:46 compute-0 nova_compute[253461]: 2025-11-22 04:09:46.607 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:09:46 compute-0 nova_compute[253461]: 2025-11-22 04:09:46.608 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:09:46 compute-0 nova_compute[253461]: 2025-11-22 04:09:46.681 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:09:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:09:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1000949816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:09:47 compute-0 nova_compute[253461]: 2025-11-22 04:09:47.170 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:09:47 compute-0 nova_compute[253461]: 2025-11-22 04:09:47.177 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:09:47 compute-0 nova_compute[253461]: 2025-11-22 04:09:47.210 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:09:47 compute-0 nova_compute[253461]: 2025-11-22 04:09:47.213 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:09:47 compute-0 nova_compute[253461]: 2025-11-22 04:09:47.213 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:09:47 compute-0 ceph-mon[75011]: pgmap v1826: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s
Nov 22 04:09:47 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1000949816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:09:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e481 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2031711719' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2031711719' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:48 compute-0 nova_compute[253461]: 2025-11-22 04:09:48.432 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 193 B/s rd, 483 B/s wr, 1 op/s
Nov 22 04:09:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4016232878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4016232878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:48 compute-0 nova_compute[253461]: 2025-11-22 04:09:48.781 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:48 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2031711719' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:48 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2031711719' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:48 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4016232878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:48 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4016232878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:49 compute-0 nova_compute[253461]: 2025-11-22 04:09:49.100 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:09:49 compute-0 nova_compute[253461]: 2025-11-22 04:09:49.101 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:09:49 compute-0 nova_compute[253461]: 2025-11-22 04:09:49.101 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:09:49 compute-0 nova_compute[253461]: 2025-11-22 04:09:49.101 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:09:49 compute-0 nova_compute[253461]: 2025-11-22 04:09:49.101 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:09:49 compute-0 podman[293273]: 2025-11-22 04:09:49.392153971 +0000 UTC m=+0.074206624 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 22 04:09:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/762077490' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/762077490' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:50 compute-0 ceph-mon[75011]: pgmap v1827: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 193 B/s rd, 483 B/s wr, 1 op/s
Nov 22 04:09:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/762077490' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/762077490' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.7 KiB/s wr, 19 op/s
Nov 22 04:09:51 compute-0 nova_compute[253461]: 2025-11-22 04:09:51.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:09:51 compute-0 nova_compute[253461]: 2025-11-22 04:09:51.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:09:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1050715703' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1050715703' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:52 compute-0 ceph-mon[75011]: pgmap v1828: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.7 KiB/s wr, 19 op/s
Nov 22 04:09:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1050715703' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1050715703' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.7 KiB/s wr, 46 op/s
Nov 22 04:09:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:09:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2937944850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:09:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2937944850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e481 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2937944850' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:09:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2937944850' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:09:53 compute-0 nova_compute[253461]: 2025-11-22 04:09:53.433 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:53 compute-0 nova_compute[253461]: 2025-11-22 04:09:53.783 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:54 compute-0 ceph-mon[75011]: pgmap v1829: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.7 KiB/s wr, 46 op/s
Nov 22 04:09:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.7 KiB/s wr, 70 op/s
Nov 22 04:09:56 compute-0 ceph-mon[75011]: pgmap v1830: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 51 KiB/s rd, 2.7 KiB/s wr, 70 op/s
Nov 22 04:09:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.7 KiB/s wr, 71 op/s
Nov 22 04:09:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e481 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:09:58 compute-0 ceph-mon[75011]: pgmap v1831: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.7 KiB/s wr, 71 op/s
Nov 22 04:09:58 compute-0 nova_compute[253461]: 2025-11-22 04:09:58.435 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:09:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.9 KiB/s wr, 72 op/s
Nov 22 04:09:58 compute-0 nova_compute[253461]: 2025-11-22 04:09:58.785 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e481 do_prune osdmap full prune enabled
Nov 22 04:10:00 compute-0 ceph-mon[75011]: pgmap v1832: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.9 KiB/s wr, 72 op/s
Nov 22 04:10:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e482 e482: 3 total, 3 up, 3 in
Nov 22 04:10:00 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e482: 3 total, 3 up, 3 in
Nov 22 04:10:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3141276223' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3141276223' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 KiB/s wr, 63 op/s
Nov 22 04:10:01 compute-0 ceph-mon[75011]: osdmap e482: 3 total, 3 up, 3 in
Nov 22 04:10:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3141276223' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3141276223' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:01 compute-0 nova_compute[253461]: 2025-11-22 04:10:01.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:01 compute-0 nova_compute[253461]: 2025-11-22 04:10:01.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 04:10:01 compute-0 nova_compute[253461]: 2025-11-22 04:10:01.447 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 04:10:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e482 do_prune osdmap full prune enabled
Nov 22 04:10:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e483 e483: 3 total, 3 up, 3 in
Nov 22 04:10:02 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e483: 3 total, 3 up, 3 in
Nov 22 04:10:02 compute-0 ceph-mon[75011]: pgmap v1834: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 1.8 KiB/s wr, 63 op/s
Nov 22 04:10:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.2 KiB/s wr, 6 op/s
Nov 22 04:10:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e483 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:03 compute-0 ceph-mon[75011]: osdmap e483: 3 total, 3 up, 3 in
Nov 22 04:10:03 compute-0 nova_compute[253461]: 2025-11-22 04:10:03.437 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:03 compute-0 nova_compute[253461]: 2025-11-22 04:10:03.787 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:04 compute-0 ceph-mon[75011]: pgmap v1836: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1.2 KiB/s wr, 6 op/s
Nov 22 04:10:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.7 KiB/s wr, 32 op/s
Nov 22 04:10:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2308701590' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:04 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2308701590' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:05 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2308701590' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:05 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2308701590' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:05 compute-0 podman[293294]: 2025-11-22 04:10:05.397805019 +0000 UTC m=+0.071149214 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 22 04:10:05 compute-0 podman[293295]: 2025-11-22 04:10:05.460138176 +0000 UTC m=+0.124995379 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 04:10:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3255957650' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3255957650' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:06 compute-0 ceph-mon[75011]: pgmap v1837: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.7 KiB/s wr, 32 op/s
Nov 22 04:10:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3255957650' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:06 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3255957650' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:10:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:10:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:10:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:10:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:10:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:10:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 4.5 KiB/s wr, 37 op/s
Nov 22 04:10:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e483 do_prune osdmap full prune enabled
Nov 22 04:10:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e484 e484: 3 total, 3 up, 3 in
Nov 22 04:10:07 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e484: 3 total, 3 up, 3 in
Nov 22 04:10:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2386208792' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2386208792' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1098907259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1098907259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:08 compute-0 ceph-mon[75011]: pgmap v1838: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 4.5 KiB/s wr, 37 op/s
Nov 22 04:10:08 compute-0 ceph-mon[75011]: osdmap e484: 3 total, 3 up, 3 in
Nov 22 04:10:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2386208792' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2386208792' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1098907259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1098907259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2448728690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2448728690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:08 compute-0 nova_compute[253461]: 2025-11-22 04:10:08.439 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 5.5 KiB/s wr, 63 op/s
Nov 22 04:10:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1230199891' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1230199891' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:08 compute-0 nova_compute[253461]: 2025-11-22 04:10:08.789 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:08 compute-0 sshd-session[293339]: Invalid user squid from 27.79.43.64 port 32960
Nov 22 04:10:09 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2448728690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:09 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2448728690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:09 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1230199891' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:09 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1230199891' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:09 compute-0 sshd-session[293339]: Connection closed by invalid user squid 27.79.43.64 port 32960 [preauth]
Nov 22 04:10:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2345533323' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2345533323' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:10 compute-0 ceph-mon[75011]: pgmap v1840: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 5.5 KiB/s wr, 63 op/s
Nov 22 04:10:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2345533323' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2345533323' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 6.4 KiB/s wr, 187 op/s
Nov 22 04:10:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e484 do_prune osdmap full prune enabled
Nov 22 04:10:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e485 e485: 3 total, 3 up, 3 in
Nov 22 04:10:11 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e485: 3 total, 3 up, 3 in
Nov 22 04:10:12 compute-0 ceph-mon[75011]: pgmap v1841: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 6.4 KiB/s wr, 187 op/s
Nov 22 04:10:12 compute-0 ceph-mon[75011]: osdmap e485: 3 total, 3 up, 3 in
Nov 22 04:10:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 6.1 KiB/s wr, 185 op/s
Nov 22 04:10:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:13 compute-0 nova_compute[253461]: 2025-11-22 04:10:13.441 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:13 compute-0 nova_compute[253461]: 2025-11-22 04:10:13.790 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2325102969' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2325102969' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:14 compute-0 ceph-mon[75011]: pgmap v1843: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 137 KiB/s rd, 6.1 KiB/s wr, 185 op/s
Nov 22 04:10:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2325102969' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2325102969' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 5.7 KiB/s wr, 224 op/s
Nov 22 04:10:16 compute-0 ceph-mon[75011]: pgmap v1844: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 170 KiB/s rd, 5.7 KiB/s wr, 224 op/s
Nov 22 04:10:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 3.7 KiB/s wr, 168 op/s
Nov 22 04:10:16 compute-0 ovn_controller[152691]: 2025-11-22T04:10:16Z|00250|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 22 04:10:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e485 do_prune osdmap full prune enabled
Nov 22 04:10:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e486 e486: 3 total, 3 up, 3 in
Nov 22 04:10:17 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e486: 3 total, 3 up, 3 in
Nov 22 04:10:18 compute-0 ceph-mon[75011]: pgmap v1845: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 130 KiB/s rd, 3.7 KiB/s wr, 168 op/s
Nov 22 04:10:18 compute-0 ceph-mon[75011]: osdmap e486: 3 total, 3 up, 3 in
Nov 22 04:10:18 compute-0 nova_compute[253461]: 2025-11-22 04:10:18.443 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 63 op/s
Nov 22 04:10:18 compute-0 nova_compute[253461]: 2025-11-22 04:10:18.792 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:20 compute-0 podman[293341]: 2025-11-22 04:10:20.423936659 +0000 UTC m=+0.091564745 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:10:20 compute-0 ceph-mon[75011]: pgmap v1847: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 63 op/s
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.458585) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784620458642, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 857, "num_deletes": 254, "total_data_size": 1025713, "memory_usage": 1042760, "flush_reason": "Manual Compaction"}
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784620475721, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 1014429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36193, "largest_seqno": 37049, "table_properties": {"data_size": 1010036, "index_size": 2045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 10296, "raw_average_key_size": 20, "raw_value_size": 1000988, "raw_average_value_size": 1970, "num_data_blocks": 90, "num_entries": 508, "num_filter_entries": 508, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763784563, "oldest_key_time": 1763784563, "file_creation_time": 1763784620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 17198 microseconds, and 4498 cpu microseconds.
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.475782) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 1014429 bytes OK
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.475813) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.483560) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.483594) EVENT_LOG_v1 {"time_micros": 1763784620483584, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.483621) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1021392, prev total WAL file size 1021392, number of live WAL files 2.
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.484417) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(990KB)], [74(10MB)]
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784620484492, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12412707, "oldest_snapshot_seqno": -1}
Nov 22 04:10:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.7 KiB/s wr, 54 op/s
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6701 keys, 10688522 bytes, temperature: kUnknown
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784620571670, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10688522, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10637607, "index_size": 33075, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 169920, "raw_average_key_size": 25, "raw_value_size": 10511065, "raw_average_value_size": 1568, "num_data_blocks": 1315, "num_entries": 6701, "num_filter_entries": 6701, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763784620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.572016) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10688522 bytes
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.574475) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.2 rd, 122.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.9 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(22.8) write-amplify(10.5) OK, records in: 7223, records dropped: 522 output_compression: NoCompression
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.574507) EVENT_LOG_v1 {"time_micros": 1763784620574493, "job": 42, "event": "compaction_finished", "compaction_time_micros": 87310, "compaction_time_cpu_micros": 31280, "output_level": 6, "num_output_files": 1, "total_output_size": 10688522, "num_input_records": 7223, "num_output_records": 6701, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784620575009, "job": 42, "event": "table_file_deletion", "file_number": 76}
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784620579169, "job": 42, "event": "table_file_deletion", "file_number": 74}
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.484290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.579252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.579259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.579263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.579267) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:10:20 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:10:20.579271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:10:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:22 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/193780560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:22 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/193780560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:22 compute-0 ceph-mon[75011]: pgmap v1848: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.7 KiB/s wr, 54 op/s
Nov 22 04:10:22 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/193780560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:22 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/193780560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 22 04:10:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e486 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:10:23.022 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:10:23.023 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:10:23.023 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:23 compute-0 nova_compute[253461]: 2025-11-22 04:10:23.445 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2074725968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2074725968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:23 compute-0 nova_compute[253461]: 2025-11-22 04:10:23.794 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:24 compute-0 ceph-mon[75011]: pgmap v1849: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Nov 22 04:10:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2074725968' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2074725968' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.2 KiB/s wr, 24 op/s
Nov 22 04:10:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2996595328' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2996595328' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:25 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2996595328' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:25 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2996595328' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/344651091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/344651091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:26 compute-0 ceph-mon[75011]: pgmap v1850: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.2 KiB/s wr, 24 op/s
Nov 22 04:10:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/344651091' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/344651091' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.7 KiB/s wr, 26 op/s
Nov 22 04:10:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4012184415' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4012184415' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4012184415' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4012184415' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e486 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:28 compute-0 nova_compute[253461]: 2025-11-22 04:10:28.447 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:28 compute-0 ceph-mon[75011]: pgmap v1851: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.7 KiB/s wr, 26 op/s
Nov 22 04:10:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 3.4 KiB/s wr, 40 op/s
Nov 22 04:10:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3113884908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3113884908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:28 compute-0 nova_compute[253461]: 2025-11-22 04:10:28.796 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:29 compute-0 ceph-mon[75011]: pgmap v1852: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 3.4 KiB/s wr, 40 op/s
Nov 22 04:10:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3113884908' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3113884908' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:30 compute-0 sudo[293361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:10:30 compute-0 sudo[293361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:30 compute-0 sudo[293361]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:30 compute-0 sudo[293386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:10:30 compute-0 sudo[293386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:30 compute-0 sudo[293386]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:30 compute-0 sudo[293411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:10:30 compute-0 sudo[293411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:30 compute-0 sudo[293411]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:30 compute-0 sudo[293436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:10:30 compute-0 sudo[293436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.0 KiB/s wr, 73 op/s
Nov 22 04:10:31 compute-0 sudo[293436]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:10:31 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:10:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:10:31 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:10:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:10:31 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:10:31 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 55c78103-d487-4510-a26d-9183be87f163 does not exist
Nov 22 04:10:31 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev e4c5df5e-1e5c-46f6-b94a-ed0ae9ab954c does not exist
Nov 22 04:10:31 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev b931e337-804c-47a0-9482-9d6ba5d712b4 does not exist
Nov 22 04:10:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:10:31 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:10:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:10:31 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:10:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:10:31 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:10:31 compute-0 sudo[293493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:10:31 compute-0 sudo[293493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:31 compute-0 sudo[293493]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:31 compute-0 sudo[293518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:10:31 compute-0 sudo[293518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:31 compute-0 sudo[293518]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:31 compute-0 sudo[293543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:10:31 compute-0 sudo[293543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:31 compute-0 sudo[293543]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:31 compute-0 sudo[293568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:10:31 compute-0 sudo[293568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:31 compute-0 ceph-mon[75011]: pgmap v1853: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.0 KiB/s wr, 73 op/s
Nov 22 04:10:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:10:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:10:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:10:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:10:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:10:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:10:31 compute-0 podman[293634]: 2025-11-22 04:10:31.89704402 +0000 UTC m=+0.038517506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:10:32 compute-0 podman[293634]: 2025-11-22 04:10:32.144090359 +0000 UTC m=+0.285563785 container create 03999894c74b44610c9ae4292f7028fe6f393a60b84dd234f4aaf63a4dbe27d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 04:10:32 compute-0 systemd[1]: Started libpod-conmon-03999894c74b44610c9ae4292f7028fe6f393a60b84dd234f4aaf63a4dbe27d8.scope.
Nov 22 04:10:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:10:32 compute-0 podman[293634]: 2025-11-22 04:10:32.2990979 +0000 UTC m=+0.440571396 container init 03999894c74b44610c9ae4292f7028fe6f393a60b84dd234f4aaf63a4dbe27d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:10:32 compute-0 podman[293634]: 2025-11-22 04:10:32.311507392 +0000 UTC m=+0.452980828 container start 03999894c74b44610c9ae4292f7028fe6f393a60b84dd234f4aaf63a4dbe27d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 04:10:32 compute-0 crazy_bohr[293651]: 167 167
Nov 22 04:10:32 compute-0 systemd[1]: libpod-03999894c74b44610c9ae4292f7028fe6f393a60b84dd234f4aaf63a4dbe27d8.scope: Deactivated successfully.
Nov 22 04:10:32 compute-0 podman[293634]: 2025-11-22 04:10:32.327616772 +0000 UTC m=+0.469090258 container attach 03999894c74b44610c9ae4292f7028fe6f393a60b84dd234f4aaf63a4dbe27d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:10:32 compute-0 podman[293634]: 2025-11-22 04:10:32.33001391 +0000 UTC m=+0.471487336 container died 03999894c74b44610c9ae4292f7028fe6f393a60b84dd234f4aaf63a4dbe27d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bohr, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 04:10:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-50bc7ca3c72a88e3e36955db9606272d1ea92b49ca68ee0d319d6cf18445b4dc-merged.mount: Deactivated successfully.
Nov 22 04:10:32 compute-0 podman[293634]: 2025-11-22 04:10:32.520806302 +0000 UTC m=+0.662279738 container remove 03999894c74b44610c9ae4292f7028fe6f393a60b84dd234f4aaf63a4dbe27d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_bohr, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:10:32 compute-0 systemd[1]: libpod-conmon-03999894c74b44610c9ae4292f7028fe6f393a60b84dd234f4aaf63a4dbe27d8.scope: Deactivated successfully.
Nov 22 04:10:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.4 KiB/s wr, 86 op/s
Nov 22 04:10:32 compute-0 podman[293675]: 2025-11-22 04:10:32.754388221 +0000 UTC m=+0.067306169 container create c370e8792d012f34d0a78dd838931561c1b2e7c5575dde672b5b910def58e667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:10:32 compute-0 systemd[1]: Started libpod-conmon-c370e8792d012f34d0a78dd838931561c1b2e7c5575dde672b5b910def58e667.scope.
Nov 22 04:10:32 compute-0 podman[293675]: 2025-11-22 04:10:32.728614115 +0000 UTC m=+0.041532083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:10:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714a4a2431a22daba4c485f1b1af3e0d73aeb86c1dd959c0316d13a5fda01689/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714a4a2431a22daba4c485f1b1af3e0d73aeb86c1dd959c0316d13a5fda01689/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714a4a2431a22daba4c485f1b1af3e0d73aeb86c1dd959c0316d13a5fda01689/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714a4a2431a22daba4c485f1b1af3e0d73aeb86c1dd959c0316d13a5fda01689/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/714a4a2431a22daba4c485f1b1af3e0d73aeb86c1dd959c0316d13a5fda01689/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:32 compute-0 podman[293675]: 2025-11-22 04:10:32.881917318 +0000 UTC m=+0.194835316 container init c370e8792d012f34d0a78dd838931561c1b2e7c5575dde672b5b910def58e667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:10:32 compute-0 podman[293675]: 2025-11-22 04:10:32.90030684 +0000 UTC m=+0.213224798 container start c370e8792d012f34d0a78dd838931561c1b2e7c5575dde672b5b910def58e667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:10:32 compute-0 podman[293675]: 2025-11-22 04:10:32.905492695 +0000 UTC m=+0.218410703 container attach c370e8792d012f34d0a78dd838931561c1b2e7c5575dde672b5b910def58e667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:10:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e486 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:33 compute-0 nova_compute[253461]: 2025-11-22 04:10:33.450 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:33 compute-0 nova_compute[253461]: 2025-11-22 04:10:33.798 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:33 compute-0 ceph-mon[75011]: pgmap v1854: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 3.4 KiB/s wr, 86 op/s
Nov 22 04:10:33 compute-0 trusting_hopper[293692]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:10:33 compute-0 trusting_hopper[293692]: --> relative data size: 1.0
Nov 22 04:10:33 compute-0 trusting_hopper[293692]: --> All data devices are unavailable
Nov 22 04:10:34 compute-0 systemd[1]: libpod-c370e8792d012f34d0a78dd838931561c1b2e7c5575dde672b5b910def58e667.scope: Deactivated successfully.
Nov 22 04:10:34 compute-0 systemd[1]: libpod-c370e8792d012f34d0a78dd838931561c1b2e7c5575dde672b5b910def58e667.scope: Consumed 1.077s CPU time.
Nov 22 04:10:34 compute-0 podman[293675]: 2025-11-22 04:10:34.024854414 +0000 UTC m=+1.337772372 container died c370e8792d012f34d0a78dd838931561c1b2e7c5575dde672b5b910def58e667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 04:10:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-714a4a2431a22daba4c485f1b1af3e0d73aeb86c1dd959c0316d13a5fda01689-merged.mount: Deactivated successfully.
Nov 22 04:10:34 compute-0 podman[293675]: 2025-11-22 04:10:34.109685917 +0000 UTC m=+1.422603875 container remove c370e8792d012f34d0a78dd838931561c1b2e7c5575dde672b5b910def58e667 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Nov 22 04:10:34 compute-0 systemd[1]: libpod-conmon-c370e8792d012f34d0a78dd838931561c1b2e7c5575dde672b5b910def58e667.scope: Deactivated successfully.
Nov 22 04:10:34 compute-0 sudo[293568]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:34 compute-0 sudo[293734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:10:34 compute-0 sudo[293734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:34 compute-0 sudo[293734]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:34 compute-0 sudo[293759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:10:34 compute-0 sudo[293759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:34 compute-0 sudo[293759]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:34 compute-0 sudo[293784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:10:34 compute-0 sudo[293784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:34 compute-0 sudo[293784]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:34 compute-0 sudo[293809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:10:34 compute-0 sudo[293809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
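[annotation] The sudo line above is cephadm's usual inventory pattern: the mgr re-invokes the bundled copy of cephadm (/var/lib/ceph/<fsid>/cephadm.<digest>) as root and passes everything after "--" straight through to a containerized ceph-volume. A minimal sketch of issuing the same call by hand, assuming a cephadm binary on PATH (the lvm_list() wrapper is hypothetical; the flags mirror the logged command line):

#!/usr/bin/env python3
# Hand-run equivalent of the logged inventory call (a sketch, not
# cephadm's own code). Assumes `cephadm` is on PATH; the logged host
# instead runs its bundled copy under /var/lib/ceph/<fsid>/.
import json
import subprocess

FSID = "7adcc38b-6484-5de6-b879-33a0309153df"
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

def lvm_list():
    # Everything after "--" is forwarded to ceph-volume inside the
    # container, exactly as the sudo line above shows.
    proc = subprocess.run(
        ["sudo", "cephadm", "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID,
         "--", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True)
    return json.loads(proc.stdout)

if __name__ == "__main__":
    print(json.dumps(lvm_list(), indent=4))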
Nov 22 04:10:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 2.8 KiB/s wr, 84 op/s
Nov 22 04:10:34 compute-0 podman[293875]: 2025-11-22 04:10:34.993167282 +0000 UTC m=+0.067891577 container create 804cd88bf8cee6e6ecd9d924a522ca431357078ea0632357da5d178dd5ea05a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sutherland, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:10:35 compute-0 systemd[1]: Started libpod-conmon-804cd88bf8cee6e6ecd9d924a522ca431357078ea0632357da5d178dd5ea05a4.scope.
Nov 22 04:10:35 compute-0 podman[293875]: 2025-11-22 04:10:34.964571257 +0000 UTC m=+0.039295612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:10:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:10:35 compute-0 podman[293875]: 2025-11-22 04:10:35.087741826 +0000 UTC m=+0.162466181 container init 804cd88bf8cee6e6ecd9d924a522ca431357078ea0632357da5d178dd5ea05a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:10:35 compute-0 podman[293875]: 2025-11-22 04:10:35.099822122 +0000 UTC m=+0.174546407 container start 804cd88bf8cee6e6ecd9d924a522ca431357078ea0632357da5d178dd5ea05a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sutherland, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 04:10:35 compute-0 podman[293875]: 2025-11-22 04:10:35.104140958 +0000 UTC m=+0.178865253 container attach 804cd88bf8cee6e6ecd9d924a522ca431357078ea0632357da5d178dd5ea05a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 04:10:35 compute-0 sharp_sutherland[293892]: 167 167
Nov 22 04:10:35 compute-0 systemd[1]: libpod-804cd88bf8cee6e6ecd9d924a522ca431357078ea0632357da5d178dd5ea05a4.scope: Deactivated successfully.
Nov 22 04:10:35 compute-0 podman[293875]: 2025-11-22 04:10:35.107598685 +0000 UTC m=+0.182323010 container died 804cd88bf8cee6e6ecd9d924a522ca431357078ea0632357da5d178dd5ea05a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sutherland, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0436af434090614380fd60aa7501ffd1a3fd0b6b4acbc623fd7b886712a002f1-merged.mount: Deactivated successfully.
Nov 22 04:10:35 compute-0 podman[293875]: 2025-11-22 04:10:35.162673849 +0000 UTC m=+0.237398144 container remove 804cd88bf8cee6e6ecd9d924a522ca431357078ea0632357da5d178dd5ea05a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_sutherland, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 04:10:35 compute-0 systemd[1]: libpod-conmon-804cd88bf8cee6e6ecd9d924a522ca431357078ea0632357da5d178dd5ea05a4.scope: Deactivated successfully.
Nov 22 04:10:35 compute-0 podman[293916]: 2025-11-22 04:10:35.367501326 +0000 UTC m=+0.062682709 container create ea1283bd7bcec9941acfb053792f48f5650149a8b8c9b14c0ff5cbdc873bd5bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:10:35 compute-0 systemd[1]: Started libpod-conmon-ea1283bd7bcec9941acfb053792f48f5650149a8b8c9b14c0ff5cbdc873bd5bb.scope.
Nov 22 04:10:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:10:35 compute-0 podman[293916]: 2025-11-22 04:10:35.338734404 +0000 UTC m=+0.033915867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42fabb097e036eea8ab43804a051fa74dacd78b31fe4ed21bd42c88ea36e519/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42fabb097e036eea8ab43804a051fa74dacd78b31fe4ed21bd42c88ea36e519/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42fabb097e036eea8ab43804a051fa74dacd78b31fe4ed21bd42c88ea36e519/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d42fabb097e036eea8ab43804a051fa74dacd78b31fe4ed21bd42c88ea36e519/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:35 compute-0 podman[293916]: 2025-11-22 04:10:35.452573 +0000 UTC m=+0.147754413 container init ea1283bd7bcec9941acfb053792f48f5650149a8b8c9b14c0ff5cbdc873bd5bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williams, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:10:35 compute-0 podman[293916]: 2025-11-22 04:10:35.465151119 +0000 UTC m=+0.160332492 container start ea1283bd7bcec9941acfb053792f48f5650149a8b8c9b14c0ff5cbdc873bd5bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:10:35 compute-0 podman[293916]: 2025-11-22 04:10:35.46868037 +0000 UTC m=+0.163861783 container attach ea1283bd7bcec9941acfb053792f48f5650149a8b8c9b14c0ff5cbdc873bd5bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:10:35 compute-0 podman[293934]: 2025-11-22 04:10:35.508190939 +0000 UTC m=+0.079908040 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 04:10:35 compute-0 podman[293956]: 2025-11-22 04:10:35.680895951 +0000 UTC m=+0.137241147 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 22 04:10:35 compute-0 ceph-mon[75011]: pgmap v1855: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 62 KiB/s rd, 2.8 KiB/s wr, 84 op/s
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:10:36 compute-0 affectionate_williams[293933]: {
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:     "0": [
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:         {
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "devices": [
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "/dev/loop3"
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             ],
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_name": "ceph_lv0",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_size": "21470642176",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "name": "ceph_lv0",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "tags": {
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.cluster_name": "ceph",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.crush_device_class": "",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.encrypted": "0",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.osd_id": "0",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.type": "block",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.vdo": "0"
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             },
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "type": "block",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "vg_name": "ceph_vg0"
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:         }
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:     ],
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:     "1": [
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:         {
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "devices": [
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "/dev/loop4"
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             ],
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_name": "ceph_lv1",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_size": "21470642176",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "name": "ceph_lv1",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "tags": {
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.cluster_name": "ceph",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.crush_device_class": "",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.encrypted": "0",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.osd_id": "1",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.type": "block",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.vdo": "0"
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             },
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "type": "block",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "vg_name": "ceph_vg1"
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:         }
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:     ],
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:     "2": [
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:         {
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "devices": [
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "/dev/loop5"
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             ],
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_name": "ceph_lv2",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_size": "21470642176",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "name": "ceph_lv2",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "tags": {
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.cluster_name": "ceph",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.crush_device_class": "",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.encrypted": "0",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.osd_id": "2",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.type": "block",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:                 "ceph.vdo": "0"
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             },
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "type": "block",
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:             "vg_name": "ceph_vg2"
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:         }
Nov 22 04:10:36 compute-0 affectionate_williams[293933]:     ]
Nov 22 04:10:36 compute-0 affectionate_williams[293933]: }
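[annotation] The JSON printed by affectionate_williams is the ceph-volume lvm list --format json report requested above: a map from OSD id ("0", "1", "2") to logical-volume records whose tags carry the cluster fsid, OSD fsid, and drive-group affinity. A minimal parsing sketch, assuming the block has been saved to lvm_list.json (the file name and summarize() helper are hypothetical; the JSON shape comes from the log):

#!/usr/bin/env python3
# Minimal sketch: summarize the `ceph-volume lvm list --format json`
# report captured in the log above.
import json

def summarize(path="lvm_list.json"):
    with open(path) as f:
        report = json.load(f)       # {"0": [lv-record, ...], "1": [...], ...}
    for osd_id, lvs in sorted(report.items(), key=lambda kv: int(kv[0])):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} "
                  f"on {','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")

if __name__ == "__main__":
    summarize()

For the data above this prints one line per OSD, e.g. osd.0 backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3 — which matches the "0 physical, 3 LVM" devices trusting_hopper reported as already consumed.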
Nov 22 04:10:36 compute-0 systemd[1]: libpod-ea1283bd7bcec9941acfb053792f48f5650149a8b8c9b14c0ff5cbdc873bd5bb.scope: Deactivated successfully.
Nov 22 04:10:36 compute-0 podman[293916]: 2025-11-22 04:10:36.257278277 +0000 UTC m=+0.952459710 container died ea1283bd7bcec9941acfb053792f48f5650149a8b8c9b14c0ff5cbdc873bd5bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williams, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:10:36
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'images', '.mgr', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'volumes']
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:10:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d42fabb097e036eea8ab43804a051fa74dacd78b31fe4ed21bd42c88ea36e519-merged.mount: Deactivated successfully.
Nov 22 04:10:36 compute-0 podman[293916]: 2025-11-22 04:10:36.351371489 +0000 UTC m=+1.046552862 container remove ea1283bd7bcec9941acfb053792f48f5650149a8b8c9b14c0ff5cbdc873bd5bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_williams, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Nov 22 04:10:36 compute-0 systemd[1]: libpod-conmon-ea1283bd7bcec9941acfb053792f48f5650149a8b8c9b14c0ff5cbdc873bd5bb.scope: Deactivated successfully.
Nov 22 04:10:36 compute-0 sudo[293809]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:36 compute-0 sudo[294000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:10:36 compute-0 sudo[294000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:36 compute-0 sudo[294000]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 1.9 KiB/s wr, 68 op/s
Nov 22 04:10:36 compute-0 sudo[294025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:10:36 compute-0 sudo[294025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:36 compute-0 sudo[294025]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:10:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:10:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1136020577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:36 compute-0 sudo[294050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:10:36 compute-0 sudo[294050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:36 compute-0 sudo[294050]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:36 compute-0 sudo[294075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:10:36 compute-0 sudo[294075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:36 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1136020577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:37 compute-0 podman[294140]: 2025-11-22 04:10:37.240908618 +0000 UTC m=+0.069831644 container create 83e65a62c36df97ef53d586c4651c7c386bd476bf6e58458e6ffdbc295c6c26e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 04:10:37 compute-0 systemd[1]: Started libpod-conmon-83e65a62c36df97ef53d586c4651c7c386bd476bf6e58458e6ffdbc295c6c26e.scope.
Nov 22 04:10:37 compute-0 podman[294140]: 2025-11-22 04:10:37.199290024 +0000 UTC m=+0.028213130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:10:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:10:37 compute-0 podman[294140]: 2025-11-22 04:10:37.347765658 +0000 UTC m=+0.176688764 container init 83e65a62c36df97ef53d586c4651c7c386bd476bf6e58458e6ffdbc295c6c26e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Nov 22 04:10:37 compute-0 podman[294140]: 2025-11-22 04:10:37.36106187 +0000 UTC m=+0.189984926 container start 83e65a62c36df97ef53d586c4651c7c386bd476bf6e58458e6ffdbc295c6c26e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:10:37 compute-0 podman[294140]: 2025-11-22 04:10:37.365602676 +0000 UTC m=+0.194525782 container attach 83e65a62c36df97ef53d586c4651c7c386bd476bf6e58458e6ffdbc295c6c26e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 04:10:37 compute-0 infallible_wright[294156]: 167 167
Nov 22 04:10:37 compute-0 systemd[1]: libpod-83e65a62c36df97ef53d586c4651c7c386bd476bf6e58458e6ffdbc295c6c26e.scope: Deactivated successfully.
Nov 22 04:10:37 compute-0 podman[294140]: 2025-11-22 04:10:37.368458645 +0000 UTC m=+0.197381701 container died 83e65a62c36df97ef53d586c4651c7c386bd476bf6e58458e6ffdbc295c6c26e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:10:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-af3bdc7d7dbc61b0bc96728645fed233ca46c9ba67a73cebb97047aef8f7527c-merged.mount: Deactivated successfully.
Nov 22 04:10:37 compute-0 podman[294140]: 2025-11-22 04:10:37.457158643 +0000 UTC m=+0.286081709 container remove 83e65a62c36df97ef53d586c4651c7c386bd476bf6e58458e6ffdbc295c6c26e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_wright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:10:37 compute-0 systemd[1]: libpod-conmon-83e65a62c36df97ef53d586c4651c7c386bd476bf6e58458e6ffdbc295c6c26e.scope: Deactivated successfully.
Nov 22 04:10:37 compute-0 podman[294180]: 2025-11-22 04:10:37.723283816 +0000 UTC m=+0.088247797 container create c123c4595cd812d0f04b143ac9921a102b234253ee22895b2dd2b7d7413a7bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kilby, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 04:10:37 compute-0 podman[294180]: 2025-11-22 04:10:37.674714846 +0000 UTC m=+0.039678867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:10:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e486 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e486 do_prune osdmap full prune enabled
Nov 22 04:10:37 compute-0 systemd[1]: Started libpod-conmon-c123c4595cd812d0f04b143ac9921a102b234253ee22895b2dd2b7d7413a7bef.scope.
Nov 22 04:10:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:10:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f12410b187bd56d15978dc436b3fe8408d8421fa4c861608d51e367da9a91c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f12410b187bd56d15978dc436b3fe8408d8421fa4c861608d51e367da9a91c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f12410b187bd56d15978dc436b3fe8408d8421fa4c861608d51e367da9a91c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f12410b187bd56d15978dc436b3fe8408d8421fa4c861608d51e367da9a91c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:10:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e487 e487: 3 total, 3 up, 3 in
Nov 22 04:10:38 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e487: 3 total, 3 up, 3 in
Nov 22 04:10:38 compute-0 podman[294180]: 2025-11-22 04:10:38.1766293 +0000 UTC m=+0.541593291 container init c123c4595cd812d0f04b143ac9921a102b234253ee22895b2dd2b7d7413a7bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:10:38 compute-0 podman[294180]: 2025-11-22 04:10:38.19054564 +0000 UTC m=+0.555509631 container start c123c4595cd812d0f04b143ac9921a102b234253ee22895b2dd2b7d7413a7bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kilby, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:10:38 compute-0 podman[294180]: 2025-11-22 04:10:38.400385814 +0000 UTC m=+0.765349795 container attach c123c4595cd812d0f04b143ac9921a102b234253ee22895b2dd2b7d7413a7bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:10:38 compute-0 nova_compute[253461]: 2025-11-22 04:10:38.452 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:38 compute-0 ceph-mon[75011]: pgmap v1856: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 1.9 KiB/s wr, 68 op/s
Nov 22 04:10:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 818 B/s wr, 62 op/s
Nov 22 04:10:38 compute-0 nova_compute[253461]: 2025-11-22 04:10:38.800 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:39 compute-0 magical_kilby[294198]: {
Nov 22 04:10:39 compute-0 magical_kilby[294198]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "osd_id": 1,
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "type": "bluestore"
Nov 22 04:10:39 compute-0 magical_kilby[294198]:     },
Nov 22 04:10:39 compute-0 magical_kilby[294198]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "osd_id": 0,
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "type": "bluestore"
Nov 22 04:10:39 compute-0 magical_kilby[294198]:     },
Nov 22 04:10:39 compute-0 magical_kilby[294198]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "osd_id": 2,
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:10:39 compute-0 magical_kilby[294198]:         "type": "bluestore"
Nov 22 04:10:39 compute-0 magical_kilby[294198]:     }
Nov 22 04:10:39 compute-0 magical_kilby[294198]: }
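[annotation] magical_kilby's output is the companion ceph-volume raw list --format json view, keyed by osd_uuid with the device-mapper path and bluestore type. A sketch that reconciles it against the earlier lvm listing, under the same saved-to-file assumption (file names and the check() helper are hypothetical):

#!/usr/bin/env python3
# Sketch: cross-check `ceph-volume raw list` (above) against the
# earlier `lvm list`. Both JSON shapes come from the log.
import json

def check(raw_path="raw_list.json", lvm_path="lvm_list.json"):
    with open(raw_path) as f:
        raw = json.load(f)          # {osd_uuid: {"osd_id", "device", "type", ...}}
    with open(lvm_path) as f:
        lvm = json.load(f)          # {"<osd_id>": [{"tags": {...}}, ...]}
    for osd_uuid, entry in sorted(raw.items(), key=lambda kv: kv[1]["osd_id"]):
        osd_id = str(entry["osd_id"])
        lvm_fsids = {lv["tags"]["ceph.osd_fsid"] for lv in lvm.get(osd_id, [])}
        status = "matches lvm list" if osd_uuid in lvm_fsids else "NOT in lvm list"
        print(f"osd.{osd_id} ({entry['type']}) on {entry['device']}: {status}")

if __name__ == "__main__":
    check()

Here all three osd_uuids (the keys above) agree with the ceph.osd_fsid tags from the lvm report, so cephadm records the host's device state unchanged — the config-key set for mgr/cephadm/host.compute-0.devices.0 a few lines below is that refresh being persisted.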
Nov 22 04:10:39 compute-0 systemd[1]: libpod-c123c4595cd812d0f04b143ac9921a102b234253ee22895b2dd2b7d7413a7bef.scope: Deactivated successfully.
Nov 22 04:10:39 compute-0 podman[294180]: 2025-11-22 04:10:39.214211885 +0000 UTC m=+1.579175866 container died c123c4595cd812d0f04b143ac9921a102b234253ee22895b2dd2b7d7413a7bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kilby, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:10:39 compute-0 systemd[1]: libpod-c123c4595cd812d0f04b143ac9921a102b234253ee22895b2dd2b7d7413a7bef.scope: Consumed 1.026s CPU time.
Nov 22 04:10:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f12410b187bd56d15978dc436b3fe8408d8421fa4c861608d51e367da9a91c6-merged.mount: Deactivated successfully.
Nov 22 04:10:39 compute-0 podman[294180]: 2025-11-22 04:10:39.276256375 +0000 UTC m=+1.641220336 container remove c123c4595cd812d0f04b143ac9921a102b234253ee22895b2dd2b7d7413a7bef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_kilby, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Nov 22 04:10:39 compute-0 systemd[1]: libpod-conmon-c123c4595cd812d0f04b143ac9921a102b234253ee22895b2dd2b7d7413a7bef.scope: Deactivated successfully.
Nov 22 04:10:39 compute-0 sudo[294075]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:10:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:10:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:10:39 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:10:39 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 04ba96de-32d1-4793-9f63-a92536f0a258 does not exist
Nov 22 04:10:39 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 1a443ce2-cefc-4b2b-905a-4118a9d8140a does not exist
Nov 22 04:10:39 compute-0 sudo[294246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:10:39 compute-0 sudo[294246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:39 compute-0 sudo[294246]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e487 do_prune osdmap full prune enabled
Nov 22 04:10:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e488 e488: 3 total, 3 up, 3 in
Nov 22 04:10:39 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e488: 3 total, 3 up, 3 in
Nov 22 04:10:39 compute-0 ceph-mon[75011]: osdmap e487: 3 total, 3 up, 3 in
Nov 22 04:10:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:10:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:10:39 compute-0 sudo[294271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:10:39 compute-0 sudo[294271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:10:39 compute-0 sudo[294271]: pam_unix(sudo:session): session closed for user root
Nov 22 04:10:40 compute-0 ceph-mon[75011]: pgmap v1858: 305 pgs: 305 active+clean; 88 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 818 B/s wr, 62 op/s
Nov 22 04:10:40 compute-0 ceph-mon[75011]: osdmap e488: 3 total, 3 up, 3 in
Nov 22 04:10:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 895 B/s wr, 7 op/s
Nov 22 04:10:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e488 do_prune osdmap full prune enabled
Nov 22 04:10:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e489 e489: 3 total, 3 up, 3 in
Nov 22 04:10:41 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e489: 3 total, 3 up, 3 in
Nov 22 04:10:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e489 do_prune osdmap full prune enabled
Nov 22 04:10:42 compute-0 ceph-mon[75011]: pgmap v1860: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 4.6 KiB/s rd, 895 B/s wr, 7 op/s
Nov 22 04:10:42 compute-0 ceph-mon[75011]: osdmap e489: 3 total, 3 up, 3 in
Nov 22 04:10:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e490 e490: 3 total, 3 up, 3 in
Nov 22 04:10:42 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e490: 3 total, 3 up, 3 in
Nov 22 04:10:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.4 KiB/s wr, 48 op/s
Nov 22 04:10:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:10:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2280164055' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:10:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2280164055' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:43 compute-0 nova_compute[253461]: 2025-11-22 04:10:43.454 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:43 compute-0 ceph-mon[75011]: osdmap e490: 3 total, 3 up, 3 in
Nov 22 04:10:43 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2280164055' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:10:43 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2280164055' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:10:43 compute-0 nova_compute[253461]: 2025-11-22 04:10:43.801 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:44 compute-0 nova_compute[253461]: 2025-11-22 04:10:44.447 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:44 compute-0 ceph-mon[75011]: pgmap v1863: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.4 KiB/s wr, 48 op/s
Nov 22 04:10:44 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:10:44.558 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:10:44 compute-0 nova_compute[253461]: 2025-11-22 04:10:44.559 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:44 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:10:44.560 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:10:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 3.0 KiB/s wr, 41 op/s
Nov 22 04:10:45 compute-0 nova_compute[253461]: 2025-11-22 04:10:45.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:45 compute-0 nova_compute[253461]: 2025-11-22 04:10:45.466 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:45 compute-0 nova_compute[253461]: 2025-11-22 04:10:45.467 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:45 compute-0 nova_compute[253461]: 2025-11-22 04:10:45.467 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:45 compute-0 nova_compute[253461]: 2025-11-22 04:10:45.467 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:10:45 compute-0 nova_compute[253461]: 2025-11-22 04:10:45.468 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:10:45 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:45 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3491918558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:45 compute-0 nova_compute[253461]: 2025-11-22 04:10:45.959 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:10:46 compute-0 nova_compute[253461]: 2025-11-22 04:10:46.170 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:10:46 compute-0 nova_compute[253461]: 2025-11-22 04:10:46.171 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4413MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:10:46 compute-0 nova_compute[253461]: 2025-11-22 04:10:46.171 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:10:46 compute-0 nova_compute[253461]: 2025-11-22 04:10:46.172 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:10:46 compute-0 nova_compute[253461]: 2025-11-22 04:10:46.312 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:10:46 compute-0 nova_compute[253461]: 2025-11-22 04:10:46.313 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:10:46 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:10:46.562 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003472688554079573 of space, bias 1.0, pg target 0.10418065662238718 quantized to 32 (current 32)
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.2718141564107572e-07 of space, bias 1.0, pg target 3.815442469232272e-05 quantized to 32 (current 32)
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:10:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.4 KiB/s wr, 81 op/s
Nov 22 04:10:46 compute-0 ceph-mon[75011]: pgmap v1864: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 3.0 KiB/s wr, 41 op/s
Nov 22 04:10:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3491918558' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:46 compute-0 nova_compute[253461]: 2025-11-22 04:10:46.598 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing inventories for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 04:10:46 compute-0 nova_compute[253461]: 2025-11-22 04:10:46.615 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating ProviderTree inventory for provider 62e18608-eaef-4f09-8e92-06d41e51d580 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 04:10:46 compute-0 nova_compute[253461]: 2025-11-22 04:10:46.616 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating inventory in ProviderTree for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 04:10:46 compute-0 nova_compute[253461]: 2025-11-22 04:10:46.650 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing aggregate associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 04:10:46 compute-0 nova_compute[253461]: 2025-11-22 04:10:46.681 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing trait associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 04:10:46 compute-0 nova_compute[253461]: 2025-11-22 04:10:46.701 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:10:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:10:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595449266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:47 compute-0 nova_compute[253461]: 2025-11-22 04:10:47.131 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:10:47 compute-0 nova_compute[253461]: 2025-11-22 04:10:47.138 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:10:47 compute-0 nova_compute[253461]: 2025-11-22 04:10:47.158 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:10:47 compute-0 nova_compute[253461]: 2025-11-22 04:10:47.161 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:10:47 compute-0 nova_compute[253461]: 2025-11-22 04:10:47.161 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:10:47 compute-0 sshd-session[294318]: Connection closed by authenticating user root 27.79.43.64 port 39904 [preauth]
Nov 22 04:10:47 compute-0 ceph-mon[75011]: pgmap v1865: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.4 KiB/s wr, 81 op/s
Nov 22 04:10:47 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3595449266' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:10:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e490 do_prune osdmap full prune enabled
Nov 22 04:10:48 compute-0 nova_compute[253461]: 2025-11-22 04:10:48.163 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:48 compute-0 nova_compute[253461]: 2025-11-22 04:10:48.163 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:10:48 compute-0 nova_compute[253461]: 2025-11-22 04:10:48.163 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:10:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e491 e491: 3 total, 3 up, 3 in
Nov 22 04:10:48 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e491: 3 total, 3 up, 3 in
Nov 22 04:10:48 compute-0 nova_compute[253461]: 2025-11-22 04:10:48.202 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:10:48 compute-0 nova_compute[253461]: 2025-11-22 04:10:48.202 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:48 compute-0 nova_compute[253461]: 2025-11-22 04:10:48.203 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:48 compute-0 nova_compute[253461]: 2025-11-22 04:10:48.203 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:10:48 compute-0 nova_compute[253461]: 2025-11-22 04:10:48.456 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:48 compute-0 nova_compute[253461]: 2025-11-22 04:10:48.465 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.3 KiB/s wr, 80 op/s
Nov 22 04:10:48 compute-0 nova_compute[253461]: 2025-11-22 04:10:48.803 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:49 compute-0 ceph-mon[75011]: osdmap e491: 3 total, 3 up, 3 in
Nov 22 04:10:49 compute-0 nova_compute[253461]: 2025-11-22 04:10:49.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:50 compute-0 ceph-mon[75011]: pgmap v1867: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.3 KiB/s wr, 80 op/s
Nov 22 04:10:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.0 KiB/s wr, 49 op/s
Nov 22 04:10:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1642345248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:51 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1642345248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:51 compute-0 podman[294342]: 2025-11-22 04:10:51.408774469 +0000 UTC m=+0.081248161 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:10:51 compute-0 nova_compute[253461]: 2025-11-22 04:10:51.425 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:51 compute-0 nova_compute[253461]: 2025-11-22 04:10:51.485 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e491 do_prune osdmap full prune enabled
Nov 22 04:10:52 compute-0 ceph-mon[75011]: pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 2.0 KiB/s wr, 49 op/s
Nov 22 04:10:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e492 e492: 3 total, 3 up, 3 in
Nov 22 04:10:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e492: 3 total, 3 up, 3 in
Nov 22 04:10:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.9 KiB/s wr, 49 op/s
Nov 22 04:10:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e492 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:53 compute-0 nova_compute[253461]: 2025-11-22 04:10:53.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:10:53 compute-0 nova_compute[253461]: 2025-11-22 04:10:53.459 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e492 do_prune osdmap full prune enabled
Nov 22 04:10:53 compute-0 ceph-mon[75011]: osdmap e492: 3 total, 3 up, 3 in
Nov 22 04:10:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e493 e493: 3 total, 3 up, 3 in
Nov 22 04:10:53 compute-0 nova_compute[253461]: 2025-11-22 04:10:53.806 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:53 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e493: 3 total, 3 up, 3 in
Nov 22 04:10:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 801 B/s wr, 6 op/s
Nov 22 04:10:54 compute-0 ceph-mon[75011]: pgmap v1870: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.9 KiB/s wr, 49 op/s
Nov 22 04:10:54 compute-0 ceph-mon[75011]: osdmap e493: 3 total, 3 up, 3 in
Nov 22 04:10:56 compute-0 ceph-mon[75011]: pgmap v1872: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 801 B/s wr, 6 op/s
Nov 22 04:10:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.6 KiB/s wr, 23 op/s
Nov 22 04:10:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:10:58 compute-0 ceph-mon[75011]: pgmap v1873: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.6 KiB/s wr, 23 op/s
Nov 22 04:10:58 compute-0 nova_compute[253461]: 2025-11-22 04:10:58.460 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Nov 22 04:10:58 compute-0 nova_compute[253461]: 2025-11-22 04:10:58.808 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:10:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:10:59 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2794477918' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:10:59 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2794477918' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:11:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e493 do_prune osdmap full prune enabled
Nov 22 04:11:00 compute-0 ceph-mon[75011]: pgmap v1874: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Nov 22 04:11:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e494 e494: 3 total, 3 up, 3 in
Nov 22 04:11:00 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e494: 3 total, 3 up, 3 in
Nov 22 04:11:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:11:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1949838340' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:11:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:11:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1949838340' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:11:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 KiB/s wr, 21 op/s
Nov 22 04:11:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e494 do_prune osdmap full prune enabled
Nov 22 04:11:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e495 e495: 3 total, 3 up, 3 in
Nov 22 04:11:01 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e495: 3 total, 3 up, 3 in
Nov 22 04:11:01 compute-0 ceph-mon[75011]: osdmap e494: 3 total, 3 up, 3 in
Nov 22 04:11:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1949838340' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:11:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1949838340' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:11:02 compute-0 ceph-mon[75011]: pgmap v1876: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 KiB/s wr, 21 op/s
Nov 22 04:11:02 compute-0 ceph-mon[75011]: osdmap e495: 3 total, 3 up, 3 in
Nov 22 04:11:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.2 KiB/s wr, 22 op/s
Nov 22 04:11:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e495 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e495 do_prune osdmap full prune enabled
Nov 22 04:11:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e496 e496: 3 total, 3 up, 3 in
Nov 22 04:11:03 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e496: 3 total, 3 up, 3 in
Nov 22 04:11:03 compute-0 nova_compute[253461]: 2025-11-22 04:11:03.463 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:03 compute-0 nova_compute[253461]: 2025-11-22 04:11:03.810 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:11:03 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1309629690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:11:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:11:03 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1309629690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:11:04 compute-0 ceph-mon[75011]: pgmap v1878: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.2 KiB/s wr, 22 op/s
Nov 22 04:11:04 compute-0 ceph-mon[75011]: osdmap e496: 3 total, 3 up, 3 in
Nov 22 04:11:04 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1309629690' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:11:04 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1309629690' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:11:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.3 KiB/s wr, 59 op/s
Nov 22 04:11:05 compute-0 ceph-mon[75011]: pgmap v1880: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.3 KiB/s wr, 59 op/s
Nov 22 04:11:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:11:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:11:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:11:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:11:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:11:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:11:06 compute-0 podman[294363]: 2025-11-22 04:11:06.40548469 +0000 UTC m=+0.075933302 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:11:06 compute-0 podman[294364]: 2025-11-22 04:11:06.425197432 +0000 UTC m=+0.096213329 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:11:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.9 KiB/s wr, 61 op/s
Nov 22 04:11:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:11:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3201098947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:11:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e496 do_prune osdmap full prune enabled
Nov 22 04:11:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e497 e497: 3 total, 3 up, 3 in
Nov 22 04:11:07 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e497: 3 total, 3 up, 3 in
Nov 22 04:11:07 compute-0 ceph-mon[75011]: pgmap v1881: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.9 KiB/s wr, 61 op/s
Nov 22 04:11:07 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3201098947' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:11:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e497 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e497 do_prune osdmap full prune enabled
Nov 22 04:11:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e498 e498: 3 total, 3 up, 3 in
Nov 22 04:11:08 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e498: 3 total, 3 up, 3 in
Nov 22 04:11:08 compute-0 nova_compute[253461]: 2025-11-22 04:11:08.465 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.7 KiB/s wr, 62 op/s
Nov 22 04:11:08 compute-0 ceph-mon[75011]: osdmap e497: 3 total, 3 up, 3 in
Nov 22 04:11:08 compute-0 ceph-mon[75011]: osdmap e498: 3 total, 3 up, 3 in
Nov 22 04:11:08 compute-0 nova_compute[253461]: 2025-11-22 04:11:08.813 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e498 do_prune osdmap full prune enabled
Nov 22 04:11:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e499 e499: 3 total, 3 up, 3 in
Nov 22 04:11:09 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e499: 3 total, 3 up, 3 in
Nov 22 04:11:10 compute-0 ceph-mon[75011]: pgmap v1884: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.7 KiB/s wr, 62 op/s
Nov 22 04:11:10 compute-0 ceph-mon[75011]: osdmap e499: 3 total, 3 up, 3 in
Nov 22 04:11:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.7 KiB/s wr, 34 op/s
Nov 22 04:11:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:11:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3285742952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:11:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:11:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3285742952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:11:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.2 KiB/s wr, 35 op/s
Nov 22 04:11:12 compute-0 ceph-mon[75011]: pgmap v1886: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.7 KiB/s wr, 34 op/s
Nov 22 04:11:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3285742952' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:11:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3285742952' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:11:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e499 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:13 compute-0 nova_compute[253461]: 2025-11-22 04:11:13.468 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:13 compute-0 nova_compute[253461]: 2025-11-22 04:11:13.815 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e499 do_prune osdmap full prune enabled
Nov 22 04:11:14 compute-0 ceph-mon[75011]: pgmap v1887: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 3.2 KiB/s wr, 35 op/s
Nov 22 04:11:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e500 e500: 3 total, 3 up, 3 in
Nov 22 04:11:14 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e500: 3 total, 3 up, 3 in
Nov 22 04:11:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.6 KiB/s wr, 66 op/s
Nov 22 04:11:15 compute-0 ceph-mon[75011]: osdmap e500: 3 total, 3 up, 3 in
Nov 22 04:11:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.7 KiB/s wr, 75 op/s
Nov 22 04:11:16 compute-0 ceph-mon[75011]: pgmap v1889: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.6 KiB/s wr, 66 op/s
Nov 22 04:11:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e500 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e500 do_prune osdmap full prune enabled
Nov 22 04:11:18 compute-0 nova_compute[253461]: 2025-11-22 04:11:18.472 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.3 KiB/s wr, 51 op/s
Nov 22 04:11:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e501 e501: 3 total, 3 up, 3 in
Nov 22 04:11:18 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e501: 3 total, 3 up, 3 in
Nov 22 04:11:18 compute-0 nova_compute[253461]: 2025-11-22 04:11:18.816 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1.7 KiB/s wr, 58 op/s
Nov 22 04:11:22 compute-0 podman[294406]: 2025-11-22 04:11:22.418917781 +0000 UTC m=+0.089883662 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:11:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Nov 22 04:11:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:11:23.023 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:11:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:11:23.024 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:11:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:11:23.024 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:11:23 compute-0 nova_compute[253461]: 2025-11-22 04:11:23.472 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:23 compute-0 nova_compute[253461]: 2025-11-22 04:11:23.818 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:24 compute-0 ceph-mds[101332]: mds.beacon.cephfs.compute-0.fzlata missed beacon ack from the monitors
Nov 22 04:11:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1023 B/s wr, 26 op/s
Nov 22 04:11:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 307 B/s wr, 8 op/s
Nov 22 04:11:28 compute-0 ceph-mds[101332]: mds.beacon.cephfs.compute-0.fzlata missed beacon ack from the monitors
Nov 22 04:11:28 compute-0 nova_compute[253461]: 2025-11-22 04:11:28.474 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 307 B/s wr, 8 op/s
Nov 22 04:11:28 compute-0 nova_compute[253461]: 2025-11-22 04:11:28.820 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 257 B/s wr, 6 op/s
Nov 22 04:11:32 compute-0 ceph-mds[101332]: mds.beacon.cephfs.compute-0.fzlata missed beacon ack from the monitors
Nov 22 04:11:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 04:11:33 compute-0 nova_compute[253461]: 2025-11-22 04:11:33.476 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:33 compute-0 nova_compute[253461]: 2025-11-22 04:11:33.822 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:11:35 compute-0 ceph-mds[101332]: mds.beacon.cephfs.compute-0.fzlata MDS connection to Monitors appears to be laggy; 19.4899s since last acked beacon
Nov 22 04:11:35 compute-0 ceph-mds[101332]: mds.0.3 skipping upkeep work because connection to Monitors appears laggy
Nov 22 04:11:36 compute-0 ceph-mds[101332]: mds.beacon.cephfs.compute-0.fzlata missed beacon ack from the monitors
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:11:36
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'backups', 'images', 'cephfs.cephfs.meta']
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:11:36 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.921861649s, txc = 0x560950676000
Nov 22 04:11:36 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 17.921804428s
Nov 22 04:11:36 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 17.921804428s
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:11:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:11:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).mds e4 check_health: resetting beacon timeouts due to mon delay (slow election?) of 18.6406 seconds
Nov 22 04:11:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e501 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:36 compute-0 ceph-mds[101332]: mds.beacon.cephfs.compute-0.fzlata  MDS is no longer laggy
Nov 22 04:11:36 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 18.307771683s
Nov 22 04:11:36 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 18.307771683s
Nov 22 04:11:36 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 18.308208466s, txc = 0x56036f777b00
Nov 22 04:11:36 compute-0 ceph-mon[75011]: pgmap v1890: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.7 KiB/s wr, 75 op/s
Nov 22 04:11:37 compute-0 podman[294426]: 2025-11-22 04:11:37.430922781 +0000 UTC m=+0.096961601 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, io.buildah.version=1.41.3)
Nov 22 04:11:37 compute-0 podman[294427]: 2025-11-22 04:11:37.475246018 +0000 UTC m=+0.135869518 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:11:37 compute-0 nova_compute[253461]: 2025-11-22 04:11:37.938 253465 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 6.06 sec
Nov 22 04:11:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e501 do_prune osdmap full prune enabled
Nov 22 04:11:38 compute-0 ceph-mon[75011]: pgmap v1891: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.3 KiB/s wr, 51 op/s
Nov 22 04:11:38 compute-0 ceph-mon[75011]: osdmap e501: 3 total, 3 up, 3 in
Nov 22 04:11:38 compute-0 ceph-mon[75011]: pgmap v1893: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 1.7 KiB/s wr, 58 op/s
Nov 22 04:11:38 compute-0 ceph-mon[75011]: pgmap v1894: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Nov 22 04:11:38 compute-0 ceph-mon[75011]: pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1023 B/s wr, 26 op/s
Nov 22 04:11:38 compute-0 ceph-mon[75011]: pgmap v1896: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 307 B/s wr, 8 op/s
Nov 22 04:11:38 compute-0 ceph-mon[75011]: pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 307 B/s wr, 8 op/s
Nov 22 04:11:38 compute-0 ceph-mon[75011]: pgmap v1898: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 257 B/s wr, 6 op/s
Nov 22 04:11:38 compute-0 ceph-mon[75011]: pgmap v1899: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 04:11:38 compute-0 ceph-mon[75011]: pgmap v1900: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:11:38 compute-0 ceph-mon[75011]: pgmap v1901: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:11:38 compute-0 nova_compute[253461]: 2025-11-22 04:11:38.479 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:11:38 compute-0 nova_compute[253461]: 2025-11-22 04:11:38.824 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e502 e502: 3 total, 3 up, 3 in
Nov 22 04:11:39 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e502: 3 total, 3 up, 3 in
Nov 22 04:11:39 compute-0 sudo[294472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:11:39 compute-0 sudo[294472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:39 compute-0 sudo[294472]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:39 compute-0 sudo[294497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:11:39 compute-0 sudo[294497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:39 compute-0 sudo[294497]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:39 compute-0 sudo[294522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:11:39 compute-0 sudo[294522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:39 compute-0 sudo[294522]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:39 compute-0 sudo[294547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:11:39 compute-0 sudo[294547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:40 compute-0 ceph-mon[75011]: pgmap v1902: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:11:40 compute-0 ceph-mon[75011]: osdmap e502: 3 total, 3 up, 3 in
Nov 22 04:11:40 compute-0 sudo[294547]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 B/s wr, 0 op/s
Nov 22 04:11:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:11:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:11:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:11:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:11:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:11:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:11:41 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 3640be4b-5d2c-4814-a490-f479cbb37023 does not exist
Nov 22 04:11:41 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev debb1bcb-32e5-41d8-9b61-37dbd02e8c4c does not exist
Nov 22 04:11:41 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev e92eb012-ac2e-4152-9714-813af669d115 does not exist
Nov 22 04:11:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:11:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:11:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:11:41 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:11:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:11:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:11:41 compute-0 sudo[294604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:11:41 compute-0 sudo[294604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:41 compute-0 sudo[294604]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:41 compute-0 sudo[294629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:11:41 compute-0 sudo[294629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:41 compute-0 sudo[294629]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:41 compute-0 sudo[294654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:11:41 compute-0 sudo[294654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:41 compute-0 sudo[294654]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:41 compute-0 sudo[294679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:11:41 compute-0 sudo[294679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:42 compute-0 podman[294744]: 2025-11-22 04:11:42.083900751 +0000 UTC m=+0.039184874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:11:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e502 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e502 do_prune osdmap full prune enabled
Nov 22 04:11:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:11:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:11:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:11:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:11:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:11:42 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:11:42 compute-0 podman[294744]: 2025-11-22 04:11:42.474009912 +0000 UTC m=+0.429293955 container create 5ce1cc8d72e5655e07e5fa82d16496304e7da264681cc6be5595c7e40e562a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mclaren, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:11:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 B/s wr, 0 op/s
Nov 22 04:11:42 compute-0 systemd[1]: Started libpod-conmon-5ce1cc8d72e5655e07e5fa82d16496304e7da264681cc6be5595c7e40e562a57.scope.
Nov 22 04:11:42 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:11:43 compute-0 podman[294744]: 2025-11-22 04:11:43.479970564 +0000 UTC m=+1.435254617 container init 5ce1cc8d72e5655e07e5fa82d16496304e7da264681cc6be5595c7e40e562a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mclaren, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:11:43 compute-0 nova_compute[253461]: 2025-11-22 04:11:43.481 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:43 compute-0 podman[294744]: 2025-11-22 04:11:43.495023896 +0000 UTC m=+1.450307979 container start 5ce1cc8d72e5655e07e5fa82d16496304e7da264681cc6be5595c7e40e562a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mclaren, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:11:43 compute-0 unruffled_mclaren[294760]: 167 167
Nov 22 04:11:43 compute-0 systemd[1]: libpod-5ce1cc8d72e5655e07e5fa82d16496304e7da264681cc6be5595c7e40e562a57.scope: Deactivated successfully.
Nov 22 04:11:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e503 e503: 3 total, 3 up, 3 in
Nov 22 04:11:43 compute-0 nova_compute[253461]: 2025-11-22 04:11:43.826 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:43 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e503: 3 total, 3 up, 3 in
Nov 22 04:11:43 compute-0 podman[294744]: 2025-11-22 04:11:43.922043979 +0000 UTC m=+1.877328062 container attach 5ce1cc8d72e5655e07e5fa82d16496304e7da264681cc6be5595c7e40e562a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:11:43 compute-0 podman[294744]: 2025-11-22 04:11:43.923204575 +0000 UTC m=+1.878488688 container died 5ce1cc8d72e5655e07e5fa82d16496304e7da264681cc6be5595c7e40e562a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:11:44 compute-0 ceph-mon[75011]: pgmap v1904: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 B/s wr, 0 op/s
Nov 22 04:11:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 127 B/s wr, 8 op/s
Nov 22 04:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ee515508b14a867556f3fedfba08bebcd65b1bf82e894e12a34297b6ae77b76-merged.mount: Deactivated successfully.
Nov 22 04:11:45 compute-0 podman[294744]: 2025-11-22 04:11:45.435098427 +0000 UTC m=+3.390382470 container remove 5ce1cc8d72e5655e07e5fa82d16496304e7da264681cc6be5595c7e40e562a57 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:11:45 compute-0 systemd[1]: libpod-conmon-5ce1cc8d72e5655e07e5fa82d16496304e7da264681cc6be5595c7e40e562a57.scope: Deactivated successfully.
Nov 22 04:11:45 compute-0 ceph-mon[75011]: pgmap v1905: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 204 B/s rd, 0 B/s wr, 0 op/s
Nov 22 04:11:45 compute-0 ceph-mon[75011]: osdmap e503: 3 total, 3 up, 3 in
Nov 22 04:11:45 compute-0 podman[294784]: 2025-11-22 04:11:45.608066418 +0000 UTC m=+0.025311619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:11:45 compute-0 podman[294784]: 2025-11-22 04:11:45.788414202 +0000 UTC m=+0.205659323 container create 73195461fba6cd6ab18a838e1aa5abb11225b624caf4179b17c2ad8adf505451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sinoussi, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:11:45 compute-0 systemd[1]: Started libpod-conmon-73195461fba6cd6ab18a838e1aa5abb11225b624caf4179b17c2ad8adf505451.scope.
Nov 22 04:11:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbe90c61e891b985bd8e3e5a744bdbc6dceaac08d7abc93978ebc2557137922/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbe90c61e891b985bd8e3e5a744bdbc6dceaac08d7abc93978ebc2557137922/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbe90c61e891b985bd8e3e5a744bdbc6dceaac08d7abc93978ebc2557137922/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbe90c61e891b985bd8e3e5a744bdbc6dceaac08d7abc93978ebc2557137922/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bbbe90c61e891b985bd8e3e5a744bdbc6dceaac08d7abc93978ebc2557137922/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:46 compute-0 podman[294784]: 2025-11-22 04:11:46.11191588 +0000 UTC m=+0.529161031 container init 73195461fba6cd6ab18a838e1aa5abb11225b624caf4179b17c2ad8adf505451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 04:11:46 compute-0 podman[294784]: 2025-11-22 04:11:46.11995552 +0000 UTC m=+0.537200671 container start 73195461fba6cd6ab18a838e1aa5abb11225b624caf4179b17c2ad8adf505451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:11:46 compute-0 podman[294784]: 2025-11-22 04:11:46.191695386 +0000 UTC m=+0.608940507 container attach 73195461fba6cd6ab18a838e1aa5abb11225b624caf4179b17c2ad8adf505451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sinoussi, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:11:46 compute-0 nova_compute[253461]: 2025-11-22 04:11:46.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:46 compute-0 nova_compute[253461]: 2025-11-22 04:11:46.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034739603682359835 of space, bias 1.0, pg target 0.1042188110470795 quantized to 32 (current 32)
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 5.087256625643029e-07 of space, bias 1.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:11:46 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:11:46 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:11:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 383 B/s wr, 9 op/s
Nov 22 04:11:46 compute-0 nova_compute[253461]: 2025-11-22 04:11:46.676 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:11:46 compute-0 nova_compute[253461]: 2025-11-22 04:11:46.676 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:11:46 compute-0 nova_compute[253461]: 2025-11-22 04:11:46.677 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:11:46 compute-0 nova_compute[253461]: 2025-11-22 04:11:46.677 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:11:46 compute-0 nova_compute[253461]: 2025-11-22 04:11:46.678 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:11:46 compute-0 ceph-mon[75011]: pgmap v1907: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 127 B/s wr, 8 op/s
Nov 22 04:11:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:47 compute-0 competent_sinoussi[294800]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:11:47 compute-0 competent_sinoussi[294800]: --> relative data size: 1.0
Nov 22 04:11:47 compute-0 competent_sinoussi[294800]: --> All data devices are unavailable
Nov 22 04:11:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:11:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/728198835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:11:47 compute-0 nova_compute[253461]: 2025-11-22 04:11:47.300 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:11:47 compute-0 systemd[1]: libpod-73195461fba6cd6ab18a838e1aa5abb11225b624caf4179b17c2ad8adf505451.scope: Deactivated successfully.
Nov 22 04:11:47 compute-0 systemd[1]: libpod-73195461fba6cd6ab18a838e1aa5abb11225b624caf4179b17c2ad8adf505451.scope: Consumed 1.093s CPU time.
Nov 22 04:11:47 compute-0 conmon[294800]: conmon 73195461fba6cd6ab18a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73195461fba6cd6ab18a838e1aa5abb11225b624caf4179b17c2ad8adf505451.scope/container/memory.events
Nov 22 04:11:47 compute-0 podman[294784]: 2025-11-22 04:11:47.306794135 +0000 UTC m=+1.724039256 container died 73195461fba6cd6ab18a838e1aa5abb11225b624caf4179b17c2ad8adf505451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:11:47 compute-0 nova_compute[253461]: 2025-11-22 04:11:47.465 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:11:47 compute-0 nova_compute[253461]: 2025-11-22 04:11:47.466 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4405MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:11:47 compute-0 nova_compute[253461]: 2025-11-22 04:11:47.466 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:11:47 compute-0 nova_compute[253461]: 2025-11-22 04:11:47.466 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:11:47 compute-0 nova_compute[253461]: 2025-11-22 04:11:47.681 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:11:47 compute-0 nova_compute[253461]: 2025-11-22 04:11:47.682 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:11:47 compute-0 nova_compute[253461]: 2025-11-22 04:11:47.725 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:11:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:11:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218067760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:11:48 compute-0 nova_compute[253461]: 2025-11-22 04:11:48.243 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:11:48 compute-0 ceph-mon[75011]: pgmap v1908: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 383 B/s wr, 9 op/s
Nov 22 04:11:48 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/728198835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:11:48 compute-0 nova_compute[253461]: 2025-11-22 04:11:48.251 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:11:48 compute-0 nova_compute[253461]: 2025-11-22 04:11:48.309 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:11:48 compute-0 nova_compute[253461]: 2025-11-22 04:11:48.311 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:11:48 compute-0 nova_compute[253461]: 2025-11-22 04:11:48.311 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:11:48 compute-0 nova_compute[253461]: 2025-11-22 04:11:48.482 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-bbbe90c61e891b985bd8e3e5a744bdbc6dceaac08d7abc93978ebc2557137922-merged.mount: Deactivated successfully.
Nov 22 04:11:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 327 B/s wr, 7 op/s
Nov 22 04:11:48 compute-0 nova_compute[253461]: 2025-11-22 04:11:48.828 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:49 compute-0 podman[294784]: 2025-11-22 04:11:49.169504238 +0000 UTC m=+3.586749359 container remove 73195461fba6cd6ab18a838e1aa5abb11225b624caf4179b17c2ad8adf505451 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_sinoussi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:11:49 compute-0 sudo[294679]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:49 compute-0 systemd[1]: libpod-conmon-73195461fba6cd6ab18a838e1aa5abb11225b624caf4179b17c2ad8adf505451.scope: Deactivated successfully.
Nov 22 04:11:49 compute-0 sudo[294886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:11:49 compute-0 sudo[294886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:49 compute-0 sudo[294886]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:49 compute-0 nova_compute[253461]: 2025-11-22 04:11:49.312 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:49 compute-0 nova_compute[253461]: 2025-11-22 04:11:49.313 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:11:49 compute-0 nova_compute[253461]: 2025-11-22 04:11:49.313 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:11:49 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2218067760' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:11:49 compute-0 sudo[294911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:11:49 compute-0 sudo[294911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:49 compute-0 sudo[294911]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:49 compute-0 nova_compute[253461]: 2025-11-22 04:11:49.387 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:11:49 compute-0 nova_compute[253461]: 2025-11-22 04:11:49.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:49 compute-0 nova_compute[253461]: 2025-11-22 04:11:49.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:49 compute-0 nova_compute[253461]: 2025-11-22 04:11:49.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:49 compute-0 nova_compute[253461]: 2025-11-22 04:11:49.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:11:49 compute-0 sudo[294936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:11:49 compute-0 sudo[294936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:49 compute-0 sudo[294936]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:49 compute-0 sudo[294961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:11:49 compute-0 sudo[294961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:50 compute-0 podman[295028]: 2025-11-22 04:11:49.943728056 +0000 UTC m=+0.036586135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:11:50 compute-0 podman[295028]: 2025-11-22 04:11:50.098293444 +0000 UTC m=+0.191151493 container create 8cfa1e59c44ebacc955f045e842f54a9559af14e462af460d5614430839b54cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 04:11:50 compute-0 systemd[1]: Started libpod-conmon-8cfa1e59c44ebacc955f045e842f54a9559af14e462af460d5614430839b54cd.scope.
Nov 22 04:11:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:11:50 compute-0 nova_compute[253461]: 2025-11-22 04:11:50.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:50 compute-0 podman[295028]: 2025-11-22 04:11:50.438127896 +0000 UTC m=+0.530986045 container init 8cfa1e59c44ebacc955f045e842f54a9559af14e462af460d5614430839b54cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:11:50 compute-0 podman[295028]: 2025-11-22 04:11:50.451605397 +0000 UTC m=+0.544463486 container start 8cfa1e59c44ebacc955f045e842f54a9559af14e462af460d5614430839b54cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 04:11:50 compute-0 eloquent_williams[295044]: 167 167
Nov 22 04:11:50 compute-0 systemd[1]: libpod-8cfa1e59c44ebacc955f045e842f54a9559af14e462af460d5614430839b54cd.scope: Deactivated successfully.
Nov 22 04:11:50 compute-0 ceph-mon[75011]: pgmap v1909: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 327 B/s wr, 7 op/s
Nov 22 04:11:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 307 B/s wr, 15 op/s
Nov 22 04:11:50 compute-0 podman[295028]: 2025-11-22 04:11:50.609070091 +0000 UTC m=+0.701928230 container attach 8cfa1e59c44ebacc955f045e842f54a9559af14e462af460d5614430839b54cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:11:50 compute-0 podman[295028]: 2025-11-22 04:11:50.609988203 +0000 UTC m=+0.702846292 container died 8cfa1e59c44ebacc955f045e842f54a9559af14e462af460d5614430839b54cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:11:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-74c0d3d4e15650fd310c1d89c3bc8bae64a4f358744ceefb25b8af6fbc05da82-merged.mount: Deactivated successfully.
Nov 22 04:11:51 compute-0 podman[295028]: 2025-11-22 04:11:51.713261795 +0000 UTC m=+1.806119884 container remove 8cfa1e59c44ebacc955f045e842f54a9559af14e462af460d5614430839b54cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_williams, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 04:11:51 compute-0 systemd[1]: libpod-conmon-8cfa1e59c44ebacc955f045e842f54a9559af14e462af460d5614430839b54cd.scope: Deactivated successfully.
Nov 22 04:11:51 compute-0 ceph-mon[75011]: pgmap v1910: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 307 B/s wr, 15 op/s
Nov 22 04:11:52 compute-0 podman[295070]: 2025-11-22 04:11:51.943056989 +0000 UTC m=+0.043533083 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:11:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e503 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e503 do_prune osdmap full prune enabled
Nov 22 04:11:52 compute-0 podman[295070]: 2025-11-22 04:11:52.281878483 +0000 UTC m=+0.382354497 container create d84da7d93f120d5ed7efc9a414fc1fb74a0b7466ece36e3c2626aae8bc4d0279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kowalevski, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:11:52 compute-0 nova_compute[253461]: 2025-11-22 04:11:52.431 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 409 B/s wr, 17 op/s
Nov 22 04:11:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e504 e504: 3 total, 3 up, 3 in
Nov 22 04:11:52 compute-0 systemd[1]: Started libpod-conmon-d84da7d93f120d5ed7efc9a414fc1fb74a0b7466ece36e3c2626aae8bc4d0279.scope.
Nov 22 04:11:52 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484e42efea332bdc30d8e9330ebade92967124adfd6334547439406c7a541f37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484e42efea332bdc30d8e9330ebade92967124adfd6334547439406c7a541f37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484e42efea332bdc30d8e9330ebade92967124adfd6334547439406c7a541f37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/484e42efea332bdc30d8e9330ebade92967124adfd6334547439406c7a541f37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:11:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e504: 3 total, 3 up, 3 in
Nov 22 04:11:53 compute-0 podman[295070]: 2025-11-22 04:11:53.387777275 +0000 UTC m=+1.488253369 container init d84da7d93f120d5ed7efc9a414fc1fb74a0b7466ece36e3c2626aae8bc4d0279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 04:11:53 compute-0 podman[295070]: 2025-11-22 04:11:53.39886284 +0000 UTC m=+1.499338914 container start d84da7d93f120d5ed7efc9a414fc1fb74a0b7466ece36e3c2626aae8bc4d0279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:11:53 compute-0 nova_compute[253461]: 2025-11-22 04:11:53.506 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e504 do_prune osdmap full prune enabled
Nov 22 04:11:53 compute-0 podman[295070]: 2025-11-22 04:11:53.700905879 +0000 UTC m=+1.801381903 container attach d84da7d93f120d5ed7efc9a414fc1fb74a0b7466ece36e3c2626aae8bc4d0279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kowalevski, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 04:11:53 compute-0 podman[295089]: 2025-11-22 04:11:53.763783519 +0000 UTC m=+1.045092684 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
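The multipathd health_status event above embeds the container's full definition as a Python-literal dict in its config_data field. A minimal sketch of recovering such a literal with ast.literal_eval, using an abbreviated stand-in for the full value logged above:

    #!/usr/bin/env python3
    # Minimal sketch: the podman health_status events in this log carry the
    # container definition as a Python-literal dict (config_data=...).
    # ast.literal_eval turns a pasted copy of that literal into a dict. The
    # short literal below is an abbreviated stand-in for the full multipathd
    # entry logged above, not the complete value.
    import ast

    config_data = ("{'image': 'quay.io/podified-antelope-centos9/"
                   "openstack-multipathd:current-podified', 'net': 'host', "
                   "'privileged': True, 'restart': 'always'}")
    cfg = ast.literal_eval(config_data)
    print(cfg["image"], cfg["net"], cfg["privileged"])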
Nov 22 04:11:53 compute-0 nova_compute[253461]: 2025-11-22 04:11:53.830 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e505 e505: 3 total, 3 up, 3 in
Nov 22 04:11:54 compute-0 ceph-mon[75011]: pgmap v1911: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 409 B/s wr, 17 op/s
Nov 22 04:11:54 compute-0 ceph-mon[75011]: osdmap e504: 3 total, 3 up, 3 in
Nov 22 04:11:54 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e505: 3 total, 3 up, 3 in
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]: {
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:     "0": [
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:         {
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "devices": [
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "/dev/loop3"
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             ],
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_name": "ceph_lv0",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_size": "21470642176",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "name": "ceph_lv0",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "tags": {
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.cluster_name": "ceph",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.crush_device_class": "",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.encrypted": "0",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.osd_id": "0",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.type": "block",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.vdo": "0"
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             },
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "type": "block",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "vg_name": "ceph_vg0"
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:         }
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:     ],
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:     "1": [
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:         {
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "devices": [
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "/dev/loop4"
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             ],
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_name": "ceph_lv1",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_size": "21470642176",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "name": "ceph_lv1",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "tags": {
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.cluster_name": "ceph",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.crush_device_class": "",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.encrypted": "0",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.osd_id": "1",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.type": "block",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.vdo": "0"
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             },
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "type": "block",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "vg_name": "ceph_vg1"
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:         }
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:     ],
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:     "2": [
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:         {
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "devices": [
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "/dev/loop5"
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             ],
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_name": "ceph_lv2",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_size": "21470642176",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "name": "ceph_lv2",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "tags": {
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.cluster_name": "ceph",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.crush_device_class": "",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.encrypted": "0",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.osd_id": "2",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.type": "block",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:                 "ceph.vdo": "0"
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             },
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "type": "block",
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:             "vg_name": "ceph_vg2"
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:         }
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]:     ]
Nov 22 04:11:54 compute-0 boring_kowalevski[295087]: }
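The JSON above is the output of the `ceph-volume lvm list --format json` call dispatched through sudo at 04:11:49: one top-level key per OSD id, each holding a list of logical volumes whose tags dict duplicates the flat lv_tags string. A minimal sketch, assuming the JSON were saved to lvm_list.json (a hypothetical filename), of summarizing it:

    #!/usr/bin/env python3
    # Minimal sketch (not part of cephadm): summarize `ceph-volume lvm list
    # --format json` output like the block logged above. Assumes the JSON was
    # captured to lvm_list.json; that filename is hypothetical.
    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)

    for osd_id, volumes in sorted(lvm.items(), key=lambda kv: int(kv[0])):
        for vol in volumes:
            tags = vol["tags"]
            print(f"osd.{osd_id}: lv={vol['lv_path']} "
                  f"devices={','.join(vol['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"cluster={tags['ceph.cluster_fsid']}")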
Nov 22 04:11:54 compute-0 systemd[1]: libpod-d84da7d93f120d5ed7efc9a414fc1fb74a0b7466ece36e3c2626aae8bc4d0279.scope: Deactivated successfully.
Nov 22 04:11:54 compute-0 conmon[295087]: conmon d84da7d93f120d5ed7ef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d84da7d93f120d5ed7efc9a414fc1fb74a0b7466ece36e3c2626aae8bc4d0279.scope/container/memory.events
Nov 22 04:11:54 compute-0 podman[295070]: 2025-11-22 04:11:54.602238902 +0000 UTC m=+2.702714946 container died d84da7d93f120d5ed7efc9a414fc1fb74a0b7466ece36e3c2626aae8bc4d0279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kowalevski, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 04:11:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 127 B/s wr, 12 op/s
Nov 22 04:11:55 compute-0 nova_compute[253461]: 2025-11-22 04:11:55.431 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:11:55 compute-0 ceph-mon[75011]: osdmap e505: 3 total, 3 up, 3 in
Nov 22 04:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-484e42efea332bdc30d8e9330ebade92967124adfd6334547439406c7a541f37-merged.mount: Deactivated successfully.
Nov 22 04:11:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 639 B/s wr, 20 op/s
Nov 22 04:11:56 compute-0 ceph-mon[75011]: pgmap v1914: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 127 B/s wr, 12 op/s
Nov 22 04:11:57 compute-0 podman[295070]: 2025-11-22 04:11:57.262576662 +0000 UTC m=+5.363052676 container remove d84da7d93f120d5ed7efc9a414fc1fb74a0b7466ece36e3c2626aae8bc4d0279 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_kowalevski, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 04:11:57 compute-0 sudo[294961]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:57 compute-0 systemd[1]: libpod-conmon-d84da7d93f120d5ed7efc9a414fc1fb74a0b7466ece36e3c2626aae8bc4d0279.scope: Deactivated successfully.
Nov 22 04:11:57 compute-0 sudo[295129]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:11:57 compute-0 sudo[295129]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:57 compute-0 sudo[295129]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e505 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:11:57 compute-0 sudo[295154]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:11:57 compute-0 sudo[295154]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:57 compute-0 sudo[295154]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:57 compute-0 sudo[295179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:11:57 compute-0 sudo[295179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:57 compute-0 sudo[295179]: pam_unix(sudo:session): session closed for user root
Nov 22 04:11:57 compute-0 sshd-session[295127]: Invalid user admin from 27.79.43.64 port 58844
Nov 22 04:11:57 compute-0 sudo[295204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:11:57 compute-0 sudo[295204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:11:58 compute-0 podman[295269]: 2025-11-22 04:11:58.09619263 +0000 UTC m=+0.039125481 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:11:58 compute-0 nova_compute[253461]: 2025-11-22 04:11:58.505 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:58 compute-0 ceph-mon[75011]: pgmap v1915: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 639 B/s wr, 20 op/s
Nov 22 04:11:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 639 B/s wr, 9 op/s
Nov 22 04:11:58 compute-0 podman[295269]: 2025-11-22 04:11:58.681456601 +0000 UTC m=+0.624389372 container create 8d0f9bd7b365f3e43feefb61cf91ad315742369f6196153709ea89a317cae9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 04:11:58 compute-0 nova_compute[253461]: 2025-11-22 04:11:58.831 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:11:59 compute-0 systemd[1]: Started libpod-conmon-8d0f9bd7b365f3e43feefb61cf91ad315742369f6196153709ea89a317cae9fc.scope.
Nov 22 04:11:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:11:59 compute-0 podman[295269]: 2025-11-22 04:11:59.459054631 +0000 UTC m=+1.401987482 container init 8d0f9bd7b365f3e43feefb61cf91ad315742369f6196153709ea89a317cae9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 04:11:59 compute-0 podman[295269]: 2025-11-22 04:11:59.47211349 +0000 UTC m=+1.415046261 container start 8d0f9bd7b365f3e43feefb61cf91ad315742369f6196153709ea89a317cae9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:11:59 compute-0 great_lamarr[295285]: 167 167
Nov 22 04:11:59 compute-0 systemd[1]: libpod-8d0f9bd7b365f3e43feefb61cf91ad315742369f6196153709ea89a317cae9fc.scope: Deactivated successfully.
Nov 22 04:11:59 compute-0 podman[295269]: 2025-11-22 04:11:59.94590185 +0000 UTC m=+1.888834721 container attach 8d0f9bd7b365f3e43feefb61cf91ad315742369f6196153709ea89a317cae9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:11:59 compute-0 podman[295269]: 2025-11-22 04:11:59.946588549 +0000 UTC m=+1.889521370 container died 8d0f9bd7b365f3e43feefb61cf91ad315742369f6196153709ea89a317cae9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:12:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:12:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/260013535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 511 B/s wr, 18 op/s
Nov 22 04:12:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:12:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/260013535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:00 compute-0 ceph-mon[75011]: pgmap v1916: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 639 B/s wr, 9 op/s
Nov 22 04:12:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e61039aaa3f2c03f9e0a7fc9bb4c09e1d8f25e1438412da999f1b13b86169bb-merged.mount: Deactivated successfully.
Nov 22 04:12:02 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/260013535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:02 compute-0 ceph-mon[75011]: pgmap v1917: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 511 B/s wr, 18 op/s
Nov 22 04:12:02 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/260013535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:02 compute-0 podman[295269]: 2025-11-22 04:12:02.168550227 +0000 UTC m=+4.111483038 container remove 8d0f9bd7b365f3e43feefb61cf91ad315742369f6196153709ea89a317cae9fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 04:12:02 compute-0 systemd[1]: libpod-conmon-8d0f9bd7b365f3e43feefb61cf91ad315742369f6196153709ea89a317cae9fc.scope: Deactivated successfully.
Nov 22 04:12:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e505 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e505 do_prune osdmap full prune enabled
Nov 22 04:12:02 compute-0 podman[295310]: 2025-11-22 04:12:02.370153807 +0000 UTC m=+0.041179528 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:12:02 compute-0 podman[295310]: 2025-11-22 04:12:02.521765245 +0000 UTC m=+0.192790986 container create be02fb0710b91a2f706604b3c8c68141e5c767eb513e0ed508cd5d59c3e28f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Nov 22 04:12:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e506 e506: 3 total, 3 up, 3 in
Nov 22 04:12:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 418 B/s wr, 16 op/s
Nov 22 04:12:02 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e506: 3 total, 3 up, 3 in
Nov 22 04:12:02 compute-0 systemd[1]: Started libpod-conmon-be02fb0710b91a2f706604b3c8c68141e5c767eb513e0ed508cd5d59c3e28f5f.scope.
Nov 22 04:12:02 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe892a998fd5e884a0e913fded0e9d7378a812a067e4f2b4d1cfb72f093ab5db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe892a998fd5e884a0e913fded0e9d7378a812a067e4f2b4d1cfb72f093ab5db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe892a998fd5e884a0e913fded0e9d7378a812a067e4f2b4d1cfb72f093ab5db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe892a998fd5e884a0e913fded0e9d7378a812a067e4f2b4d1cfb72f093ab5db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:12:02 compute-0 podman[295310]: 2025-11-22 04:12:02.913217423 +0000 UTC m=+0.584243214 container init be02fb0710b91a2f706604b3c8c68141e5c767eb513e0ed508cd5d59c3e28f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:12:02 compute-0 podman[295310]: 2025-11-22 04:12:02.923846481 +0000 UTC m=+0.594872182 container start be02fb0710b91a2f706604b3c8c68141e5c767eb513e0ed508cd5d59c3e28f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ramanujan, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:12:03 compute-0 podman[295310]: 2025-11-22 04:12:03.03251158 +0000 UTC m=+0.703537321 container attach be02fb0710b91a2f706604b3c8c68141e5c767eb513e0ed508cd5d59c3e28f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 04:12:03 compute-0 nova_compute[253461]: 2025-11-22 04:12:03.506 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:03 compute-0 nova_compute[253461]: 2025-11-22 04:12:03.833 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:03 compute-0 ceph-mon[75011]: pgmap v1918: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 418 B/s wr, 16 op/s
Nov 22 04:12:03 compute-0 ceph-mon[75011]: osdmap e506: 3 total, 3 up, 3 in
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]: {
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "osd_id": 1,
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "type": "bluestore"
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:     },
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "osd_id": 0,
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "type": "bluestore"
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:     },
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "osd_id": 2,
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:         "type": "bluestore"
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]:     }
Nov 22 04:12:04 compute-0 fervent_ramanujan[295327]: }
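This second JSON block is the `ceph-volume raw list --format json` result from the 04:11:57 invocation, keyed by osd_uuid rather than OSD id; each osd_uuid matches the ceph.osd_fsid tag in the earlier lvm listing. A minimal sketch, again with hypothetical filenames, cross-checking the two views:

    #!/usr/bin/env python3
    # Minimal sketch (hypothetical filenames): verify that `ceph-volume raw
    # list` and `ceph-volume lvm list` agree on the OSD id <-> fsid mapping,
    # as they do in the output logged above (osd_uuid == ceph.osd_fsid).
    import json

    with open("lvm_list.json") as f:
        lvm = json.load(f)
    with open("raw_list.json") as f:
        raw = json.load(f)

    # Build osd_fsid -> osd_id from the LVM tags.
    lvm_map = {vol["tags"]["ceph.osd_fsid"]: int(osd_id)
               for osd_id, vols in lvm.items() for vol in vols}

    for osd_uuid, entry in raw.items():
        assert entry["type"] == "bluestore"
        assert lvm_map.get(osd_uuid) == entry["osd_id"], osd_uuid
        print(f"osd.{entry['osd_id']} ok: {entry['device']} ({osd_uuid})")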
Nov 22 04:12:04 compute-0 systemd[1]: libpod-be02fb0710b91a2f706604b3c8c68141e5c767eb513e0ed508cd5d59c3e28f5f.scope: Deactivated successfully.
Nov 22 04:12:04 compute-0 podman[295310]: 2025-11-22 04:12:04.048451924 +0000 UTC m=+1.719477625 container died be02fb0710b91a2f706604b3c8c68141e5c767eb513e0ed508cd5d59c3e28f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:12:04 compute-0 systemd[1]: libpod-be02fb0710b91a2f706604b3c8c68141e5c767eb513e0ed508cd5d59c3e28f5f.scope: Consumed 1.121s CPU time.
Nov 22 04:12:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe892a998fd5e884a0e913fded0e9d7378a812a067e4f2b4d1cfb72f093ab5db-merged.mount: Deactivated successfully.
Nov 22 04:12:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 409 B/s wr, 15 op/s
Nov 22 04:12:04 compute-0 podman[295310]: 2025-11-22 04:12:04.652637768 +0000 UTC m=+2.323663509 container remove be02fb0710b91a2f706604b3c8c68141e5c767eb513e0ed508cd5d59c3e28f5f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ramanujan, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:12:04 compute-0 sudo[295204]: pam_unix(sudo:session): session closed for user root
Nov 22 04:12:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:12:04 compute-0 systemd[1]: libpod-conmon-be02fb0710b91a2f706604b3c8c68141e5c767eb513e0ed508cd5d59c3e28f5f.scope: Deactivated successfully.
Nov 22 04:12:04 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:12:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:12:04 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:12:04 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ae090883-5636-4244-a00c-939355d18f68 does not exist
Nov 22 04:12:04 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 2aa4f640-7029-47f9-9828-762b41e3ea00 does not exist
Nov 22 04:12:04 compute-0 sudo[295371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:12:04 compute-0 sudo[295371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:12:04 compute-0 sudo[295371]: pam_unix(sudo:session): session closed for user root
Nov 22 04:12:05 compute-0 sudo[295396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:12:05 compute-0 sudo[295396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:12:05 compute-0 sudo[295396]: pam_unix(sudo:session): session closed for user root
Nov 22 04:12:05 compute-0 ceph-mon[75011]: pgmap v1920: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 409 B/s wr, 15 op/s
Nov 22 04:12:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:12:05 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:12:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:12:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:12:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:12:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:12:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:12:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:12:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e506 do_prune osdmap full prune enabled
Nov 22 04:12:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e507 e507: 3 total, 3 up, 3 in
Nov 22 04:12:06 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e507: 3 total, 3 up, 3 in
Nov 22 04:12:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 767 B/s wr, 24 op/s
Nov 22 04:12:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e507 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:07 compute-0 ceph-mon[75011]: osdmap e507: 3 total, 3 up, 3 in
Nov 22 04:12:07 compute-0 sshd-session[295127]: Connection closed by invalid user admin 27.79.43.64 port 58844 [preauth]
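The two sshd-session lines at 04:11:57 and 04:12:07 record a failed login attempt for the non-existent user admin from 27.79.43.64, interleaved with the cephadm activity. A minimal sketch, assuming a plain-text journal dump at /var/log/messages (the path is an assumption), of tallying such attempts:

    #!/usr/bin/env python3
    # Minimal sketch (hypothetical log path): count failed-login sources like
    # the "Invalid user admin from 27.79.43.64 port 58844" line recorded
    # above, reading a plain-text journal dump.
    import re
    from collections import Counter

    pattern = re.compile(r"Invalid user (\S+) from (\S+) port \d+")
    hits = Counter()

    with open("/var/log/messages") as f:  # path is an assumption
        for line in f:
            m = pattern.search(line)
            if m:
                hits[(m.group(1), m.group(2))] += 1

    for (user, addr), n in hits.most_common():
        print(f"{n:4d}  user={user} from={addr}")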
Nov 22 04:12:08 compute-0 podman[295421]: 2025-11-22 04:12:08.382415696 +0000 UTC m=+0.060477613 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:12:08 compute-0 podman[295422]: 2025-11-22 04:12:08.403670202 +0000 UTC m=+0.080426694 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 22 04:12:08 compute-0 nova_compute[253461]: 2025-11-22 04:12:08.508 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 767 B/s wr, 14 op/s
Nov 22 04:12:08 compute-0 nova_compute[253461]: 2025-11-22 04:12:08.834 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:09 compute-0 ceph-mon[75011]: pgmap v1922: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 767 B/s wr, 24 op/s
Nov 22 04:12:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 767 B/s wr, 16 op/s
Nov 22 04:12:10 compute-0 ceph-mon[75011]: pgmap v1923: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 767 B/s wr, 14 op/s
Nov 22 04:12:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:12:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1737113537' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:12:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1737113537' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:12 compute-0 ceph-mon[75011]: pgmap v1924: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 767 B/s wr, 16 op/s
Nov 22 04:12:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1737113537' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1737113537' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.0 KiB/s wr, 26 op/s
Nov 22 04:12:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e507 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:13 compute-0 nova_compute[253461]: 2025-11-22 04:12:13.512 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:13 compute-0 nova_compute[253461]: 2025-11-22 04:12:13.836 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:14 compute-0 ceph-mon[75011]: pgmap v1925: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.0 KiB/s wr, 26 op/s
Nov 22 04:12:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1023 B/s wr, 27 op/s
Nov 22 04:12:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:12:14.703 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:12:14 compute-0 nova_compute[253461]: 2025-11-22 04:12:14.703 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:12:14.705 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:12:16 compute-0 ceph-mon[75011]: pgmap v1926: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1023 B/s wr, 27 op/s
Nov 22 04:12:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 407 B/s wr, 17 op/s
Nov 22 04:12:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e507 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e507 do_prune osdmap full prune enabled
Nov 22 04:12:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e508 e508: 3 total, 3 up, 3 in
Nov 22 04:12:17 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e508: 3 total, 3 up, 3 in
Nov 22 04:12:18 compute-0 ceph-mon[75011]: pgmap v1927: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 407 B/s wr, 17 op/s
Nov 22 04:12:18 compute-0 ceph-mon[75011]: osdmap e508: 3 total, 3 up, 3 in
Nov 22 04:12:18 compute-0 nova_compute[253461]: 2025-11-22 04:12:18.512 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 409 B/s wr, 17 op/s
Nov 22 04:12:18 compute-0 nova_compute[253461]: 2025-11-22 04:12:18.838 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e508 do_prune osdmap full prune enabled
Nov 22 04:12:20 compute-0 ceph-mon[75011]: pgmap v1929: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 409 B/s wr, 17 op/s
Nov 22 04:12:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e509 e509: 3 total, 3 up, 3 in
Nov 22 04:12:20 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e509: 3 total, 3 up, 3 in
Nov 22 04:12:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
Nov 22 04:12:21 compute-0 ceph-mon[75011]: osdmap e509: 3 total, 3 up, 3 in
Nov 22 04:12:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 1023 B/s wr, 10 op/s
Nov 22 04:12:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e509 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:22 compute-0 ceph-mon[75011]: pgmap v1931: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 255 B/s wr, 2 op/s
Nov 22 04:12:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:12:23.025 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:12:23.026 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:12:23.026 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:23 compute-0 nova_compute[253461]: 2025-11-22 04:12:23.514 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:23 compute-0 nova_compute[253461]: 2025-11-22 04:12:23.839 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e509 do_prune osdmap full prune enabled
Nov 22 04:12:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e510 e510: 3 total, 3 up, 3 in
Nov 22 04:12:24 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e510: 3 total, 3 up, 3 in
Nov 22 04:12:24 compute-0 ceph-mon[75011]: pgmap v1932: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 1023 B/s wr, 10 op/s
Nov 22 04:12:24 compute-0 podman[295465]: 2025-11-22 04:12:24.441498544 +0000 UTC m=+0.104749446 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:12:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 1.2 KiB/s wr, 14 op/s
Nov 22 04:12:24 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:12:24.708 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:12:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:12:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1113697156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:12:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1113697156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:25 compute-0 ceph-mon[75011]: osdmap e510: 3 total, 3 up, 3 in
Nov 22 04:12:25 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1113697156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:25 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1113697156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:26 compute-0 ceph-mon[75011]: pgmap v1934: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 9.9 KiB/s rd, 1.2 KiB/s wr, 14 op/s
Nov 22 04:12:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.2 KiB/s wr, 41 op/s
Nov 22 04:12:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e510 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e510 do_prune osdmap full prune enabled
Nov 22 04:12:28 compute-0 ceph-mon[75011]: pgmap v1935: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.2 KiB/s wr, 41 op/s
Nov 22 04:12:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e511 e511: 3 total, 3 up, 3 in
Nov 22 04:12:28 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e511: 3 total, 3 up, 3 in
Nov 22 04:12:28 compute-0 nova_compute[253461]: 2025-11-22 04:12:28.517 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.0 KiB/s wr, 40 op/s
Nov 22 04:12:28 compute-0 nova_compute[253461]: 2025-11-22 04:12:28.840 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e511 do_prune osdmap full prune enabled
Nov 22 04:12:29 compute-0 ceph-mon[75011]: osdmap e511: 3 total, 3 up, 3 in
Nov 22 04:12:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e512 e512: 3 total, 3 up, 3 in
Nov 22 04:12:29 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e512: 3 total, 3 up, 3 in
Nov 22 04:12:30 compute-0 ceph-mon[75011]: pgmap v1937: 305 pgs: 305 active+clean; 88 MiB data, 448 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 2.0 KiB/s wr, 40 op/s
Nov 22 04:12:30 compute-0 ceph-mon[75011]: osdmap e512: 3 total, 3 up, 3 in
Nov 22 04:12:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:12:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/689442304' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:12:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/689442304' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 88 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.2 KiB/s wr, 67 op/s
Nov 22 04:12:31 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/689442304' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:31 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/689442304' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:32 compute-0 ceph-mon[75011]: pgmap v1939: 305 pgs: 305 active+clean; 88 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.2 KiB/s wr, 67 op/s
Nov 22 04:12:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 88 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.1 KiB/s wr, 58 op/s
Nov 22 04:12:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e512 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e512 do_prune osdmap full prune enabled
Nov 22 04:12:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e513 e513: 3 total, 3 up, 3 in
Nov 22 04:12:32 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e513: 3 total, 3 up, 3 in
Nov 22 04:12:33 compute-0 nova_compute[253461]: 2025-11-22 04:12:33.521 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:33 compute-0 ceph-mon[75011]: pgmap v1940: 305 pgs: 305 active+clean; 88 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 3.1 KiB/s wr, 58 op/s
Nov 22 04:12:33 compute-0 ceph-mon[75011]: osdmap e513: 3 total, 3 up, 3 in
Nov 22 04:12:33 compute-0 nova_compute[253461]: 2025-11-22 04:12:33.842 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 88 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.8 KiB/s wr, 62 op/s
Nov 22 04:12:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e513 do_prune osdmap full prune enabled
Nov 22 04:12:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e514 e514: 3 total, 3 up, 3 in
Nov 22 04:12:34 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e514: 3 total, 3 up, 3 in
Nov 22 04:12:35 compute-0 ceph-mon[75011]: pgmap v1942: 305 pgs: 305 active+clean; 88 MiB data, 452 MiB used, 60 GiB / 60 GiB avail; 45 KiB/s rd, 2.8 KiB/s wr, 62 op/s
Nov 22 04:12:35 compute-0 ceph-mon[75011]: osdmap e514: 3 total, 3 up, 3 in
Nov 22 04:12:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e514 do_prune osdmap full prune enabled
Nov 22 04:12:35 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e515 e515: 3 total, 3 up, 3 in
Nov 22 04:12:35 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e515: 3 total, 3 up, 3 in
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:12:36
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'backups', 'volumes']
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 KiB/s wr, 40 op/s
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:12:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:12:36 compute-0 ceph-mon[75011]: osdmap e515: 3 total, 3 up, 3 in
Nov 22 04:12:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:12:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/890543660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:12:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/890543660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e515 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e515 do_prune osdmap full prune enabled
Nov 22 04:12:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e516 e516: 3 total, 3 up, 3 in
Nov 22 04:12:37 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e516: 3 total, 3 up, 3 in
Nov 22 04:12:37 compute-0 ceph-mon[75011]: pgmap v1945: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.8 KiB/s wr, 40 op/s
Nov 22 04:12:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/890543660' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/890543660' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:37 compute-0 ceph-mon[75011]: osdmap e516: 3 total, 3 up, 3 in
Nov 22 04:12:38 compute-0 nova_compute[253461]: 2025-11-22 04:12:38.521 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.2 KiB/s wr, 36 op/s
Nov 22 04:12:38 compute-0 nova_compute[253461]: 2025-11-22 04:12:38.845 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:39 compute-0 sshd-session[295487]: Connection closed by authenticating user root 27.79.43.64 port 50662 [preauth]
Nov 22 04:12:39 compute-0 podman[295489]: 2025-11-22 04:12:39.434088149 +0000 UTC m=+0.102239031 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:12:39 compute-0 podman[295490]: 2025-11-22 04:12:39.475807669 +0000 UTC m=+0.143278443 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:12:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e516 do_prune osdmap full prune enabled
Nov 22 04:12:40 compute-0 ceph-mon[75011]: pgmap v1947: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.2 KiB/s wr, 36 op/s
Nov 22 04:12:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e517 e517: 3 total, 3 up, 3 in
Nov 22 04:12:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.5 KiB/s wr, 58 op/s
Nov 22 04:12:40 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e517: 3 total, 3 up, 3 in
Nov 22 04:12:41 compute-0 ceph-mon[75011]: pgmap v1948: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 2.5 KiB/s wr, 58 op/s
Nov 22 04:12:41 compute-0 ceph-mon[75011]: osdmap e517: 3 total, 3 up, 3 in
Nov 22 04:12:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.2 KiB/s wr, 48 op/s
Nov 22 04:12:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e517 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e517 do_prune osdmap full prune enabled
Nov 22 04:12:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e518 e518: 3 total, 3 up, 3 in
Nov 22 04:12:42 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e518: 3 total, 3 up, 3 in
Nov 22 04:12:43 compute-0 nova_compute[253461]: 2025-11-22 04:12:43.523 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:43 compute-0 nova_compute[253461]: 2025-11-22 04:12:43.880 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:44 compute-0 ceph-mon[75011]: pgmap v1950: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 2.2 KiB/s wr, 48 op/s
Nov 22 04:12:44 compute-0 ceph-mon[75011]: osdmap e518: 3 total, 3 up, 3 in
Nov 22 04:12:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Nov 22 04:12:46 compute-0 ceph-mon[75011]: pgmap v1952: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 53 KiB/s rd, 3.5 KiB/s wr, 71 op/s
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034733244611577784 of space, bias 1.0, pg target 0.10419973383473335 quantized to 32 (current 32)
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:12:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.1 KiB/s wr, 63 op/s
Nov 22 04:12:47 compute-0 nova_compute[253461]: 2025-11-22 04:12:47.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:47 compute-0 nova_compute[253461]: 2025-11-22 04:12:47.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:47 compute-0 nova_compute[253461]: 2025-11-22 04:12:47.497 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:47 compute-0 nova_compute[253461]: 2025-11-22 04:12:47.497 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:47 compute-0 nova_compute[253461]: 2025-11-22 04:12:47.497 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:47 compute-0 nova_compute[253461]: 2025-11-22 04:12:47.498 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:12:47 compute-0 nova_compute[253461]: 2025-11-22 04:12:47.498 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:12:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e518 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e518 do_prune osdmap full prune enabled
Nov 22 04:12:47 compute-0 ceph-mon[75011]: pgmap v1953: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 3.1 KiB/s wr, 63 op/s
Nov 22 04:12:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:12:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/971180234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:12:48 compute-0 nova_compute[253461]: 2025-11-22 04:12:48.143 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:12:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e519 e519: 3 total, 3 up, 3 in
Nov 22 04:12:48 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e519: 3 total, 3 up, 3 in
Nov 22 04:12:48 compute-0 nova_compute[253461]: 2025-11-22 04:12:48.374 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:12:48 compute-0 nova_compute[253461]: 2025-11-22 04:12:48.375 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4480MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:12:48 compute-0 nova_compute[253461]: 2025-11-22 04:12:48.376 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:12:48 compute-0 nova_compute[253461]: 2025-11-22 04:12:48.376 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:12:48 compute-0 nova_compute[253461]: 2025-11-22 04:12:48.514 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:12:48 compute-0 nova_compute[253461]: 2025-11-22 04:12:48.514 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:12:48 compute-0 nova_compute[253461]: 2025-11-22 04:12:48.527 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:48 compute-0 nova_compute[253461]: 2025-11-22 04:12:48.536 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:12:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 KiB/s wr, 25 op/s
Nov 22 04:12:48 compute-0 nova_compute[253461]: 2025-11-22 04:12:48.881 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:12:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2252788398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:12:48 compute-0 nova_compute[253461]: 2025-11-22 04:12:48.970 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:12:48 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/971180234' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:12:48 compute-0 ceph-mon[75011]: osdmap e519: 3 total, 3 up, 3 in
Nov 22 04:12:48 compute-0 nova_compute[253461]: 2025-11-22 04:12:48.977 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:12:49 compute-0 nova_compute[253461]: 2025-11-22 04:12:49.003 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:12:49 compute-0 nova_compute[253461]: 2025-11-22 04:12:49.006 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:12:49 compute-0 nova_compute[253461]: 2025-11-22 04:12:49.007 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:12:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:12:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1877186101' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:12:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1877186101' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:49 compute-0 ceph-mon[75011]: pgmap v1955: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 KiB/s wr, 25 op/s
Nov 22 04:12:49 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2252788398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:12:49 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1877186101' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:49 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1877186101' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Nov 22 04:12:51 compute-0 nova_compute[253461]: 2025-11-22 04:12:51.006 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:51 compute-0 nova_compute[253461]: 2025-11-22 04:12:51.007 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:12:51 compute-0 nova_compute[253461]: 2025-11-22 04:12:51.007 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:12:51 compute-0 nova_compute[253461]: 2025-11-22 04:12:51.025 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:12:51 compute-0 nova_compute[253461]: 2025-11-22 04:12:51.026 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:51 compute-0 nova_compute[253461]: 2025-11-22 04:12:51.026 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:12:51 compute-0 nova_compute[253461]: 2025-11-22 04:12:51.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:51 compute-0 nova_compute[253461]: 2025-11-22 04:12:51.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:51 compute-0 nova_compute[253461]: 2025-11-22 04:12:51.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e519 do_prune osdmap full prune enabled
Nov 22 04:12:51 compute-0 ceph-mon[75011]: pgmap v1956: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Nov 22 04:12:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e520 e520: 3 total, 3 up, 3 in
Nov 22 04:12:52 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e520: 3 total, 3 up, 3 in
Nov 22 04:12:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 639 B/s wr, 22 op/s
Nov 22 04:12:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e520 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e520 do_prune osdmap full prune enabled
Nov 22 04:12:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e521 e521: 3 total, 3 up, 3 in
Nov 22 04:12:53 compute-0 ceph-mon[75011]: osdmap e520: 3 total, 3 up, 3 in
Nov 22 04:12:53 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e521: 3 total, 3 up, 3 in
Nov 22 04:12:53 compute-0 nova_compute[253461]: 2025-11-22 04:12:53.425 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:53 compute-0 nova_compute[253461]: 2025-11-22 04:12:53.529 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:53 compute-0 nova_compute[253461]: 2025-11-22 04:12:53.885 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:54 compute-0 ceph-mon[75011]: pgmap v1958: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 639 B/s wr, 22 op/s
Nov 22 04:12:54 compute-0 ceph-mon[75011]: osdmap e521: 3 total, 3 up, 3 in
Nov 22 04:12:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:12:54 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1324142786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:12:54 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1324142786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:54 compute-0 nova_compute[253461]: 2025-11-22 04:12:54.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 964 B/s wr, 35 op/s
Nov 22 04:12:55 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1324142786' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:12:55 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1324142786' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:12:55 compute-0 podman[295580]: 2025-11-22 04:12:55.426606907 +0000 UTC m=+0.102885815 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 22 04:12:56 compute-0 ceph-mon[75011]: pgmap v1960: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 964 B/s wr, 35 op/s
Nov 22 04:12:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 2.5 KiB/s wr, 65 op/s
Nov 22 04:12:57 compute-0 nova_compute[253461]: 2025-11-22 04:12:57.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:12:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e521 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:12:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e521 do_prune osdmap full prune enabled
Nov 22 04:12:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 e522: 3 total, 3 up, 3 in
Nov 22 04:12:57 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e522: 3 total, 3 up, 3 in
Nov 22 04:12:58 compute-0 sshd-session[295578]: Connection closed by authenticating user root 27.79.43.64 port 43338 [preauth]
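
The sshd-session line above is an unauthenticated probe: a remote peer (27.79.43.64) started root authentication and dropped the connection pre-auth. A small sketch for tallying such probes per user and source IP from journal output (pipe in e.g. `journalctl -t sshd-session`; the regex mirrors the line format above):

    import re
    import sys
    from collections import Counter

    # Matches: "Connection closed by authenticating user root 27.79.43.64 port 43338 [preauth]"
    PAT = re.compile(r'Connection closed by authenticating user (\S+) '
                     r'([\d.]+) port \d+ \[preauth\]')

    hits = Counter()
    for line in sys.stdin:
        m = PAT.search(line)
        if m:
            hits[(m.group(1), m.group(2))] += 1

    for (user, ip), n in hits.most_common(10):
        print(f'{n:6d}  {user}@{ip}')
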
Nov 22 04:12:58 compute-0 ceph-mon[75011]: pgmap v1961: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 2.5 KiB/s wr, 65 op/s
Nov 22 04:12:58 compute-0 ceph-mon[75011]: osdmap e522: 3 total, 3 up, 3 in
Nov 22 04:12:58 compute-0 nova_compute[253461]: 2025-11-22 04:12:58.580 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:12:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.4 KiB/s wr, 53 op/s
Nov 22 04:12:58 compute-0 nova_compute[253461]: 2025-11-22 04:12:58.887 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:00 compute-0 ceph-mon[75011]: pgmap v1963: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.4 KiB/s wr, 53 op/s
Nov 22 04:13:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:13:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457045427' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:13:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:13:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/457045427' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:13:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Nov 22 04:13:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/457045427' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:13:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/457045427' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:13:02 compute-0 ceph-mon[75011]: pgmap v1964: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Nov 22 04:13:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 37 op/s
Nov 22 04:13:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
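
The _set_new_cache_sizes figures are bytes from the mon's periodic cache autotuning; for scale, cache_size 1020054731 is about 973 MiB, carved into inc_alloc 328 MiB, full_alloc 332 MiB and kv_alloc 304 MiB. A two-line conversion of the values as logged:

    # Convert the mon cache-tuning figures above from bytes to MiB.
    for name, b in [('cache_size', 1020054731), ('inc_alloc', 343932928),
                    ('full_alloc', 348127232), ('kv_alloc', 318767104)]:
        print(f'{name}: {b / 2**20:7.1f} MiB')
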
Nov 22 04:13:03 compute-0 nova_compute[253461]: 2025-11-22 04:13:03.581 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:03 compute-0 nova_compute[253461]: 2025-11-22 04:13:03.890 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:04 compute-0 ceph-mon[75011]: pgmap v1965: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.7 KiB/s wr, 37 op/s
Nov 22 04:13:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Nov 22 04:13:05 compute-0 sudo[295600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:13:05 compute-0 sudo[295600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:05 compute-0 sudo[295600]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:05 compute-0 sudo[295625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:13:05 compute-0 sudo[295625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:05 compute-0 sudo[295625]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:05 compute-0 sudo[295650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:13:05 compute-0 sudo[295650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:05 compute-0 sudo[295650]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:05 compute-0 sudo[295675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:13:05 compute-0 sudo[295675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:05 compute-0 sudo[295675]: pam_unix(sudo:session): session closed for user root
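
The sudo triple above (/bin/true, /bin/which python3, then the long cephadm invocation) is the cephadm orchestrator driving this host over SSH as ceph-admin: a connectivity probe, interpreter discovery, then `gather-facts` via the cephadm binary it copied under /var/lib/ceph/<fsid>/. The same call can be made directly; a sketch using the exact binary path from the log (the JSON keys printed at the end are expected fields, treat them as assumptions):

    import json
    import subprocess

    CEPHADM = ('/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/'
               'cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d')

    # gather-facts prints host facts as a JSON object on stdout.
    out = subprocess.run(['sudo', 'python3', CEPHADM, '--timeout', '895', 'gather-facts'],
                         capture_output=True, text=True, check=True)
    facts = json.loads(out.stdout)
    print(facts.get('hostname'), facts.get('kernel'))  # assumed keys
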
Nov 22 04:13:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:13:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:13:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:13:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:13:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:13:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:13:05 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 2143c4ad-13ff-4486-9c4f-0bde696dce9f does not exist
Nov 22 04:13:05 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev b02e4360-ac6f-41ad-ae6c-7c263faead04 does not exist
Nov 22 04:13:05 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 3749ad4b-eaac-4cfe-96a1-e737c6ed880f does not exist
Nov 22 04:13:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:13:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:13:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:13:05 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:13:05 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:13:05 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
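
This mon_command burst from mgr.compute-0.wbwfxq is cephadm refreshing its per-host deploy material: "config generate-minimal-conf" produces the stripped ceph.conf it distributes, the two "auth get" calls fetch the client.admin and bootstrap-osd keyrings, and the config-key set persists the module's osd_remove_queue state (its audit line is truncated in this journal). The minimal-conf call, reproduced through rados as a sketch:

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b'')
        # On success, out holds a minimal ceph.conf (fsid + mon_host, essentially).
        print(out.decode() if ret == 0 else errs)
    finally:
        cluster.shutdown()
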
Nov 22 04:13:06 compute-0 sudo[295731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:13:06 compute-0 sudo[295731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:06 compute-0 sudo[295731]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:06 compute-0 sudo[295756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:13:06 compute-0 sudo[295756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:06 compute-0 sudo[295756]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:06 compute-0 sudo[295781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:13:06 compute-0 sudo[295781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:06 compute-0 sudo[295781]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:13:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:13:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:13:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:13:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:13:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:13:06 compute-0 sudo[295806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:13:06 compute-0 sudo[295806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:06 compute-0 ceph-mon[75011]: pgmap v1966: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Nov 22 04:13:06 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:13:06 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:13:06 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:13:06 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:13:06 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:13:06 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:13:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:06 compute-0 podman[295874]: 2025-11-22 04:13:06.720290486 +0000 UTC m=+0.060600124 container create 0064bfcc3c7727b446b160636afc851136a1dc8c9f39d37e0a91ca049f0af8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:13:06 compute-0 systemd[1]: Started libpod-conmon-0064bfcc3c7727b446b160636afc851136a1dc8c9f39d37e0a91ca049f0af8c4.scope.
Nov 22 04:13:06 compute-0 podman[295874]: 2025-11-22 04:13:06.696203183 +0000 UTC m=+0.036512830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:13:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:13:06 compute-0 podman[295874]: 2025-11-22 04:13:06.851370841 +0000 UTC m=+0.191680518 container init 0064bfcc3c7727b446b160636afc851136a1dc8c9f39d37e0a91ca049f0af8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:13:06 compute-0 podman[295874]: 2025-11-22 04:13:06.867102142 +0000 UTC m=+0.207411789 container start 0064bfcc3c7727b446b160636afc851136a1dc8c9f39d37e0a91ca049f0af8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:13:06 compute-0 podman[295874]: 2025-11-22 04:13:06.873104167 +0000 UTC m=+0.213413785 container attach 0064bfcc3c7727b446b160636afc851136a1dc8c9f39d37e0a91ca049f0af8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 04:13:06 compute-0 gallant_varahamihira[295890]: 167 167
Nov 22 04:13:06 compute-0 systemd[1]: libpod-0064bfcc3c7727b446b160636afc851136a1dc8c9f39d37e0a91ca049f0af8c4.scope: Deactivated successfully.
Nov 22 04:13:06 compute-0 podman[295874]: 2025-11-22 04:13:06.878762565 +0000 UTC m=+0.219072212 container died 0064bfcc3c7727b446b160636afc851136a1dc8c9f39d37e0a91ca049f0af8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:13:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b36fcd7cea348d01015fb5cf23a555fd422751fd36b0b4eede5acdf31a7efedf-merged.mount: Deactivated successfully.
Nov 22 04:13:06 compute-0 podman[295874]: 2025-11-22 04:13:06.932041891 +0000 UTC m=+0.272351509 container remove 0064bfcc3c7727b446b160636afc851136a1dc8c9f39d37e0a91ca049f0af8c4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:13:06 compute-0 systemd[1]: libpod-conmon-0064bfcc3c7727b446b160636afc851136a1dc8c9f39d37e0a91ca049f0af8c4.scope: Deactivated successfully.
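
The one-shot gallant_varahamihira container above prints only "167 167" and exits; this matches cephadm's uid/gid probe (the ceph user is uid/gid 167 in these images), run before the real ceph-volume work. The create, init, start, attach, died, remove sequence is the normal lifecycle of these short-lived podman containers. A sketch of an equivalent probe, under the assumption that stat'ing /var/lib/ceph inside the image is what produces the pair:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0')

    # Assumed probe: owner uid/gid of /var/lib/ceph inside the image.
    out = subprocess.run(['podman', 'run', '--rm', '--entrypoint', 'stat',
                          IMAGE, '-c', '%u %g', '/var/lib/ceph'],
                         capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # expected: "167 167"
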
Nov 22 04:13:07 compute-0 podman[295914]: 2025-11-22 04:13:07.150800006 +0000 UTC m=+0.058100706 container create e241dcd54b5f123a48d069bbe94109ccda93dd2e97265c8067e50c5046b8b9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_einstein, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 04:13:07 compute-0 systemd[1]: Started libpod-conmon-e241dcd54b5f123a48d069bbe94109ccda93dd2e97265c8067e50c5046b8b9d8.scope.
Nov 22 04:13:07 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63d93d7d738332a7b5c0dacb0262d041ce87d5b8e77319ba7ce307448592e6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63d93d7d738332a7b5c0dacb0262d041ce87d5b8e77319ba7ce307448592e6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63d93d7d738332a7b5c0dacb0262d041ce87d5b8e77319ba7ce307448592e6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63d93d7d738332a7b5c0dacb0262d041ce87d5b8e77319ba7ce307448592e6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63d93d7d738332a7b5c0dacb0262d041ce87d5b8e77319ba7ce307448592e6f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:07 compute-0 podman[295914]: 2025-11-22 04:13:07.133190902 +0000 UTC m=+0.040491622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:13:07 compute-0 podman[295914]: 2025-11-22 04:13:07.234441955 +0000 UTC m=+0.141742675 container init e241dcd54b5f123a48d069bbe94109ccda93dd2e97265c8067e50c5046b8b9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_einstein, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:13:07 compute-0 podman[295914]: 2025-11-22 04:13:07.242595219 +0000 UTC m=+0.149895919 container start e241dcd54b5f123a48d069bbe94109ccda93dd2e97265c8067e50c5046b8b9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:13:07 compute-0 podman[295914]: 2025-11-22 04:13:07.247032897 +0000 UTC m=+0.154333627 container attach e241dcd54b5f123a48d069bbe94109ccda93dd2e97265c8067e50c5046b8b9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_einstein, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:13:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:08 compute-0 ceph-mon[75011]: pgmap v1967: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:08 compute-0 blissful_einstein[295931]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:13:08 compute-0 blissful_einstein[295931]: --> relative data size: 1.0
Nov 22 04:13:08 compute-0 blissful_einstein[295931]: --> All data devices are unavailable
Nov 22 04:13:08 compute-0 systemd[1]: libpod-e241dcd54b5f123a48d069bbe94109ccda93dd2e97265c8067e50c5046b8b9d8.scope: Deactivated successfully.
Nov 22 04:13:08 compute-0 podman[295914]: 2025-11-22 04:13:08.417844302 +0000 UTC m=+1.325145012 container died e241dcd54b5f123a48d069bbe94109ccda93dd2e97265c8067e50c5046b8b9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:13:08 compute-0 systemd[1]: libpod-e241dcd54b5f123a48d069bbe94109ccda93dd2e97265c8067e50c5046b8b9d8.scope: Consumed 1.129s CPU time.
Nov 22 04:13:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-c63d93d7d738332a7b5c0dacb0262d041ce87d5b8e77319ba7ce307448592e6f-merged.mount: Deactivated successfully.
Nov 22 04:13:08 compute-0 podman[295914]: 2025-11-22 04:13:08.485933355 +0000 UTC m=+1.393234055 container remove e241dcd54b5f123a48d069bbe94109ccda93dd2e97265c8067e50c5046b8b9d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_einstein, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 04:13:08 compute-0 systemd[1]: libpod-conmon-e241dcd54b5f123a48d069bbe94109ccda93dd2e97265c8067e50c5046b8b9d8.scope: Deactivated successfully.
Nov 22 04:13:08 compute-0 sudo[295806]: pam_unix(sudo:session): session closed for user root
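
That closes the blissful_einstein run: `ceph-volume lvm batch --no-auto` over the three pre-created LVs reported "passed data devices: 0 physical, 3 LVM" and then "All data devices are unavailable", i.e. it created nothing. Here that is most plausibly the idempotent case, since each LV already carries an OSD (the lvm list output further down shows ceph.osd_id tags on all three). One way to check the same condition from the host, reading the LVM tags that ceph-volume sets (needs root):

    import json
    import subprocess

    # lvs can emit its report as JSON; ceph-volume tags consumed LVs with ceph.*.
    out = subprocess.run(['lvs', '--reportformat', 'json', '-o', 'lv_path,lv_tags'],
                         capture_output=True, text=True, check=True)
    for lv in json.loads(out.stdout)['report'][0]['lv']:
        taken = 'ceph.osd_id=' in lv['lv_tags']
        print(f"{lv['lv_path']}: {'in use by an OSD' if taken else 'free'}")
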
Nov 22 04:13:08 compute-0 nova_compute[253461]: 2025-11-22 04:13:08.583 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:08 compute-0 sudo[295972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:13:08 compute-0 sudo[295972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:08 compute-0 sudo[295972]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:08 compute-0 sudo[295997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:13:08 compute-0 sudo[295997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:08 compute-0 sudo[295997]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:08 compute-0 sudo[296022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:13:08 compute-0 sudo[296022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:08 compute-0 sudo[296022]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:08 compute-0 sudo[296047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:13:08 compute-0 sudo[296047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:08 compute-0 nova_compute[253461]: 2025-11-22 04:13:08.892 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:09 compute-0 podman[296109]: 2025-11-22 04:13:09.291162366 +0000 UTC m=+0.064124019 container create 3cc90d90e9f72e43a5489e4eec9f5228c476b2748c713cde97c1a30887f290bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_easley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:13:09 compute-0 systemd[1]: Started libpod-conmon-3cc90d90e9f72e43a5489e4eec9f5228c476b2748c713cde97c1a30887f290bc.scope.
Nov 22 04:13:09 compute-0 podman[296109]: 2025-11-22 04:13:09.268247689 +0000 UTC m=+0.041209422 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:13:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:13:09 compute-0 podman[296109]: 2025-11-22 04:13:09.391147735 +0000 UTC m=+0.164109468 container init 3cc90d90e9f72e43a5489e4eec9f5228c476b2748c713cde97c1a30887f290bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_easley, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:13:09 compute-0 podman[296109]: 2025-11-22 04:13:09.397475852 +0000 UTC m=+0.170437525 container start 3cc90d90e9f72e43a5489e4eec9f5228c476b2748c713cde97c1a30887f290bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Nov 22 04:13:09 compute-0 mystifying_easley[296125]: 167 167
Nov 22 04:13:09 compute-0 podman[296109]: 2025-11-22 04:13:09.401606422 +0000 UTC m=+0.174568146 container attach 3cc90d90e9f72e43a5489e4eec9f5228c476b2748c713cde97c1a30887f290bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_easley, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:13:09 compute-0 systemd[1]: libpod-3cc90d90e9f72e43a5489e4eec9f5228c476b2748c713cde97c1a30887f290bc.scope: Deactivated successfully.
Nov 22 04:13:09 compute-0 podman[296109]: 2025-11-22 04:13:09.402906641 +0000 UTC m=+0.175868314 container died 3cc90d90e9f72e43a5489e4eec9f5228c476b2748c713cde97c1a30887f290bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_easley, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Nov 22 04:13:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f75135c01a9acc5e0c395ecdb29206f8f0e1c83dc22e346e702b4260247a41b-merged.mount: Deactivated successfully.
Nov 22 04:13:09 compute-0 podman[296109]: 2025-11-22 04:13:09.445951077 +0000 UTC m=+0.218912751 container remove 3cc90d90e9f72e43a5489e4eec9f5228c476b2748c713cde97c1a30887f290bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:13:09 compute-0 systemd[1]: libpod-conmon-3cc90d90e9f72e43a5489e4eec9f5228c476b2748c713cde97c1a30887f290bc.scope: Deactivated successfully.
Nov 22 04:13:09 compute-0 podman[296142]: 2025-11-22 04:13:09.584201874 +0000 UTC m=+0.075280603 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 04:13:09 compute-0 podman[296144]: 2025-11-22 04:13:09.632348002 +0000 UTC m=+0.113116844 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:13:09 compute-0 podman[296186]: 2025-11-22 04:13:09.653651651 +0000 UTC m=+0.053483902 container create 61cc23543cbd4b5b105138d3d886d3c0443220751c38a41574a0b55d14ad7752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Nov 22 04:13:09 compute-0 systemd[1]: Started libpod-conmon-61cc23543cbd4b5b105138d3d886d3c0443220751c38a41574a0b55d14ad7752.scope.
Nov 22 04:13:09 compute-0 podman[296186]: 2025-11-22 04:13:09.627771069 +0000 UTC m=+0.027603319 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:13:09 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:13:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b710d4c6ffbd06588c0d23240c16acc389ec682d77fd5606de2b2e1e8230633/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b710d4c6ffbd06588c0d23240c16acc389ec682d77fd5606de2b2e1e8230633/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b710d4c6ffbd06588c0d23240c16acc389ec682d77fd5606de2b2e1e8230633/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b710d4c6ffbd06588c0d23240c16acc389ec682d77fd5606de2b2e1e8230633/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:09 compute-0 podman[296186]: 2025-11-22 04:13:09.751603478 +0000 UTC m=+0.151435769 container init 61cc23543cbd4b5b105138d3d886d3c0443220751c38a41574a0b55d14ad7752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:13:09 compute-0 podman[296186]: 2025-11-22 04:13:09.763878208 +0000 UTC m=+0.163710488 container start 61cc23543cbd4b5b105138d3d886d3c0443220751c38a41574a0b55d14ad7752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:13:09 compute-0 podman[296186]: 2025-11-22 04:13:09.768516089 +0000 UTC m=+0.168348359 container attach 61cc23543cbd4b5b105138d3d886d3c0443220751c38a41574a0b55d14ad7752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:13:10 compute-0 ceph-mon[75011]: pgmap v1968: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]: {
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:     "0": [
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:         {
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "devices": [
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "/dev/loop3"
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             ],
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_name": "ceph_lv0",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_size": "21470642176",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "name": "ceph_lv0",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "tags": {
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.cluster_name": "ceph",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.crush_device_class": "",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.encrypted": "0",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.osd_id": "0",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.type": "block",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.vdo": "0"
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             },
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "type": "block",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "vg_name": "ceph_vg0"
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:         }
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:     ],
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:     "1": [
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:         {
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "devices": [
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "/dev/loop4"
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             ],
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_name": "ceph_lv1",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_size": "21470642176",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "name": "ceph_lv1",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "tags": {
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.cluster_name": "ceph",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.crush_device_class": "",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.encrypted": "0",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.osd_id": "1",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.type": "block",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.vdo": "0"
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             },
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "type": "block",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "vg_name": "ceph_vg1"
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:         }
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:     ],
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:     "2": [
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:         {
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "devices": [
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "/dev/loop5"
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             ],
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_name": "ceph_lv2",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_size": "21470642176",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "name": "ceph_lv2",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "tags": {
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.cluster_name": "ceph",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.crush_device_class": "",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.encrypted": "0",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.osd_id": "2",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.type": "block",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:                 "ceph.vdo": "0"
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             },
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "type": "block",
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:             "vg_name": "ceph_vg2"
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:         }
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]:     ]
Nov 22 04:13:10 compute-0 distracted_ganguly[296210]: }
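
The JSON above is `ceph-volume lvm list --format json` (run by the sudo/cephadm call before it): a map of OSD id to the LV backing it, with the LVM tags mirrored under "tags". A sketch that reduces that output to an osd-to-device table; it assumes ceph-volume is invocable on the host (here cephadm ran it inside a container):

    import json
    import subprocess

    out = subprocess.run(['sudo', 'ceph-volume', 'lvm', 'list', '--format', 'json'],
                         capture_output=True, text=True, check=True)
    listing = json.loads(out.stdout)
    for osd_id, entries in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for e in entries:
            print(f"osd.{osd_id}: {e['lv_path']} "
                  f"(osd_fsid {e['tags']['ceph.osd_fsid']}, devices {','.join(e['devices'])})")
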
Nov 22 04:13:10 compute-0 systemd[1]: libpod-61cc23543cbd4b5b105138d3d886d3c0443220751c38a41574a0b55d14ad7752.scope: Deactivated successfully.
Nov 22 04:13:10 compute-0 podman[296186]: 2025-11-22 04:13:10.539049477 +0000 UTC m=+0.938881727 container died 61cc23543cbd4b5b105138d3d886d3c0443220751c38a41574a0b55d14ad7752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:13:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b710d4c6ffbd06588c0d23240c16acc389ec682d77fd5606de2b2e1e8230633-merged.mount: Deactivated successfully.
Nov 22 04:13:10 compute-0 podman[296186]: 2025-11-22 04:13:10.598193209 +0000 UTC m=+0.998025440 container remove 61cc23543cbd4b5b105138d3d886d3c0443220751c38a41574a0b55d14ad7752 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 04:13:10 compute-0 systemd[1]: libpod-conmon-61cc23543cbd4b5b105138d3d886d3c0443220751c38a41574a0b55d14ad7752.scope: Deactivated successfully.
Nov 22 04:13:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:10 compute-0 sudo[296047]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:10 compute-0 sudo[296234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:13:10 compute-0 sudo[296234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:10 compute-0 sudo[296234]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:10 compute-0 sudo[296259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:13:10 compute-0 sudo[296259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:10 compute-0 sudo[296259]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:10 compute-0 sudo[296284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:13:10 compute-0 sudo[296284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:10 compute-0 sudo[296284]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:10 compute-0 sudo[296309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:13:10 compute-0 sudo[296309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:11 compute-0 podman[296375]: 2025-11-22 04:13:11.377348555 +0000 UTC m=+0.054040678 container create 8da81e360a91bf58c7b47e6c7413e1e9c8ccf6b92078fa3dd1108073d6ec08bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:13:11 compute-0 systemd[1]: Started libpod-conmon-8da81e360a91bf58c7b47e6c7413e1e9c8ccf6b92078fa3dd1108073d6ec08bb.scope.
Nov 22 04:13:11 compute-0 podman[296375]: 2025-11-22 04:13:11.350610435 +0000 UTC m=+0.027302608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:13:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:13:11 compute-0 podman[296375]: 2025-11-22 04:13:11.486286214 +0000 UTC m=+0.162978347 container init 8da81e360a91bf58c7b47e6c7413e1e9c8ccf6b92078fa3dd1108073d6ec08bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heisenberg, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:13:11 compute-0 podman[296375]: 2025-11-22 04:13:11.493270662 +0000 UTC m=+0.169962805 container start 8da81e360a91bf58c7b47e6c7413e1e9c8ccf6b92078fa3dd1108073d6ec08bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:13:11 compute-0 wizardly_heisenberg[296392]: 167 167
Nov 22 04:13:11 compute-0 systemd[1]: libpod-8da81e360a91bf58c7b47e6c7413e1e9c8ccf6b92078fa3dd1108073d6ec08bb.scope: Deactivated successfully.
Nov 22 04:13:11 compute-0 conmon[296392]: conmon 8da81e360a91bf58c7b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8da81e360a91bf58c7b47e6c7413e1e9c8ccf6b92078fa3dd1108073d6ec08bb.scope/container/memory.events
Nov 22 04:13:11 compute-0 podman[296375]: 2025-11-22 04:13:11.498007676 +0000 UTC m=+0.174699819 container attach 8da81e360a91bf58c7b47e6c7413e1e9c8ccf6b92078fa3dd1108073d6ec08bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:13:11 compute-0 podman[296397]: 2025-11-22 04:13:11.538404104 +0000 UTC m=+0.028245331 container died 8da81e360a91bf58c7b47e6c7413e1e9c8ccf6b92078fa3dd1108073d6ec08bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:13:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1b44c1ec426a874a0c5cb3eceeee66f2009def1370f787b0b020ed304b9ce84-merged.mount: Deactivated successfully.
Nov 22 04:13:11 compute-0 podman[296397]: 2025-11-22 04:13:11.624200451 +0000 UTC m=+0.114041697 container remove 8da81e360a91bf58c7b47e6c7413e1e9c8ccf6b92078fa3dd1108073d6ec08bb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 04:13:11 compute-0 systemd[1]: libpod-conmon-8da81e360a91bf58c7b47e6c7413e1e9c8ccf6b92078fa3dd1108073d6ec08bb.scope: Deactivated successfully.
Nov 22 04:13:11 compute-0 podman[296419]: 2025-11-22 04:13:11.893501719 +0000 UTC m=+0.072116239 container create ae0b27377956a0c7b516b95437ebab3ea2e6cacd4f929e3be629e58f98c06762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 04:13:11 compute-0 systemd[1]: Started libpod-conmon-ae0b27377956a0c7b516b95437ebab3ea2e6cacd4f929e3be629e58f98c06762.scope.
Nov 22 04:13:11 compute-0 podman[296419]: 2025-11-22 04:13:11.862848413 +0000 UTC m=+0.041462983 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:13:11 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785fa71326bd5035a3c9a1e9d48b8c556f945a3e97e8b7e14ff528e9bfcf9119/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785fa71326bd5035a3c9a1e9d48b8c556f945a3e97e8b7e14ff528e9bfcf9119/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785fa71326bd5035a3c9a1e9d48b8c556f945a3e97e8b7e14ff528e9bfcf9119/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785fa71326bd5035a3c9a1e9d48b8c556f945a3e97e8b7e14ff528e9bfcf9119/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:13:12 compute-0 podman[296419]: 2025-11-22 04:13:12.018112989 +0000 UTC m=+0.196727519 container init ae0b27377956a0c7b516b95437ebab3ea2e6cacd4f929e3be629e58f98c06762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:13:12 compute-0 podman[296419]: 2025-11-22 04:13:12.037457029 +0000 UTC m=+0.216071520 container start ae0b27377956a0c7b516b95437ebab3ea2e6cacd4f929e3be629e58f98c06762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:13:12 compute-0 podman[296419]: 2025-11-22 04:13:12.045204012 +0000 UTC m=+0.223818573 container attach ae0b27377956a0c7b516b95437ebab3ea2e6cacd4f929e3be629e58f98c06762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:13:12 compute-0 ceph-mon[75011]: pgmap v1969: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:13 compute-0 amazing_bartik[296435]: {
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "osd_id": 1,
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "type": "bluestore"
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:     },
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "osd_id": 0,
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "type": "bluestore"
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:     },
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "osd_id": 2,
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:         "type": "bluestore"
Nov 22 04:13:13 compute-0 amazing_bartik[296435]:     }
Nov 22 04:13:13 compute-0 amazing_bartik[296435]: }
Nov 22 04:13:13 compute-0 systemd[1]: libpod-ae0b27377956a0c7b516b95437ebab3ea2e6cacd4f929e3be629e58f98c06762.scope: Deactivated successfully.
Nov 22 04:13:13 compute-0 systemd[1]: libpod-ae0b27377956a0c7b516b95437ebab3ea2e6cacd4f929e3be629e58f98c06762.scope: Consumed 1.101s CPU time.
Nov 22 04:13:13 compute-0 podman[296419]: 2025-11-22 04:13:13.132939129 +0000 UTC m=+1.311553629 container died ae0b27377956a0c7b516b95437ebab3ea2e6cacd4f929e3be629e58f98c06762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 04:13:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-785fa71326bd5035a3c9a1e9d48b8c556f945a3e97e8b7e14ff528e9bfcf9119-merged.mount: Deactivated successfully.
Nov 22 04:13:13 compute-0 podman[296419]: 2025-11-22 04:13:13.204945353 +0000 UTC m=+1.383559843 container remove ae0b27377956a0c7b516b95437ebab3ea2e6cacd4f929e3be629e58f98c06762 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:13:13 compute-0 systemd[1]: libpod-conmon-ae0b27377956a0c7b516b95437ebab3ea2e6cacd4f929e3be629e58f98c06762.scope: Deactivated successfully.
Nov 22 04:13:13 compute-0 sudo[296309]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:13:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:13:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:13:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:13:13 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 167d1fe1-5b73-4254-9fc7-72b4d19bee31 does not exist
Nov 22 04:13:13 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 61d0d980-a44c-4541-b3b0-dce954c18cca does not exist
Nov 22 04:13:13 compute-0 sudo[296483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:13:13 compute-0 sudo[296483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:13 compute-0 sudo[296483]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:13 compute-0 sudo[296508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:13:13 compute-0 sudo[296508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:13:13 compute-0 sudo[296508]: pam_unix(sudo:session): session closed for user root
Nov 22 04:13:13 compute-0 nova_compute[253461]: 2025-11-22 04:13:13.586 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:13 compute-0 nova_compute[253461]: 2025-11-22 04:13:13.894 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:14 compute-0 ceph-mon[75011]: pgmap v1970: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:13:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:13:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:16 compute-0 ceph-mon[75011]: pgmap v1971: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:17 compute-0 nova_compute[253461]: 2025-11-22 04:13:17.807 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:17 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:13:17.806 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:13:17 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:13:17.807 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:13:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:18 compute-0 ceph-mon[75011]: pgmap v1972: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:18 compute-0 nova_compute[253461]: 2025-11-22 04:13:18.589 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:18 compute-0 nova_compute[253461]: 2025-11-22 04:13:18.896 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:20 compute-0 ceph-mon[75011]: pgmap v1973: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:22 compute-0 ceph-mon[75011]: pgmap v1974: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:13:23.027 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:13:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:13:23.027 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:13:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:13:23.028 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:13:23 compute-0 nova_compute[253461]: 2025-11-22 04:13:23.592 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:23 compute-0 nova_compute[253461]: 2025-11-22 04:13:23.897 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:24 compute-0 ceph-mon[75011]: pgmap v1975: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:13:25.809 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:13:26 compute-0 ceph-mon[75011]: pgmap v1976: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:26 compute-0 podman[296533]: 2025-11-22 04:13:26.458474476 +0000 UTC m=+0.089537885 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:13:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:28 compute-0 ceph-mon[75011]: pgmap v1977: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:28 compute-0 nova_compute[253461]: 2025-11-22 04:13:28.593 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:28 compute-0 nova_compute[253461]: 2025-11-22 04:13:28.899 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:29 compute-0 ceph-mon[75011]: pgmap v1978: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:32 compute-0 ceph-mon[75011]: pgmap v1979: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:33 compute-0 nova_compute[253461]: 2025-11-22 04:13:33.596 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:33 compute-0 nova_compute[253461]: 2025-11-22 04:13:33.901 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:34 compute-0 ceph-mon[75011]: pgmap v1980: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:13:36
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'backups', 'default.rgw.log', '.mgr', 'images', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:13:36 compute-0 ceph-mon[75011]: pgmap v1981: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:13:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:13:37 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:38 compute-0 ceph-mon[75011]: pgmap v1982: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:38 compute-0 nova_compute[253461]: 2025-11-22 04:13:38.598 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:38 compute-0 nova_compute[253461]: 2025-11-22 04:13:38.904 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:40 compute-0 podman[296552]: 2025-11-22 04:13:40.388388493 +0000 UTC m=+0.070758039 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 22 04:13:40 compute-0 podman[296553]: 2025-11-22 04:13:40.465437971 +0000 UTC m=+0.140318109 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:13:40 compute-0 ceph-mon[75011]: pgmap v1983: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:42 compute-0 ceph-mon[75011]: pgmap v1984: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:43 compute-0 nova_compute[253461]: 2025-11-22 04:13:43.600 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:43 compute-0 nova_compute[253461]: 2025-11-22 04:13:43.906 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:44 compute-0 ceph-mon[75011]: pgmap v1985: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:45 compute-0 ceph-mon[75011]: pgmap v1986: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.675551) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784825675611, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2283, "num_deletes": 264, "total_data_size": 3440418, "memory_usage": 3500448, "flush_reason": "Manual Compaction"}
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784825706531, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3377022, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37050, "largest_seqno": 39332, "table_properties": {"data_size": 3366367, "index_size": 6893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22166, "raw_average_key_size": 21, "raw_value_size": 3345039, "raw_average_value_size": 3185, "num_data_blocks": 301, "num_entries": 1050, "num_filter_entries": 1050, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763784621, "oldest_key_time": 1763784621, "file_creation_time": 1763784825, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 31055 microseconds, and 9947 cpu microseconds.
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.706597) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3377022 bytes OK
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.706632) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.708815) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.708831) EVENT_LOG_v1 {"time_micros": 1763784825708825, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.708860) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3430629, prev total WAL file size 3430629, number of live WAL files 2.
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.710260) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3297KB)], [77(10MB)]
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784825710379, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 14065544, "oldest_snapshot_seqno": -1}
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 7218 keys, 12345952 bytes, temperature: kUnknown
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784825826129, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12345952, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12289581, "index_size": 37251, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 181846, "raw_average_key_size": 25, "raw_value_size": 12151938, "raw_average_value_size": 1683, "num_data_blocks": 1484, "num_entries": 7218, "num_filter_entries": 7218, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763784825, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.826614) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12345952 bytes
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.828390) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.4 rd, 106.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.2 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(7.8) write-amplify(3.7) OK, records in: 7751, records dropped: 533 output_compression: NoCompression
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.828416) EVENT_LOG_v1 {"time_micros": 1763784825828403, "job": 44, "event": "compaction_finished", "compaction_time_micros": 115872, "compaction_time_cpu_micros": 48741, "output_level": 6, "num_output_files": 1, "total_output_size": 12345952, "num_input_records": 7751, "num_output_records": 7218, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784825829515, "job": 44, "event": "table_file_deletion", "file_number": 79}
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784825832404, "job": 44, "event": "table_file_deletion", "file_number": 77}
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.710017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.832497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.832506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.832507) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.832509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:13:45 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:13:45.832512) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034720526470013676 of space, bias 1.0, pg target 0.10416157941004103 quantized to 32 (current 32)
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:13:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:47 compute-0 ceph-mon[75011]: pgmap v1987: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:48 compute-0 nova_compute[253461]: 2025-11-22 04:13:48.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:13:48 compute-0 nova_compute[253461]: 2025-11-22 04:13:48.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:13:48 compute-0 nova_compute[253461]: 2025-11-22 04:13:48.601 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:48 compute-0 nova_compute[253461]: 2025-11-22 04:13:48.621 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:13:48 compute-0 nova_compute[253461]: 2025-11-22 04:13:48.622 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:13:48 compute-0 nova_compute[253461]: 2025-11-22 04:13:48.622 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:13:48 compute-0 nova_compute[253461]: 2025-11-22 04:13:48.622 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:13:48 compute-0 nova_compute[253461]: 2025-11-22 04:13:48.623 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:13:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:48 compute-0 nova_compute[253461]: 2025-11-22 04:13:48.909 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:13:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2776763212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:13:49 compute-0 nova_compute[253461]: 2025-11-22 04:13:49.091 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:13:49 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2776763212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
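This df round-trip is Nova's update_available_resource audit sizing its RBD-backed storage: the resource tracker shells out to the ceph CLI as client.openstack and reads the cluster totals from the JSON reply. A hedged sketch of that call, using the same flags as the logged command and the standard ceph df JSON field names:

    # Sketch of the "ceph df" call the resource tracker issues above.
    import json, subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    # total_avail_bytes is what ends up as the hypervisor's free-disk figure.
    print(stats["total_bytes"], stats["total_avail_bytes"])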
Nov 22 04:13:49 compute-0 nova_compute[253461]: 2025-11-22 04:13:49.306 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:13:49 compute-0 nova_compute[253461]: 2025-11-22 04:13:49.308 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4458MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:13:49 compute-0 nova_compute[253461]: 2025-11-22 04:13:49.309 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:13:49 compute-0 nova_compute[253461]: 2025-11-22 04:13:49.310 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:13:49 compute-0 nova_compute[253461]: 2025-11-22 04:13:49.816 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:13:49 compute-0 nova_compute[253461]: 2025-11-22 04:13:49.818 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:13:49 compute-0 nova_compute[253461]: 2025-11-22 04:13:49.854 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:13:50 compute-0 ceph-mon[75011]: pgmap v1988: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:13:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2398635165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:13:50 compute-0 nova_compute[253461]: 2025-11-22 04:13:50.332 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:13:50 compute-0 nova_compute[253461]: 2025-11-22 04:13:50.340 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:13:50 compute-0 nova_compute[253461]: 2025-11-22 04:13:50.419 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
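What placement actually schedules against is derived from this inventory as (total - reserved) * allocation_ratio per resource class, so the unchanged record above amounts to 32 schedulable vCPUs, 7167 MB of RAM and about 52.2 GB of disk. The arithmetic, spelled out:

    # Effective capacity implied by the inventory in the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2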
Nov 22 04:13:50 compute-0 nova_compute[253461]: 2025-11-22 04:13:50.422 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:13:50 compute-0 nova_compute[253461]: 2025-11-22 04:13:50.422 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
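The acquire/release pairs bracketing the audit are oslo.concurrency's named-lock pattern: everything that mutates tracker state serializes on the one "compute_resources" lock, which is why the whole 1.113 s audit shows up as a single held interval. An illustrative sketch (the lock name is the real one from the log; the function body is not):

    # Sketch of the oslo.concurrency locking pattern visible above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Runs under the same in-process lock the ResourceTracker uses,
        # so resource audits and instance claims cannot interleave.
        pass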
Nov 22 04:13:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:13:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8549 writes, 39K keys, 8549 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8549 writes, 8549 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1795 writes, 8384 keys, 1795 commit groups, 1.0 writes per commit group, ingest: 10.76 MB, 0.02 MB/s
                                           Interval WAL: 1795 writes, 1795 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     45.6      1.02              0.17        22    0.046       0      0       0.0       0.0
                                             L6      1/0   11.77 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.8    117.4     98.3      1.81              0.63        21    0.086    116K    12K       0.0       0.0
                                            Sum      1/0   11.77 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.8     75.1     79.3      2.83              0.80        43    0.066    116K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.2     67.3     69.9      1.06              0.29        12    0.088     43K   3649       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    117.4     98.3      1.81              0.63        21    0.086    116K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     45.6      1.02              0.17        21    0.048       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.045, interval 0.012
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.22 GB write, 0.07 MB/s write, 0.21 GB read, 0.07 MB/s read, 2.8 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 1.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5574942991f0#2 capacity: 304.00 MB usage: 24.34 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000251 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1674,23.37 MB,7.68869%) FilterBlock(44,334.36 KB,0.107409%) IndexBlock(44,657.48 KB,0.211209%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
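The dump above is RocksDB's periodic stats report (600 s interval, per the Uptime line) from the mon store. One sanity check it allows: write amplification should be roughly compaction bytes written over bytes flushed, and 0.22 GB / 0.045 GB gives about 4.9 from the rounded figures printed here, in line with the 4.8 the Sum row computes from its unrounded internal counters:

    # Worked check of the W-Amp column in the compaction table above.
    flushed_gb = 0.045           # "Flush(GB): cumulative 0.045"
    compaction_write_gb = 0.22   # "Cumulative compaction: 0.22 GB write"
    print(compaction_write_gb / flushed_gb)   # ~4.9 vs the logged W-Amp 4.8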
Nov 22 04:13:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:51 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2398635165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:13:51 compute-0 nova_compute[253461]: 2025-11-22 04:13:51.423 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:13:51 compute-0 nova_compute[253461]: 2025-11-22 04:13:51.423 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:13:51 compute-0 nova_compute[253461]: 2025-11-22 04:13:51.423 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:13:51 compute-0 nova_compute[253461]: 2025-11-22 04:13:51.499 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:13:51 compute-0 nova_compute[253461]: 2025-11-22 04:13:51.500 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:13:51 compute-0 nova_compute[253461]: 2025-11-22 04:13:51.501 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:13:51 compute-0 nova_compute[253461]: 2025-11-22 04:13:51.502 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:13:51 compute-0 nova_compute[253461]: 2025-11-22 04:13:51.504 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:13:52 compute-0 sshd-session[296618]: Invalid user system from 27.79.46.85 port 42472
Nov 22 04:13:52 compute-0 ceph-mon[75011]: pgmap v1989: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:53 compute-0 nova_compute[253461]: 2025-11-22 04:13:53.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:13:53 compute-0 sshd-session[296618]: Connection closed by invalid user system 27.79.46.85 port 42472 [preauth]
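The two sshd-session lines are a routine SSH brute-force probe: a connection from 27.79.46.85 tries a guessable account name and disconnects before authenticating (more follow below for "test", "admin" and "guest"). A hedged sketch of tallying such probes per source address; the log path is an assumption, and a real deployment would use fail2ban rather than this:

    # Count "Invalid user" probes per source IP; /var/log/messages is assumed
    # to be where this host's syslog lands.
    import re
    from collections import Counter

    pat = re.compile(r"Invalid user (\S+) from (\S+) port (\d+)")
    hits = Counter()
    with open("/var/log/messages") as fh:
        for line in fh:
            m = pat.search(line)
            if m:
                hits[m.group(2)] += 1
    print(hits.most_common(5))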
Nov 22 04:13:53 compute-0 nova_compute[253461]: 2025-11-22 04:13:53.602 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:53 compute-0 nova_compute[253461]: 2025-11-22 04:13:53.912 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:54 compute-0 ceph-mon[75011]: pgmap v1990: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:55 compute-0 nova_compute[253461]: 2025-11-22 04:13:55.431 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:13:56 compute-0 ceph-mon[75011]: pgmap v1991: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:57 compute-0 podman[296643]: 2025-11-22 04:13:57.417169397 +0000 UTC m=+0.085004932 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
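This podman line is the healthcheck timer firing for the multipathd container: the configured test is the /openstack/healthcheck script mounted read-only from /var/lib/openstack/healthchecks/multipathd, and health_status=healthy with a zero failing streak means it exited 0. The same check can be triggered by hand; note the inspect field name varies across podman releases (.State.Healthcheck on older ones), so treat this as a sketch:

    # Run the container's healthcheck manually and read back the status.
    import subprocess
    subprocess.run(["podman", "healthcheck", "run", "multipathd"], check=True)
    print(subprocess.check_output(
        ["podman", "inspect", "--format",
         "{{.State.Health.Status}}", "multipathd"]).decode().strip())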
Nov 22 04:13:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:13:58 compute-0 ceph-mon[75011]: pgmap v1992: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:58 compute-0 nova_compute[253461]: 2025-11-22 04:13:58.605 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:13:58 compute-0 nova_compute[253461]: 2025-11-22 04:13:58.913 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:13:59 compute-0 nova_compute[253461]: 2025-11-22 04:13:59.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:14:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:14:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686945324' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:14:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:14:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686945324' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:14:00 compute-0 ceph-mon[75011]: pgmap v1993: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:14:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/686945324' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:14:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/686945324' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
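This pair of mon commands, df plus a per-pool get-quota on 'volumes', looks like the capacity poll of a Cinder-style RBD client on another host (192.168.122.10): free space comes from df, and the quota call reports 0 for max_objects/max_bytes when no quota is set. A CLI-equivalent sketch of the second command, with the pool name taken from the log:

    # Equivalent of the dispatched get-quota command; 0 means "no quota set".
    import json, subprocess
    quota = json.loads(subprocess.check_output(
        ["ceph", "osd", "pool", "get-quota", "volumes", "--format", "json"]))
    print(quota.get("quota_max_bytes"), quota.get("quota_max_objects"))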
Nov 22 04:14:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:14:01 compute-0 ceph-mon[75011]: pgmap v1994: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:14:02 compute-0 sshd-session[296663]: Invalid user test from 27.79.46.85 port 56384
Nov 22 04:14:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:14:02 compute-0 sshd-session[296663]: Connection closed by invalid user test 27.79.46.85 port 56384 [preauth]
Nov 22 04:14:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:03 compute-0 nova_compute[253461]: 2025-11-22 04:14:03.607 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:03 compute-0 nova_compute[253461]: 2025-11-22 04:14:03.915 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:03 compute-0 ceph-mon[75011]: pgmap v1995: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:14:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:14:06 compute-0 ceph-mon[75011]: pgmap v1996: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:14:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:14:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:14:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:14:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:14:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:14:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:14:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 04:14:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:08 compute-0 ceph-mon[75011]: pgmap v1997: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 04:14:08 compute-0 nova_compute[253461]: 2025-11-22 04:14:08.656 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 04:14:08 compute-0 nova_compute[253461]: 2025-11-22 04:14:08.917 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:10 compute-0 ceph-mon[75011]: pgmap v1998: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Nov 22 04:14:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 687 KiB/s rd, 85 B/s wr, 6 op/s
Nov 22 04:14:11 compute-0 podman[296668]: 2025-11-22 04:14:11.442489665 +0000 UTC m=+0.111888678 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:14:11 compute-0 podman[296667]: 2025-11-22 04:14:11.442752383 +0000 UTC m=+0.107414298 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:14:12 compute-0 ceph-mon[75011]: pgmap v1999: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 687 KiB/s rd, 85 B/s wr, 6 op/s
Nov 22 04:14:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 22 04:14:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:13 compute-0 sudo[296712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:14:13 compute-0 sudo[296712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:13 compute-0 sudo[296712]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:13 compute-0 sudo[296737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:14:13 compute-0 sudo[296737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:13 compute-0 sudo[296737]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:13 compute-0 nova_compute[253461]: 2025-11-22 04:14:13.658 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:13 compute-0 sudo[296762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:14:13 compute-0 sudo[296762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:13 compute-0 sudo[296762]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:13 compute-0 sudo[296787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:14:13 compute-0 sudo[296787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:13 compute-0 nova_compute[253461]: 2025-11-22 04:14:13.941 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:14 compute-0 sudo[296787]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:14:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:14:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:14:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:14:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:14:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:14:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 3569454c-15de-4fd0-a750-0dc6096affb7 does not exist
Nov 22 04:14:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 00b4a999-c3fb-4279-9754-8bc1c6d52d7d does not exist
Nov 22 04:14:14 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ecbf0b0f-3b28-4ab6-b678-74dd6501ee80 does not exist
Nov 22 04:14:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:14:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:14:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:14:14 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:14:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:14:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:14:14 compute-0 sudo[296843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:14:14 compute-0 sudo[296843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:14 compute-0 sudo[296843]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:14 compute-0 ceph-mon[75011]: pgmap v2000: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 22 04:14:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:14:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:14:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:14:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:14:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:14:14 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:14:14 compute-0 sudo[296868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:14:14 compute-0 sudo[296868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:14 compute-0 sudo[296868]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:14 compute-0 sudo[296893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:14:14 compute-0 sudo[296893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:14 compute-0 sudo[296893]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:14 compute-0 sudo[296918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:14:14 compute-0 sudo[296918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
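The sudo line above is cephadm's reconcile loop re-applying the 'default_drive_group' OSD spec: inside the ceph container it drives ceph-volume over the three pre-created LVs, non-interactively and without creating systemd units (cephadm manages the units itself). Stripped of the cephadm wrapper, the contained call is essentially:

    # Reconstruction of the ceph-volume invocation carried by the cephadm
    # call above; every argument is taken from the logged command line.
    import subprocess
    subprocess.run([
        "ceph-volume", "lvm", "batch", "--no-auto",
        "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
        "/dev/ceph_vg2/ceph_lv2",
        "--yes",          # non-interactive
        "--no-systemd",   # cephadm creates the units, not ceph-volume
    ], check=True)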
Nov 22 04:14:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 8 op/s
Nov 22 04:14:14 compute-0 podman[296983]: 2025-11-22 04:14:14.951758054 +0000 UTC m=+0.045354528 container create d2bca82a6f1afe389262da297ce698f76e315df8e78ce3fda7fc7080baf938de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:14:14 compute-0 systemd[1]: Started libpod-conmon-d2bca82a6f1afe389262da297ce698f76e315df8e78ce3fda7fc7080baf938de.scope.
Nov 22 04:14:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:14:15 compute-0 podman[296983]: 2025-11-22 04:14:14.934209267 +0000 UTC m=+0.027805771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:14:15 compute-0 podman[296983]: 2025-11-22 04:14:15.042964978 +0000 UTC m=+0.136561482 container init d2bca82a6f1afe389262da297ce698f76e315df8e78ce3fda7fc7080baf938de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 04:14:15 compute-0 podman[296983]: 2025-11-22 04:14:15.052415165 +0000 UTC m=+0.146011639 container start d2bca82a6f1afe389262da297ce698f76e315df8e78ce3fda7fc7080baf938de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_greider, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 04:14:15 compute-0 podman[296983]: 2025-11-22 04:14:15.05581136 +0000 UTC m=+0.149407864 container attach d2bca82a6f1afe389262da297ce698f76e315df8e78ce3fda7fc7080baf938de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_greider, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:14:15 compute-0 nifty_greider[297000]: 167 167
Nov 22 04:14:15 compute-0 systemd[1]: libpod-d2bca82a6f1afe389262da297ce698f76e315df8e78ce3fda7fc7080baf938de.scope: Deactivated successfully.
Nov 22 04:14:15 compute-0 conmon[297000]: conmon d2bca82a6f1afe389262 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2bca82a6f1afe389262da297ce698f76e315df8e78ce3fda7fc7080baf938de.scope/container/memory.events
Nov 22 04:14:15 compute-0 podman[296983]: 2025-11-22 04:14:15.060603209 +0000 UTC m=+0.154199693 container died d2bca82a6f1afe389262da297ce698f76e315df8e78ce3fda7fc7080baf938de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_greider, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:14:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-68223f2bb38e6a1275dd4685cf13081ba8b85bff497b379e25b368bab94c0e33-merged.mount: Deactivated successfully.
Nov 22 04:14:15 compute-0 podman[296983]: 2025-11-22 04:14:15.12257852 +0000 UTC m=+0.216174994 container remove d2bca82a6f1afe389262da297ce698f76e315df8e78ce3fda7fc7080baf938de (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 04:14:15 compute-0 systemd[1]: libpod-conmon-d2bca82a6f1afe389262da297ce698f76e315df8e78ce3fda7fc7080baf938de.scope: Deactivated successfully.
Nov 22 04:14:15 compute-0 podman[297024]: 2025-11-22 04:14:15.368975178 +0000 UTC m=+0.074996048 container create 944cb6cb49cf7f03ce1abfc798ac8f2af24e32d383f45a711adc0afc711d66af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wright, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:14:15 compute-0 podman[297024]: 2025-11-22 04:14:15.325773959 +0000 UTC m=+0.031794879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:14:15 compute-0 systemd[1]: Started libpod-conmon-944cb6cb49cf7f03ce1abfc798ac8f2af24e32d383f45a711adc0afc711d66af.scope.
Nov 22 04:14:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67bc39444d34953f8c40990216919f6cdb37b5cde80a488ad6515d5722b7af9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67bc39444d34953f8c40990216919f6cdb37b5cde80a488ad6515d5722b7af9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67bc39444d34953f8c40990216919f6cdb37b5cde80a488ad6515d5722b7af9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67bc39444d34953f8c40990216919f6cdb37b5cde80a488ad6515d5722b7af9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:14:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67bc39444d34953f8c40990216919f6cdb37b5cde80a488ad6515d5722b7af9e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:14:15 compute-0 podman[297024]: 2025-11-22 04:14:15.492028104 +0000 UTC m=+0.198049004 container init 944cb6cb49cf7f03ce1abfc798ac8f2af24e32d383f45a711adc0afc711d66af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 04:14:15 compute-0 podman[297024]: 2025-11-22 04:14:15.499577999 +0000 UTC m=+0.205598869 container start 944cb6cb49cf7f03ce1abfc798ac8f2af24e32d383f45a711adc0afc711d66af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:14:15 compute-0 podman[297024]: 2025-11-22 04:14:15.506495441 +0000 UTC m=+0.212516341 container attach 944cb6cb49cf7f03ce1abfc798ac8f2af24e32d383f45a711adc0afc711d66af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wright, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:14:15 compute-0 sshd-session[296665]: Invalid user admin from 27.79.46.85 port 52574
Nov 22 04:14:15 compute-0 sshd-session[296641]: Invalid user guest from 27.79.46.85 port 42452
Nov 22 04:14:15 compute-0 sshd-session[296665]: Connection closed by invalid user admin 27.79.46.85 port 52574 [preauth]
Nov 22 04:14:16 compute-0 sshd-session[296641]: Connection closed by invalid user guest 27.79.46.85 port 42452 [preauth]
Nov 22 04:14:16 compute-0 ceph-mon[75011]: pgmap v2001: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 8 op/s
Nov 22 04:14:16 compute-0 blissful_wright[297040]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:14:16 compute-0 blissful_wright[297040]: --> relative data size: 1.0
Nov 22 04:14:16 compute-0 blissful_wright[297040]: --> All data devices are unavailable
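"All data devices are unavailable" here is most likely the benign no-op case: the three LVs already carry the OSDs this 60 GiB cluster is built from, so batch has nothing new to create and exits cleanly, and cephadm immediately follows up with the 'lvm list' call visible at 04:14:17 to refresh its view of the existing OSDs. A hedged sketch of that follow-up; the JSON layout shown is the usual ceph-volume one but is not confirmed by this log:

    # List LVs that already belong to OSDs, as the follow-up call does.
    import json, subprocess
    lvs = json.loads(subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"]))
    for osd_id, devs in lvs.items():
        for dev in devs:
            print(osd_id, dev["lv_path"], dev["tags"].get("ceph.osd_fsid"))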
Nov 22 04:14:16 compute-0 systemd[1]: libpod-944cb6cb49cf7f03ce1abfc798ac8f2af24e32d383f45a711adc0afc711d66af.scope: Deactivated successfully.
Nov 22 04:14:16 compute-0 podman[297024]: 2025-11-22 04:14:16.595045337 +0000 UTC m=+1.301066247 container died 944cb6cb49cf7f03ce1abfc798ac8f2af24e32d383f45a711adc0afc711d66af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:14:16 compute-0 systemd[1]: libpod-944cb6cb49cf7f03ce1abfc798ac8f2af24e32d383f45a711adc0afc711d66af.scope: Consumed 1.051s CPU time.
Nov 22 04:14:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:14:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-67bc39444d34953f8c40990216919f6cdb37b5cde80a488ad6515d5722b7af9e-merged.mount: Deactivated successfully.
Nov 22 04:14:17 compute-0 podman[297024]: 2025-11-22 04:14:17.183071182 +0000 UTC m=+1.889092072 container remove 944cb6cb49cf7f03ce1abfc798ac8f2af24e32d383f45a711adc0afc711d66af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_wright, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 04:14:17 compute-0 systemd[1]: libpod-conmon-944cb6cb49cf7f03ce1abfc798ac8f2af24e32d383f45a711adc0afc711d66af.scope: Deactivated successfully.
Nov 22 04:14:17 compute-0 sudo[296918]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:17 compute-0 sudo[297083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:14:17 compute-0 sudo[297083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:17 compute-0 sudo[297083]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:17 compute-0 sudo[297108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:14:17 compute-0 sudo[297108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:17 compute-0 sudo[297108]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:17 compute-0 sudo[297133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:14:17 compute-0 sudo[297133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:17 compute-0 sudo[297133]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:17 compute-0 sudo[297158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:14:17 compute-0 sudo[297158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
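The sudo burst above is cephadm's per-host command sequence, visible verbatim in the COMMAND= fields: /bin/true to prove passwordless sudo works, /bin/which python3 to locate an interpreter, then the digest-named cephadm binary under /var/lib/ceph/<fsid>/ invoking `ceph-volume ... lvm list --format json`. A sketch of the same invocation driven from Python, with every value copied from the log line at 04:14:17:

    import subprocess

    FSID = "7adcc38b-6484-5de6-b879-33a0309153df"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Mirrors the sudo COMMAND= entry above; needs root on the host.
    cmd = ["sudo", "/bin/python3", CEPHADM,
           "--image", IMAGE, "--timeout", "895",
           "ceph-volume", "--fsid", FSID, "--",
           "lvm", "list", "--format", "json"]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout)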
Nov 22 04:14:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:18 compute-0 podman[297225]: 2025-11-22 04:14:17.927797122 +0000 UTC m=+0.032038462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:14:18 compute-0 podman[297225]: 2025-11-22 04:14:18.080304059 +0000 UTC m=+0.184545349 container create e5285fb72afd6584a891ce30a0c8138390a283fc998eebebab94d70377bcff40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 04:14:18 compute-0 nova_compute[253461]: 2025-11-22 04:14:18.661 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:14:18 compute-0 ceph-mon[75011]: pgmap v2002: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:14:18 compute-0 systemd[1]: Started libpod-conmon-e5285fb72afd6584a891ce30a0c8138390a283fc998eebebab94d70377bcff40.scope.
Nov 22 04:14:18 compute-0 nova_compute[253461]: 2025-11-22 04:14:18.990 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:14:19 compute-0 podman[297225]: 2025-11-22 04:14:19.062803802 +0000 UTC m=+1.167045092 container init e5285fb72afd6584a891ce30a0c8138390a283fc998eebebab94d70377bcff40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:14:19 compute-0 podman[297225]: 2025-11-22 04:14:19.076024906 +0000 UTC m=+1.180266166 container start e5285fb72afd6584a891ce30a0c8138390a283fc998eebebab94d70377bcff40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:14:19 compute-0 affectionate_meitner[297241]: 167 167
Nov 22 04:14:19 compute-0 systemd[1]: libpod-e5285fb72afd6584a891ce30a0c8138390a283fc998eebebab94d70377bcff40.scope: Deactivated successfully.
Nov 22 04:14:19 compute-0 podman[297225]: 2025-11-22 04:14:19.09112095 +0000 UTC m=+1.195362230 container attach e5285fb72afd6584a891ce30a0c8138390a283fc998eebebab94d70377bcff40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:14:19 compute-0 podman[297225]: 2025-11-22 04:14:19.091719109 +0000 UTC m=+1.195960369 container died e5285fb72afd6584a891ce30a0c8138390a283fc998eebebab94d70377bcff40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meitner, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 04:14:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8998a79c7fdd4e23b8d8562783ae5659445462613e5b0304e28277af8d89341-merged.mount: Deactivated successfully.
Nov 22 04:14:19 compute-0 podman[297225]: 2025-11-22 04:14:19.501526528 +0000 UTC m=+1.605767788 container remove e5285fb72afd6584a891ce30a0c8138390a283fc998eebebab94d70377bcff40 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:14:19 compute-0 systemd[1]: libpod-conmon-e5285fb72afd6584a891ce30a0c8138390a283fc998eebebab94d70377bcff40.scope: Deactivated successfully.
Nov 22 04:14:19 compute-0 podman[297268]: 2025-11-22 04:14:19.714595358 +0000 UTC m=+0.071962018 container create 10e1f8899a92e79819228e84cdc31b899fecd4914a96d8318ae7b7400337203a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:14:19 compute-0 podman[297268]: 2025-11-22 04:14:19.669274264 +0000 UTC m=+0.026640964 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:14:19 compute-0 systemd[1]: Started libpod-conmon-10e1f8899a92e79819228e84cdc31b899fecd4914a96d8318ae7b7400337203a.scope.
Nov 22 04:14:19 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3450b3fa55cfe2e5eabf5f3823a680e28767636a5ba44ca60f75c273df96c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3450b3fa55cfe2e5eabf5f3823a680e28767636a5ba44ca60f75c273df96c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3450b3fa55cfe2e5eabf5f3823a680e28767636a5ba44ca60f75c273df96c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:14:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc3450b3fa55cfe2e5eabf5f3823a680e28767636a5ba44ca60f75c273df96c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
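The kernel's "supports timestamps until 2038" notices fire each time podman remounts container paths on an XFS filesystem created without the bigtime feature; they are informational, not failures. Whether a given filesystem has the extended timestamp range can be checked with xfs_info, assuming an xfsprogs recent enough to print the flag:

    import subprocess

    def has_bigtime(mountpoint="/var/lib/containers"):
        """True if XFS reports bigtime=1 (post-2038 timestamps).

        Assumes a recent xfsprogs; older releases omit the flag from
        xfs_info output entirely.
        """
        out = subprocess.check_output(["xfs_info", mountpoint], text=True)
        return "bigtime=1" in out

    print(has_bigtime())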
Nov 22 04:14:19 compute-0 podman[297268]: 2025-11-22 04:14:19.904056853 +0000 UTC m=+0.261423524 container init 10e1f8899a92e79819228e84cdc31b899fecd4914a96d8318ae7b7400337203a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 04:14:19 compute-0 podman[297268]: 2025-11-22 04:14:19.913251903 +0000 UTC m=+0.270618543 container start 10e1f8899a92e79819228e84cdc31b899fecd4914a96d8318ae7b7400337203a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 04:14:19 compute-0 podman[297268]: 2025-11-22 04:14:19.944095918 +0000 UTC m=+0.301462579 container attach 10e1f8899a92e79819228e84cdc31b899fecd4914a96d8318ae7b7400337203a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lamarr, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:14:20 compute-0 ceph-mon[75011]: pgmap v2003: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:14:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]: {
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:     "0": [
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:         {
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "devices": [
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "/dev/loop3"
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             ],
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_name": "ceph_lv0",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_size": "21470642176",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "name": "ceph_lv0",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "tags": {
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.cluster_name": "ceph",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.crush_device_class": "",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.encrypted": "0",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.osd_id": "0",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.type": "block",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.vdo": "0"
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             },
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "type": "block",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "vg_name": "ceph_vg0"
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:         }
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:     ],
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:     "1": [
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:         {
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "devices": [
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "/dev/loop4"
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             ],
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_name": "ceph_lv1",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_size": "21470642176",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "name": "ceph_lv1",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "tags": {
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.cluster_name": "ceph",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.crush_device_class": "",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.encrypted": "0",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.osd_id": "1",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.type": "block",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.vdo": "0"
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             },
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "type": "block",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "vg_name": "ceph_vg1"
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:         }
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:     ],
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:     "2": [
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:         {
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "devices": [
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "/dev/loop5"
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             ],
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_name": "ceph_lv2",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_size": "21470642176",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "name": "ceph_lv2",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "tags": {
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.cluster_name": "ceph",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.crush_device_class": "",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.encrypted": "0",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.osd_id": "2",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.type": "block",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:                 "ceph.vdo": "0"
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             },
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "type": "block",
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:             "vg_name": "ceph_vg2"
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:         }
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]:     ]
Nov 22 04:14:20 compute-0 distracted_lamarr[297284]: }
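The distracted_lamarr output is the JSON answer to the `lvm list` call issued at 04:14:17: OSDs 0, 1 and 2 map to ceph_vg{0,1,2}/ceph_lv{0,1,2} on /dev/loop{3,4,5}, all tagged with the cluster fsid 7adcc38b-6484-5de6-b879-33a0309153df. A sketch that reduces such a payload to one line per OSD, assuming the JSON has been captured to a file (in the log it only goes to the container's stdout):

    import json

    def summarize_lvm_list(path="lvm_list.json"):
        """One line per OSD from `ceph-volume lvm list --format json`.

        The file name is a stand-in for the captured container output.
        """
        with open(path) as fh:
            listing = json.load(fh)
        for osd_id, lvs in sorted(listing.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                tags = lv.get("tags", {})
                print(f"osd.{osd_id}  lv={lv['lv_path']}  "
                      f"pv={','.join(lv.get('devices', []))}  "
                      f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")

    summarize_lvm_list()

Against the payload above this prints, for example, "osd.0  lv=/dev/ceph_vg0/ceph_lv0  pv=/dev/loop3  osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289".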
Nov 22 04:14:20 compute-0 podman[297268]: 2025-11-22 04:14:20.735682101 +0000 UTC m=+1.093048791 container died 10e1f8899a92e79819228e84cdc31b899fecd4914a96d8318ae7b7400337203a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lamarr, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:14:20 compute-0 systemd[1]: libpod-10e1f8899a92e79819228e84cdc31b899fecd4914a96d8318ae7b7400337203a.scope: Deactivated successfully.
Nov 22 04:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc3450b3fa55cfe2e5eabf5f3823a680e28767636a5ba44ca60f75c273df96c6-merged.mount: Deactivated successfully.
Nov 22 04:14:21 compute-0 podman[297268]: 2025-11-22 04:14:21.897034024 +0000 UTC m=+2.254400704 container remove 10e1f8899a92e79819228e84cdc31b899fecd4914a96d8318ae7b7400337203a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:14:21 compute-0 systemd[1]: libpod-conmon-10e1f8899a92e79819228e84cdc31b899fecd4914a96d8318ae7b7400337203a.scope: Deactivated successfully.
Nov 22 04:14:21 compute-0 sudo[297158]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:22 compute-0 sudo[297306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:14:22 compute-0 sudo[297306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:22 compute-0 sudo[297306]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:22 compute-0 sudo[297331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:14:22 compute-0 sudo[297331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:22 compute-0 sudo[297331]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:22 compute-0 sudo[297356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:14:22 compute-0 sudo[297356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:22 compute-0 sudo[297356]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:22 compute-0 sudo[297381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:14:22 compute-0 sudo[297381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:22 compute-0 ceph-mon[75011]: pgmap v2004: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:14:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 22 KiB/s wr, 5 op/s
Nov 22 04:14:22 compute-0 podman[297446]: 2025-11-22 04:14:22.708654387 +0000 UTC m=+0.029944690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:14:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:22 compute-0 podman[297446]: 2025-11-22 04:14:22.988326597 +0000 UTC m=+0.309616860 container create 14d2e7d45a9a647f4d789a39baf18e0237c663893686ccf95fa5ab08ce575744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shirley, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:14:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:14:23.029 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:14:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:14:23.032 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:14:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:14:23.032 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:14:23 compute-0 systemd[1]: Started libpod-conmon-14d2e7d45a9a647f4d789a39baf18e0237c663893686ccf95fa5ab08ce575744.scope.
Nov 22 04:14:23 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:14:23 compute-0 podman[297446]: 2025-11-22 04:14:23.488247715 +0000 UTC m=+0.809538038 container init 14d2e7d45a9a647f4d789a39baf18e0237c663893686ccf95fa5ab08ce575744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shirley, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Nov 22 04:14:23 compute-0 podman[297446]: 2025-11-22 04:14:23.502230147 +0000 UTC m=+0.823520410 container start 14d2e7d45a9a647f4d789a39baf18e0237c663893686ccf95fa5ab08ce575744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shirley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:14:23 compute-0 priceless_shirley[297462]: 167 167
Nov 22 04:14:23 compute-0 systemd[1]: libpod-14d2e7d45a9a647f4d789a39baf18e0237c663893686ccf95fa5ab08ce575744.scope: Deactivated successfully.
Nov 22 04:14:23 compute-0 nova_compute[253461]: 2025-11-22 04:14:23.665 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:23 compute-0 podman[297446]: 2025-11-22 04:14:23.7463745 +0000 UTC m=+1.067664763 container attach 14d2e7d45a9a647f4d789a39baf18e0237c663893686ccf95fa5ab08ce575744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shirley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 04:14:23 compute-0 podman[297446]: 2025-11-22 04:14:23.747500408 +0000 UTC m=+1.068790681 container died 14d2e7d45a9a647f4d789a39baf18e0237c663893686ccf95fa5ab08ce575744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:14:24 compute-0 nova_compute[253461]: 2025-11-22 04:14:24.046 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:24 compute-0 ceph-mon[75011]: pgmap v2005: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 22 KiB/s wr, 5 op/s
Nov 22 04:14:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 22 04:14:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-24d5f735b63b9b4c571b55370cd250fe4027f9fdc6682dcb585dd95bcf96439b-merged.mount: Deactivated successfully.
Nov 22 04:14:25 compute-0 podman[297446]: 2025-11-22 04:14:25.578709169 +0000 UTC m=+2.899999422 container remove 14d2e7d45a9a647f4d789a39baf18e0237c663893686ccf95fa5ab08ce575744 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_shirley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:14:25 compute-0 systemd[1]: libpod-conmon-14d2e7d45a9a647f4d789a39baf18e0237c663893686ccf95fa5ab08ce575744.scope: Deactivated successfully.
Nov 22 04:14:25 compute-0 podman[297486]: 2025-11-22 04:14:25.870152336 +0000 UTC m=+0.093616780 container create c00c7e2dbc7acecf71e5a8e803d5de46da0f98290ebde4de7e37eba168393141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:14:25 compute-0 podman[297486]: 2025-11-22 04:14:25.822214209 +0000 UTC m=+0.045678683 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:14:25 compute-0 systemd[1]: Started libpod-conmon-c00c7e2dbc7acecf71e5a8e803d5de46da0f98290ebde4de7e37eba168393141.scope.
Nov 22 04:14:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b17507978c6070519283bd5ad9baf6ce02a4e98090e647bd572a186e2c95df4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b17507978c6070519283bd5ad9baf6ce02a4e98090e647bd572a186e2c95df4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b17507978c6070519283bd5ad9baf6ce02a4e98090e647bd572a186e2c95df4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:14:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b17507978c6070519283bd5ad9baf6ce02a4e98090e647bd572a186e2c95df4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:14:25 compute-0 podman[297486]: 2025-11-22 04:14:25.975472222 +0000 UTC m=+0.198936626 container init c00c7e2dbc7acecf71e5a8e803d5de46da0f98290ebde4de7e37eba168393141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 04:14:25 compute-0 podman[297486]: 2025-11-22 04:14:25.983206217 +0000 UTC m=+0.206670631 container start c00c7e2dbc7acecf71e5a8e803d5de46da0f98290ebde4de7e37eba168393141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:14:25 compute-0 podman[297486]: 2025-11-22 04:14:25.993457663 +0000 UTC m=+0.216922067 container attach c00c7e2dbc7acecf71e5a8e803d5de46da0f98290ebde4de7e37eba168393141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:14:26 compute-0 ceph-mon[75011]: pgmap v2006: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 22 04:14:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 170 B/s wr, 3 op/s
Nov 22 04:14:26 compute-0 busy_albattani[297502]: {
Nov 22 04:14:26 compute-0 busy_albattani[297502]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "osd_id": 1,
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "type": "bluestore"
Nov 22 04:14:26 compute-0 busy_albattani[297502]:     },
Nov 22 04:14:26 compute-0 busy_albattani[297502]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "osd_id": 0,
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "type": "bluestore"
Nov 22 04:14:26 compute-0 busy_albattani[297502]:     },
Nov 22 04:14:26 compute-0 busy_albattani[297502]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "osd_id": 2,
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:14:26 compute-0 busy_albattani[297502]:         "type": "bluestore"
Nov 22 04:14:26 compute-0 busy_albattani[297502]:     }
Nov 22 04:14:26 compute-0 busy_albattani[297502]: }
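busy_albattani carries the matching `raw list` answer: the same three bluestore OSDs, this time keyed by osd_uuid and pointing at device-mapper paths (/dev/mapper/ceph_vgN-ceph_lvN) rather than LV paths. The two listings should agree OSD for OSD; a sketch cross-checking them by osd_fsid, again assuming both payloads were saved to files:

    import json

    def cross_check(lvm_path="lvm_list.json", raw_path="raw_list.json"):
        """Verify every OSD from `lvm list` also appears in `raw list`.

        Both file names are stand-ins for the JSON payloads in the log.
        """
        with open(lvm_path) as fh:
            lvm = json.load(fh)
        with open(raw_path) as fh:
            raw = json.load(fh)
        for osd_id, lvs in sorted(lvm.items(), key=lambda kv: int(kv[0])):
            for lv in lvs:
                fsid = lv["tags"]["ceph.osd_fsid"]
                entry = raw.get(fsid)
                where = entry["device"] if entry else "MISSING from raw list"
                print(f"osd.{osd_id}  {fsid}  ->  {where}")

    cross_check()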
Nov 22 04:14:27 compute-0 systemd[1]: libpod-c00c7e2dbc7acecf71e5a8e803d5de46da0f98290ebde4de7e37eba168393141.scope: Deactivated successfully.
Nov 22 04:14:27 compute-0 podman[297486]: 2025-11-22 04:14:27.004514496 +0000 UTC m=+1.227978910 container died c00c7e2dbc7acecf71e5a8e803d5de46da0f98290ebde4de7e37eba168393141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:14:27 compute-0 systemd[1]: libpod-c00c7e2dbc7acecf71e5a8e803d5de46da0f98290ebde4de7e37eba168393141.scope: Consumed 1.024s CPU time.
Nov 22 04:14:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b17507978c6070519283bd5ad9baf6ce02a4e98090e647bd572a186e2c95df4-merged.mount: Deactivated successfully.
Nov 22 04:14:27 compute-0 podman[297486]: 2025-11-22 04:14:27.555754524 +0000 UTC m=+1.779218968 container remove c00c7e2dbc7acecf71e5a8e803d5de46da0f98290ebde4de7e37eba168393141 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_albattani, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:14:27 compute-0 sudo[297381]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:27 compute-0 systemd[1]: libpod-conmon-c00c7e2dbc7acecf71e5a8e803d5de46da0f98290ebde4de7e37eba168393141.scope: Deactivated successfully.
Nov 22 04:14:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:14:27 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:14:27 compute-0 podman[297548]: 2025-11-22 04:14:27.752709771 +0000 UTC m=+0.092628853 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 22 04:14:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
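With the inventory refreshed, the mgr persists it through the monitor: the two handle_command entries store blobs under mgr/cephadm/host.compute-0.devices.0 and mgr/cephadm/host.compute-0 in the cluster's config-key store, the cache that `ceph orch device ls` serves between refreshes. The stored value can be inspected directly; a sketch assuming the blob is JSON, as cephadm normally stores there:

    import json
    import subprocess

    KEY = "mgr/cephadm/host.compute-0.devices.0"

    # `ceph config-key get` prints the stored blob on stdout;
    # treating it as JSON is an assumption about cephadm's format.
    blob = subprocess.check_output(["ceph", "config-key", "get", KEY])
    print(json.dumps(json.loads(blob), indent=2)[:400])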
Nov 22 04:14:27 compute-0 ceph-mon[75011]: pgmap v2007: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 170 B/s wr, 3 op/s
Nov 22 04:14:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:28 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:14:28 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 53028f4c-0c73-48a9-93a9-098eb589b842 does not exist
Nov 22 04:14:28 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev a2f970d4-0378-497f-b49a-40207cb59d97 does not exist
Nov 22 04:14:28 compute-0 sudo[297569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:14:28 compute-0 sudo[297569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:28 compute-0 sudo[297569]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:28 compute-0 sudo[297594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:14:28 compute-0 sudo[297594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:14:28 compute-0 sudo[297594]: pam_unix(sudo:session): session closed for user root
Nov 22 04:14:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 22 04:14:28 compute-0 nova_compute[253461]: 2025-11-22 04:14:28.675 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:29 compute-0 nova_compute[253461]: 2025-11-22 04:14:29.048 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:14:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:14:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 0 B/s wr, 11 op/s
Nov 22 04:14:30 compute-0 ceph-mon[75011]: pgmap v2008: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Nov 22 04:14:32 compute-0 ceph-mon[75011]: pgmap v2009: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 0 B/s wr, 11 op/s
Nov 22 04:14:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 255 B/s wr, 12 op/s
Nov 22 04:14:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:33 compute-0 nova_compute[253461]: 2025-11-22 04:14:33.677 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:33 compute-0 ceph-mon[75011]: pgmap v2010: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 9.8 KiB/s rd, 255 B/s wr, 12 op/s
Nov 22 04:14:34 compute-0 nova_compute[253461]: 2025-11-22 04:14:34.079 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 92 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 343 KiB/s wr, 21 op/s
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:14:36
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.control', 'images', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'volumes']
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
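"prepared 0/10 changes" is the balancer's way of saying the upmap optimizer found nothing to move: the 305 PGs are already even across the three OSDs, and 10 is the per-round cap (the upmap_max_optimizations default). The module's state can be read back at any time; a sketch assuming the JSON keys reported by recent releases:

    import json
    import subprocess

    # `ceph balancer status -f json` reports mode, active flag and the
    # result of the last optimize pass (key names assumed from recent Ceph).
    out = subprocess.check_output(["ceph", "balancer", "status", "-f", "json"])
    status = json.loads(out)
    print(status.get("mode"), status.get("active"),
          status.get("optimize_result"))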
Nov 22 04:14:36 compute-0 ceph-mon[75011]: pgmap v2011: 305 pgs: 305 active+clean; 92 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 343 KiB/s wr, 21 op/s
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:14:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 124 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.9 MiB/s wr, 25 op/s
Nov 22 04:14:37 compute-0 ceph-mon[75011]: pgmap v2012: 305 pgs: 305 active+clean; 124 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.9 MiB/s wr, 25 op/s
Nov 22 04:14:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 124 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.9 MiB/s wr, 25 op/s
Nov 22 04:14:38 compute-0 nova_compute[253461]: 2025-11-22 04:14:38.732 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:39 compute-0 nova_compute[253461]: 2025-11-22 04:14:39.081 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:39 compute-0 ceph-mon[75011]: pgmap v2013: 305 pgs: 305 active+clean; 124 MiB data, 491 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 2.9 MiB/s wr, 25 op/s
Nov 22 04:14:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 174 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 7.0 MiB/s wr, 43 op/s
Nov 22 04:14:42 compute-0 ceph-mon[75011]: pgmap v2014: 305 pgs: 305 active+clean; 174 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 7.0 MiB/s wr, 43 op/s
Nov 22 04:14:42 compute-0 podman[297620]: 2025-11-22 04:14:42.392292185 +0000 UTC m=+0.067309732 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 04:14:42 compute-0 podman[297621]: 2025-11-22 04:14:42.427210942 +0000 UTC m=+0.103582498 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
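Both health_status=healthy events above come from podman running each container's configured check (config_data shows 'healthcheck': {'test': '/openstack/healthcheck'}, mounted from /var/lib/openstack/healthchecks). A minimal sketch of triggering the same check by hand, assuming only podman and the container names shown in the log:

    import subprocess

    # "podman healthcheck run" executes the container's configured test
    # command and returns 0 when the check passes (the "healthy" status above).
    for name in ("ovn_metadata_agent", "ovn_controller"):
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")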
Nov 22 04:14:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 202 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 9.4 MiB/s wr, 33 op/s
Nov 22 04:14:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:43 compute-0 nova_compute[253461]: 2025-11-22 04:14:43.772 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:44 compute-0 nova_compute[253461]: 2025-11-22 04:14:44.083 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:44 compute-0 ceph-mon[75011]: pgmap v2015: 305 pgs: 305 active+clean; 202 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 9.4 MiB/s wr, 33 op/s
Nov 22 04:14:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 9.4 MiB/s wr, 32 op/s
Nov 22 04:14:46 compute-0 nova_compute[253461]: 2025-11-22 04:14:46.450 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "6c989a97-6747-4b3b-a025-118564ecad92" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:14:46 compute-0 nova_compute[253461]: 2025-11-22 04:14:46.451 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
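The Acquiring/acquired pair above, and the later "released" :: held lines, are oslo.concurrency's standard lock tracing. A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed; nova wraps this machinery rather than exposing it directly:

    from oslo_concurrency import lockutils

    # Serializes all builds for one instance UUID; entering and leaving the
    # decorated function emits the "acquired ... waited" / "released ... held"
    # DEBUG lines seen in this log.
    @lockutils.synchronized("6c989a97-6747-4b3b-a025-118564ecad92")
    def locked_do_build_and_run_instance():
        pass  # critical section: the build_and_run_instance body

    locked_do_build_and_run_instance()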
Nov 22 04:14:46 compute-0 ceph-mon[75011]: pgmap v2016: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 9.4 MiB/s wr, 32 op/s
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021746750260467538 of space, bias 1.0, pg target 0.6524025078140261 quantized to 32 (current 32)
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
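Every pg_autoscaler line above fits one visible formula: pg target = used-space ratio x bias x 300, then quantized to a power of two and left at the current pg_num when the change is too small to act on. The factor 300 is inferred from the logged ratios (plausibly 100 target PGs per OSD across 3 OSDs); the log itself does not state it. A sketch that reproduces the logged numbers:

    # (usage_ratio, bias) pairs copied verbatim from the log lines above.
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "vms":                (6.359070782053786e-08, 1.0),
        "volumes":            (0.0021746750260467538, 1.0),
        "images":             (0.0006661762551279547, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        target = ratio * bias * 300  # matches each "pg target" value above
        print(f"{name}: pg target {target}")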
Nov 22 04:14:46 compute-0 nova_compute[253461]: 2025-11-22 04:14:46.669 253465 DEBUG nova.compute.manager [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:14:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 9.0 MiB/s wr, 22 op/s
Nov 22 04:14:46 compute-0 nova_compute[253461]: 2025-11-22 04:14:46.923 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:14:46 compute-0 nova_compute[253461]: 2025-11-22 04:14:46.924 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:14:46 compute-0 nova_compute[253461]: 2025-11-22 04:14:46.935 253465 DEBUG nova.virt.hardware [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:14:46 compute-0 nova_compute[253461]: 2025-11-22 04:14:46.936 253465 INFO nova.compute.claims [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:14:47 compute-0 nova_compute[253461]: 2025-11-22 04:14:47.270 253465 DEBUG oslo_concurrency.processutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:14:47 compute-0 nova_compute[253461]: 2025-11-22 04:14:47.561 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:14:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:14:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2609152553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:14:47 compute-0 nova_compute[253461]: 2025-11-22 04:14:47.721 253465 DEBUG oslo_concurrency.processutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
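The resource tracker shells out to the exact command shown above to size the RBD-backed disk pool, and the mon's audit channel logs the dispatched {"prefix": "df"} command in response. A minimal sketch of the same call and of reading the totals back, assuming the standard ceph df JSON layout with a top-level "stats" object:

    import json
    import subprocess

    # Same command string as the log line above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    )
    stats = json.loads(out.stdout)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])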
Nov 22 04:14:47 compute-0 nova_compute[253461]: 2025-11-22 04:14:47.730 253465 DEBUG nova.compute.provider_tree [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:14:48 compute-0 ceph-mon[75011]: pgmap v2017: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 9.0 MiB/s wr, 22 op/s
Nov 22 04:14:48 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2609152553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:14:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:48 compute-0 nova_compute[253461]: 2025-11-22 04:14:48.511 253465 DEBUG nova.scheduler.client.report [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
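The inventory dict above is what placement uses to size this node: schedulable capacity per resource class is (total - reserved) x allocation_ratio. A worked check against the logged values:

    inventory = {  # trimmed to the fields used below; values from the log line above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2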
Nov 22 04:14:48 compute-0 nova_compute[253461]: 2025-11-22 04:14:48.581 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:14:48 compute-0 nova_compute[253461]: 2025-11-22 04:14:48.581 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:14:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 6.5 MiB/s wr, 19 op/s
Nov 22 04:14:48 compute-0 nova_compute[253461]: 2025-11-22 04:14:48.776 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:48 compute-0 nova_compute[253461]: 2025-11-22 04:14:48.826 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:14:48 compute-0 nova_compute[253461]: 2025-11-22 04:14:48.827 253465 DEBUG nova.compute.manager [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:14:48 compute-0 nova_compute[253461]: 2025-11-22 04:14:48.834 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:14:48 compute-0 nova_compute[253461]: 2025-11-22 04:14:48.835 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:14:48 compute-0 nova_compute[253461]: 2025-11-22 04:14:48.836 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:14:48 compute-0 nova_compute[253461]: 2025-11-22 04:14:48.836 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:14:48 compute-0 nova_compute[253461]: 2025-11-22 04:14:48.837 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.085 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:14:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3974996271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.239 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:14:49 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3974996271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.430 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.432 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4436MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.432 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.432 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.466 253465 DEBUG nova.compute.manager [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.466 253465 DEBUG nova.network.neutron [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.641 253465 INFO nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.679 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 6c989a97-6747-4b3b-a025-118564ecad92 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.679 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.680 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.717 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:14:49 compute-0 nova_compute[253461]: 2025-11-22 04:14:49.834 253465 DEBUG nova.compute.manager [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.004 253465 DEBUG nova.policy [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ddff25657c74403e9ed9e91ff227badd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98e4451f91104cd88f6e19dd5c53fd00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.011 253465 INFO nova.virt.block_device [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Booting with volume 736913be-faab-467f-889e-ff95053bdeaa at /dev/vda
Nov 22 04:14:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:14:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2373245867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.274 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.280 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.301 253465 DEBUG os_brick.utils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.303 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.321 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.322 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[a597cf1b-9919-491e-ac0f-7e66d5b95ed2]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.324 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.337 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.337 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[30c72f95-27eb-483d-815e-fa9ea747ada3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.339 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.355 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.355 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[db788ee9-2045-4304-ba7c-dae822f2f641]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.357 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[761bd67d-8a25-46b4-8369-a6e0a361526c]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.358 253465 DEBUG oslo_concurrency.processutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.394 253465 DEBUG oslo_concurrency.processutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "nvme version" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.398 253465 DEBUG os_brick.initiator.connectors.lightos [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.399 253465 DEBUG os_brick.initiator.connectors.lightos [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.400 253465 DEBUG os_brick.initiator.connectors.lightos [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.400 253465 DEBUG os_brick.utils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] <== get_connector_properties: return (99ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
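The "==> get_connector_properties" / "<== ... return (99ms)" pair above brackets a single os-brick call that probes multipathd, the iSCSI initiator name, the root filesystem source and nvme, and returns the connector dict shown in the return line. A minimal sketch of the same entry point with the arguments taken verbatim from the trace; actually running it requires os-brick and its privsep helper:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.100",
        multipath=True,
        enforce_multipath=True,
        host="compute-0.ctlplane.example.com",
    )
    # e.g. props["initiator"] and props["nqn"] match the values logged above
    print(props["initiator"], props["nqn"])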
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.401 253465 DEBUG nova.virt.block_device [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Updating existing volume attachment record: 67c395b9-4019-4dec-814a-9e6cf91bb45c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.441 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:14:50 compute-0 ceph-mon[75011]: pgmap v2018: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 6.5 MiB/s wr, 19 op/s
Nov 22 04:14:50 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2373245867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.617 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.618 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.619 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:14:50 compute-0 nova_compute[253461]: 2025-11-22 04:14:50.620 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 04:14:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 6.5 MiB/s wr, 19 op/s
Nov 22 04:14:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:14:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/867660162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:14:51 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/867660162' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:14:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 2.3 MiB/s wr, 1 op/s
Nov 22 04:14:52 compute-0 ceph-mon[75011]: pgmap v2019: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 6.5 MiB/s wr, 19 op/s
Nov 22 04:14:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.283 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:14:53.283 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:14:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:14:53.285 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.496 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.497 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.497 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.498 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.514 253465 DEBUG nova.compute.manager [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.517 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.518 253465 INFO nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Creating image(s)
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.519 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.519 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Ensure instance console log exists: /var/lib/nova/instances/6c989a97-6747-4b3b-a025-118564ecad92/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.520 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.521 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.522 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.543 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.544 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.545 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.545 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.546 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.778 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:53 compute-0 nova_compute[253461]: 2025-11-22 04:14:53.915 253465 DEBUG nova.network.neutron [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Successfully created port: dbce96fb-9cd9-4da5-acdf-14e560a0724f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:14:54 compute-0 nova_compute[253461]: 2025-11-22 04:14:54.087 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:54 compute-0 ceph-mon[75011]: pgmap v2020: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 2.3 MiB/s wr, 1 op/s
Nov 22 04:14:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 22 04:14:55 compute-0 nova_compute[253461]: 2025-11-22 04:14:55.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:14:55 compute-0 nova_compute[253461]: 2025-11-22 04:14:55.533 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:14:55 compute-0 nova_compute[253461]: 2025-11-22 04:14:55.868 253465 DEBUG nova.network.neutron [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Successfully updated port: dbce96fb-9cd9-4da5-acdf-14e560a0724f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:14:55 compute-0 nova_compute[253461]: 2025-11-22 04:14:55.956 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "refresh_cache-6c989a97-6747-4b3b-a025-118564ecad92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:14:55 compute-0 nova_compute[253461]: 2025-11-22 04:14:55.957 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquired lock "refresh_cache-6c989a97-6747-4b3b-a025-118564ecad92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:14:55 compute-0 nova_compute[253461]: 2025-11-22 04:14:55.957 253465 DEBUG nova.network.neutron [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:14:56 compute-0 nova_compute[253461]: 2025-11-22 04:14:56.248 253465 DEBUG nova.compute.manager [req-db86d3ab-ed53-4436-99a0-d93640cf465f req-188be051-411e-4c36-af20-d56c0c769a61 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Received event network-changed-dbce96fb-9cd9-4da5-acdf-14e560a0724f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:14:56 compute-0 nova_compute[253461]: 2025-11-22 04:14:56.248 253465 DEBUG nova.compute.manager [req-db86d3ab-ed53-4436-99a0-d93640cf465f req-188be051-411e-4c36-af20-d56c0c769a61 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Refreshing instance network info cache due to event network-changed-dbce96fb-9cd9-4da5-acdf-14e560a0724f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:14:56 compute-0 nova_compute[253461]: 2025-11-22 04:14:56.249 253465 DEBUG oslo_concurrency.lockutils [req-db86d3ab-ed53-4436-99a0-d93640cf465f req-188be051-411e-4c36-af20-d56c0c769a61 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-6c989a97-6747-4b3b-a025-118564ecad92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:14:56 compute-0 nova_compute[253461]: 2025-11-22 04:14:56.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:14:56 compute-0 ceph-mon[75011]: pgmap v2021: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Nov 22 04:14:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:14:56 compute-0 nova_compute[253461]: 2025-11-22 04:14:56.872 253465 DEBUG nova.network.neutron [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:14:57 compute-0 nova_compute[253461]: 2025-11-22 04:14:57.910 253465 DEBUG nova.network.neutron [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Updating instance_info_cache with network_info: [{"id": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "address": "fa:16:3e:05:f3:19", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbce96fb-9c", "ovs_interfaceid": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.163 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Releasing lock "refresh_cache-6c989a97-6747-4b3b-a025-118564ecad92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.163 253465 DEBUG nova.compute.manager [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Instance network_info: |[{"id": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "address": "fa:16:3e:05:f3:19", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbce96fb-9c", "ovs_interfaceid": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
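The network_info blob logged twice above is ordinary JSON once the surrounding |...| markers are stripped. A short sketch of pulling the port ID and fixed IP out of it, using only fields present in the logged structure:

    import json

    # Trimmed to the fields used below; values copied from the log.
    network_info = json.loads("""
    [{"id": "dbce96fb-9cd9-4da5-acdf-14e560a0724f",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.14", "type": "fixed"}]}]}}]
    """)
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["address"])  # -> ... 10.100.0.14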
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.164 253465 DEBUG oslo_concurrency.lockutils [req-db86d3ab-ed53-4436-99a0-d93640cf465f req-188be051-411e-4c36-af20-d56c0c769a61 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-6c989a97-6747-4b3b-a025-118564ecad92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.164 253465 DEBUG nova.network.neutron [req-db86d3ab-ed53-4436-99a0-d93640cf465f req-188be051-411e-4c36-af20-d56c0c769a61 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Refreshing network info cache for port dbce96fb-9cd9-4da5-acdf-14e560a0724f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.168 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Start _get_guest_xml network_info=[{"id": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "address": "fa:16:3e:05:f3:19", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbce96fb-9c", "ovs_interfaceid": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': '67c395b9-4019-4dec-814a-9e6cf91bb45c', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-736913be-faab-467f-889e-ff95053bdeaa', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '736913be-faab-467f-889e-ff95053bdeaa', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '6c989a97-6747-4b3b-a025-118564ecad92', 'attached_at': '', 'detached_at': '', 'volume_id': '736913be-faab-467f-889e-ff95053bdeaa', 'serial': '736913be-faab-467f-889e-ff95053bdeaa'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.176 253465 WARNING nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.182 253465 DEBUG nova.virt.libvirt.host [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.183 253465 DEBUG nova.virt.libvirt.host [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.186 253465 DEBUG nova.virt.libvirt.host [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.187 253465 DEBUG nova.virt.libvirt.host [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.188 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.188 253465 DEBUG nova.virt.hardware [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.189 253465 DEBUG nova.virt.hardware [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.189 253465 DEBUG nova.virt.hardware [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.190 253465 DEBUG nova.virt.hardware [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.190 253465 DEBUG nova.virt.hardware [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.191 253465 DEBUG nova.virt.hardware [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.191 253465 DEBUG nova.virt.hardware [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.191 253465 DEBUG nova.virt.hardware [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.192 253465 DEBUG nova.virt.hardware [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.192 253465 DEBUG nova.virt.hardware [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.193 253465 DEBUG nova.virt.hardware [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:14:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.228 253465 DEBUG nova.storage.rbd_utils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] rbd image 6c989a97-6747-4b3b-a025-118564ecad92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.233 253465 DEBUG oslo_concurrency.processutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:14:58 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:14:58.287 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:14:58 compute-0 podman[297757]: 2025-11-22 04:14:58.395760867 +0000 UTC m=+0.067757246 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 04:14:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:14:58 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048536689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.670 253465 DEBUG oslo_concurrency.processutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:14:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:14:58 compute-0 ceph-mon[75011]: pgmap v2022: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.780 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.797 253465 DEBUG os_brick.encryptors [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Using volume encryption metadata '{'encryption_key_id': 'b8caf87e-0d14-44d6-b0a9-8f62d74c94dd', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-736913be-faab-467f-889e-ff95053bdeaa', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '736913be-faab-467f-889e-ff95053bdeaa', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '6c989a97-6747-4b3b-a025-118564ecad92', 'attached_at': '', 'detached_at': '', 'volume_id': '736913be-faab-467f-889e-ff95053bdeaa', 'serial': '736913be-faab-467f-889e-ff95053bdeaa'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.800 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.812 253465 DEBUG barbicanclient.v1.secrets [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.813 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.838 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.838 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.861 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.862 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.885 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.886 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.911 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.912 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.938 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:58 compute-0 nova_compute[253461]: 2025-11-22 04:14:58.938 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.007 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.008 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.034 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.034 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.068 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.068 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.089 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.090 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.091 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.141 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.141 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.163 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.164 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.220 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.221 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.247 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.248 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.285 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.286 253465 INFO barbicanclient.base [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/b8caf87e-0d14-44d6-b0a9-8f62d74c94dd
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.318 253465 DEBUG barbicanclient.client [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.319 253465 DEBUG nova.virt.libvirt.host [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 22 04:14:59 compute-0 nova_compute[253461]:   <usage type="volume">
Nov 22 04:14:59 compute-0 nova_compute[253461]:     <volume>736913be-faab-467f-889e-ff95053bdeaa</volume>
Nov 22 04:14:59 compute-0 nova_compute[253461]:   </usage>
Nov 22 04:14:59 compute-0 nova_compute[253461]: </secret>
Nov 22 04:14:59 compute-0 nova_compute[253461]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:14:59 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3048536689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:14:59 compute-0 ceph-mon[75011]: pgmap v2023: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.902 253465 DEBUG nova.network.neutron [req-db86d3ab-ed53-4436-99a0-d93640cf465f req-188be051-411e-4c36-af20-d56c0c769a61 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Updated VIF entry in instance network info cache for port dbce96fb-9cd9-4da5-acdf-14e560a0724f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:14:59 compute-0 nova_compute[253461]: 2025-11-22 04:14:59.903 253465 DEBUG nova.network.neutron [req-db86d3ab-ed53-4436-99a0-d93640cf465f req-188be051-411e-4c36-af20-d56c0c769a61 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Updating instance_info_cache with network_info: [{"id": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "address": "fa:16:3e:05:f3:19", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbce96fb-9c", "ovs_interfaceid": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:15:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:15:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4035892226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:15:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:15:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4035892226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:15:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4035892226' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:15:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4035892226' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:15:02 compute-0 ceph-mon[75011]: pgmap v2024: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:02 compute-0 nova_compute[253461]: 2025-11-22 04:15:02.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:02 compute-0 nova_compute[253461]: 2025-11-22 04:15:02.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 04:15:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:03 compute-0 nova_compute[253461]: 2025-11-22 04:15:03.782 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:04 compute-0 nova_compute[253461]: 2025-11-22 04:15:04.090 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:04 compute-0 nova_compute[253461]: 2025-11-22 04:15:04.281 253465 DEBUG nova.virt.libvirt.vif [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:14:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-22541137',display_name='tempest-TransferEncryptedVolumeTest-server-22541137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-22541137',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMF3KIMSa8o4WCzM1VgX9RcGz4FcpwcrZcdUDFLNYpjBj2lzhaXFrO0bSdzjU9Itff6b3BySQo/nLrhI32bk8GIfHP/n0NuDArjdwgS2hsu8vteQ0u/zEQY1VMKJGLhTNw==',key_name='tempest-TransferEncryptedVolumeTest-1133237278',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98e4451f91104cd88f6e19dd5c53fd00',ramdisk_id='',reservation_id='r-4t21clcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1500496447',owner_user_name='tempest-TransferEncryptedVolumeTest-1500496447-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:14:49Z,user_data=None,user_id='ddff25657c74403e9ed9e91ff227badd',uuid=6c989a97-6747-4b3b-a025-118564ecad92,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "address": "fa:16:3e:05:f3:19", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbce96fb-9c", "ovs_interfaceid": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:15:04 compute-0 nova_compute[253461]: 2025-11-22 04:15:04.282 253465 DEBUG nova.network.os_vif_util [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converting VIF {"id": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "address": "fa:16:3e:05:f3:19", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbce96fb-9c", "ovs_interfaceid": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:15:04 compute-0 nova_compute[253461]: 2025-11-22 04:15:04.283 253465 DEBUG nova.network.os_vif_util [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:f3:19,bridge_name='br-int',has_traffic_filtering=True,id=dbce96fb-9cd9-4da5-acdf-14e560a0724f,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdbce96fb-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:15:04 compute-0 nova_compute[253461]: 2025-11-22 04:15:04.286 253465 DEBUG nova.objects.instance [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c989a97-6747-4b3b-a025-118564ecad92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:15:04 compute-0 ceph-mon[75011]: pgmap v2025: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:04 compute-0 nova_compute[253461]: 2025-11-22 04:15:04.632 253465 DEBUG oslo_concurrency.lockutils [req-db86d3ab-ed53-4436-99a0-d93640cf465f req-188be051-411e-4c36-af20-d56c0c769a61 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-6c989a97-6747-4b3b-a025-118564ecad92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:15:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.175 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:15:05 compute-0 nova_compute[253461]:   <uuid>6c989a97-6747-4b3b-a025-118564ecad92</uuid>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   <name>instance-00000018</name>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-22541137</nova:name>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:14:58</nova:creationTime>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:15:05 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:15:05 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:15:05 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:15:05 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:15:05 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:15:05 compute-0 nova_compute[253461]:         <nova:user uuid="ddff25657c74403e9ed9e91ff227badd">tempest-TransferEncryptedVolumeTest-1500496447-project-member</nova:user>
Nov 22 04:15:05 compute-0 nova_compute[253461]:         <nova:project uuid="98e4451f91104cd88f6e19dd5c53fd00">tempest-TransferEncryptedVolumeTest-1500496447</nova:project>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:15:05 compute-0 nova_compute[253461]:         <nova:port uuid="dbce96fb-9cd9-4da5-acdf-14e560a0724f">
Nov 22 04:15:05 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <system>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <entry name="serial">6c989a97-6747-4b3b-a025-118564ecad92</entry>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <entry name="uuid">6c989a97-6747-4b3b-a025-118564ecad92</entry>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     </system>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   <os>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   </os>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   <features>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   </features>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/6c989a97-6747-4b3b-a025-118564ecad92_disk.config">
Nov 22 04:15:05 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       </source>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:15:05 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-736913be-faab-467f-889e-ff95053bdeaa">
Nov 22 04:15:05 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       </source>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:15:05 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <serial>736913be-faab-467f-889e-ff95053bdeaa</serial>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <encryption format="luks">
Nov 22 04:15:05 compute-0 nova_compute[253461]:         <secret type="passphrase" uuid="765405d9-5301-4967-8f03-edb0343e05cb"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       </encryption>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:05:f3:19"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <target dev="tapdbce96fb-9c"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/6c989a97-6747-4b3b-a025-118564ecad92/console.log" append="off"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <video>
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     </video>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:15:05 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:15:05 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:15:05 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:15:05 compute-0 nova_compute[253461]: </domain>
Nov 22 04:15:05 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.177 253465 DEBUG nova.compute.manager [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Preparing to wait for external event network-vif-plugged-dbce96fb-9cd9-4da5-acdf-14e560a0724f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.177 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "6c989a97-6747-4b3b-a025-118564ecad92-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.178 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.178 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.178 253465 DEBUG nova.virt.libvirt.vif [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:14:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-22541137',display_name='tempest-TransferEncryptedVolumeTest-server-22541137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-22541137',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMF3KIMSa8o4WCzM1VgX9RcGz4FcpwcrZcdUDFLNYpjBj2lzhaXFrO0bSdzjU9Itff6b3BySQo/nLrhI32bk8GIfHP/n0NuDArjdwgS2hsu8vteQ0u/zEQY1VMKJGLhTNw==',key_name='tempest-TransferEncryptedVolumeTest-1133237278',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98e4451f91104cd88f6e19dd5c53fd00',ramdisk_id='',reservation_id='r-4t21clcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1500496447',owner_user_name='tempest-TransferEncryptedVolumeTest-1500496447-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:14:49Z,user_data=None,user_id='ddff25657c74403e9ed9e91ff227badd',uuid=6c989a97-6747-4b3b-a025-118564ecad92,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "address": "fa:16:3e:05:f3:19", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbce96fb-9c", "ovs_interfaceid": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.179 253465 DEBUG nova.network.os_vif_util [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converting VIF {"id": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "address": "fa:16:3e:05:f3:19", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbce96fb-9c", "ovs_interfaceid": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.179 253465 DEBUG nova.network.os_vif_util [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:f3:19,bridge_name='br-int',has_traffic_filtering=True,id=dbce96fb-9cd9-4da5-acdf-14e560a0724f,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdbce96fb-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.180 253465 DEBUG os_vif [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:f3:19,bridge_name='br-int',has_traffic_filtering=True,id=dbce96fb-9cd9-4da5-acdf-14e560a0724f,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdbce96fb-9c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.180 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.181 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.181 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.185 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.185 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdbce96fb-9c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.186 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdbce96fb-9c, col_values=(('external_ids', {'iface-id': 'dbce96fb-9cd9-4da5-acdf-14e560a0724f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:f3:19', 'vm-uuid': '6c989a97-6747-4b3b-a025-118564ecad92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.187 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:05 compute-0 NetworkManager[48916]: <info>  [1763784905.1891] manager: (tapdbce96fb-9c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.190 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.200 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:05 compute-0 nova_compute[253461]: 2025-11-22 04:15:05.200 253465 INFO os_vif [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:f3:19,bridge_name='br-int',has_traffic_filtering=True,id=dbce96fb-9cd9-4da5-acdf-14e560a0724f,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdbce96fb-9c')
Nov 22 04:15:05 compute-0 ceph-mon[75011]: pgmap v2026: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:15:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:15:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:15:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:15:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:15:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:15:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:07 compute-0 nova_compute[253461]: 2025-11-22 04:15:07.396 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 04:15:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:08 compute-0 nova_compute[253461]: 2025-11-22 04:15:08.785 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:08 compute-0 ceph-mon[75011]: pgmap v2027: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:10 compute-0 nova_compute[253461]: 2025-11-22 04:15:10.189 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:10 compute-0 ceph-mon[75011]: pgmap v2028: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:11 compute-0 nova_compute[253461]: 2025-11-22 04:15:11.780 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:15:11 compute-0 nova_compute[253461]: 2025-11-22 04:15:11.780 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:15:11 compute-0 nova_compute[253461]: 2025-11-22 04:15:11.781 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] No VIF found with MAC fa:16:3e:05:f3:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:15:11 compute-0 nova_compute[253461]: 2025-11-22 04:15:11.782 253465 INFO nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Using config drive
Nov 22 04:15:11 compute-0 nova_compute[253461]: 2025-11-22 04:15:11.810 253465 DEBUG nova.storage.rbd_utils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] rbd image 6c989a97-6747-4b3b-a025-118564ecad92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:15:12 compute-0 nova_compute[253461]: 2025-11-22 04:15:12.175 253465 INFO nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Creating config drive at /var/lib/nova/instances/6c989a97-6747-4b3b-a025-118564ecad92/disk.config
Nov 22 04:15:12 compute-0 nova_compute[253461]: 2025-11-22 04:15:12.181 253465 DEBUG oslo_concurrency.processutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c989a97-6747-4b3b-a025-118564ecad92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprh7181xo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:15:12 compute-0 nova_compute[253461]: 2025-11-22 04:15:12.317 253465 DEBUG oslo_concurrency.processutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c989a97-6747-4b3b-a025-118564ecad92/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprh7181xo" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:12 compute-0 nova_compute[253461]: 2025-11-22 04:15:12.351 253465 DEBUG nova.storage.rbd_utils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] rbd image 6c989a97-6747-4b3b-a025-118564ecad92_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:15:12 compute-0 nova_compute[253461]: 2025-11-22 04:15:12.356 253465 DEBUG oslo_concurrency.processutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c989a97-6747-4b3b-a025-118564ecad92/disk.config 6c989a97-6747-4b3b-a025-118564ecad92_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:15:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:12 compute-0 ceph-mon[75011]: pgmap v2029: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:13 compute-0 podman[297858]: 2025-11-22 04:15:13.399309525 +0000 UTC m=+0.071511339 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:15:13 compute-0 podman[297859]: 2025-11-22 04:15:13.426375533 +0000 UTC m=+0.099172135 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 04:15:13 compute-0 nova_compute[253461]: 2025-11-22 04:15:13.750 253465 DEBUG oslo_concurrency.processutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c989a97-6747-4b3b-a025-118564ecad92/disk.config 6c989a97-6747-4b3b-a025-118564ecad92_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:13 compute-0 nova_compute[253461]: 2025-11-22 04:15:13.751 253465 INFO nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Deleting local config drive /var/lib/nova/instances/6c989a97-6747-4b3b-a025-118564ecad92/disk.config because it was imported into RBD.
Nov 22 04:15:13 compute-0 nova_compute[253461]: 2025-11-22 04:15:13.787 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:13 compute-0 kernel: tapdbce96fb-9c: entered promiscuous mode
Nov 22 04:15:13 compute-0 nova_compute[253461]: 2025-11-22 04:15:13.834 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:13 compute-0 ovn_controller[152691]: 2025-11-22T04:15:13Z|00251|binding|INFO|Claiming lport dbce96fb-9cd9-4da5-acdf-14e560a0724f for this chassis.
Nov 22 04:15:13 compute-0 ovn_controller[152691]: 2025-11-22T04:15:13Z|00252|binding|INFO|dbce96fb-9cd9-4da5-acdf-14e560a0724f: Claiming fa:16:3e:05:f3:19 10.100.0.14
Nov 22 04:15:13 compute-0 NetworkManager[48916]: <info>  [1763784913.8367] manager: (tapdbce96fb-9c): new Tun device (/org/freedesktop/NetworkManager/Devices/124)
Nov 22 04:15:13 compute-0 nova_compute[253461]: 2025-11-22 04:15:13.848 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:13 compute-0 systemd-machined[215728]: New machine qemu-24-instance-00000018.
Nov 22 04:15:13 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000018.
Nov 22 04:15:13 compute-0 ovn_controller[152691]: 2025-11-22T04:15:13Z|00253|binding|INFO|Setting lport dbce96fb-9cd9-4da5-acdf-14e560a0724f ovn-installed in OVS
Nov 22 04:15:13 compute-0 nova_compute[253461]: 2025-11-22 04:15:13.931 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:13 compute-0 systemd-udevd[297917]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:15:13 compute-0 NetworkManager[48916]: <info>  [1763784913.9552] device (tapdbce96fb-9c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:15:13 compute-0 NetworkManager[48916]: <info>  [1763784913.9563] device (tapdbce96fb-9c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:15:13 compute-0 ceph-mon[75011]: pgmap v2030: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:14 compute-0 ovn_controller[152691]: 2025-11-22T04:15:14Z|00254|binding|INFO|Setting lport dbce96fb-9cd9-4da5-acdf-14e560a0724f up in Southbound
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.011 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:f3:19 10.100.0.14'], port_security=['fa:16:3e:05:f3:19 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6c989a97-6747-4b3b-a025-118564ecad92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-73bcc005-88ac-46b6-ad11-6207c6046246', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98e4451f91104cd88f6e19dd5c53fd00', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e95e3fed-bcd6-449d-9f95-3b75633f02f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8139379-e220-4788-92e4-b495f0c34eb7, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=dbce96fb-9cd9-4da5-acdf-14e560a0724f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.013 162689 INFO neutron.agent.ovn.metadata.agent [-] Port dbce96fb-9cd9-4da5-acdf-14e560a0724f in datapath 73bcc005-88ac-46b6-ad11-6207c6046246 bound to our chassis
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.016 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 73bcc005-88ac-46b6-ad11-6207c6046246
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.032 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c50109-23f3-4e92-a1eb-7cbee20709f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.034 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap73bcc005-81 in ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.039 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap73bcc005-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.039 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[be995a94-486e-4099-bad8-89c92a70f2c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.040 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb2b540-4e5f-4538-a0b9-58d7f9f6d8e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.057 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[b89d6718-43f4-418e-bd89-c475d4ff3f08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.077 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f71dd1f3-890b-4d4b-a6a2-640a809098e4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.108 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[f38c2ec3-e891-4c2a-8623-5465561049b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.116 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f7487473-f54f-4641-95b8-74a347f3a78a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 NetworkManager[48916]: <info>  [1763784914.1178] manager: (tap73bcc005-80): new Veth device (/org/freedesktop/NetworkManager/Devices/125)
Nov 22 04:15:14 compute-0 systemd-udevd[297919]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.160 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[3f10512f-8f0a-42e7-8e49-bace739e470c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.164 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[6a99f8b6-9c4f-4896-b4dd-03b764784639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 NetworkManager[48916]: <info>  [1763784914.1931] device (tap73bcc005-80): carrier: link connected
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.198 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[3b58f302-0155-4845-8280-d05bc27b932e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.225 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3924dfe2-439e-41ba-8805-afbcc1442480]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap73bcc005-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516679, 'reachable_time': 15833, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297965, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.244 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f569748c-a76f-4910-b2cc-fad416cf2028]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:1121'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516679, 'tstamp': 516679}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297969, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.264 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1b184125-626c-4306-a6ab-7ae5ebf9cb3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap73bcc005-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 76], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516679, 'reachable_time': 15833, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297970, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.308 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6d682c9f-fdff-4f6d-8458-787e9d3f5385]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.367 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d1cd5a5b-4d94-45f8-885c-5c83f1d397fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.368 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73bcc005-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.368 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.368 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73bcc005-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:15:14 compute-0 nova_compute[253461]: 2025-11-22 04:15:14.370 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:14 compute-0 NetworkManager[48916]: <info>  [1763784914.3710] manager: (tap73bcc005-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/126)
Nov 22 04:15:14 compute-0 kernel: tap73bcc005-80: entered promiscuous mode
Nov 22 04:15:14 compute-0 nova_compute[253461]: 2025-11-22 04:15:14.372 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.373 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap73bcc005-80, col_values=(('external_ids', {'iface-id': 'c0be682a-2fee-4917-82d9-be22b54079b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:15:14 compute-0 nova_compute[253461]: 2025-11-22 04:15:14.374 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:14 compute-0 ovn_controller[152691]: 2025-11-22T04:15:14Z|00255|binding|INFO|Releasing lport c0be682a-2fee-4917-82d9-be22b54079b1 from this chassis (sb_readonly=0)
Nov 22 04:15:14 compute-0 nova_compute[253461]: 2025-11-22 04:15:14.375 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.376 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/73bcc005-88ac-46b6-ad11-6207c6046246.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/73bcc005-88ac-46b6-ad11-6207c6046246.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.377 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c6197f7d-6a23-4ba1-b57c-cc8045611891]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.377 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-73bcc005-88ac-46b6-ad11-6207c6046246
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/73bcc005-88ac-46b6-ad11-6207c6046246.pid.haproxy
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 73bcc005-88ac-46b6-ad11-6207c6046246
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:15:14 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:14.378 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'env', 'PROCESS_TAG=haproxy-73bcc005-88ac-46b6-ad11-6207c6046246', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/73bcc005-88ac-46b6-ad11-6207c6046246.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 04:15:14 compute-0 nova_compute[253461]: 2025-11-22 04:15:14.387 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Nov 22 04:15:14 compute-0 podman[298020]: 2025-11-22 04:15:14.78495976 +0000 UTC m=+0.042311267 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:15:15 compute-0 podman[298020]: 2025-11-22 04:15:15.142908095 +0000 UTC m=+0.400259562 container create 240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 04:15:15 compute-0 nova_compute[253461]: 2025-11-22 04:15:15.165 253465 DEBUG nova.compute.manager [req-8ac97cb6-6507-4ee4-9e93-76a62eff959c req-006e0e39-1ea8-4fe4-9ecb-f63c2c556d48 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Received event network-vif-plugged-dbce96fb-9cd9-4da5-acdf-14e560a0724f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:15:15 compute-0 nova_compute[253461]: 2025-11-22 04:15:15.166 253465 DEBUG oslo_concurrency.lockutils [req-8ac97cb6-6507-4ee4-9e93-76a62eff959c req-006e0e39-1ea8-4fe4-9ecb-f63c2c556d48 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "6c989a97-6747-4b3b-a025-118564ecad92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:15 compute-0 nova_compute[253461]: 2025-11-22 04:15:15.167 253465 DEBUG oslo_concurrency.lockutils [req-8ac97cb6-6507-4ee4-9e93-76a62eff959c req-006e0e39-1ea8-4fe4-9ecb-f63c2c556d48 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:15 compute-0 nova_compute[253461]: 2025-11-22 04:15:15.168 253465 DEBUG oslo_concurrency.lockutils [req-8ac97cb6-6507-4ee4-9e93-76a62eff959c req-006e0e39-1ea8-4fe4-9ecb-f63c2c556d48 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:15 compute-0 nova_compute[253461]: 2025-11-22 04:15:15.169 253465 DEBUG nova.compute.manager [req-8ac97cb6-6507-4ee4-9e93-76a62eff959c req-006e0e39-1ea8-4fe4-9ecb-f63c2c556d48 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Processing event network-vif-plugged-dbce96fb-9cd9-4da5-acdf-14e560a0724f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:15:15 compute-0 nova_compute[253461]: 2025-11-22 04:15:15.190 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:15 compute-0 systemd[1]: Started libpod-conmon-240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545.scope.
Nov 22 04:15:15 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/353815aded28f1dd02527a1e3cb30e5c5a97f5986460dfbf586ab036ba77898a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:15 compute-0 podman[298020]: 2025-11-22 04:15:15.849253476 +0000 UTC m=+1.106604973 container init 240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:15:15 compute-0 podman[298020]: 2025-11-22 04:15:15.859488335 +0000 UTC m=+1.116839812 container start 240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:15:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[298035]: [NOTICE]   (298039) : New worker (298041) forked
Nov 22 04:15:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[298035]: [NOTICE]   (298039) : Loading success.
Nov 22 04:15:16 compute-0 ceph-mon[75011]: pgmap v2031: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 KiB/s rd, 85 B/s wr, 2 op/s
Nov 22 04:15:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 12 KiB/s wr, 7 op/s
Nov 22 04:15:16 compute-0 nova_compute[253461]: 2025-11-22 04:15:16.755 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.031 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784917.0313237, 6c989a97-6747-4b3b-a025-118564ecad92 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.032 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] VM Started (Lifecycle Event)
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.034 253465 DEBUG nova.compute.manager [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.038 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.041 253465 INFO nova.virt.libvirt.driver [-] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Instance spawned successfully.
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.041 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.122 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.126 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.189 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.189 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.190 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.190 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.190 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.191 253465 DEBUG nova.virt.libvirt.driver [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.227 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.228 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784917.0315049, 6c989a97-6747-4b3b-a025-118564ecad92 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.228 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] VM Paused (Lifecycle Event)
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.335 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.341 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784917.0370417, 6c989a97-6747-4b3b-a025-118564ecad92 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.342 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] VM Resumed (Lifecycle Event)
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.442 253465 DEBUG nova.compute.manager [req-d7d54d78-9de8-4b76-9f98-1531a9deddec req-5fb76385-b569-4a76-b45c-35135f212fcb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Received event network-vif-plugged-dbce96fb-9cd9-4da5-acdf-14e560a0724f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.443 253465 DEBUG oslo_concurrency.lockutils [req-d7d54d78-9de8-4b76-9f98-1531a9deddec req-5fb76385-b569-4a76-b45c-35135f212fcb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "6c989a97-6747-4b3b-a025-118564ecad92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.444 253465 DEBUG oslo_concurrency.lockutils [req-d7d54d78-9de8-4b76-9f98-1531a9deddec req-5fb76385-b569-4a76-b45c-35135f212fcb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.444 253465 DEBUG oslo_concurrency.lockutils [req-d7d54d78-9de8-4b76-9f98-1531a9deddec req-5fb76385-b569-4a76-b45c-35135f212fcb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.445 253465 DEBUG nova.compute.manager [req-d7d54d78-9de8-4b76-9f98-1531a9deddec req-5fb76385-b569-4a76-b45c-35135f212fcb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] No waiting events found dispatching network-vif-plugged-dbce96fb-9cd9-4da5-acdf-14e560a0724f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.445 253465 WARNING nova.compute.manager [req-d7d54d78-9de8-4b76-9f98-1531a9deddec req-5fb76385-b569-4a76-b45c-35135f212fcb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Received unexpected event network-vif-plugged-dbce96fb-9cd9-4da5-acdf-14e560a0724f for instance with vm_state building and task_state spawning.
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.449 253465 INFO nova.compute.manager [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Took 23.93 seconds to spawn the instance on the hypervisor.
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.450 253465 DEBUG nova.compute.manager [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.482 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.488 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.556 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.680 253465 INFO nova.compute.manager [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Took 30.79 seconds to build instance.
Nov 22 04:15:17 compute-0 nova_compute[253461]: 2025-11-22 04:15:17.731 253465 DEBUG oslo_concurrency.lockutils [None req-48039eba-af0d-46cc-956f-5ca7cd1b75fd ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 31.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:18 compute-0 ceph-mon[75011]: pgmap v2032: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 12 KiB/s wr, 7 op/s
Nov 22 04:15:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 12 KiB/s wr, 7 op/s
Nov 22 04:15:18 compute-0 nova_compute[253461]: 2025-11-22 04:15:18.789 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:20 compute-0 nova_compute[253461]: 2025-11-22 04:15:20.230 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:20 compute-0 ceph-mon[75011]: pgmap v2033: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 5.3 KiB/s rd, 12 KiB/s wr, 7 op/s
Nov 22 04:15:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 45 op/s
Nov 22 04:15:22 compute-0 ceph-mon[75011]: pgmap v2034: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 12 KiB/s wr, 45 op/s
Nov 22 04:15:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:15:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:23.030 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:23.031 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:23.032 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:23 compute-0 nova_compute[253461]: 2025-11-22 04:15:23.791 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:24 compute-0 ceph-mon[75011]: pgmap v2035: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:15:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:15:25 compute-0 NetworkManager[48916]: <info>  [1763784925.1576] manager: (patch-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Nov 22 04:15:25 compute-0 NetworkManager[48916]: <info>  [1763784925.1591] manager: (patch-br-int-to-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Nov 22 04:15:25 compute-0 nova_compute[253461]: 2025-11-22 04:15:25.156 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:25 compute-0 nova_compute[253461]: 2025-11-22 04:15:25.231 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:25 compute-0 nova_compute[253461]: 2025-11-22 04:15:25.280 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:25 compute-0 ovn_controller[152691]: 2025-11-22T04:15:25Z|00256|binding|INFO|Releasing lport c0be682a-2fee-4917-82d9-be22b54079b1 from this chassis (sb_readonly=0)
Nov 22 04:15:25 compute-0 nova_compute[253461]: 2025-11-22 04:15:25.296 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:25 compute-0 ceph-mon[75011]: pgmap v2036: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:15:25 compute-0 nova_compute[253461]: 2025-11-22 04:15:25.995 253465 DEBUG nova.compute.manager [req-c3d43a4e-d186-4272-ae08-b45547c11518 req-28ed0e4d-e919-4177-a149-0bfba5844418 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Received event network-changed-dbce96fb-9cd9-4da5-acdf-14e560a0724f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:15:25 compute-0 nova_compute[253461]: 2025-11-22 04:15:25.996 253465 DEBUG nova.compute.manager [req-c3d43a4e-d186-4272-ae08-b45547c11518 req-28ed0e4d-e919-4177-a149-0bfba5844418 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Refreshing instance network info cache due to event network-changed-dbce96fb-9cd9-4da5-acdf-14e560a0724f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:15:25 compute-0 nova_compute[253461]: 2025-11-22 04:15:25.997 253465 DEBUG oslo_concurrency.lockutils [req-c3d43a4e-d186-4272-ae08-b45547c11518 req-28ed0e4d-e919-4177-a149-0bfba5844418 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-6c989a97-6747-4b3b-a025-118564ecad92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:15:25 compute-0 nova_compute[253461]: 2025-11-22 04:15:25.997 253465 DEBUG oslo_concurrency.lockutils [req-c3d43a4e-d186-4272-ae08-b45547c11518 req-28ed0e4d-e919-4177-a149-0bfba5844418 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-6c989a97-6747-4b3b-a025-118564ecad92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:15:25 compute-0 nova_compute[253461]: 2025-11-22 04:15:25.998 253465 DEBUG nova.network.neutron [req-c3d43a4e-d186-4272-ae08-b45547c11518 req-28ed0e4d-e919-4177-a149-0bfba5844418 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Refreshing network info cache for port dbce96fb-9cd9-4da5-acdf-14e560a0724f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:15:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Nov 22 04:15:27 compute-0 nova_compute[253461]: 2025-11-22 04:15:27.266 253465 DEBUG nova.network.neutron [req-c3d43a4e-d186-4272-ae08-b45547c11518 req-28ed0e4d-e919-4177-a149-0bfba5844418 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Updated VIF entry in instance network info cache for port dbce96fb-9cd9-4da5-acdf-14e560a0724f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:15:27 compute-0 nova_compute[253461]: 2025-11-22 04:15:27.267 253465 DEBUG nova.network.neutron [req-c3d43a4e-d186-4272-ae08-b45547c11518 req-28ed0e4d-e919-4177-a149-0bfba5844418 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Updating instance_info_cache with network_info: [{"id": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "address": "fa:16:3e:05:f3:19", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbce96fb-9c", "ovs_interfaceid": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:15:27 compute-0 nova_compute[253461]: 2025-11-22 04:15:27.335 253465 DEBUG oslo_concurrency.lockutils [req-c3d43a4e-d186-4272-ae08-b45547c11518 req-28ed0e4d-e919-4177-a149-0bfba5844418 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-6c989a97-6747-4b3b-a025-118564ecad92" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:15:27 compute-0 ceph-mon[75011]: pgmap v2037: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Nov 22 04:15:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:28 compute-0 sudo[298059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:15:28 compute-0 sudo[298059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:28 compute-0 sudo[298059]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:28 compute-0 sudo[298084]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:15:28 compute-0 sudo[298084]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:28 compute-0 sudo[298084]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:28 compute-0 sudo[298109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:15:28 compute-0 sudo[298109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:28 compute-0 sudo[298109]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:28 compute-0 sudo[298135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:15:28 compute-0 sudo[298135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:28 compute-0 podman[298133]: 2025-11-22 04:15:28.573300962 +0000 UTC m=+0.106125859 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:15:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Nov 22 04:15:28 compute-0 nova_compute[253461]: 2025-11-22 04:15:28.794 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:29 compute-0 sudo[298135]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:15:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:15:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:15:29 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:15:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:15:29 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:15:29 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev a1564258-4a11-4a76-b5ab-06f0ec507628 does not exist
Nov 22 04:15:29 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ad84e0f5-597e-4387-8b27-bc60d47eb300 does not exist
Nov 22 04:15:29 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev d2c89f08-515f-4ca7-af42-5ee0e15189a6 does not exist
Nov 22 04:15:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:15:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:15:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:15:29 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:15:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:15:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:15:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:15:29 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:15:29 compute-0 sudo[298211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:15:29 compute-0 sudo[298211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:29 compute-0 sudo[298211]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:29 compute-0 sudo[298236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:15:29 compute-0 sudo[298236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:29 compute-0 sudo[298236]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:29 compute-0 sudo[298261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:15:29 compute-0 sudo[298261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:29 compute-0 sudo[298261]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:29 compute-0 sudo[298286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:15:29 compute-0 sudo[298286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:30 compute-0 nova_compute[253461]: 2025-11-22 04:15:30.234 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:30 compute-0 podman[298351]: 2025-11-22 04:15:30.217256909 +0000 UTC m=+0.029259874 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:15:30 compute-0 podman[298351]: 2025-11-22 04:15:30.337105776 +0000 UTC m=+0.149108661 container create 6441361ae0a93acc3dea919a5d80ce14b8c280c457fbded04cc15c8a5cf2d440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_burnell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:15:30 compute-0 ceph-mon[75011]: pgmap v2038: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Nov 22 04:15:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:15:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:15:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:15:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:15:30 compute-0 systemd[1]: Started libpod-conmon-6441361ae0a93acc3dea919a5d80ce14b8c280c457fbded04cc15c8a5cf2d440.scope.
Nov 22 04:15:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:15:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 202 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 857 KiB/s wr, 81 op/s
Nov 22 04:15:30 compute-0 podman[298351]: 2025-11-22 04:15:30.817646055 +0000 UTC m=+0.629649010 container init 6441361ae0a93acc3dea919a5d80ce14b8c280c457fbded04cc15c8a5cf2d440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_burnell, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:15:30 compute-0 podman[298351]: 2025-11-22 04:15:30.831834513 +0000 UTC m=+0.643837378 container start 6441361ae0a93acc3dea919a5d80ce14b8c280c457fbded04cc15c8a5cf2d440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_burnell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:15:30 compute-0 objective_burnell[298368]: 167 167
Nov 22 04:15:30 compute-0 systemd[1]: libpod-6441361ae0a93acc3dea919a5d80ce14b8c280c457fbded04cc15c8a5cf2d440.scope: Deactivated successfully.
Nov 22 04:15:30 compute-0 podman[298351]: 2025-11-22 04:15:30.962105879 +0000 UTC m=+0.774108844 container attach 6441361ae0a93acc3dea919a5d80ce14b8c280c457fbded04cc15c8a5cf2d440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_burnell, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Nov 22 04:15:30 compute-0 podman[298351]: 2025-11-22 04:15:30.964660377 +0000 UTC m=+0.776663302 container died 6441361ae0a93acc3dea919a5d80ce14b8c280c457fbded04cc15c8a5cf2d440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_burnell, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Nov 22 04:15:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cb0473a5ea52027e121090277c4fe33844ea723fee0a9712716a4789cefaf29-merged.mount: Deactivated successfully.
Nov 22 04:15:31 compute-0 podman[298351]: 2025-11-22 04:15:31.480779558 +0000 UTC m=+1.292782423 container remove 6441361ae0a93acc3dea919a5d80ce14b8c280c457fbded04cc15c8a5cf2d440 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_burnell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Nov 22 04:15:31 compute-0 systemd[1]: libpod-conmon-6441361ae0a93acc3dea919a5d80ce14b8c280c457fbded04cc15c8a5cf2d440.scope: Deactivated successfully.
Nov 22 04:15:31 compute-0 ovn_controller[152691]: 2025-11-22T04:15:31Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:05:f3:19 10.100.0.14
Nov 22 04:15:31 compute-0 ovn_controller[152691]: 2025-11-22T04:15:31Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:05:f3:19 10.100.0.14
Nov 22 04:15:31 compute-0 ceph-mon[75011]: pgmap v2039: 305 pgs: 305 active+clean; 202 MiB data, 568 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 857 KiB/s wr, 81 op/s
Nov 22 04:15:31 compute-0 podman[298391]: 2025-11-22 04:15:31.655817511 +0000 UTC m=+0.029856252 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:15:31 compute-0 podman[298391]: 2025-11-22 04:15:31.769345611 +0000 UTC m=+0.143384322 container create 182afd3241d5a343d74c98f849fdb03b217fb0a69b97ebb5714e903a012e6eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:15:32 compute-0 systemd[1]: Started libpod-conmon-182afd3241d5a343d74c98f849fdb03b217fb0a69b97ebb5714e903a012e6eaa.scope.
Nov 22 04:15:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bacf0fc61f721f19877f00c993276b45cbe8bb18ff7091288dda0c7ae7944e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bacf0fc61f721f19877f00c993276b45cbe8bb18ff7091288dda0c7ae7944e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bacf0fc61f721f19877f00c993276b45cbe8bb18ff7091288dda0c7ae7944e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bacf0fc61f721f19877f00c993276b45cbe8bb18ff7091288dda0c7ae7944e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bacf0fc61f721f19877f00c993276b45cbe8bb18ff7091288dda0c7ae7944e8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:32 compute-0 podman[298391]: 2025-11-22 04:15:32.308179598 +0000 UTC m=+0.682218329 container init 182afd3241d5a343d74c98f849fdb03b217fb0a69b97ebb5714e903a012e6eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hermann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 04:15:32 compute-0 podman[298391]: 2025-11-22 04:15:32.316569491 +0000 UTC m=+0.690608202 container start 182afd3241d5a343d74c98f849fdb03b217fb0a69b97ebb5714e903a012e6eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Nov 22 04:15:32 compute-0 podman[298391]: 2025-11-22 04:15:32.505228005 +0000 UTC m=+0.879266736 container attach 182afd3241d5a343d74c98f849fdb03b217fb0a69b97ebb5714e903a012e6eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Nov 22 04:15:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 202 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 52 op/s
Nov 22 04:15:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:33 compute-0 recursing_hermann[298407]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:15:33 compute-0 recursing_hermann[298407]: --> relative data size: 1.0
Nov 22 04:15:33 compute-0 recursing_hermann[298407]: --> All data devices are unavailable
Nov 22 04:15:33 compute-0 systemd[1]: libpod-182afd3241d5a343d74c98f849fdb03b217fb0a69b97ebb5714e903a012e6eaa.scope: Deactivated successfully.
Nov 22 04:15:33 compute-0 systemd[1]: libpod-182afd3241d5a343d74c98f849fdb03b217fb0a69b97ebb5714e903a012e6eaa.scope: Consumed 1.076s CPU time.
Nov 22 04:15:33 compute-0 podman[298391]: 2025-11-22 04:15:33.486316022 +0000 UTC m=+1.860354802 container died 182afd3241d5a343d74c98f849fdb03b217fb0a69b97ebb5714e903a012e6eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:15:33 compute-0 nova_compute[253461]: 2025-11-22 04:15:33.795 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bacf0fc61f721f19877f00c993276b45cbe8bb18ff7091288dda0c7ae7944e8-merged.mount: Deactivated successfully.
Nov 22 04:15:34 compute-0 ceph-mon[75011]: pgmap v2040: 305 pgs: 305 active+clean; 202 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.3 MiB/s wr, 52 op/s
Nov 22 04:15:34 compute-0 podman[298391]: 2025-11-22 04:15:34.281454804 +0000 UTC m=+2.655493515 container remove 182afd3241d5a343d74c98f849fdb03b217fb0a69b97ebb5714e903a012e6eaa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:15:34 compute-0 systemd[1]: libpod-conmon-182afd3241d5a343d74c98f849fdb03b217fb0a69b97ebb5714e903a012e6eaa.scope: Deactivated successfully.
Nov 22 04:15:34 compute-0 sudo[298286]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:34 compute-0 sudo[298451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:15:34 compute-0 sudo[298451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:34 compute-0 sudo[298451]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:34 compute-0 sudo[298476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:15:34 compute-0 sudo[298476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:34 compute-0 sudo[298476]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:34 compute-0 sudo[298501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:15:34 compute-0 sudo[298501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:34 compute-0 sudo[298501]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:34 compute-0 sudo[298526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:15:34 compute-0 sudo[298526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 202 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.0 MiB/s wr, 30 op/s
Nov 22 04:15:35 compute-0 podman[298590]: 2025-11-22 04:15:34.923617029 +0000 UTC m=+0.028119140 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:15:35 compute-0 podman[298590]: 2025-11-22 04:15:35.081655857 +0000 UTC m=+0.186157918 container create 5bdfeca98e1b592bf9d5259c218c66a059a07de6de9bd4945159bf1271633aa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:15:35 compute-0 systemd[1]: Started libpod-conmon-5bdfeca98e1b592bf9d5259c218c66a059a07de6de9bd4945159bf1271633aa7.scope.
Nov 22 04:15:35 compute-0 nova_compute[253461]: 2025-11-22 04:15:35.236 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:15:35 compute-0 podman[298590]: 2025-11-22 04:15:35.327623856 +0000 UTC m=+0.432126017 container init 5bdfeca98e1b592bf9d5259c218c66a059a07de6de9bd4945159bf1271633aa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:15:35 compute-0 podman[298590]: 2025-11-22 04:15:35.338844758 +0000 UTC m=+0.443346819 container start 5bdfeca98e1b592bf9d5259c218c66a059a07de6de9bd4945159bf1271633aa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Nov 22 04:15:35 compute-0 reverent_fermi[298607]: 167 167
Nov 22 04:15:35 compute-0 systemd[1]: libpod-5bdfeca98e1b592bf9d5259c218c66a059a07de6de9bd4945159bf1271633aa7.scope: Deactivated successfully.
Nov 22 04:15:35 compute-0 conmon[298607]: conmon 5bdfeca98e1b592bf9d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5bdfeca98e1b592bf9d5259c218c66a059a07de6de9bd4945159bf1271633aa7.scope/container/memory.events
Nov 22 04:15:35 compute-0 podman[298590]: 2025-11-22 04:15:35.478339339 +0000 UTC m=+0.582841420 container attach 5bdfeca98e1b592bf9d5259c218c66a059a07de6de9bd4945159bf1271633aa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 04:15:35 compute-0 podman[298590]: 2025-11-22 04:15:35.4788339 +0000 UTC m=+0.583335961 container died 5bdfeca98e1b592bf9d5259c218c66a059a07de6de9bd4945159bf1271633aa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 04:15:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:15:35 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 29K writes, 108K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.75 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5128 writes, 13K keys, 5128 commit groups, 1.0 writes per commit group, ingest: 8.17 MB, 0.01 MB/s
                                           Interval WAL: 5128 writes, 2073 syncs, 2.47 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:15:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8ada781cceb32374cb44f2d3c83ba5d67d5bdc9c810e0e09595d25b0176f7a7-merged.mount: Deactivated successfully.
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:15:36
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.meta', 'backups', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.control']
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:15:36 compute-0 ceph-mon[75011]: pgmap v2041: 305 pgs: 305 active+clean; 202 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 2.0 MiB/s wr, 30 op/s
Nov 22 04:15:36 compute-0 podman[298590]: 2025-11-22 04:15:36.61922317 +0000 UTC m=+1.723725271 container remove 5bdfeca98e1b592bf9d5259c218c66a059a07de6de9bd4945159bf1271633aa7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_fermi, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:15:36 compute-0 systemd[1]: libpod-conmon-5bdfeca98e1b592bf9d5259c218c66a059a07de6de9bd4945159bf1271633aa7.scope: Deactivated successfully.
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:15:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 211 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 494 KiB/s rd, 2.3 MiB/s wr, 52 op/s
Nov 22 04:15:36 compute-0 podman[298633]: 2025-11-22 04:15:36.857948866 +0000 UTC m=+0.044712726 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:15:37 compute-0 podman[298633]: 2025-11-22 04:15:37.166099032 +0000 UTC m=+0.352862882 container create cca81a8f1c7f9d06461f81e62a5efaec86a6301115a9c70da0abfd978295cbcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:15:37 compute-0 systemd[1]: Started libpod-conmon-cca81a8f1c7f9d06461f81e62a5efaec86a6301115a9c70da0abfd978295cbcb.scope.
Nov 22 04:15:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b3a4945a44fa40cbf2a25e54e3f6287760585262b1ebdd52163fd8ee5c6eba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b3a4945a44fa40cbf2a25e54e3f6287760585262b1ebdd52163fd8ee5c6eba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b3a4945a44fa40cbf2a25e54e3f6287760585262b1ebdd52163fd8ee5c6eba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26b3a4945a44fa40cbf2a25e54e3f6287760585262b1ebdd52163fd8ee5c6eba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:37 compute-0 podman[298633]: 2025-11-22 04:15:37.732395639 +0000 UTC m=+0.919159559 container init cca81a8f1c7f9d06461f81e62a5efaec86a6301115a9c70da0abfd978295cbcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:15:37 compute-0 podman[298633]: 2025-11-22 04:15:37.747150456 +0000 UTC m=+0.933914346 container start cca81a8f1c7f9d06461f81e62a5efaec86a6301115a9c70da0abfd978295cbcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Nov 22 04:15:37 compute-0 ceph-mon[75011]: pgmap v2042: 305 pgs: 305 active+clean; 211 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 494 KiB/s rd, 2.3 MiB/s wr, 52 op/s
Nov 22 04:15:37 compute-0 podman[298633]: 2025-11-22 04:15:37.890777207 +0000 UTC m=+1.077541087 container attach cca81a8f1c7f9d06461f81e62a5efaec86a6301115a9c70da0abfd978295cbcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:15:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]: {
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:     "0": [
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:         {
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "devices": [
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "/dev/loop3"
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             ],
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_name": "ceph_lv0",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_size": "21470642176",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "name": "ceph_lv0",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "tags": {
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.cluster_name": "ceph",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.crush_device_class": "",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.encrypted": "0",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.osd_id": "0",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.type": "block",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.vdo": "0"
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             },
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "type": "block",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "vg_name": "ceph_vg0"
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:         }
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:     ],
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:     "1": [
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:         {
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "devices": [
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "/dev/loop4"
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             ],
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_name": "ceph_lv1",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_size": "21470642176",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "name": "ceph_lv1",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "tags": {
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.cluster_name": "ceph",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.crush_device_class": "",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.encrypted": "0",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.osd_id": "1",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.type": "block",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.vdo": "0"
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             },
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "type": "block",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "vg_name": "ceph_vg1"
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:         }
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:     ],
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:     "2": [
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:         {
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "devices": [
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "/dev/loop5"
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             ],
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_name": "ceph_lv2",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_size": "21470642176",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "name": "ceph_lv2",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "tags": {
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.cluster_name": "ceph",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.crush_device_class": "",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.encrypted": "0",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.osd_id": "2",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.type": "block",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:                 "ceph.vdo": "0"
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             },
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "type": "block",
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:             "vg_name": "ceph_vg2"
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:         }
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]:     ]
Nov 22 04:15:38 compute-0 wonderful_heyrovsky[298649]: }
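
The JSON emitted by the wonderful_heyrovsky container above is a per-OSD inventory in the shape of ceph-volume lvm list --format json output: one key per OSD id, each holding the backing LV, its physical device, and the ceph.* LV tags. A minimal consumer sketch; the file name is illustrative, since in this log the JSON only ever existed on the container's stdout:

    import json

    # Illustrative file name; assume the container's stdout was saved.
    with open("lvm_list.json") as f:
        osds = json.load(f)

    # Map each OSD id to its logical volume, physical device, and tags.
    for osd_id, lvs in sorted(osds.items()):
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: lv={lv['lv_path']} "
                  f"devices={','.join(lv['devices'])} "
                  f"osd_fsid={tags['ceph.osd_fsid']} "
                  f"encrypted={tags['ceph.encrypted']}")
    # -> osd.0: lv=/dev/ceph_vg0/ceph_lv0 devices=/dev/loop3 ...
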
Nov 22 04:15:38 compute-0 systemd[1]: libpod-cca81a8f1c7f9d06461f81e62a5efaec86a6301115a9c70da0abfd978295cbcb.scope: Deactivated successfully.
Nov 22 04:15:38 compute-0 podman[298633]: 2025-11-22 04:15:38.636843697 +0000 UTC m=+1.823607577 container died cca81a8f1c7f9d06461f81e62a5efaec86a6301115a9c70da0abfd978295cbcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:15:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 211 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 494 KiB/s rd, 2.3 MiB/s wr, 52 op/s
Nov 22 04:15:38 compute-0 nova_compute[253461]: 2025-11-22 04:15:38.798 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-26b3a4945a44fa40cbf2a25e54e3f6287760585262b1ebdd52163fd8ee5c6eba-merged.mount: Deactivated successfully.
Nov 22 04:15:40 compute-0 podman[298633]: 2025-11-22 04:15:40.144532281 +0000 UTC m=+3.331296121 container remove cca81a8f1c7f9d06461f81e62a5efaec86a6301115a9c70da0abfd978295cbcb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Nov 22 04:15:40 compute-0 systemd[1]: libpod-conmon-cca81a8f1c7f9d06461f81e62a5efaec86a6301115a9c70da0abfd978295cbcb.scope: Deactivated successfully.
Nov 22 04:15:40 compute-0 sudo[298526]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:40 compute-0 ceph-mon[75011]: pgmap v2043: 305 pgs: 305 active+clean; 211 MiB data, 570 MiB used, 59 GiB / 60 GiB avail; 494 KiB/s rd, 2.3 MiB/s wr, 52 op/s
Nov 22 04:15:40 compute-0 nova_compute[253461]: 2025-11-22 04:15:40.277 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:40 compute-0 sudo[298672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:15:40 compute-0 sudo[298672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:40 compute-0 sudo[298672]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:40 compute-0 sudo[298697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:15:40 compute-0 sudo[298697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:40 compute-0 sudo[298697]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:40 compute-0 sudo[298722]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:15:40 compute-0 sudo[298722]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:40 compute-0 sudo[298722]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:40 compute-0 sudo[298747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:15:40 compute-0 sudo[298747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
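
Each short-lived ceph container in this stretch of the log corresponds to one cephadm-wrapped ceph-volume call, and the sudo COMMAND line above spells out the exact invocation for the raw list query. A sketch of reproducing that query by hand, with the subcommand and flags taken verbatim from the logged command; it assumes a cephadm binary on PATH and that the JSON is the only thing printed to stdout:

    import json
    import subprocess

    # Mirrors the logged invocation: cephadm runs ceph-volume inside a
    # one-shot container for the given cluster fsid.
    out = subprocess.run(
        ["cephadm", "ceph-volume",
         "--fsid", "7adcc38b-6484-5de6-b879-33a0309153df",
         "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True).stdout
    raw_osds = json.loads(out)
    print(sorted(entry["osd_id"] for entry in raw_osds.values()))  # [0, 1, 2]
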
Nov 22 04:15:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 228 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 516 KiB/s rd, 3.2 MiB/s wr, 60 op/s
Nov 22 04:15:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:15:40 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 29K writes, 114K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.86 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4453 writes, 12K keys, 4453 commit groups, 1.0 writes per commit group, ingest: 10.10 MB, 0.02 MB/s
                                           Interval WAL: 4453 writes, 1741 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
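
As a quick sanity check, the interval figures in the RocksDB dump above are internally consistent: 4453 WAL writes over 1741 syncs is exactly the reported 2.56 writes per sync.

    # Worked check of the "Interval WAL" line above.
    writes, syncs = 4453, 1741
    print(round(writes / syncs, 2))  # -> 2.56
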
Nov 22 04:15:40 compute-0 podman[298808]: 2025-11-22 04:15:40.88340896 +0000 UTC m=+0.061066138 container create 6b9e4a2f109b43f0be9a75b33062f9b61caf5ab4ccca9dee29a61554038ed445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:15:40 compute-0 podman[298808]: 2025-11-22 04:15:40.850490271 +0000 UTC m=+0.028147529 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:15:40 compute-0 systemd[1]: Started libpod-conmon-6b9e4a2f109b43f0be9a75b33062f9b61caf5ab4ccca9dee29a61554038ed445.scope.
Nov 22 04:15:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:15:41 compute-0 podman[298808]: 2025-11-22 04:15:41.025788982 +0000 UTC m=+0.203446250 container init 6b9e4a2f109b43f0be9a75b33062f9b61caf5ab4ccca9dee29a61554038ed445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:15:41 compute-0 podman[298808]: 2025-11-22 04:15:41.033918582 +0000 UTC m=+0.211575800 container start 6b9e4a2f109b43f0be9a75b33062f9b61caf5ab4ccca9dee29a61554038ed445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 04:15:41 compute-0 serene_edison[298825]: 167 167
Nov 22 04:15:41 compute-0 systemd[1]: libpod-6b9e4a2f109b43f0be9a75b33062f9b61caf5ab4ccca9dee29a61554038ed445.scope: Deactivated successfully.
Nov 22 04:15:41 compute-0 podman[298808]: 2025-11-22 04:15:41.045157896 +0000 UTC m=+0.222815094 container attach 6b9e4a2f109b43f0be9a75b33062f9b61caf5ab4ccca9dee29a61554038ed445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:15:41 compute-0 podman[298808]: 2025-11-22 04:15:41.046019732 +0000 UTC m=+0.223676950 container died 6b9e4a2f109b43f0be9a75b33062f9b61caf5ab4ccca9dee29a61554038ed445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:15:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-24d34631000fc48d4e6562f0fc2aa6e74308c643021cd7f0b4fde6447dd02d15-merged.mount: Deactivated successfully.
Nov 22 04:15:41 compute-0 podman[298808]: 2025-11-22 04:15:41.171498993 +0000 UTC m=+0.349156181 container remove 6b9e4a2f109b43f0be9a75b33062f9b61caf5ab4ccca9dee29a61554038ed445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Nov 22 04:15:41 compute-0 systemd[1]: libpod-conmon-6b9e4a2f109b43f0be9a75b33062f9b61caf5ab4ccca9dee29a61554038ed445.scope: Deactivated successfully.
Nov 22 04:15:41 compute-0 podman[298848]: 2025-11-22 04:15:41.36874021 +0000 UTC m=+0.045812028 container create 6ac4659eb4aa60f876b4e5e1b6bb565b56cb924ab366dfabe59b9c1280911f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goldberg, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Nov 22 04:15:41 compute-0 systemd[1]: Started libpod-conmon-6ac4659eb4aa60f876b4e5e1b6bb565b56cb924ab366dfabe59b9c1280911f3d.scope.
Nov 22 04:15:41 compute-0 podman[298848]: 2025-11-22 04:15:41.345777025 +0000 UTC m=+0.022848863 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:15:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f188743b7688ac447552a1a43c44d8d100307f98887dc9c718d7a8585ae65e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f188743b7688ac447552a1a43c44d8d100307f98887dc9c718d7a8585ae65e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f188743b7688ac447552a1a43c44d8d100307f98887dc9c718d7a8585ae65e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f188743b7688ac447552a1a43c44d8d100307f98887dc9c718d7a8585ae65e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:15:41 compute-0 podman[298848]: 2025-11-22 04:15:41.473203004 +0000 UTC m=+0.150274842 container init 6ac4659eb4aa60f876b4e5e1b6bb565b56cb924ab366dfabe59b9c1280911f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:15:41 compute-0 podman[298848]: 2025-11-22 04:15:41.483259066 +0000 UTC m=+0.160330884 container start 6ac4659eb4aa60f876b4e5e1b6bb565b56cb924ab366dfabe59b9c1280911f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goldberg, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 04:15:41 compute-0 podman[298848]: 2025-11-22 04:15:41.486848486 +0000 UTC m=+0.163920374 container attach 6ac4659eb4aa60f876b4e5e1b6bb565b56cb924ab366dfabe59b9c1280911f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goldberg, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.146 253465 DEBUG oslo_concurrency.lockutils [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "6c989a97-6747-4b3b-a025-118564ecad92" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.148 253465 DEBUG oslo_concurrency.lockutils [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.149 253465 DEBUG oslo_concurrency.lockutils [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "6c989a97-6747-4b3b-a025-118564ecad92-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.149 253465 DEBUG oslo_concurrency.lockutils [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.149 253465 DEBUG oslo_concurrency.lockutils [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.150 253465 INFO nova.compute.manager [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Terminating instance
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.152 253465 DEBUG nova.compute.manager [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:15:42 compute-0 kernel: tapdbce96fb-9c (unregistering): left promiscuous mode
Nov 22 04:15:42 compute-0 NetworkManager[48916]: <info>  [1763784942.2130] device (tapdbce96fb-9c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:15:42 compute-0 ovn_controller[152691]: 2025-11-22T04:15:42Z|00257|binding|INFO|Releasing lport dbce96fb-9cd9-4da5-acdf-14e560a0724f from this chassis (sb_readonly=0)
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.222 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:42 compute-0 ovn_controller[152691]: 2025-11-22T04:15:42Z|00258|binding|INFO|Setting lport dbce96fb-9cd9-4da5-acdf-14e560a0724f down in Southbound
Nov 22 04:15:42 compute-0 ovn_controller[152691]: 2025-11-22T04:15:42Z|00259|binding|INFO|Removing iface tapdbce96fb-9c ovn-installed in OVS
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.226 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.231 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:f3:19 10.100.0.14'], port_security=['fa:16:3e:05:f3:19 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '6c989a97-6747-4b3b-a025-118564ecad92', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-73bcc005-88ac-46b6-ad11-6207c6046246', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98e4451f91104cd88f6e19dd5c53fd00', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e95e3fed-bcd6-449d-9f95-3b75633f02f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.230'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8139379-e220-4788-92e4-b495f0c34eb7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=dbce96fb-9cd9-4da5-acdf-14e560a0724f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.232 162689 INFO neutron.agent.ovn.metadata.agent [-] Port dbce96fb-9cd9-4da5-acdf-14e560a0724f in datapath 73bcc005-88ac-46b6-ad11-6207c6046246 unbound from our chassis
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.234 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 73bcc005-88ac-46b6-ad11-6207c6046246, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.236 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e5081897-f2c0-41c6-b3f9-265f0005d2dc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:42 compute-0 ceph-mon[75011]: pgmap v2044: 305 pgs: 305 active+clean; 228 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 516 KiB/s rd, 3.2 MiB/s wr, 60 op/s
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.238 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 namespace which is not needed anymore
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.249 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:42 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Deactivated successfully.
Nov 22 04:15:42 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Consumed 16.371s CPU time.
Nov 22 04:15:42 compute-0 systemd-machined[215728]: Machine qemu-24-instance-00000018 terminated.
Nov 22 04:15:42 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[298035]: [NOTICE]   (298039) : haproxy version is 2.8.14-c23fe91
Nov 22 04:15:42 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[298035]: [NOTICE]   (298039) : path to executable is /usr/sbin/haproxy
Nov 22 04:15:42 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[298035]: [WARNING]  (298039) : Exiting Master process...
Nov 22 04:15:42 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[298035]: [ALERT]    (298039) : Current worker (298041) exited with code 143 (Terminated)
Nov 22 04:15:42 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[298035]: [WARNING]  (298039) : All workers exited. Exiting... (0)
Nov 22 04:15:42 compute-0 systemd[1]: libpod-240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545.scope: Deactivated successfully.
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.391 253465 INFO nova.virt.libvirt.driver [-] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Instance destroyed successfully.
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.392 253465 DEBUG nova.objects.instance [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lazy-loading 'resources' on Instance uuid 6c989a97-6747-4b3b-a025-118564ecad92 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:15:42 compute-0 podman[298906]: 2025-11-22 04:15:42.393229386 +0000 UTC m=+0.048828386 container died 240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.404 253465 DEBUG nova.virt.libvirt.vif [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:14:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-22541137',display_name='tempest-TransferEncryptedVolumeTest-server-22541137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-22541137',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMF3KIMSa8o4WCzM1VgX9RcGz4FcpwcrZcdUDFLNYpjBj2lzhaXFrO0bSdzjU9Itff6b3BySQo/nLrhI32bk8GIfHP/n0NuDArjdwgS2hsu8vteQ0u/zEQY1VMKJGLhTNw==',key_name='tempest-TransferEncryptedVolumeTest-1133237278',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:15:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98e4451f91104cd88f6e19dd5c53fd00',ramdisk_id='',reservation_id='r-4t21clcw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1500496447',owner_user_name='tempest-TransferEncryptedVolumeTest-1500496447-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:15:17Z,user_data=None,user_id='ddff25657c74403e9ed9e91ff227badd',uuid=6c989a97-6747-4b3b-a025-118564ecad92,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "address": "fa:16:3e:05:f3:19", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbce96fb-9c", "ovs_interfaceid": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.404 253465 DEBUG nova.network.os_vif_util [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converting VIF {"id": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "address": "fa:16:3e:05:f3:19", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.230", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdbce96fb-9c", "ovs_interfaceid": "dbce96fb-9cd9-4da5-acdf-14e560a0724f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.405 253465 DEBUG nova.network.os_vif_util [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:f3:19,bridge_name='br-int',has_traffic_filtering=True,id=dbce96fb-9cd9-4da5-acdf-14e560a0724f,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdbce96fb-9c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.405 253465 DEBUG os_vif [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:f3:19,bridge_name='br-int',has_traffic_filtering=True,id=dbce96fb-9cd9-4da5-acdf-14e560a0724f,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdbce96fb-9c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.406 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.407 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdbce96fb-9c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.408 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.410 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.412 253465 INFO os_vif [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:f3:19,bridge_name='br-int',has_traffic_filtering=True,id=dbce96fb-9cd9-4da5-acdf-14e560a0724f,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdbce96fb-9c')
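
The unplug sequence above shows the whole os-vif path for an OVS port: Nova converts its VIF dict to a VIFOpenVSwitch object, then commits a single ovsdbapp transaction containing DelPortCommand(port=tapdbce96fb-9c, bridge=br-int, if_exists=True). An equivalent ad-hoc deletion through ovsdbapp looks roughly like this; the socket path is an assumption and the snippet is a sketch of the library idiom, not Nova's actual code:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local Open vSwitch database (endpoint is illustrative),
    # then issue the same if_exists port deletion the log shows.
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    conn = connection.Connection(idl=idl, timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)
    ovs.del_port("tapdbce96fb-9c", bridge="br-int",
                 if_exists=True).execute(check_error=True)
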
Nov 22 04:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545-userdata-shm.mount: Deactivated successfully.
Nov 22 04:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-353815aded28f1dd02527a1e3cb30e5c5a97f5986460dfbf586ab036ba77898a-merged.mount: Deactivated successfully.
Nov 22 04:15:42 compute-0 podman[298906]: 2025-11-22 04:15:42.491983362 +0000 UTC m=+0.147582372 container cleanup 240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:15:42 compute-0 systemd[1]: libpod-conmon-240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545.scope: Deactivated successfully.
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]: {
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "osd_id": 1,
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "type": "bluestore"
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:     },
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "osd_id": 0,
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "type": "bluestore"
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:     },
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "osd_id": 2,
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:         "type": "bluestore"
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]:     }
Nov 22 04:15:42 compute-0 frosty_goldberg[298865]: }
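
This second JSON, from the frosty_goldberg container, is the ceph-volume raw list --format json result requested by the sudo command at 04:15:40: the same three bluestore OSDs, now keyed by osd_uuid instead of osd_id. Joining it against the lvm-list inventory above confirms the two views agree; a sketch, assuming `osds` and `raw_osds` are the dicts parsed in the earlier snippets:

    # Cross-check: lvm list keys by OSD id and tags each LV with
    # ceph.osd_fsid; raw list keys by that same uuid.
    by_uuid = {lv["tags"]["ceph.osd_fsid"]: (osd_id, lv)
               for osd_id, lvs in osds.items() for lv in lvs}

    for uuid, entry in raw_osds.items():
        osd_id, lv = by_uuid[uuid]
        assert str(entry["osd_id"]) == osd_id and entry["type"] == "bluestore"
        print(f"osd.{osd_id} uuid={uuid} dm={entry['device']} "
              f"lv={lv['lv_path']}")
    # -> osd.1 uuid=104ff426-... dm=/dev/mapper/ceph_vg1-ceph_lv1 ...
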
Nov 22 04:15:42 compute-0 systemd[1]: libpod-6ac4659eb4aa60f876b4e5e1b6bb565b56cb924ab366dfabe59b9c1280911f3d.scope: Deactivated successfully.
Nov 22 04:15:42 compute-0 systemd[1]: libpod-6ac4659eb4aa60f876b4e5e1b6bb565b56cb924ab366dfabe59b9c1280911f3d.scope: Consumed 1.062s CPU time.
Nov 22 04:15:42 compute-0 conmon[298865]: conmon 6ac4659eb4aa60f876b4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ac4659eb4aa60f876b4e5e1b6bb565b56cb924ab366dfabe59b9c1280911f3d.scope/container/memory.events
Nov 22 04:15:42 compute-0 podman[298848]: 2025-11-22 04:15:42.561903502 +0000 UTC m=+1.238975320 container died 6ac4659eb4aa60f876b4e5e1b6bb565b56cb924ab366dfabe59b9c1280911f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:15:42 compute-0 podman[298976]: 2025-11-22 04:15:42.586684038 +0000 UTC m=+0.075029755 container remove 240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.592 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c63622bf-55f8-4c87-b55c-41e61eeaa3f2]: (4, ('Sat Nov 22 04:15:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 (240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545)\n240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545\nSat Nov 22 04:15:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 (240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545)\n240ecfeb84bfa5db283514fd9dbdd31b9b558ac496dd54d9c0d794e0acad0545\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f188743b7688ac447552a1a43c44d8d100307f98887dc9c718d7a8585ae65e7-merged.mount: Deactivated successfully.
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.594 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[02ad7b74-a659-423e-8010-9dbfd3f22bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.598 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73bcc005-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.600 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:42 compute-0 kernel: tap73bcc005-80: left promiscuous mode
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.613 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.617 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2896f4-f614-49bc-b96a-c5a6f90fe8b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.626 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[af7f3854-40a5-40ea-9b1f-c97efbf3453f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.627 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[488dcf9a-0264-449d-9b6c-8d8739ef93fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:42 compute-0 podman[298848]: 2025-11-22 04:15:42.630706537 +0000 UTC m=+1.307778375 container remove 6ac4659eb4aa60f876b4e5e1b6bb565b56cb924ab366dfabe59b9c1280911f3d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:15:42 compute-0 systemd[1]: libpod-conmon-6ac4659eb4aa60f876b4e5e1b6bb565b56cb924ab366dfabe59b9c1280911f3d.scope: Deactivated successfully.
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.645 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e793aca5-2879-4b6e-ab04-0580dd22b1bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516670, 'reachable_time': 21938, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299005, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.647 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:15:42 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:15:42.647 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[76cca620-dceb-47f9-85f5-7a6682c09b18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:15:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d73bcc005\x2d88ac\x2d46b6\x2dad11\x2d6207c6046246.mount: Deactivated successfully.
Nov 22 04:15:42 compute-0 sudo[298747]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:15:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:15:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.689 253465 INFO nova.virt.libvirt.driver [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Deleting instance files /var/lib/nova/instances/6c989a97-6747-4b3b-a025-118564ecad92_del
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.690 253465 INFO nova.virt.libvirt.driver [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Deletion of /var/lib/nova/instances/6c989a97-6747-4b3b-a025-118564ecad92_del complete
Nov 22 04:15:42 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:15:42 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev e45dcd3f-e85c-40e3-bf31-1d7545a59454 does not exist
Nov 22 04:15:42 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 8b7ae18e-0ff0-486e-8d60-668c4daa298a does not exist
Nov 22 04:15:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 244 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 3.4 MiB/s wr, 47 op/s
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.744 253465 INFO nova.compute.manager [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Took 0.59 seconds to destroy the instance on the hypervisor.
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.744 253465 DEBUG oslo.service.loopingcall [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.745 253465 DEBUG nova.compute.manager [-] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:15:42 compute-0 nova_compute[253461]: 2025-11-22 04:15:42.745 253465 DEBUG nova.network.neutron [-] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:15:42 compute-0 sudo[299008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:15:42 compute-0 sudo[299008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:42 compute-0 sudo[299008]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:42 compute-0 sudo[299033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:15:42 compute-0 sudo[299033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:15:42 compute-0 sudo[299033]: pam_unix(sudo:session): session closed for user root
Nov 22 04:15:43 compute-0 nova_compute[253461]: 2025-11-22 04:15:43.174 253465 DEBUG nova.compute.manager [req-fb1e6bf1-2acf-45a4-9170-a4f07445c7fc req-d37f418c-7326-4da3-bd3a-f41170cd3a88 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Received event network-vif-unplugged-dbce96fb-9cd9-4da5-acdf-14e560a0724f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:15:43 compute-0 nova_compute[253461]: 2025-11-22 04:15:43.174 253465 DEBUG oslo_concurrency.lockutils [req-fb1e6bf1-2acf-45a4-9170-a4f07445c7fc req-d37f418c-7326-4da3-bd3a-f41170cd3a88 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "6c989a97-6747-4b3b-a025-118564ecad92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:43 compute-0 nova_compute[253461]: 2025-11-22 04:15:43.175 253465 DEBUG oslo_concurrency.lockutils [req-fb1e6bf1-2acf-45a4-9170-a4f07445c7fc req-d37f418c-7326-4da3-bd3a-f41170cd3a88 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:43 compute-0 nova_compute[253461]: 2025-11-22 04:15:43.175 253465 DEBUG oslo_concurrency.lockutils [req-fb1e6bf1-2acf-45a4-9170-a4f07445c7fc req-d37f418c-7326-4da3-bd3a-f41170cd3a88 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:43 compute-0 nova_compute[253461]: 2025-11-22 04:15:43.176 253465 DEBUG nova.compute.manager [req-fb1e6bf1-2acf-45a4-9170-a4f07445c7fc req-d37f418c-7326-4da3-bd3a-f41170cd3a88 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] No waiting events found dispatching network-vif-unplugged-dbce96fb-9cd9-4da5-acdf-14e560a0724f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:15:43 compute-0 nova_compute[253461]: 2025-11-22 04:15:43.176 253465 DEBUG nova.compute.manager [req-fb1e6bf1-2acf-45a4-9170-a4f07445c7fc req-d37f418c-7326-4da3-bd3a-f41170cd3a88 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Received event network-vif-unplugged-dbce96fb-9cd9-4da5-acdf-14e560a0724f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:15:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:43 compute-0 nova_compute[253461]: 2025-11-22 04:15:43.636 253465 DEBUG nova.network.neutron [-] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:15:43 compute-0 nova_compute[253461]: 2025-11-22 04:15:43.652 253465 INFO nova.compute.manager [-] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Took 0.91 seconds to deallocate network for instance.
Nov 22 04:15:43 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:15:43 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:15:43 compute-0 ceph-mon[75011]: pgmap v2045: 305 pgs: 305 active+clean; 244 MiB data, 585 MiB used, 59 GiB / 60 GiB avail; 393 KiB/s rd, 3.4 MiB/s wr, 47 op/s
Nov 22 04:15:43 compute-0 nova_compute[253461]: 2025-11-22 04:15:43.800 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:43 compute-0 nova_compute[253461]: 2025-11-22 04:15:43.871 253465 INFO nova.compute.manager [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Took 0.22 seconds to detach 1 volumes for instance.
Nov 22 04:15:43 compute-0 nova_compute[253461]: 2025-11-22 04:15:43.924 253465 DEBUG oslo_concurrency.lockutils [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:43 compute-0 nova_compute[253461]: 2025-11-22 04:15:43.925 253465 DEBUG oslo_concurrency.lockutils [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:43 compute-0 nova_compute[253461]: 2025-11-22 04:15:43.971 253465 DEBUG oslo_concurrency.processutils [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:15:44 compute-0 podman[299078]: 2025-11-22 04:15:44.417557341 +0000 UTC m=+0.084189867 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 04:15:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:15:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2550661334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:15:44 compute-0 podman[299079]: 2025-11-22 04:15:44.459872094 +0000 UTC m=+0.125904266 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 22 04:15:44 compute-0 nova_compute[253461]: 2025-11-22 04:15:44.467 253465 DEBUG oslo_concurrency.processutils [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:44 compute-0 nova_compute[253461]: 2025-11-22 04:15:44.476 253465 DEBUG nova.compute.provider_tree [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:15:44 compute-0 nova_compute[253461]: 2025-11-22 04:15:44.495 253465 DEBUG nova.scheduler.client.report [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:15:44 compute-0 nova_compute[253461]: 2025-11-22 04:15:44.514 253465 DEBUG oslo_concurrency.lockutils [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:44 compute-0 nova_compute[253461]: 2025-11-22 04:15:44.544 253465 INFO nova.scheduler.client.report [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Deleted allocations for instance 6c989a97-6747-4b3b-a025-118564ecad92
Nov 22 04:15:44 compute-0 nova_compute[253461]: 2025-11-22 04:15:44.612 253465 DEBUG oslo_concurrency.lockutils [None req-90099eda-508e-4044-bb39-e265ce8d8c3c ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:44 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2550661334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:15:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 257 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Nov 22 04:15:45 compute-0 nova_compute[253461]: 2025-11-22 04:15:45.268 253465 DEBUG nova.compute.manager [req-d7f4bce6-f156-4408-b90a-61bf9fb8bb93 req-947fa7a9-ae24-4be1-a585-b9ce2684d3e7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Received event network-vif-plugged-dbce96fb-9cd9-4da5-acdf-14e560a0724f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:15:45 compute-0 nova_compute[253461]: 2025-11-22 04:15:45.269 253465 DEBUG oslo_concurrency.lockutils [req-d7f4bce6-f156-4408-b90a-61bf9fb8bb93 req-947fa7a9-ae24-4be1-a585-b9ce2684d3e7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "6c989a97-6747-4b3b-a025-118564ecad92-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:45 compute-0 nova_compute[253461]: 2025-11-22 04:15:45.269 253465 DEBUG oslo_concurrency.lockutils [req-d7f4bce6-f156-4408-b90a-61bf9fb8bb93 req-947fa7a9-ae24-4be1-a585-b9ce2684d3e7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:45 compute-0 nova_compute[253461]: 2025-11-22 04:15:45.270 253465 DEBUG oslo_concurrency.lockutils [req-d7f4bce6-f156-4408-b90a-61bf9fb8bb93 req-947fa7a9-ae24-4be1-a585-b9ce2684d3e7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c989a97-6747-4b3b-a025-118564ecad92-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:45 compute-0 nova_compute[253461]: 2025-11-22 04:15:45.270 253465 DEBUG nova.compute.manager [req-d7f4bce6-f156-4408-b90a-61bf9fb8bb93 req-947fa7a9-ae24-4be1-a585-b9ce2684d3e7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] No waiting events found dispatching network-vif-plugged-dbce96fb-9cd9-4da5-acdf-14e560a0724f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:15:45 compute-0 nova_compute[253461]: 2025-11-22 04:15:45.271 253465 WARNING nova.compute.manager [req-d7f4bce6-f156-4408-b90a-61bf9fb8bb93 req-947fa7a9-ae24-4be1-a585-b9ce2684d3e7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Received unexpected event network-vif-plugged-dbce96fb-9cd9-4da5-acdf-14e560a0724f for instance with vm_state deleted and task_state None.
Nov 22 04:15:45 compute-0 nova_compute[253461]: 2025-11-22 04:15:45.271 253465 DEBUG nova.compute.manager [req-d7f4bce6-f156-4408-b90a-61bf9fb8bb93 req-947fa7a9-ae24-4be1-a585-b9ce2684d3e7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Received event network-vif-deleted-dbce96fb-9cd9-4da5-acdf-14e560a0724f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:15:45 compute-0 ceph-mon[75011]: pgmap v2046: 305 pgs: 305 active+clean; 257 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Nov 22 04:15:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:15:46 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.2 total, 600.0 interval
                                           Cumulative writes: 22K writes, 90K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 22K writes, 7681 syncs, 2.92 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3529 writes, 10K keys, 3529 commit groups, 1.0 writes per commit group, ingest: 9.54 MB, 0.02 MB/s
                                           Interval WAL: 3529 writes, 1347 syncs, 2.62 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0027221274296737644 of space, bias 1.0, pg target 0.8166382289021293 quantized to 32 (current 32)
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:15:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 3.8 MiB/s wr, 60 op/s
Nov 22 04:15:47 compute-0 nova_compute[253461]: 2025-11-22 04:15:47.410 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:47 compute-0 ceph-mon[75011]: pgmap v2047: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 271 KiB/s rd, 3.8 MiB/s wr, 60 op/s
Nov 22 04:15:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 38 op/s
Nov 22 04:15:48 compute-0 nova_compute[253461]: 2025-11-22 04:15:48.801 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:49 compute-0 nova_compute[253461]: 2025-11-22 04:15:49.504 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:49 compute-0 ceph-mon[75011]: pgmap v2048: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 38 op/s
Nov 22 04:15:49 compute-0 nova_compute[253461]: 2025-11-22 04:15:49.769 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:50 compute-0 nova_compute[253461]: 2025-11-22 04:15:50.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:50 compute-0 nova_compute[253461]: 2025-11-22 04:15:50.450 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:50 compute-0 nova_compute[253461]: 2025-11-22 04:15:50.451 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:50 compute-0 nova_compute[253461]: 2025-11-22 04:15:50.451 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:50 compute-0 nova_compute[253461]: 2025-11-22 04:15:50.451 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:15:50 compute-0 nova_compute[253461]: 2025-11-22 04:15:50.452 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:15:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 38 op/s
Nov 22 04:15:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:15:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3395793501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:15:50 compute-0 nova_compute[253461]: 2025-11-22 04:15:50.903 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.044 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.045 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4390MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.046 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.046 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.176 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.177 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.196 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing inventories for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.213 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating ProviderTree inventory for provider 62e18608-eaef-4f09-8e92-06d41e51d580 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.213 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating inventory in ProviderTree for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.227 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing aggregate associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.251 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing trait associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.268 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:15:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:15:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4280050442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.658 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.390s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.666 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.686 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.719 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:15:51 compute-0 nova_compute[253461]: 2025-11-22 04:15:51.719 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:15:51 compute-0 ceph-mgr[75294]: [devicehealth INFO root] Check health
Nov 22 04:15:51 compute-0 ceph-mon[75011]: pgmap v2049: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 3.5 MiB/s wr, 38 op/s
Nov 22 04:15:51 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3395793501' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:15:51 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4280050442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:15:52 compute-0 nova_compute[253461]: 2025-11-22 04:15:52.412 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 30 op/s
Nov 22 04:15:52 compute-0 nova_compute[253461]: 2025-11-22 04:15:52.715 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:52 compute-0 nova_compute[253461]: 2025-11-22 04:15:52.716 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:52 compute-0 nova_compute[253461]: 2025-11-22 04:15:52.717 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:15:52 compute-0 nova_compute[253461]: 2025-11-22 04:15:52.717 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:15:52 compute-0 nova_compute[253461]: 2025-11-22 04:15:52.766 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:15:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.276802) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784953276860, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1264, "num_deletes": 252, "total_data_size": 1922089, "memory_usage": 1958328, "flush_reason": "Manual Compaction"}
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784953290878, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1129756, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39333, "largest_seqno": 40596, "table_properties": {"data_size": 1125214, "index_size": 2002, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 12041, "raw_average_key_size": 20, "raw_value_size": 1115283, "raw_average_value_size": 1916, "num_data_blocks": 92, "num_entries": 582, "num_filter_entries": 582, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763784826, "oldest_key_time": 1763784826, "file_creation_time": 1763784953, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 14150 microseconds, and 7312 cpu microseconds.
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.290946) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1129756 bytes OK
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.290981) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.293565) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.293647) EVENT_LOG_v1 {"time_micros": 1763784953293631, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.293682) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1916373, prev total WAL file size 1916373, number of live WAL files 2.
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.294705) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1103KB)], [80(11MB)]
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784953294740, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 13475708, "oldest_snapshot_seqno": -1}
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7342 keys, 10902721 bytes, temperature: kUnknown
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784953380120, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 10902721, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10848670, "index_size": 34611, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 184574, "raw_average_key_size": 25, "raw_value_size": 10711987, "raw_average_value_size": 1459, "num_data_blocks": 1379, "num_entries": 7342, "num_filter_entries": 7342, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763784953, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.380577) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 10902721 bytes
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.382562) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.6 rd, 127.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.8 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(21.6) write-amplify(9.7) OK, records in: 7800, records dropped: 458 output_compression: NoCompression
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.382602) EVENT_LOG_v1 {"time_micros": 1763784953382583, "job": 46, "event": "compaction_finished", "compaction_time_micros": 85487, "compaction_time_cpu_micros": 30495, "output_level": 6, "num_output_files": 1, "total_output_size": 10902721, "num_input_records": 7800, "num_output_records": 7342, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784953383226, "job": 46, "event": "table_file_deletion", "file_number": 82}
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784953387839, "job": 46, "event": "table_file_deletion", "file_number": 80}
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.294607) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.387919) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.387925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.387929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.387932) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:15:53 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:15:53.387934) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:15:53 compute-0 nova_compute[253461]: 2025-11-22 04:15:53.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:53 compute-0 nova_compute[253461]: 2025-11-22 04:15:53.431 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:15:53 compute-0 nova_compute[253461]: 2025-11-22 04:15:53.803 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:54 compute-0 ceph-mon[75011]: pgmap v2050: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 2.6 MiB/s wr, 30 op/s
Nov 22 04:15:54 compute-0 nova_compute[253461]: 2025-11-22 04:15:54.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.5 MiB/s wr, 28 op/s
Nov 22 04:15:56 compute-0 ceph-mon[75011]: pgmap v2051: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.5 MiB/s wr, 28 op/s
Nov 22 04:15:56 compute-0 nova_compute[253461]: 2025-11-22 04:15:56.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:56 compute-0 nova_compute[253461]: 2025-11-22 04:15:56.431 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:15:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 934 KiB/s wr, 9 op/s
Nov 22 04:15:57 compute-0 nova_compute[253461]: 2025-11-22 04:15:57.387 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763784942.384818, 6c989a97-6747-4b3b-a025-118564ecad92 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:15:57 compute-0 nova_compute[253461]: 2025-11-22 04:15:57.388 253465 INFO nova.compute.manager [-] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] VM Stopped (Lifecycle Event)
Nov 22 04:15:57 compute-0 nova_compute[253461]: 2025-11-22 04:15:57.415 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:57 compute-0 nova_compute[253461]: 2025-11-22 04:15:57.418 253465 DEBUG nova.compute.manager [None req-bd51fca3-6ad6-462e-8aff-293392b33413 - - - - - -] [instance: 6c989a97-6747-4b3b-a025-118564ecad92] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:15:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:15:58 compute-0 ceph-mon[75011]: pgmap v2052: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 934 KiB/s wr, 9 op/s
Nov 22 04:15:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:15:58 compute-0 nova_compute[253461]: 2025-11-22 04:15:58.806 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:15:59 compute-0 podman[299168]: 2025-11-22 04:15:59.389349792 +0000 UTC m=+0.069365941 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Nov 22 04:16:00 compute-0 ceph-mon[75011]: pgmap v2053: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:16:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:16:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3896291609' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:16:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:16:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3896291609' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:16:00 compute-0 nova_compute[253461]: 2025-11-22 04:16:00.384 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "6c35e937-504d-41a7-876c-b3b295904a3f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:00 compute-0 nova_compute[253461]: 2025-11-22 04:16:00.384 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:00 compute-0 nova_compute[253461]: 2025-11-22 04:16:00.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:16:00 compute-0 nova_compute[253461]: 2025-11-22 04:16:00.454 253465 DEBUG nova.compute.manager [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:16:00 compute-0 nova_compute[253461]: 2025-11-22 04:16:00.566 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:00 compute-0 nova_compute[253461]: 2025-11-22 04:16:00.566 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:00 compute-0 nova_compute[253461]: 2025-11-22 04:16:00.575 253465 DEBUG nova.virt.hardware [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:16:00 compute-0 nova_compute[253461]: 2025-11-22 04:16:00.575 253465 INFO nova.compute.claims [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:16:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:16:00 compute-0 nova_compute[253461]: 2025-11-22 04:16:00.722 253465 DEBUG oslo_concurrency.processutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:16:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:16:01 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2248529583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:01 compute-0 nova_compute[253461]: 2025-11-22 04:16:01.204 253465 DEBUG oslo_concurrency.processutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:16:01 compute-0 nova_compute[253461]: 2025-11-22 04:16:01.209 253465 DEBUG nova.compute.provider_tree [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:16:01 compute-0 nova_compute[253461]: 2025-11-22 04:16:01.303 253465 DEBUG nova.scheduler.client.report [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:16:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3896291609' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:16:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3896291609' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:16:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2248529583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:01 compute-0 nova_compute[253461]: 2025-11-22 04:16:01.429 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:01 compute-0 nova_compute[253461]: 2025-11-22 04:16:01.430 253465 DEBUG nova.compute.manager [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:16:01 compute-0 nova_compute[253461]: 2025-11-22 04:16:01.569 253465 DEBUG nova.compute.manager [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:16:01 compute-0 nova_compute[253461]: 2025-11-22 04:16:01.569 253465 DEBUG nova.network.neutron [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:16:01 compute-0 nova_compute[253461]: 2025-11-22 04:16:01.778 253465 INFO nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:16:01 compute-0 nova_compute[253461]: 2025-11-22 04:16:01.915 253465 DEBUG nova.compute.manager [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:16:01 compute-0 sshd-session[299212]: Connection closed by 194.0.234.20 port 65105
Nov 22 04:16:01 compute-0 nova_compute[253461]: 2025-11-22 04:16:01.989 253465 DEBUG nova.policy [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ddff25657c74403e9ed9e91ff227badd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98e4451f91104cd88f6e19dd5c53fd00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.077 253465 INFO nova.virt.block_device [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Booting with volume 736913be-faab-467f-889e-ff95053bdeaa at /dev/vda
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.270 253465 DEBUG os_brick.utils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.271 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.288 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.289 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[bd10b1b6-a8fe-43e7-a061-e7df0f4062e8]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.290 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.299 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.300 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[e0092601-d3cc-4a62-b2af-46decf76b040]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.301 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.311 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.311 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[608f6ecd-8fe6-403e-9a6a-4c19ac5022dc]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.313 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[e8373216-a791-48cd-b9ae-21edf5d94d12]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.313 253465 DEBUG oslo_concurrency.processutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.337 253465 DEBUG oslo_concurrency.processutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.340 253465 DEBUG os_brick.initiator.connectors.lightos [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.341 253465 DEBUG os_brick.initiator.connectors.lightos [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.341 253465 DEBUG os_brick.initiator.connectors.lightos [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.342 253465 DEBUG os_brick.utils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.342 253465 DEBUG nova.virt.block_device [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Updating existing volume attachment record: ababa485-9a4a-4bd1-b91a-f19382b1cfc7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:16:02 compute-0 nova_compute[253461]: 2025-11-22 04:16:02.418 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:02 compute-0 ceph-mon[75011]: pgmap v2054: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:16:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:16:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:16:02 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3836106061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:16:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:03.232 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:16:03 compute-0 nova_compute[253461]: 2025-11-22 04:16:03.232 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:03 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:03.234 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:16:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:03 compute-0 nova_compute[253461]: 2025-11-22 04:16:03.293 253465 DEBUG nova.compute.manager [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:16:03 compute-0 nova_compute[253461]: 2025-11-22 04:16:03.295 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:16:03 compute-0 nova_compute[253461]: 2025-11-22 04:16:03.295 253465 INFO nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Creating image(s)
Nov 22 04:16:03 compute-0 nova_compute[253461]: 2025-11-22 04:16:03.296 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:16:03 compute-0 nova_compute[253461]: 2025-11-22 04:16:03.296 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Ensure instance console log exists: /var/lib/nova/instances/6c35e937-504d-41a7-876c-b3b295904a3f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:16:03 compute-0 nova_compute[253461]: 2025-11-22 04:16:03.296 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:03 compute-0 nova_compute[253461]: 2025-11-22 04:16:03.297 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:03 compute-0 nova_compute[253461]: 2025-11-22 04:16:03.297 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:03 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3836106061' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:16:03 compute-0 nova_compute[253461]: 2025-11-22 04:16:03.808 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:03 compute-0 nova_compute[253461]: 2025-11-22 04:16:03.944 253465 DEBUG nova.network.neutron [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Successfully created port: 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:16:04 compute-0 ceph-mon[75011]: pgmap v2055: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:16:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:16:05 compute-0 nova_compute[253461]: 2025-11-22 04:16:05.307 253465 DEBUG nova.network.neutron [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Successfully updated port: 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:16:05 compute-0 nova_compute[253461]: 2025-11-22 04:16:05.324 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "refresh_cache-6c35e937-504d-41a7-876c-b3b295904a3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:16:05 compute-0 nova_compute[253461]: 2025-11-22 04:16:05.324 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquired lock "refresh_cache-6c35e937-504d-41a7-876c-b3b295904a3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:16:05 compute-0 nova_compute[253461]: 2025-11-22 04:16:05.324 253465 DEBUG nova.network.neutron [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:16:05 compute-0 nova_compute[253461]: 2025-11-22 04:16:05.402 253465 DEBUG nova.compute.manager [req-a7c614ce-470e-4bac-aea5-ac66e3bcadee req-9dfc9adf-8711-482b-bdc7-143cc0fd072d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Received event network-changed-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:05 compute-0 nova_compute[253461]: 2025-11-22 04:16:05.403 253465 DEBUG nova.compute.manager [req-a7c614ce-470e-4bac-aea5-ac66e3bcadee req-9dfc9adf-8711-482b-bdc7-143cc0fd072d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Refreshing instance network info cache due to event network-changed-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:16:05 compute-0 nova_compute[253461]: 2025-11-22 04:16:05.403 253465 DEBUG oslo_concurrency.lockutils [req-a7c614ce-470e-4bac-aea5-ac66e3bcadee req-9dfc9adf-8711-482b-bdc7-143cc0fd072d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-6c35e937-504d-41a7-876c-b3b295904a3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:16:05 compute-0 nova_compute[253461]: 2025-11-22 04:16:05.919 253465 DEBUG nova.network.neutron [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:16:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:16:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:16:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:16:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:16:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:16:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:16:06 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:06.237 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:16:06 compute-0 ceph-mon[75011]: pgmap v2056: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:16:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.168 253465 DEBUG nova.network.neutron [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Updating instance_info_cache with network_info: [{"id": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "address": "fa:16:3e:71:9c:d4", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01e9eaea-6f", "ovs_interfaceid": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.190 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Releasing lock "refresh_cache-6c35e937-504d-41a7-876c-b3b295904a3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.190 253465 DEBUG nova.compute.manager [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Instance network_info: |[{"id": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "address": "fa:16:3e:71:9c:d4", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01e9eaea-6f", "ovs_interfaceid": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.191 253465 DEBUG oslo_concurrency.lockutils [req-a7c614ce-470e-4bac-aea5-ac66e3bcadee req-9dfc9adf-8711-482b-bdc7-143cc0fd072d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-6c35e937-504d-41a7-876c-b3b295904a3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.191 253465 DEBUG nova.network.neutron [req-a7c614ce-470e-4bac-aea5-ac66e3bcadee req-9dfc9adf-8711-482b-bdc7-143cc0fd072d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Refreshing network info cache for port 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.196 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Start _get_guest_xml network_info=[{"id": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "address": "fa:16:3e:71:9c:d4", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01e9eaea-6f", "ovs_interfaceid": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': 'ababa485-9a4a-4bd1-b91a-f19382b1cfc7', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-736913be-faab-467f-889e-ff95053bdeaa', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '736913be-faab-467f-889e-ff95053bdeaa', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '6c35e937-504d-41a7-876c-b3b295904a3f', 'attached_at': '', 'detached_at': '', 'volume_id': '736913be-faab-467f-889e-ff95053bdeaa', 'serial': '736913be-faab-467f-889e-ff95053bdeaa'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.203 253465 WARNING nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.210 253465 DEBUG nova.virt.libvirt.host [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.211 253465 DEBUG nova.virt.libvirt.host [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.220 253465 DEBUG nova.virt.libvirt.host [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.221 253465 DEBUG nova.virt.libvirt.host [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.222 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.223 253465 DEBUG nova.virt.hardware [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.224 253465 DEBUG nova.virt.hardware [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.225 253465 DEBUG nova.virt.hardware [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.225 253465 DEBUG nova.virt.hardware [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.225 253465 DEBUG nova.virt.hardware [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.225 253465 DEBUG nova.virt.hardware [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.225 253465 DEBUG nova.virt.hardware [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.226 253465 DEBUG nova.virt.hardware [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.226 253465 DEBUG nova.virt.hardware [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.226 253465 DEBUG nova.virt.hardware [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.226 253465 DEBUG nova.virt.hardware [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.248 253465 DEBUG nova.storage.rbd_utils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] rbd image 6c35e937-504d-41a7-876c-b3b295904a3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.252 253465 DEBUG oslo_concurrency.processutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.420 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:16:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1321307081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.693 253465 DEBUG oslo_concurrency.processutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.822 253465 DEBUG os_brick.encryptors [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Using volume encryption metadata '{'encryption_key_id': '43eae931-642b-46fc-a6eb-b511895b4020', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-736913be-faab-467f-889e-ff95053bdeaa', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '736913be-faab-467f-889e-ff95053bdeaa', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '6c35e937-504d-41a7-876c-b3b295904a3f', 'attached_at': '', 'detached_at': '', 'volume_id': '736913be-faab-467f-889e-ff95053bdeaa', 'serial': '736913be-faab-467f-889e-ff95053bdeaa'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.826 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.845 253465 DEBUG barbicanclient.v1.secrets [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/43eae931-642b-46fc-a6eb-b511895b4020 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.845 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.882 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.883 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.910 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.911 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.936 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.936 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.978 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:07 compute-0 nova_compute[253461]: 2025-11-22 04:16:07.978 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.010 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.010 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.031 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.031 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.051 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.051 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.068 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.068 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.092 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.092 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.112 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.112 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.191 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.191 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.225 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.226 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.249 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.250 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.280 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.281 253465 INFO barbicanclient.base [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/43eae931-642b-46fc-a6eb-b511895b4020
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.306 253465 DEBUG barbicanclient.client [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.307 253465 DEBUG nova.virt.libvirt.host [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 22 04:16:08 compute-0 nova_compute[253461]:   <usage type="volume">
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <volume>736913be-faab-467f-889e-ff95053bdeaa</volume>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   </usage>
Nov 22 04:16:08 compute-0 nova_compute[253461]: </secret>
Nov 22 04:16:08 compute-0 nova_compute[253461]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.340 253465 DEBUG nova.virt.libvirt.vif [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:15:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1109852748',display_name='tempest-TransferEncryptedVolumeTest-server-1109852748',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1109852748',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMF3KIMSa8o4WCzM1VgX9RcGz4FcpwcrZcdUDFLNYpjBj2lzhaXFrO0bSdzjU9Itff6b3BySQo/nLrhI32bk8GIfHP/n0NuDArjdwgS2hsu8vteQ0u/zEQY1VMKJGLhTNw==',key_name='tempest-TransferEncryptedVolumeTest-1133237278',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98e4451f91104cd88f6e19dd5c53fd00',ramdisk_id='',reservation_id='r-p2qdq1ep',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1500496447',owner_user_name='tempest-TransferEncryptedVolumeTest-1500496447-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:16:02Z,user_data=None,user_id='ddff25657c74403e9ed9e91ff227badd',uuid=6c35e937-504d-41a7-876c-b3b295904a3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "address": "fa:16:3e:71:9c:d4", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01e9eaea-6f", "ovs_interfaceid": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.341 253465 DEBUG nova.network.os_vif_util [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converting VIF {"id": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "address": "fa:16:3e:71:9c:d4", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01e9eaea-6f", "ovs_interfaceid": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.342 253465 DEBUG nova.network.os_vif_util [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=01e9eaea-6fba-4e12-9b9a-770c2ce71a2a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01e9eaea-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.344 253465 DEBUG nova.objects.instance [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c35e937-504d-41a7-876c-b3b295904a3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.359 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:16:08 compute-0 nova_compute[253461]:   <uuid>6c35e937-504d-41a7-876c-b3b295904a3f</uuid>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   <name>instance-00000019</name>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-1109852748</nova:name>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:16:07</nova:creationTime>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:16:08 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:16:08 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:16:08 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:16:08 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:16:08 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:16:08 compute-0 nova_compute[253461]:         <nova:user uuid="ddff25657c74403e9ed9e91ff227badd">tempest-TransferEncryptedVolumeTest-1500496447-project-member</nova:user>
Nov 22 04:16:08 compute-0 nova_compute[253461]:         <nova:project uuid="98e4451f91104cd88f6e19dd5c53fd00">tempest-TransferEncryptedVolumeTest-1500496447</nova:project>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:16:08 compute-0 nova_compute[253461]:         <nova:port uuid="01e9eaea-6fba-4e12-9b9a-770c2ce71a2a">
Nov 22 04:16:08 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <system>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <entry name="serial">6c35e937-504d-41a7-876c-b3b295904a3f</entry>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <entry name="uuid">6c35e937-504d-41a7-876c-b3b295904a3f</entry>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     </system>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   <os>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   </os>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   <features>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   </features>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/6c35e937-504d-41a7-876c-b3b295904a3f_disk.config">
Nov 22 04:16:08 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       </source>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:16:08 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-736913be-faab-467f-889e-ff95053bdeaa">
Nov 22 04:16:08 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       </source>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:16:08 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <serial>736913be-faab-467f-889e-ff95053bdeaa</serial>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <encryption format="luks">
Nov 22 04:16:08 compute-0 nova_compute[253461]:         <secret type="passphrase" uuid="03e03ef2-1298-44c5-a108-0a3822633a6a"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       </encryption>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:71:9c:d4"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <target dev="tap01e9eaea-6f"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/6c35e937-504d-41a7-876c-b3b295904a3f/console.log" append="off"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <video>
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     </video>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:16:08 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:16:08 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:16:08 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:16:08 compute-0 nova_compute[253461]: </domain>
Nov 22 04:16:08 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.360 253465 DEBUG nova.compute.manager [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Preparing to wait for external event network-vif-plugged-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.361 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.362 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.363 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.364 253465 DEBUG nova.virt.libvirt.vif [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:15:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1109852748',display_name='tempest-TransferEncryptedVolumeTest-server-1109852748',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1109852748',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMF3KIMSa8o4WCzM1VgX9RcGz4FcpwcrZcdUDFLNYpjBj2lzhaXFrO0bSdzjU9Itff6b3BySQo/nLrhI32bk8GIfHP/n0NuDArjdwgS2hsu8vteQ0u/zEQY1VMKJGLhTNw==',key_name='tempest-TransferEncryptedVolumeTest-1133237278',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98e4451f91104cd88f6e19dd5c53fd00',ramdisk_id='',reservation_id='r-p2qdq1ep',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1500496447',owner_user_name='tempest-TransferEncryptedVolumeTest-1500496447-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:16:02Z,user_data=None,user_id='ddff25657c74403e9ed9e91ff227badd',uuid=6c35e937-504d-41a7-876c-b3b295904a3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "address": "fa:16:3e:71:9c:d4", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01e9eaea-6f", "ovs_interfaceid": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.364 253465 DEBUG nova.network.os_vif_util [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converting VIF {"id": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "address": "fa:16:3e:71:9c:d4", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01e9eaea-6f", "ovs_interfaceid": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.365 253465 DEBUG nova.network.os_vif_util [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=01e9eaea-6fba-4e12-9b9a-770c2ce71a2a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01e9eaea-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.366 253465 DEBUG os_vif [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=01e9eaea-6fba-4e12-9b9a-770c2ce71a2a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01e9eaea-6f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.367 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.368 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.369 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.373 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.373 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01e9eaea-6f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.374 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap01e9eaea-6f, col_values=(('external_ids', {'iface-id': '01e9eaea-6fba-4e12-9b9a-770c2ce71a2a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:9c:d4', 'vm-uuid': '6c35e937-504d-41a7-876c-b3b295904a3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:16:08 compute-0 NetworkManager[48916]: <info>  [1763784968.3772] manager: (tap01e9eaea-6f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.376 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.381 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.388 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.390 253465 INFO os_vif [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=01e9eaea-6fba-4e12-9b9a-770c2ce71a2a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01e9eaea-6f')
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.449 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.450 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.450 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] No VIF found with MAC fa:16:3e:71:9c:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.451 253465 INFO nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Using config drive
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.481 253465 DEBUG nova.storage.rbd_utils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] rbd image 6c35e937-504d-41a7-876c-b3b295904a3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:16:08 compute-0 ceph-mon[75011]: pgmap v2057: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:16:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1321307081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:16:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.810 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.992 253465 DEBUG nova.network.neutron [req-a7c614ce-470e-4bac-aea5-ac66e3bcadee req-9dfc9adf-8711-482b-bdc7-143cc0fd072d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Updated VIF entry in instance network info cache for port 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:16:08 compute-0 nova_compute[253461]: 2025-11-22 04:16:08.993 253465 DEBUG nova.network.neutron [req-a7c614ce-470e-4bac-aea5-ac66e3bcadee req-9dfc9adf-8711-482b-bdc7-143cc0fd072d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Updating instance_info_cache with network_info: [{"id": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "address": "fa:16:3e:71:9c:d4", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01e9eaea-6f", "ovs_interfaceid": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:16:09 compute-0 nova_compute[253461]: 2025-11-22 04:16:09.127 253465 DEBUG oslo_concurrency.lockutils [req-a7c614ce-470e-4bac-aea5-ac66e3bcadee req-9dfc9adf-8711-482b-bdc7-143cc0fd072d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-6c35e937-504d-41a7-876c-b3b295904a3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:16:09 compute-0 nova_compute[253461]: 2025-11-22 04:16:09.406 253465 INFO nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Creating config drive at /var/lib/nova/instances/6c35e937-504d-41a7-876c-b3b295904a3f/disk.config
Nov 22 04:16:09 compute-0 nova_compute[253461]: 2025-11-22 04:16:09.414 253465 DEBUG oslo_concurrency.processutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c35e937-504d-41a7-876c-b3b295904a3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuoipvz4r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:16:09 compute-0 nova_compute[253461]: 2025-11-22 04:16:09.555 253465 DEBUG oslo_concurrency.processutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c35e937-504d-41a7-876c-b3b295904a3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuoipvz4r" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:16:09 compute-0 nova_compute[253461]: 2025-11-22 04:16:09.602 253465 DEBUG nova.storage.rbd_utils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] rbd image 6c35e937-504d-41a7-876c-b3b295904a3f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:16:09 compute-0 nova_compute[253461]: 2025-11-22 04:16:09.607 253465 DEBUG oslo_concurrency.processutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c35e937-504d-41a7-876c-b3b295904a3f/disk.config 6c35e937-504d-41a7-876c-b3b295904a3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:16:09 compute-0 nova_compute[253461]: 2025-11-22 04:16:09.802 253465 DEBUG oslo_concurrency.processutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c35e937-504d-41a7-876c-b3b295904a3f/disk.config 6c35e937-504d-41a7-876c-b3b295904a3f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:16:09 compute-0 nova_compute[253461]: 2025-11-22 04:16:09.803 253465 INFO nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Deleting local config drive /var/lib/nova/instances/6c35e937-504d-41a7-876c-b3b295904a3f/disk.config because it was imported into RBD.
Nov 22 04:16:09 compute-0 kernel: tap01e9eaea-6f: entered promiscuous mode
Nov 22 04:16:09 compute-0 NetworkManager[48916]: <info>  [1763784969.8701] manager: (tap01e9eaea-6f): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Nov 22 04:16:09 compute-0 ovn_controller[152691]: 2025-11-22T04:16:09Z|00260|binding|INFO|Claiming lport 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a for this chassis.
Nov 22 04:16:09 compute-0 ovn_controller[152691]: 2025-11-22T04:16:09Z|00261|binding|INFO|01e9eaea-6fba-4e12-9b9a-770c2ce71a2a: Claiming fa:16:3e:71:9c:d4 10.100.0.12
Nov 22 04:16:09 compute-0 nova_compute[253461]: 2025-11-22 04:16:09.869 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:09 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:09.889 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:9c:d4 10.100.0.12'], port_security=['fa:16:3e:71:9c:d4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6c35e937-504d-41a7-876c-b3b295904a3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-73bcc005-88ac-46b6-ad11-6207c6046246', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98e4451f91104cd88f6e19dd5c53fd00', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e95e3fed-bcd6-449d-9f95-3b75633f02f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8139379-e220-4788-92e4-b495f0c34eb7, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=01e9eaea-6fba-4e12-9b9a-770c2ce71a2a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:16:09 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:09.891 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a in datapath 73bcc005-88ac-46b6-ad11-6207c6046246 bound to our chassis
Nov 22 04:16:09 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:09.895 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 73bcc005-88ac-46b6-ad11-6207c6046246
Nov 22 04:16:09 compute-0 ovn_controller[152691]: 2025-11-22T04:16:09Z|00262|binding|INFO|Setting lport 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a ovn-installed in OVS
Nov 22 04:16:09 compute-0 ovn_controller[152691]: 2025-11-22T04:16:09Z|00263|binding|INFO|Setting lport 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a up in Southbound
Nov 22 04:16:09 compute-0 nova_compute[253461]: 2025-11-22 04:16:09.901 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:09 compute-0 systemd-udevd[299330]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:16:09 compute-0 nova_compute[253461]: 2025-11-22 04:16:09.905 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:09 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:09.910 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[241d929a-da0d-49c4-9370-15f9185f26e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:09 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:09.911 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap73bcc005-81 in ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:16:09 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:09.913 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap73bcc005-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:16:09 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:09.914 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[bad26752-ef9a-4f98-8f3f-1c85a40c4b8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:09 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:09.915 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[37dc8751-0980-428e-923f-1c4efebda072]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:09 compute-0 NetworkManager[48916]: <info>  [1763784969.9214] device (tap01e9eaea-6f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:16:09 compute-0 NetworkManager[48916]: <info>  [1763784969.9226] device (tap01e9eaea-6f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:16:09 compute-0 systemd-machined[215728]: New machine qemu-25-instance-00000019.
Nov 22 04:16:09 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:09.931 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[14c57a4b-b1f3-4337-b3dc-db45305f0a07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:09 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000019.
Nov 22 04:16:09 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:09.961 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[718a2ce5-4558-438a-84f8-092307e1ae37]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:09 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:09.996 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[1900a863-6fd7-4ff8-b766-6a12efbb0bdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:10 compute-0 systemd-udevd[299337]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.005 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[73eec316-9a4b-445a-8bcf-4dae75bbb286]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:10 compute-0 NetworkManager[48916]: <info>  [1763784970.0067] manager: (tap73bcc005-80): new Veth device (/org/freedesktop/NetworkManager/Devices/131)
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.041 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[b0b73ffb-8f14-42d9-a0c2-d0a7225bdbf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.045 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee3a496-c0ab-4f37-96ee-04f7a678119f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:10 compute-0 NetworkManager[48916]: <info>  [1763784970.0733] device (tap73bcc005-80): carrier: link connected
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.080 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[10c0cba4-73e9-4e9e-a68b-c4a7eba8bc04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.098 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[31841d5d-7d3c-49d0-8762-70178ac86a6a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap73bcc005-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522267, 'reachable_time': 44122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299366, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.116 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[76cf4bf6-3ebe-46d0-a609-1e8fc26f6940]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:1121'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522267, 'tstamp': 522267}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299367, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.135 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2b718daa-2a74-45c3-8e39-99989e26d0fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap73bcc005-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522267, 'reachable_time': 44122, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299369, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
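
The two large privsep replies above are pyroute2 netlink dumps (RTM_NEWLINK, RTM_NEWADDR) taken inside the ovnmeta- namespace on the agent's behalf. A minimal sketch of reproducing the same queries directly with pyroute2, assuming root privileges and that the namespace still exists; all names are taken from the log:

    from pyroute2 import NetNS

    # Open the OVN metadata namespace and repeat the two lookups logged above.
    ns = NetNS('ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246')
    try:
        idx = ns.link_lookup(ifname='tap73bcc005-81')[0]   # index 2 in the dump
        link = ns.link('get', index=idx)[0]                # RTM_NEWLINK message
        addrs = ns.get_addr(index=idx)                     # RTM_NEWADDR messages
        print(link.get_attr('IFLA_ADDRESS'))               # fa:16:3e:9a:11:21
        print([a.get_attr('IFA_ADDRESS') for a in addrs])  # fe80::f816:3eff:fe9a:1121
    finally:
        ns.close()
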
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.169 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[cf172f4d-beb0-494f-ae9c-9a9364aaeb80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:10 compute-0 nova_compute[253461]: 2025-11-22 04:16:10.174 253465 DEBUG nova.compute.manager [req-fca19769-5a74-4abe-87e1-ee11e79625e9 req-e2a3b216-6a4d-4404-9463-0bc94de550d3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Received event network-vif-plugged-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:10 compute-0 nova_compute[253461]: 2025-11-22 04:16:10.174 253465 DEBUG oslo_concurrency.lockutils [req-fca19769-5a74-4abe-87e1-ee11e79625e9 req-e2a3b216-6a4d-4404-9463-0bc94de550d3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:10 compute-0 nova_compute[253461]: 2025-11-22 04:16:10.175 253465 DEBUG oslo_concurrency.lockutils [req-fca19769-5a74-4abe-87e1-ee11e79625e9 req-e2a3b216-6a4d-4404-9463-0bc94de550d3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:10 compute-0 nova_compute[253461]: 2025-11-22 04:16:10.175 253465 DEBUG oslo_concurrency.lockutils [req-fca19769-5a74-4abe-87e1-ee11e79625e9 req-e2a3b216-6a4d-4404-9463-0bc94de550d3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:10 compute-0 nova_compute[253461]: 2025-11-22 04:16:10.176 253465 DEBUG nova.compute.manager [req-fca19769-5a74-4abe-87e1-ee11e79625e9 req-e2a3b216-6a4d-4404-9463-0bc94de550d3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Processing event network-vif-plugged-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
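
The Acquiring/acquired/released triplet above is oslo.concurrency's named-lock pattern wrapped around Nova's per-instance event queue. A toy sketch of that pattern only; the function body is hypothetical and is not Nova's actual code:

    from oslo_concurrency import lockutils

    # Serialize access to the per-instance event list, as the
    # "<uuid>-events" lock does around _pop_event in the log above.
    @lockutils.synchronized('6c35e937-504d-41a7-876c-b3b295904a3f-events')
    def pop_event(events, name):
        # hypothetical: pop the first queued event matching `name`
        for i, ev in enumerate(events):
            if ev == name:
                return events.pop(i)
        return None
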
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.224 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b966ea9e-b777-46f8-b18f-86e2d440de26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.225 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73bcc005-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.225 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.226 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73bcc005-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:16:10 compute-0 NetworkManager[48916]: <info>  [1763784970.2286] manager: (tap73bcc005-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Nov 22 04:16:10 compute-0 kernel: tap73bcc005-80: entered promiscuous mode
Nov 22 04:16:10 compute-0 nova_compute[253461]: 2025-11-22 04:16:10.229 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.230 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap73bcc005-80, col_values=(('external_ids', {'iface-id': 'c0be682a-2fee-4917-82d9-be22b54079b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
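
The three ovsdbapp commands above (DelPortCommand, AddPortCommand, DbSetCommand) take tap73bcc005-80 off br-ex, plug it into br-int, and tag it with its OVN iface-id. A rough equivalent using ovsdbapp directly might look like the sketch below; the ovsdb socket path is an assumption, and the log actually ran these as separate transactions rather than one:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server socket; adjust for the actual deployment.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap73bcc005-80', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap73bcc005-80', may_exist=True))
        txn.add(api.db_set('Interface', 'tap73bcc005-80',
                           ('external_ids',
                            {'iface-id': 'c0be682a-2fee-4917-82d9-be22b54079b1'})))
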
Nov 22 04:16:10 compute-0 ovn_controller[152691]: 2025-11-22T04:16:10Z|00264|binding|INFO|Releasing lport c0be682a-2fee-4917-82d9-be22b54079b1 from this chassis (sb_readonly=0)
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.288 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/73bcc005-88ac-46b6-ad11-6207c6046246.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/73bcc005-88ac-46b6-ad11-6207c6046246.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:16:10 compute-0 nova_compute[253461]: 2025-11-22 04:16:10.287 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.289 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[0b501811-182e-4e69-91e6-f1b174ace862]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.290 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-73bcc005-88ac-46b6-ad11-6207c6046246
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/73bcc005-88ac-46b6-ad11-6207c6046246.pid.haproxy
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 73bcc005-88ac-46b6-ad11-6207c6046246
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:16:10 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:10.290 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'env', 'PROCESS_TAG=haproxy-73bcc005-88ac-46b6-ad11-6207c6046246', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/73bcc005-88ac-46b6-ad11-6207c6046246.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
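
The haproxy_cfg dump above is plain text keyed almost entirely off the network UUID, which is why one haproxy instance is spawned per ovnmeta namespace. A throwaway re-rendering of it; the template merely mirrors the logged output (trimmed to the global and listen sections) and is not Neutron's actual template:

    NETWORK_ID = '73bcc005-88ac-46b6-ad11-6207c6046246'

    # Mirrors the config logged by create_config_file above, trimmed.
    CFG = """global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-{net}
        user        root
        group       root
        maxconn     1024
        pidfile     /var/lib/neutron/external/pids/{net}.pid.haproxy
        daemon

    listen listener
        bind 169.254.169.254:80
        server metadata /var/lib/neutron/metadata_proxy
        http-request add-header X-OVN-Network-ID {net}
    """

    print(CFG.format(net=NETWORK_ID))
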
Nov 22 04:16:10 compute-0 ceph-mon[75011]: pgmap v2058: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:16:10 compute-0 podman[299436]: 2025-11-22 04:16:10.63010453 +0000 UTC m=+0.026161077 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:16:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 2 op/s
Nov 22 04:16:10 compute-0 podman[299436]: 2025-11-22 04:16:10.731922921 +0000 UTC m=+0.127979488 container create 947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:16:10 compute-0 systemd[1]: Started libpod-conmon-947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798.scope.
Nov 22 04:16:10 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:16:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd07d698afbcd108deb20b9664fd5384ecb5da89e6c2f17262c2a1faa0e1e843/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:10 compute-0 podman[299436]: 2025-11-22 04:16:10.888949302 +0000 UTC m=+0.285005929 container init 947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true)
Nov 22 04:16:10 compute-0 podman[299436]: 2025-11-22 04:16:10.897956367 +0000 UTC m=+0.294012934 container start 947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:16:10 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[299451]: [NOTICE]   (299455) : New worker (299457) forked
Nov 22 04:16:10 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[299451]: [NOTICE]   (299455) : Loading success.
Nov 22 04:16:11 compute-0 ceph-mon[75011]: pgmap v2059: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 2 op/s
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.252 253465 DEBUG nova.compute.manager [req-88e3280a-9eb9-4280-ab5c-ae59ebb091f2 req-5cc5408d-5b2d-4ba7-990b-e58b49c3accd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Received event network-vif-plugged-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.254 253465 DEBUG oslo_concurrency.lockutils [req-88e3280a-9eb9-4280-ab5c-ae59ebb091f2 req-5cc5408d-5b2d-4ba7-990b-e58b49c3accd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.254 253465 DEBUG oslo_concurrency.lockutils [req-88e3280a-9eb9-4280-ab5c-ae59ebb091f2 req-5cc5408d-5b2d-4ba7-990b-e58b49c3accd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.255 253465 DEBUG oslo_concurrency.lockutils [req-88e3280a-9eb9-4280-ab5c-ae59ebb091f2 req-5cc5408d-5b2d-4ba7-990b-e58b49c3accd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.255 253465 DEBUG nova.compute.manager [req-88e3280a-9eb9-4280-ab5c-ae59ebb091f2 req-5cc5408d-5b2d-4ba7-990b-e58b49c3accd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] No waiting events found dispatching network-vif-plugged-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.256 253465 WARNING nova.compute.manager [req-88e3280a-9eb9-4280-ab5c-ae59ebb091f2 req-5cc5408d-5b2d-4ba7-990b-e58b49c3accd f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Received unexpected event network-vif-plugged-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a for instance with vm_state building and task_state spawning.
Nov 22 04:16:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 12 KiB/s wr, 7 op/s
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.752 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784972.7521074, 6c35e937-504d-41a7-876c-b3b295904a3f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.753 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] VM Started (Lifecycle Event)
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.757 253465 DEBUG nova.compute.manager [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.761 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.766 253465 INFO nova.virt.libvirt.driver [-] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Instance spawned successfully.
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.767 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.790 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.799 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.806 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.806 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.807 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.807 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.808 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.808 253465 DEBUG nova.virt.libvirt.driver [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.836 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.837 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784972.7535906, 6c35e937-504d-41a7-876c-b3b295904a3f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.837 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] VM Paused (Lifecycle Event)
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.870 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.875 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763784972.760993, 6c35e937-504d-41a7-876c-b3b295904a3f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.876 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] VM Resumed (Lifecycle Event)
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.884 253465 INFO nova.compute.manager [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Took 9.59 seconds to spawn the instance on the hypervisor.
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.885 253465 DEBUG nova.compute.manager [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.895 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.900 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.943 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.967 253465 INFO nova.compute.manager [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Took 12.44 seconds to build instance.
Nov 22 04:16:12 compute-0 nova_compute[253461]: 2025-11-22 04:16:12.986 253465 DEBUG oslo_concurrency.lockutils [None req-f28b2086-6550-46d7-abbe-c123724f8ca2 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:13 compute-0 nova_compute[253461]: 2025-11-22 04:16:13.377 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:13 compute-0 ceph-mon[75011]: pgmap v2060: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 12 KiB/s wr, 7 op/s
Nov 22 04:16:13 compute-0 nova_compute[253461]: 2025-11-22 04:16:13.812 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Nov 22 04:16:15 compute-0 podman[299472]: 2025-11-22 04:16:15.380248799 +0000 UTC m=+0.060188792 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:16:15 compute-0 podman[299473]: 2025-11-22 04:16:15.488292078 +0000 UTC m=+0.156508522 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:16:15 compute-0 ceph-mon[75011]: pgmap v2061: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Nov 22 04:16:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:16:18 compute-0 ceph-mon[75011]: pgmap v2062: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:16:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:18 compute-0 nova_compute[253461]: 2025-11-22 04:16:18.380 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:16:18 compute-0 nova_compute[253461]: 2025-11-22 04:16:18.815 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:20 compute-0 ceph-mon[75011]: pgmap v2063: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:16:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:16:21 compute-0 nova_compute[253461]: 2025-11-22 04:16:21.436 253465 DEBUG nova.compute.manager [req-0c8d345b-cab0-47fb-a7d3-10d635291b50 req-e70a6d14-1c44-403a-89c3-a85a28b89686 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Received event network-changed-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:21 compute-0 nova_compute[253461]: 2025-11-22 04:16:21.437 253465 DEBUG nova.compute.manager [req-0c8d345b-cab0-47fb-a7d3-10d635291b50 req-e70a6d14-1c44-403a-89c3-a85a28b89686 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Refreshing instance network info cache due to event network-changed-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:16:21 compute-0 nova_compute[253461]: 2025-11-22 04:16:21.437 253465 DEBUG oslo_concurrency.lockutils [req-0c8d345b-cab0-47fb-a7d3-10d635291b50 req-e70a6d14-1c44-403a-89c3-a85a28b89686 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-6c35e937-504d-41a7-876c-b3b295904a3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:16:21 compute-0 nova_compute[253461]: 2025-11-22 04:16:21.437 253465 DEBUG oslo_concurrency.lockutils [req-0c8d345b-cab0-47fb-a7d3-10d635291b50 req-e70a6d14-1c44-403a-89c3-a85a28b89686 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-6c35e937-504d-41a7-876c-b3b295904a3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:16:21 compute-0 nova_compute[253461]: 2025-11-22 04:16:21.438 253465 DEBUG nova.network.neutron [req-0c8d345b-cab0-47fb-a7d3-10d635291b50 req-e70a6d14-1c44-403a-89c3-a85a28b89686 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Refreshing network info cache for port 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:16:22 compute-0 ceph-mon[75011]: pgmap v2064: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:16:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Nov 22 04:16:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:23.032 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:23.032 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:23.033 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:23 compute-0 nova_compute[253461]: 2025-11-22 04:16:23.384 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:23 compute-0 nova_compute[253461]: 2025-11-22 04:16:23.816 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:24 compute-0 ceph-mon[75011]: pgmap v2065: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Nov 22 04:16:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 341 B/s wr, 70 op/s
Nov 22 04:16:24 compute-0 nova_compute[253461]: 2025-11-22 04:16:24.973 253465 DEBUG nova.network.neutron [req-0c8d345b-cab0-47fb-a7d3-10d635291b50 req-e70a6d14-1c44-403a-89c3-a85a28b89686 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Updated VIF entry in instance network info cache for port 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:16:24 compute-0 nova_compute[253461]: 2025-11-22 04:16:24.974 253465 DEBUG nova.network.neutron [req-0c8d345b-cab0-47fb-a7d3-10d635291b50 req-e70a6d14-1c44-403a-89c3-a85a28b89686 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Updating instance_info_cache with network_info: [{"id": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "address": "fa:16:3e:71:9c:d4", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01e9eaea-6f", "ovs_interfaceid": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:16:25 compute-0 nova_compute[253461]: 2025-11-22 04:16:25.000 253465 DEBUG oslo_concurrency.lockutils [req-0c8d345b-cab0-47fb-a7d3-10d635291b50 req-e70a6d14-1c44-403a-89c3-a85a28b89686 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-6c35e937-504d-41a7-876c-b3b295904a3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
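
The network_info blob cached at 04:16:24.974 above is valid JSON, so the addresses it carries can be pulled out mechanically. A sketch, assuming that blob has been saved verbatim to a file named network_info.json (a hypothetical file, not something the services write):

    import json

    with open('network_info.json') as f:    # the [{"id": "01e9eaea-..."}] list above
        nw_info = json.load(f)

    for vif in nw_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print('fixed   ', ip['address'])          # 10.100.0.12
                for fip in ip.get('floating_ips', []):
                    print('floating', fip['address'])     # 192.168.122.175
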
Nov 22 04:16:26 compute-0 ovn_controller[152691]: 2025-11-22T04:16:26Z|00056|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.12
Nov 22 04:16:26 compute-0 ovn_controller[152691]: 2025-11-22T04:16:26Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:71:9c:d4 10.100.0.12
Nov 22 04:16:26 compute-0 ceph-mon[75011]: pgmap v2066: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 341 B/s wr, 70 op/s
Nov 22 04:16:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.3 KiB/s wr, 101 op/s
Nov 22 04:16:27 compute-0 ceph-mon[75011]: pgmap v2067: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.3 KiB/s wr, 101 op/s
Nov 22 04:16:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:28 compute-0 nova_compute[253461]: 2025-11-22 04:16:28.387 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 469 KiB/s rd, 6.3 KiB/s wr, 37 op/s
Nov 22 04:16:28 compute-0 nova_compute[253461]: 2025-11-22 04:16:28.818 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:29 compute-0 ceph-mon[75011]: pgmap v2068: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 469 KiB/s rd, 6.3 KiB/s wr, 37 op/s
Nov 22 04:16:30 compute-0 podman[299518]: 2025-11-22 04:16:30.387696273 +0000 UTC m=+0.064702543 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:16:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 16 KiB/s wr, 44 op/s
Nov 22 04:16:30 compute-0 ovn_controller[152691]: 2025-11-22T04:16:30Z|00058|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.12
Nov 22 04:16:30 compute-0 ovn_controller[152691]: 2025-11-22T04:16:30Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:71:9c:d4 10.100.0.12
Nov 22 04:16:31 compute-0 ovn_controller[152691]: 2025-11-22T04:16:31Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:71:9c:d4 10.100.0.12
Nov 22 04:16:31 compute-0 ovn_controller[152691]: 2025-11-22T04:16:31Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:71:9c:d4 10.100.0.12
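
The pinctrl lines above show a stale-lease recovery: the guest REQUESTs 10.100.0.14, OVN's DHCP server is only authoritative for the port's bound address 10.100.0.12, so it NAKs until the client falls back to DISCOVER and then completes OFFER/ACK. A toy decision function illustrating that rule only; this is not OVN's code:

    def answer_dhcprequest(requested_ip, port_ip):
        # OVN native DHCP serves exactly one address per port, so any
        # other requested address is refused outright.
        return 'DHCPACK' if requested_ip == port_ip else 'DHCPNAK'

    assert answer_dhcprequest('10.100.0.14', '10.100.0.12') == 'DHCPNAK'  # 04:16:26, 04:16:30
    assert answer_dhcprequest('10.100.0.12', '10.100.0.12') == 'DHCPACK'  # 04:16:31
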
Nov 22 04:16:31 compute-0 ceph-mon[75011]: pgmap v2069: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 16 KiB/s wr, 44 op/s
Nov 22 04:16:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 16 KiB/s wr, 44 op/s
Nov 22 04:16:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.384164) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784993384285, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 559, "num_deletes": 255, "total_data_size": 578248, "memory_usage": 589144, "flush_reason": "Manual Compaction"}
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Nov 22 04:16:33 compute-0 nova_compute[253461]: 2025-11-22 04:16:33.391 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784993394342, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 573046, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40597, "largest_seqno": 41155, "table_properties": {"data_size": 569988, "index_size": 1030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6851, "raw_average_key_size": 18, "raw_value_size": 563945, "raw_average_value_size": 1507, "num_data_blocks": 47, "num_entries": 374, "num_filter_entries": 374, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763784954, "oldest_key_time": 1763784954, "file_creation_time": 1763784993, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 10252 microseconds, and 3891 cpu microseconds.
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.394398) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 573046 bytes OK
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.394462) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.398399) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.398468) EVENT_LOG_v1 {"time_micros": 1763784993398459, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.398497) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 575112, prev total WAL file size 575112, number of live WAL files 2.
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.399073) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323537' seq:72057594037927935, type:22 .. '6C6F676D0031353038' seq:0, type:0; will stop at (end)
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(559KB)], [83(10MB)]
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784993399120, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 11475767, "oldest_snapshot_seqno": -1}
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 7198 keys, 11337537 bytes, temperature: kUnknown
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784993475093, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 11337537, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11283428, "index_size": 34989, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 182499, "raw_average_key_size": 25, "raw_value_size": 11148190, "raw_average_value_size": 1548, "num_data_blocks": 1392, "num_entries": 7198, "num_filter_entries": 7198, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763784993, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.475622) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 11337537 bytes
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.489594) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.9 rd, 149.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(39.8) write-amplify(19.8) OK, records in: 7716, records dropped: 518 output_compression: NoCompression
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.489625) EVENT_LOG_v1 {"time_micros": 1763784993489613, "job": 48, "event": "compaction_finished", "compaction_time_micros": 76046, "compaction_time_cpu_micros": 25673, "output_level": 6, "num_output_files": 1, "total_output_size": 11337537, "num_input_records": 7716, "num_output_records": 7198, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784993490027, "job": 48, "event": "table_file_deletion", "file_number": 85}
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763784993492120, "job": 48, "event": "table_file_deletion", "file_number": 83}
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.398963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.492246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.492255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.492258) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.492262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:16:33 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:16:33.492265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:16:33 compute-0 nova_compute[253461]: 2025-11-22 04:16:33.819 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:34 compute-0 ceph-mon[75011]: pgmap v2070: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 16 KiB/s wr, 44 op/s
Nov 22 04:16:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 20 KiB/s wr, 45 op/s
Nov 22 04:16:34 compute-0 sshd-session[299540]: Invalid user admin from 27.79.46.85 port 48936
Nov 22 04:16:35 compute-0 sshd-session[299540]: Connection closed by invalid user admin 27.79.46.85 port 48936 [preauth]
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:16:36
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'backups', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log']
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:16:36 compute-0 ceph-mon[75011]: pgmap v2071: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 20 KiB/s wr, 45 op/s
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:16:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 20 KiB/s wr, 41 op/s
Nov 22 04:16:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:38 compute-0 nova_compute[253461]: 2025-11-22 04:16:38.441 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:38 compute-0 ceph-mon[75011]: pgmap v2072: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 20 KiB/s wr, 41 op/s
Nov 22 04:16:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 14 KiB/s wr, 7 op/s
Nov 22 04:16:38 compute-0 nova_compute[253461]: 2025-11-22 04:16:38.822 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:40 compute-0 ceph-mon[75011]: pgmap v2073: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 14 KiB/s wr, 7 op/s
Nov 22 04:16:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 24 KiB/s wr, 8 op/s
Nov 22 04:16:42 compute-0 ceph-mon[75011]: pgmap v2074: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 24 KiB/s wr, 8 op/s
Nov 22 04:16:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 14 KiB/s wr, 1 op/s
Nov 22 04:16:42 compute-0 sudo[299543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:16:42 compute-0 sudo[299543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:42 compute-0 sudo[299543]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:43 compute-0 sudo[299568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:16:43 compute-0 sudo[299568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:43 compute-0 sudo[299568]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:43 compute-0 sudo[299593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:16:43 compute-0 sudo[299593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:43 compute-0 sudo[299593]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:43 compute-0 sudo[299618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 check-host
Nov 22 04:16:43 compute-0 sudo[299618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:43 compute-0 nova_compute[253461]: 2025-11-22 04:16:43.445 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:43 compute-0 sudo[299618]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:16:43 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:16:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:16:43 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:16:43 compute-0 sudo[299663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:16:43 compute-0 sudo[299663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:43 compute-0 sudo[299663]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:43 compute-0 nova_compute[253461]: 2025-11-22 04:16:43.824 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:43 compute-0 sudo[299688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:16:43 compute-0 sudo[299688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:43 compute-0 sudo[299688]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:43 compute-0 sudo[299713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:16:43 compute-0 sudo[299713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:43 compute-0 sudo[299713]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:43 compute-0 sudo[299740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:16:43 compute-0 sudo[299740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:44 compute-0 sudo[299740]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:16:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:16:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:16:44 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:16:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:16:44 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:16:44 compute-0 sshd-session[299714]: Received disconnect from 80.94.93.119 port 44832:11:  [preauth]
Nov 22 04:16:44 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev f566d105-f604-455c-bdbc-d6534fb1d3ce does not exist
Nov 22 04:16:44 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 3cd56ace-9371-4f96-a592-6d4f39271892 does not exist
Nov 22 04:16:44 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev f4d54a12-ad57-41ef-bb2f-7de9f1780f81 does not exist
Nov 22 04:16:44 compute-0 sshd-session[299714]: Disconnected from authenticating user root 80.94.93.119 port 44832 [preauth]
Nov 22 04:16:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:16:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:16:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:16:44 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:16:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:16:44 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:16:44 compute-0 sudo[299797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:16:44 compute-0 sudo[299797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:44 compute-0 sudo[299797]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 15 KiB/s wr, 2 op/s
Nov 22 04:16:44 compute-0 ceph-mon[75011]: pgmap v2075: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 14 KiB/s wr, 1 op/s
Nov 22 04:16:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:16:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:16:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:16:44 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:16:44 compute-0 sudo[299822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:16:44 compute-0 sudo[299822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:44 compute-0 sudo[299822]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:44 compute-0 sudo[299847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:16:44 compute-0 sudo[299847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:44 compute-0 sudo[299847]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:44 compute-0 sudo[299872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:16:44 compute-0 sudo[299872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:45 compute-0 podman[299936]: 2025-11-22 04:16:45.290966941 +0000 UTC m=+0.025625711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:16:45 compute-0 podman[299936]: 2025-11-22 04:16:45.472953247 +0000 UTC m=+0.207611957 container create 430a715bab5ebf4a4756f10af91ee7fdb0fa4e6bb0fea4e5c9ec8e890b6a6300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 04:16:45 compute-0 systemd[1]: Started libpod-conmon-430a715bab5ebf4a4756f10af91ee7fdb0fa4e6bb0fea4e5c9ec8e890b6a6300.scope.
Nov 22 04:16:45 compute-0 podman[299950]: 2025-11-22 04:16:45.616199471 +0000 UTC m=+0.092650065 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 04:16:45 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:16:45 compute-0 podman[299951]: 2025-11-22 04:16:45.645735162 +0000 UTC m=+0.122005176 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 22 04:16:45 compute-0 podman[299936]: 2025-11-22 04:16:45.994976112 +0000 UTC m=+0.729634842 container init 430a715bab5ebf4a4756f10af91ee7fdb0fa4e6bb0fea4e5c9ec8e890b6a6300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:16:46 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:16:46 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:16:46 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:16:46 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:16:46 compute-0 ceph-mon[75011]: pgmap v2076: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 15 KiB/s wr, 2 op/s
Nov 22 04:16:46 compute-0 podman[299936]: 2025-11-22 04:16:46.01451929 +0000 UTC m=+0.749177990 container start 430a715bab5ebf4a4756f10af91ee7fdb0fa4e6bb0fea4e5c9ec8e890b6a6300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Nov 22 04:16:46 compute-0 systemd[1]: libpod-430a715bab5ebf4a4756f10af91ee7fdb0fa4e6bb0fea4e5c9ec8e890b6a6300.scope: Deactivated successfully.
Nov 22 04:16:46 compute-0 eager_burnell[299985]: 167 167
Nov 22 04:16:46 compute-0 conmon[299985]: conmon 430a715bab5ebf4a4756 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-430a715bab5ebf4a4756f10af91ee7fdb0fa4e6bb0fea4e5c9ec8e890b6a6300.scope/container/memory.events
Nov 22 04:16:46 compute-0 podman[299936]: 2025-11-22 04:16:46.051660048 +0000 UTC m=+0.786318748 container attach 430a715bab5ebf4a4756f10af91ee7fdb0fa4e6bb0fea4e5c9ec8e890b6a6300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:16:46 compute-0 podman[299936]: 2025-11-22 04:16:46.053605885 +0000 UTC m=+0.788264585 container died 430a715bab5ebf4a4756f10af91ee7fdb0fa4e6bb0fea4e5c9ec8e890b6a6300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:16:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9461df66628db0eeaa00e90bdde79f998cf4bd4d1a945be2c915996f7f3e06f-merged.mount: Deactivated successfully.
Nov 22 04:16:46 compute-0 podman[299936]: 2025-11-22 04:16:46.239141302 +0000 UTC m=+0.973800002 container remove 430a715bab5ebf4a4756f10af91ee7fdb0fa4e6bb0fea4e5c9ec8e890b6a6300 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Nov 22 04:16:46 compute-0 systemd[1]: libpod-conmon-430a715bab5ebf4a4756f10af91ee7fdb0fa4e6bb0fea4e5c9ec8e890b6a6300.scope: Deactivated successfully.
Nov 22 04:16:46 compute-0 podman[300021]: 2025-11-22 04:16:46.463324916 +0000 UTC m=+0.067326870 container create 74235dc138519d199511cfadf56d79a7fade4cf454b711c7ea5d6219989a6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 04:16:46 compute-0 podman[300021]: 2025-11-22 04:16:46.423927412 +0000 UTC m=+0.027929366 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:16:46 compute-0 systemd[1]: Started libpod-conmon-74235dc138519d199511cfadf56d79a7fade4cf454b711c7ea5d6219989a6d65.scope.
Nov 22 04:16:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02b4b88e132c3a7c8a7f74b6d8c0e07575af3808abb469cd84a380158800607/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02b4b88e132c3a7c8a7f74b6d8c0e07575af3808abb469cd84a380158800607/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02b4b88e132c3a7c8a7f74b6d8c0e07575af3808abb469cd84a380158800607/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02b4b88e132c3a7c8a7f74b6d8c0e07575af3808abb469cd84a380158800607/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e02b4b88e132c3a7c8a7f74b6d8c0e07575af3808abb469cd84a380158800607/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:46 compute-0 podman[300021]: 2025-11-22 04:16:46.633806909 +0000 UTC m=+0.237808873 container init 74235dc138519d199511cfadf56d79a7fade4cf454b711c7ea5d6219989a6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_galois, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:16:46 compute-0 podman[300021]: 2025-11-22 04:16:46.644306256 +0000 UTC m=+0.248308200 container start 74235dc138519d199511cfadf56d79a7fade4cf454b711c7ea5d6219989a6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_galois, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0028901340797356256 of space, bias 1.0, pg target 0.8670402239206877 quantized to 32 (current 32)
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:16:46 compute-0 podman[300021]: 2025-11-22 04:16:46.672084733 +0000 UTC m=+0.276086697 container attach 74235dc138519d199511cfadf56d79a7fade4cf454b711c7ea5d6219989a6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_galois, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 04:16:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 14 KiB/s wr, 9 op/s
Nov 22 04:16:47 compute-0 modest_galois[300037]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:16:47 compute-0 modest_galois[300037]: --> relative data size: 1.0
Nov 22 04:16:47 compute-0 modest_galois[300037]: --> All data devices are unavailable
Nov 22 04:16:47 compute-0 systemd[1]: libpod-74235dc138519d199511cfadf56d79a7fade4cf454b711c7ea5d6219989a6d65.scope: Deactivated successfully.
Nov 22 04:16:47 compute-0 podman[300021]: 2025-11-22 04:16:47.821977609 +0000 UTC m=+1.425979593 container died 74235dc138519d199511cfadf56d79a7fade4cf454b711c7ea5d6219989a6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:16:47 compute-0 systemd[1]: libpod-74235dc138519d199511cfadf56d79a7fade4cf454b711c7ea5d6219989a6d65.scope: Consumed 1.096s CPU time.
Nov 22 04:16:48 compute-0 ceph-mon[75011]: pgmap v2077: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 14 KiB/s wr, 9 op/s
Nov 22 04:16:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e02b4b88e132c3a7c8a7f74b6d8c0e07575af3808abb469cd84a380158800607-merged.mount: Deactivated successfully.
Nov 22 04:16:48 compute-0 podman[300021]: 2025-11-22 04:16:48.36302004 +0000 UTC m=+1.967021994 container remove 74235dc138519d199511cfadf56d79a7fade4cf454b711c7ea5d6219989a6d65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:16:48 compute-0 systemd[1]: libpod-conmon-74235dc138519d199511cfadf56d79a7fade4cf454b711c7ea5d6219989a6d65.scope: Deactivated successfully.
Nov 22 04:16:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:48 compute-0 sudo[299872]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:48 compute-0 nova_compute[253461]: 2025-11-22 04:16:48.450 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:48 compute-0 sudo[300080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:16:48 compute-0 sudo[300080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:48 compute-0 sudo[300080]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:48 compute-0 sudo[300105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:16:48 compute-0 sudo[300105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:48 compute-0 sudo[300105]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:48 compute-0 sudo[300130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:16:48 compute-0 sudo[300130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:48 compute-0 sudo[300130]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:48 compute-0 sudo[300155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:16:48 compute-0 sudo[300155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 14 KiB/s wr, 9 op/s
Nov 22 04:16:48 compute-0 nova_compute[253461]: 2025-11-22 04:16:48.825 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:49 compute-0 podman[300220]: 2025-11-22 04:16:49.117704612 +0000 UTC m=+0.048010479 container create 2236ee2ed552a2dd8160b115ba14aeea5cc49f4de243884e4d6ebd3477b47095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_golick, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:16:49 compute-0 systemd[1]: Started libpod-conmon-2236ee2ed552a2dd8160b115ba14aeea5cc49f4de243884e4d6ebd3477b47095.scope.
Nov 22 04:16:49 compute-0 podman[300220]: 2025-11-22 04:16:49.09533536 +0000 UTC m=+0.025641217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:16:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:16:49 compute-0 podman[300220]: 2025-11-22 04:16:49.239034729 +0000 UTC m=+0.169340636 container init 2236ee2ed552a2dd8160b115ba14aeea5cc49f4de243884e4d6ebd3477b47095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:16:49 compute-0 podman[300220]: 2025-11-22 04:16:49.253156326 +0000 UTC m=+0.183462193 container start 2236ee2ed552a2dd8160b115ba14aeea5cc49f4de243884e4d6ebd3477b47095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_golick, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:16:49 compute-0 condescending_golick[300237]: 167 167
Nov 22 04:16:49 compute-0 systemd[1]: libpod-2236ee2ed552a2dd8160b115ba14aeea5cc49f4de243884e4d6ebd3477b47095.scope: Deactivated successfully.
Nov 22 04:16:49 compute-0 podman[300220]: 2025-11-22 04:16:49.260555754 +0000 UTC m=+0.190861601 container attach 2236ee2ed552a2dd8160b115ba14aeea5cc49f4de243884e4d6ebd3477b47095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:16:49 compute-0 podman[300220]: 2025-11-22 04:16:49.260942503 +0000 UTC m=+0.191248350 container died 2236ee2ed552a2dd8160b115ba14aeea5cc49f4de243884e4d6ebd3477b47095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_golick, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:16:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-12f84d68e0f51169c93e12e06b9975dd3c4aadbe9bdb4c6dea3e273cb7d655fd-merged.mount: Deactivated successfully.
Nov 22 04:16:49 compute-0 podman[300220]: 2025-11-22 04:16:49.317240321 +0000 UTC m=+0.247546168 container remove 2236ee2ed552a2dd8160b115ba14aeea5cc49f4de243884e4d6ebd3477b47095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_golick, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:16:49 compute-0 systemd[1]: libpod-conmon-2236ee2ed552a2dd8160b115ba14aeea5cc49f4de243884e4d6ebd3477b47095.scope: Deactivated successfully.
Nov 22 04:16:49 compute-0 podman[300261]: 2025-11-22 04:16:49.540032991 +0000 UTC m=+0.061435690 container create 436a6a89517f39b47629f6ce3c0f6bd11bf4634b60c8883958d2e6d0cfe0a71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:16:49 compute-0 systemd[1]: Started libpod-conmon-436a6a89517f39b47629f6ce3c0f6bd11bf4634b60c8883958d2e6d0cfe0a71e.scope.
Nov 22 04:16:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9158b6599f0fd7bf09ddfb3b7c5268c3db6a25b5e63e64005c87fbe377567b5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9158b6599f0fd7bf09ddfb3b7c5268c3db6a25b5e63e64005c87fbe377567b5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:49 compute-0 podman[300261]: 2025-11-22 04:16:49.51951519 +0000 UTC m=+0.040917879 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9158b6599f0fd7bf09ddfb3b7c5268c3db6a25b5e63e64005c87fbe377567b5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9158b6599f0fd7bf09ddfb3b7c5268c3db6a25b5e63e64005c87fbe377567b5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:49 compute-0 podman[300261]: 2025-11-22 04:16:49.630183836 +0000 UTC m=+0.151586585 container init 436a6a89517f39b47629f6ce3c0f6bd11bf4634b60c8883958d2e6d0cfe0a71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:16:49 compute-0 podman[300261]: 2025-11-22 04:16:49.639932956 +0000 UTC m=+0.161335655 container start 436a6a89517f39b47629f6ce3c0f6bd11bf4634b60c8883958d2e6d0cfe0a71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hoover, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:16:49 compute-0 podman[300261]: 2025-11-22 04:16:49.645553222 +0000 UTC m=+0.166955991 container attach 436a6a89517f39b47629f6ce3c0f6bd11bf4634b60c8883958d2e6d0cfe0a71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:16:50 compute-0 ceph-mon[75011]: pgmap v2078: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 14 KiB/s wr, 9 op/s
Nov 22 04:16:50 compute-0 funny_hoover[300278]: {
Nov 22 04:16:50 compute-0 funny_hoover[300278]:     "0": [
Nov 22 04:16:50 compute-0 funny_hoover[300278]:         {
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "devices": [
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "/dev/loop3"
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             ],
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_name": "ceph_lv0",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_size": "21470642176",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "name": "ceph_lv0",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "tags": {
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.cluster_name": "ceph",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.crush_device_class": "",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.encrypted": "0",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.osd_id": "0",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.type": "block",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.vdo": "0"
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             },
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "type": "block",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "vg_name": "ceph_vg0"
Nov 22 04:16:50 compute-0 funny_hoover[300278]:         }
Nov 22 04:16:50 compute-0 funny_hoover[300278]:     ],
Nov 22 04:16:50 compute-0 funny_hoover[300278]:     "1": [
Nov 22 04:16:50 compute-0 funny_hoover[300278]:         {
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "devices": [
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "/dev/loop4"
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             ],
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_name": "ceph_lv1",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_size": "21470642176",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "name": "ceph_lv1",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "tags": {
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.cluster_name": "ceph",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.crush_device_class": "",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.encrypted": "0",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.osd_id": "1",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.type": "block",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.vdo": "0"
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             },
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "type": "block",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "vg_name": "ceph_vg1"
Nov 22 04:16:50 compute-0 funny_hoover[300278]:         }
Nov 22 04:16:50 compute-0 funny_hoover[300278]:     ],
Nov 22 04:16:50 compute-0 funny_hoover[300278]:     "2": [
Nov 22 04:16:50 compute-0 funny_hoover[300278]:         {
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "devices": [
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "/dev/loop5"
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             ],
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_name": "ceph_lv2",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_size": "21470642176",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "name": "ceph_lv2",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "tags": {
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.cluster_name": "ceph",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.crush_device_class": "",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.encrypted": "0",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.osd_id": "2",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.type": "block",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:                 "ceph.vdo": "0"
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             },
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "type": "block",
Nov 22 04:16:50 compute-0 funny_hoover[300278]:             "vg_name": "ceph_vg2"
Nov 22 04:16:50 compute-0 funny_hoover[300278]:         }
Nov 22 04:16:50 compute-0 funny_hoover[300278]:     ]
Nov 22 04:16:50 compute-0 funny_hoover[300278]: }
Nov 22 04:16:50 compute-0 nova_compute[253461]: 2025-11-22 04:16:50.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:16:50 compute-0 nova_compute[253461]: 2025-11-22 04:16:50.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:16:50 compute-0 systemd[1]: libpod-436a6a89517f39b47629f6ce3c0f6bd11bf4634b60c8883958d2e6d0cfe0a71e.scope: Deactivated successfully.
Nov 22 04:16:50 compute-0 nova_compute[253461]: 2025-11-22 04:16:50.465 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:50 compute-0 nova_compute[253461]: 2025-11-22 04:16:50.466 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:50 compute-0 nova_compute[253461]: 2025-11-22 04:16:50.466 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:50 compute-0 nova_compute[253461]: 2025-11-22 04:16:50.467 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:16:50 compute-0 nova_compute[253461]: 2025-11-22 04:16:50.468 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:16:50 compute-0 podman[300287]: 2025-11-22 04:16:50.518365631 +0000 UTC m=+0.044948255 container died 436a6a89517f39b47629f6ce3c0f6bd11bf4634b60c8883958d2e6d0cfe0a71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hoover, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:16:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9158b6599f0fd7bf09ddfb3b7c5268c3db6a25b5e63e64005c87fbe377567b5f-merged.mount: Deactivated successfully.
Nov 22 04:16:50 compute-0 ovn_controller[152691]: 2025-11-22T04:16:50Z|00265|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 22 04:16:50 compute-0 podman[300287]: 2025-11-22 04:16:50.688725237 +0000 UTC m=+0.215307831 container remove 436a6a89517f39b47629f6ce3c0f6bd11bf4634b60c8883958d2e6d0cfe0a71e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_hoover, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:16:50 compute-0 systemd[1]: libpod-conmon-436a6a89517f39b47629f6ce3c0f6bd11bf4634b60c8883958d2e6d0cfe0a71e.scope: Deactivated successfully.
Nov 22 04:16:50 compute-0 sudo[300155]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 35 op/s
Nov 22 04:16:50 compute-0 sudo[300322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:16:50 compute-0 sudo[300322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:50 compute-0 sudo[300322]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:50 compute-0 sudo[300347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:16:50 compute-0 sudo[300347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:50 compute-0 sudo[300347]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:50 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:16:50 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3516497714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:50 compute-0 sudo[300372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:16:50 compute-0 nova_compute[253461]: 2025-11-22 04:16:50.925 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:16:50 compute-0 sudo[300372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:50 compute-0 sudo[300372]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:51 compute-0 sudo[300399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:16:51 compute-0 sudo[300399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.056 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.057 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.217 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.219 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4218MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.219 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.219 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.293 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 6c35e937-504d-41a7-876c-b3b295904a3f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.293 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.293 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:16:51 compute-0 podman[300464]: 2025-11-22 04:16:51.309634227 +0000 UTC m=+0.032974451 container create 0a62fb4c633be02487728250e9b84810a37888a2fdbbb89ce7eab1eb72e40f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_almeida, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Nov 22 04:16:51 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3516497714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.333 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:16:51 compute-0 systemd[1]: Started libpod-conmon-0a62fb4c633be02487728250e9b84810a37888a2fdbbb89ce7eab1eb72e40f93.scope.
Nov 22 04:16:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:16:51 compute-0 podman[300464]: 2025-11-22 04:16:51.376674766 +0000 UTC m=+0.100015010 container init 0a62fb4c633be02487728250e9b84810a37888a2fdbbb89ce7eab1eb72e40f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:16:51 compute-0 podman[300464]: 2025-11-22 04:16:51.386210785 +0000 UTC m=+0.109551039 container start 0a62fb4c633be02487728250e9b84810a37888a2fdbbb89ce7eab1eb72e40f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_almeida, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:16:51 compute-0 unruffled_almeida[300481]: 167 167
Nov 22 04:16:51 compute-0 podman[300464]: 2025-11-22 04:16:51.390412837 +0000 UTC m=+0.113753071 container attach 0a62fb4c633be02487728250e9b84810a37888a2fdbbb89ce7eab1eb72e40f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_almeida, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Nov 22 04:16:51 compute-0 systemd[1]: libpod-0a62fb4c633be02487728250e9b84810a37888a2fdbbb89ce7eab1eb72e40f93.scope: Deactivated successfully.
Nov 22 04:16:51 compute-0 conmon[300481]: conmon 0a62fb4c633be0248772 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0a62fb4c633be02487728250e9b84810a37888a2fdbbb89ce7eab1eb72e40f93.scope/container/memory.events
Nov 22 04:16:51 compute-0 podman[300464]: 2025-11-22 04:16:51.294509985 +0000 UTC m=+0.017850229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:16:51 compute-0 podman[300464]: 2025-11-22 04:16:51.391763645 +0000 UTC m=+0.115103869 container died 0a62fb4c633be02487728250e9b84810a37888a2fdbbb89ce7eab1eb72e40f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:16:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b82d05de14d3a8a7852dc6016115ae6060be3b6ac4fc276cf8b30ae3d9a1c03-merged.mount: Deactivated successfully.
Nov 22 04:16:51 compute-0 podman[300464]: 2025-11-22 04:16:51.442541885 +0000 UTC m=+0.165882119 container remove 0a62fb4c633be02487728250e9b84810a37888a2fdbbb89ce7eab1eb72e40f93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_almeida, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:16:51 compute-0 systemd[1]: libpod-conmon-0a62fb4c633be02487728250e9b84810a37888a2fdbbb89ce7eab1eb72e40f93.scope: Deactivated successfully.
Nov 22 04:16:51 compute-0 podman[300523]: 2025-11-22 04:16:51.62249246 +0000 UTC m=+0.040393727 container create 1a021c2c726a434b6553e7b7aeb899a4dc198bf7dd6f42ad7b0f5ca056811e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 04:16:51 compute-0 systemd[1]: Started libpod-conmon-1a021c2c726a434b6553e7b7aeb899a4dc198bf7dd6f42ad7b0f5ca056811e61.scope.
Nov 22 04:16:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701ce67bbdfa4e570527d6aa3fa8b55f13a8da760323d810c86980d990ee20f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701ce67bbdfa4e570527d6aa3fa8b55f13a8da760323d810c86980d990ee20f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701ce67bbdfa4e570527d6aa3fa8b55f13a8da760323d810c86980d990ee20f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/701ce67bbdfa4e570527d6aa3fa8b55f13a8da760323d810c86980d990ee20f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:16:51 compute-0 podman[300523]: 2025-11-22 04:16:51.606233162 +0000 UTC m=+0.024134449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:16:51 compute-0 podman[300523]: 2025-11-22 04:16:51.71210908 +0000 UTC m=+0.130010367 container init 1a021c2c726a434b6553e7b7aeb899a4dc198bf7dd6f42ad7b0f5ca056811e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_beaver, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 04:16:51 compute-0 podman[300523]: 2025-11-22 04:16:51.724799488 +0000 UTC m=+0.142700745 container start 1a021c2c726a434b6553e7b7aeb899a4dc198bf7dd6f42ad7b0f5ca056811e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_beaver, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Nov 22 04:16:51 compute-0 podman[300523]: 2025-11-22 04:16:51.729253047 +0000 UTC m=+0.147154344 container attach 1a021c2c726a434b6553e7b7aeb899a4dc198bf7dd6f42ad7b0f5ca056811e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_beaver, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:16:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:16:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/166157654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.792 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.801 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.821 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.849 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:16:51 compute-0 nova_compute[253461]: 2025-11-22 04:16:51.849 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:52 compute-0 ceph-mon[75011]: pgmap v2079: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 14 KiB/s wr, 35 op/s
Nov 22 04:16:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/166157654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.0 KiB/s wr, 50 op/s
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]: {
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "osd_id": 1,
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "type": "bluestore"
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:     },
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "osd_id": 0,
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "type": "bluestore"
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:     },
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "osd_id": 2,
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:         "type": "bluestore"
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]:     }
Nov 22 04:16:52 compute-0 peaceful_beaver[300540]: }
Nov 22 04:16:52 compute-0 systemd[1]: libpod-1a021c2c726a434b6553e7b7aeb899a4dc198bf7dd6f42ad7b0f5ca056811e61.scope: Deactivated successfully.
Nov 22 04:16:52 compute-0 podman[300523]: 2025-11-22 04:16:52.857392787 +0000 UTC m=+1.275294064 container died 1a021c2c726a434b6553e7b7aeb899a4dc198bf7dd6f42ad7b0f5ca056811e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_beaver, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:16:52 compute-0 systemd[1]: libpod-1a021c2c726a434b6553e7b7aeb899a4dc198bf7dd6f42ad7b0f5ca056811e61.scope: Consumed 1.135s CPU time.
Nov 22 04:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-701ce67bbdfa4e570527d6aa3fa8b55f13a8da760323d810c86980d990ee20f6-merged.mount: Deactivated successfully.
Nov 22 04:16:52 compute-0 podman[300523]: 2025-11-22 04:16:52.932792963 +0000 UTC m=+1.350694230 container remove 1a021c2c726a434b6553e7b7aeb899a4dc198bf7dd6f42ad7b0f5ca056811e61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_beaver, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:16:52 compute-0 systemd[1]: libpod-conmon-1a021c2c726a434b6553e7b7aeb899a4dc198bf7dd6f42ad7b0f5ca056811e61.scope: Deactivated successfully.
Nov 22 04:16:52 compute-0 sudo[300399]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:16:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:16:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:16:52 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:16:52 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev bad8141a-ed79-4f29-aa6d-db9761b11388 does not exist
Nov 22 04:16:52 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 11b646a1-6036-490b-9c67-17237780c7aa does not exist
Nov 22 04:16:53 compute-0 sudo[300587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:16:53 compute-0 sudo[300587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:53 compute-0 sudo[300587]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:53 compute-0 sudo[300612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:16:53 compute-0 sudo[300612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:16:53 compute-0 sudo[300612]: pam_unix(sudo:session): session closed for user root
Nov 22 04:16:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.453 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.525 253465 DEBUG oslo_concurrency.lockutils [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "6c35e937-504d-41a7-876c-b3b295904a3f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.526 253465 DEBUG oslo_concurrency.lockutils [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.526 253465 DEBUG oslo_concurrency.lockutils [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.527 253465 DEBUG oslo_concurrency.lockutils [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.527 253465 DEBUG oslo_concurrency.lockutils [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.529 253465 INFO nova.compute.manager [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Terminating instance
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.531 253465 DEBUG nova.compute.manager [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:16:53 compute-0 kernel: tap01e9eaea-6f (unregistering): left promiscuous mode
Nov 22 04:16:53 compute-0 NetworkManager[48916]: <info>  [1763785013.5815] device (tap01e9eaea-6f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:16:53 compute-0 ovn_controller[152691]: 2025-11-22T04:16:53Z|00266|binding|INFO|Releasing lport 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a from this chassis (sb_readonly=0)
Nov 22 04:16:53 compute-0 ovn_controller[152691]: 2025-11-22T04:16:53Z|00267|binding|INFO|Setting lport 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a down in Southbound
Nov 22 04:16:53 compute-0 ovn_controller[152691]: 2025-11-22T04:16:53Z|00268|binding|INFO|Removing iface tap01e9eaea-6f ovn-installed in OVS
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.591 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:53.604 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:9c:d4 10.100.0.12'], port_security=['fa:16:3e:71:9c:d4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6c35e937-504d-41a7-876c-b3b295904a3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-73bcc005-88ac-46b6-ad11-6207c6046246', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98e4451f91104cd88f6e19dd5c53fd00', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e95e3fed-bcd6-449d-9f95-3b75633f02f7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.175'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8139379-e220-4788-92e4-b495f0c34eb7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=01e9eaea-6fba-4e12-9b9a-770c2ce71a2a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:16:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:53.605 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a in datapath 73bcc005-88ac-46b6-ad11-6207c6046246 unbound from our chassis
Nov 22 04:16:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:53.607 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 73bcc005-88ac-46b6-ad11-6207c6046246, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:16:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:53.609 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8456c351-828f-40d3-8ec8-4cc7fb6b21bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:53 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:53.609 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 namespace which is not needed anymore
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.619 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:53 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Deactivated successfully.
Nov 22 04:16:53 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Consumed 16.648s CPU time.
Nov 22 04:16:53 compute-0 systemd-machined[215728]: Machine qemu-25-instance-00000019 terminated.
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.774 253465 INFO nova.virt.libvirt.driver [-] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Instance destroyed successfully.
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.775 253465 DEBUG nova.objects.instance [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lazy-loading 'resources' on Instance uuid 6c35e937-504d-41a7-876c-b3b295904a3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:16:53 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[299451]: [NOTICE]   (299455) : haproxy version is 2.8.14-c23fe91
Nov 22 04:16:53 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[299451]: [NOTICE]   (299455) : path to executable is /usr/sbin/haproxy
Nov 22 04:16:53 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[299451]: [WARNING]  (299455) : Exiting Master process...
Nov 22 04:16:53 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[299451]: [WARNING]  (299455) : Exiting Master process...
Nov 22 04:16:53 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[299451]: [ALERT]    (299455) : Current worker (299457) exited with code 143 (Terminated)
Nov 22 04:16:53 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[299451]: [WARNING]  (299455) : All workers exited. Exiting... (0)
Nov 22 04:16:53 compute-0 systemd[1]: libpod-947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798.scope: Deactivated successfully.
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.790 253465 DEBUG nova.virt.libvirt.vif [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:15:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1109852748',display_name='tempest-TransferEncryptedVolumeTest-server-1109852748',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1109852748',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMF3KIMSa8o4WCzM1VgX9RcGz4FcpwcrZcdUDFLNYpjBj2lzhaXFrO0bSdzjU9Itff6b3BySQo/nLrhI32bk8GIfHP/n0NuDArjdwgS2hsu8vteQ0u/zEQY1VMKJGLhTNw==',key_name='tempest-TransferEncryptedVolumeTest-1133237278',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:16:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98e4451f91104cd88f6e19dd5c53fd00',ramdisk_id='',reservation_id='r-p2qdq1ep',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1500496447',owner_user_name='tempest-TransferEncryptedVolumeTest-1500496447-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:16:12Z,user_data=None,user_id='ddff25657c74403e9ed9e91ff227badd',uuid=6c35e937-504d-41a7-876c-b3b295904a3f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "address": "fa:16:3e:71:9c:d4", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01e9eaea-6f", "ovs_interfaceid": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.791 253465 DEBUG nova.network.os_vif_util [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converting VIF {"id": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "address": "fa:16:3e:71:9c:d4", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap01e9eaea-6f", "ovs_interfaceid": "01e9eaea-6fba-4e12-9b9a-770c2ce71a2a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.791 253465 DEBUG nova.network.os_vif_util [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=01e9eaea-6fba-4e12-9b9a-770c2ce71a2a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01e9eaea-6f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.792 253465 DEBUG os_vif [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=01e9eaea-6fba-4e12-9b9a-770c2ce71a2a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01e9eaea-6f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.794 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.794 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01e9eaea-6f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.795 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:53 compute-0 podman[300662]: 2025-11-22 04:16:53.796954454 +0000 UTC m=+0.060259369 container died 947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.797 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.800 253465 INFO os_vif [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:9c:d4,bridge_name='br-int',has_traffic_filtering=True,id=01e9eaea-6fba-4e12-9b9a-770c2ce71a2a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap01e9eaea-6f')
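[editor's note] The unplug above runs a single ovsdbapp transaction (the DelPortCommand logged a few lines earlier). A standalone sketch of the equivalent call, assuming the default local Open vSwitch socket path:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed local socket path

idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# Same operation as DelPortCommand(port=tap01e9eaea-6f, bridge=br-int,
# if_exists=True) in the nova_compute log line above.
ovs.del_port('tap01e9eaea-6f', bridge='br-int', if_exists=True).execute(
    check_error=True)
```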
Nov 22 04:16:53 compute-0 nova_compute[253461]: 2025-11-22 04:16:53.826 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798-userdata-shm.mount: Deactivated successfully.
Nov 22 04:16:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd07d698afbcd108deb20b9664fd5384ecb5da89e6c2f17262c2a1faa0e1e843-merged.mount: Deactivated successfully.
Nov 22 04:16:53 compute-0 podman[300662]: 2025-11-22 04:16:53.85431603 +0000 UTC m=+0.117620935 container cleanup 947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:16:53 compute-0 systemd[1]: libpod-conmon-947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798.scope: Deactivated successfully.
Nov 22 04:16:54 compute-0 ceph-mon[75011]: pgmap v2080: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 4.0 KiB/s wr, 50 op/s
Nov 22 04:16:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:16:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:16:54 compute-0 podman[300722]: 2025-11-22 04:16:54.04473471 +0000 UTC m=+0.170989748 container remove 947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:16:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:54.050 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[7d9dc594-7407-41fd-ac4c-f3cd7beafbbd]: (4, ('Sat Nov 22 04:16:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 (947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798)\n947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798\nSat Nov 22 04:16:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 (947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798)\n947a5cfaf76e423b7173a0264873f4b0db0afcf6fe3625bea96ed7df11518798\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:54.052 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[34181b51-bfdc-4d9e-a596-5f4b03ea3429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
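[editor's note] The "privsep: reply[...]" lines are oslo.privsep round-trips: the unprivileged agent calls a decorated entrypoint, the root daemon executes it, and the return value is marshalled back and logged as the reply payload. A minimal sketch of such an entrypoint; the context definition and function name here are illustrative (neutron ships its own in neutron/privileged):

```python
from oslo_privsep import capabilities as caps
from oslo_privsep import priv_context

# Assumed context; real deployments configure this via the [privsep] section.
default = priv_context.PrivContext(
    __name__,
    cfg_section='privsep',
    capabilities=[caps.CAP_SYS_ADMIN, caps.CAP_NET_ADMIN],
)

@default.entrypoint
def stop_metadata_proxy(network_id):  # hypothetical entrypoint
    """Runs inside the root privsep daemon; whatever this returns is what
    shows up in the reply[...] tuple in the agent log."""
    ...
```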
Nov 22 04:16:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:54.053 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73bcc005-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.090 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:54 compute-0 kernel: tap73bcc005-80: left promiscuous mode
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.102 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:54.106 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ed8936da-358f-4b7b-83ce-d9f7ba0ea9a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:54.128 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6ac16c-7f96-4636-9f87-6dc5e95555b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:54.129 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c31e90-0dca-430f-bfee-c97e6e9b20b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:54.147 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c20246f1-ae4d-4985-a64f-25bc697c8bd8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522259, 'reachable_time': 30542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300737, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:16:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d73bcc005\x2d88ac\x2d46b6\x2dad11\x2d6207c6046246.mount: Deactivated successfully.
Nov 22 04:16:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:54.150 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:16:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:54.150 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[6d85d889-cab9-4e78-968d-4dcaf81ea943]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
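[editor's note] The remove_netns call logged above ultimately wraps pyroute2. A sketch of the equivalent privileged call (requires root; the namespace name is taken from the log), mirroring neutron.privileged.agent.linux.ip_lib.remove_netns in tolerating an already-absent namespace:

```python
import errno
from pyroute2 import netns

def remove_netns(name):
    # Deleting a namespace that no longer exists is not an error here.
    try:
        netns.remove(name)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise

remove_netns('ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246')
```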
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.251 253465 DEBUG nova.compute.manager [req-e014dad3-6d31-497b-abea-bc46edd94560 req-63022c1d-99a3-4d8b-956a-5934c4a57a27 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Received event network-vif-unplugged-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.252 253465 DEBUG oslo_concurrency.lockutils [req-e014dad3-6d31-497b-abea-bc46edd94560 req-63022c1d-99a3-4d8b-956a-5934c4a57a27 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.252 253465 DEBUG oslo_concurrency.lockutils [req-e014dad3-6d31-497b-abea-bc46edd94560 req-63022c1d-99a3-4d8b-956a-5934c4a57a27 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.252 253465 DEBUG oslo_concurrency.lockutils [req-e014dad3-6d31-497b-abea-bc46edd94560 req-63022c1d-99a3-4d8b-956a-5934c4a57a27 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.253 253465 DEBUG nova.compute.manager [req-e014dad3-6d31-497b-abea-bc46edd94560 req-63022c1d-99a3-4d8b-956a-5934c4a57a27 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] No waiting events found dispatching network-vif-unplugged-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.253 253465 DEBUG nova.compute.manager [req-e014dad3-6d31-497b-abea-bc46edd94560 req-63022c1d-99a3-4d8b-956a-5934c4a57a27 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Received event network-vif-unplugged-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
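[editor's note] The Acquiring/acquired/released triple above is oslo.concurrency's process-local lock, keyed on "<instance uuid>-events". A sketch of the same pattern:

```python
from oslo_concurrency import lockutils

# Process-local (external=False by default) lock, same naming scheme
# as the nova InstanceEvents lock in the log above.
with lockutils.lock('6c35e937-504d-41a7-876c-b3b295904a3f-events'):
    # pop_instance_event()-style work happens under the lock
    pass
```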
Nov 22 04:16:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:54.440 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:16:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:54.442 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.442 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:54 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:16:54.443 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
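[editor's note] The DbSetCommand above writes the acknowledged nb_cfg back into Chassis_Private.external_ids in the OVN southbound DB. A standalone sketch; the socket path is an assumption, and the agent additionally passes if_exists=True:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.ovn_southbound import impl_idl

SB = 'unix:/run/ovn/ovnsb_db.sock'  # assumed local SB socket

idl = connection.OvsdbIdl.from_server(SB, 'OVN_Southbound')
sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

sb.db_set('Chassis_Private', '7d76f7df-fc3b-449d-b505-65b8b0ef9c3a',
          ('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'})).execute(
              check_error=True)
```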
Nov 22 04:16:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 202 KiB/s rd, 4.0 KiB/s wr, 65 op/s
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.845 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.846 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.846 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.847 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.871 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.872 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.872 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.872 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:16:54 compute-0 nova_compute[253461]: 2025-11-22 04:16:54.873 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
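[editor's note] The "Running periodic task ComputeManager._*" lines are oslo.service periodic tasks. A minimal sketch of how such a task is declared and driven (the spacing value is illustrative):

```python
from oslo_config import cfg
from oslo_service import periodic_task

class Tasks(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _reclaim_queued_deletes(self, context):
        # Nova returns early when CONF.reclaim_instance_interval <= 0,
        # which produces the "skipping..." line above.
        pass

tasks = Tasks(cfg.CONF)
tasks.run_periodic_tasks(context=None)
```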
Nov 22 04:16:55 compute-0 nova_compute[253461]: 2025-11-22 04:16:55.158 253465 INFO nova.virt.libvirt.driver [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Deleting instance files /var/lib/nova/instances/6c35e937-504d-41a7-876c-b3b295904a3f_del
Nov 22 04:16:55 compute-0 nova_compute[253461]: 2025-11-22 04:16:55.158 253465 INFO nova.virt.libvirt.driver [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Deletion of /var/lib/nova/instances/6c35e937-504d-41a7-876c-b3b295904a3f_del complete
Nov 22 04:16:55 compute-0 nova_compute[253461]: 2025-11-22 04:16:55.353 253465 INFO nova.compute.manager [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Took 1.82 seconds to destroy the instance on the hypervisor.
Nov 22 04:16:55 compute-0 nova_compute[253461]: 2025-11-22 04:16:55.353 253465 DEBUG oslo.service.loopingcall [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:16:55 compute-0 nova_compute[253461]: 2025-11-22 04:16:55.354 253465 DEBUG nova.compute.manager [-] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:16:55 compute-0 nova_compute[253461]: 2025-11-22 04:16:55.354 253465 DEBUG nova.network.neutron [-] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:16:56 compute-0 ceph-mon[75011]: pgmap v2081: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 202 KiB/s rd, 4.0 KiB/s wr, 65 op/s
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.365 253465 DEBUG nova.compute.manager [req-7cd222b4-a2e2-4569-b4a9-f66b92cc754a req-4a0e545e-ffde-4e75-8f55-7b6284ceb268 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Received event network-vif-plugged-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.365 253465 DEBUG oslo_concurrency.lockutils [req-7cd222b4-a2e2-4569-b4a9-f66b92cc754a req-4a0e545e-ffde-4e75-8f55-7b6284ceb268 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.365 253465 DEBUG oslo_concurrency.lockutils [req-7cd222b4-a2e2-4569-b4a9-f66b92cc754a req-4a0e545e-ffde-4e75-8f55-7b6284ceb268 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.366 253465 DEBUG oslo_concurrency.lockutils [req-7cd222b4-a2e2-4569-b4a9-f66b92cc754a req-4a0e545e-ffde-4e75-8f55-7b6284ceb268 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.366 253465 DEBUG nova.compute.manager [req-7cd222b4-a2e2-4569-b4a9-f66b92cc754a req-4a0e545e-ffde-4e75-8f55-7b6284ceb268 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] No waiting events found dispatching network-vif-plugged-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.366 253465 WARNING nova.compute.manager [req-7cd222b4-a2e2-4569-b4a9-f66b92cc754a req-4a0e545e-ffde-4e75-8f55-7b6284ceb268 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Received unexpected event network-vif-plugged-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a for instance with vm_state active and task_state deleting.
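[editor's note] The network-vif-plugged/unplugged/deleted notifications above arrive through Nova's os-server-external-events API, which Neutron posts to on port binding changes. A hedged sketch of the request shape only, with placeholder endpoint and token:

```python
import requests

NOVA = 'http://nova-api:8774/v2.1'   # placeholder endpoint
TOKEN = '<keystone token>'           # placeholder credential

requests.post(
    f'{NOVA}/os-server-external-events',
    headers={'X-Auth-Token': TOKEN},
    json={'events': [{
        'name': 'network-vif-plugged',
        'server_uuid': '6c35e937-504d-41a7-876c-b3b295904a3f',
        'tag': '01e9eaea-6fba-4e12-9b9a-770c2ce71a2a',
        'status': 'completed',
    }]})
```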
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.508 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.694 253465 DEBUG nova.network.neutron [-] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:16:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 3.1 KiB/s wr, 74 op/s
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.760 253465 DEBUG nova.compute.manager [req-e2883b74-b9d4-40e8-bf2a-2d5b8e4be088 req-6b6f2106-f071-42d5-96b9-77d7587433f9 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Received event network-vif-deleted-01e9eaea-6fba-4e12-9b9a-770c2ce71a2a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.761 253465 INFO nova.compute.manager [req-e2883b74-b9d4-40e8-bf2a-2d5b8e4be088 req-6b6f2106-f071-42d5-96b9-77d7587433f9 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Neutron deleted interface 01e9eaea-6fba-4e12-9b9a-770c2ce71a2a; detaching it from the instance and deleting it from the info cache
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.761 253465 DEBUG nova.network.neutron [req-e2883b74-b9d4-40e8-bf2a-2d5b8e4be088 req-6b6f2106-f071-42d5-96b9-77d7587433f9 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.776 253465 INFO nova.compute.manager [-] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Took 1.42 seconds to deallocate network for instance.
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.793 253465 DEBUG nova.compute.manager [req-e2883b74-b9d4-40e8-bf2a-2d5b8e4be088 req-6b6f2106-f071-42d5-96b9-77d7587433f9 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Detach interface failed, port_id=01e9eaea-6fba-4e12-9b9a-770c2ce71a2a, reason: Instance 6c35e937-504d-41a7-876c-b3b295904a3f could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Nov 22 04:16:56 compute-0 nova_compute[253461]: 2025-11-22 04:16:56.984 253465 INFO nova.compute.manager [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Took 0.21 seconds to detach 1 volumes for instance.
Nov 22 04:16:57 compute-0 nova_compute[253461]: 2025-11-22 04:16:57.096 253465 DEBUG oslo_concurrency.lockutils [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:16:57 compute-0 nova_compute[253461]: 2025-11-22 04:16:57.096 253465 DEBUG oslo_concurrency.lockutils [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:16:57 compute-0 nova_compute[253461]: 2025-11-22 04:16:57.152 253465 DEBUG oslo_concurrency.processutils [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:16:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:16:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2770952957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:57 compute-0 nova_compute[253461]: 2025-11-22 04:16:57.574 253465 DEBUG oslo_concurrency.processutils [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
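[editor's note] processutils.execute is a thin subprocess wrapper; the "Running cmd"/"CMD ... returned: 0" pair above corresponds to:

```python
from oslo_concurrency import processutils

# Returns (stdout, stderr); raises ProcessExecutionError on non-zero exit.
out, err = processutils.execute(
    'ceph', 'df', '--format=json', '--id', 'openstack',
    '--conf', '/etc/ceph/ceph.conf')
```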
Nov 22 04:16:57 compute-0 nova_compute[253461]: 2025-11-22 04:16:57.580 253465 DEBUG nova.compute.provider_tree [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:16:57 compute-0 nova_compute[253461]: 2025-11-22 04:16:57.599 253465 DEBUG nova.scheduler.client.report [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:16:57 compute-0 nova_compute[253461]: 2025-11-22 04:16:57.622 253465 DEBUG oslo_concurrency.lockutils [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:57 compute-0 nova_compute[253461]: 2025-11-22 04:16:57.648 253465 INFO nova.scheduler.client.report [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Deleted allocations for instance 6c35e937-504d-41a7-876c-b3b295904a3f
Nov 22 04:16:57 compute-0 nova_compute[253461]: 2025-11-22 04:16:57.713 253465 DEBUG oslo_concurrency.lockutils [None req-a45a4aaf-e37f-4c97-87fc-1eada001f3b0 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6c35e937-504d-41a7-876c-b3b295904a3f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:16:58 compute-0 ceph-mon[75011]: pgmap v2082: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 253 KiB/s rd, 3.1 KiB/s wr, 74 op/s
Nov 22 04:16:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2770952957' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:16:58 compute-0 nova_compute[253461]: 2025-11-22 04:16:58.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:16:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:16:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 85 B/s wr, 66 op/s
Nov 22 04:16:58 compute-0 nova_compute[253461]: 2025-11-22 04:16:58.797 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:16:58 compute-0 nova_compute[253461]: 2025-11-22 04:16:58.828 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:00 compute-0 ceph-mon[75011]: pgmap v2083: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 249 KiB/s rd, 85 B/s wr, 66 op/s
Nov 22 04:17:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:17:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3935471849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:17:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:17:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3935471849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:17:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 682 B/s wr, 69 op/s
Nov 22 04:17:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:17:01 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2781439067' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:17:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:17:01 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2781439067' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
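[editor's note] The handle_command mon_command({"prefix":"df"...}) dispatches above are what the rados Python binding issues under the hood. A sketch using the same client identity as the audit lines:

```python
import json
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()
try:
    # Same mon command as the dispatched {"prefix": "df", "format": "json"}.
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({'prefix': 'df', 'format': 'json'}), b'')
    stats = json.loads(outbuf)
finally:
    cluster.shutdown()
```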
Nov 22 04:17:01 compute-0 podman[300763]: 2025-11-22 04:17:01.421318505 +0000 UTC m=+0.101002801 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
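[editor's note] The health_status=healthy field above comes from podman running the container's configured test command ('/openstack/healthcheck'). The same check can be triggered by hand; a sketch:

```python
import subprocess

# 'podman healthcheck run' executes the configured test inside the
# container and exits 0 when healthy.
rc = subprocess.run(['podman', 'healthcheck', 'run', 'multipathd']).returncode
print('healthy' if rc == 0 else 'unhealthy')
```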
Nov 22 04:17:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3935471849' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:17:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3935471849' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:17:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2781439067' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:17:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2781439067' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:17:02 compute-0 nova_compute[253461]: 2025-11-22 04:17:02.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:17:02 compute-0 ceph-mon[75011]: pgmap v2084: 305 pgs: 305 active+clean; 270 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 250 KiB/s rd, 682 B/s wr, 69 op/s
Nov 22 04:17:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 205 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 236 KiB/s rd, 938 B/s wr, 47 op/s
Nov 22 04:17:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:03 compute-0 nova_compute[253461]: 2025-11-22 04:17:03.802 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:03 compute-0 nova_compute[253461]: 2025-11-22 04:17:03.830 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:04 compute-0 ceph-mon[75011]: pgmap v2085: 305 pgs: 305 active+clean; 205 MiB data, 609 MiB used, 59 GiB / 60 GiB avail; 236 KiB/s rd, 938 B/s wr, 47 op/s
Nov 22 04:17:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 150 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 1.2 KiB/s wr, 35 op/s
Nov 22 04:17:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:17:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:17:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:17:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:17:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:17:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:17:06 compute-0 ceph-mon[75011]: pgmap v2086: 305 pgs: 305 active+clean; 150 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 228 KiB/s rd, 1.2 KiB/s wr, 35 op/s
Nov 22 04:17:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Nov 22 04:17:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:08 compute-0 ceph-mon[75011]: pgmap v2087: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 1.2 KiB/s wr, 32 op/s
Nov 22 04:17:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Nov 22 04:17:08 compute-0 nova_compute[253461]: 2025-11-22 04:17:08.772 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763785013.771374, 6c35e937-504d-41a7-876c-b3b295904a3f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:17:08 compute-0 nova_compute[253461]: 2025-11-22 04:17:08.773 253465 INFO nova.compute.manager [-] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] VM Stopped (Lifecycle Event)
Nov 22 04:17:08 compute-0 nova_compute[253461]: 2025-11-22 04:17:08.804 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:08 compute-0 nova_compute[253461]: 2025-11-22 04:17:08.832 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:08 compute-0 nova_compute[253461]: 2025-11-22 04:17:08.840 253465 DEBUG nova.compute.manager [None req-99a525e4-45a8-4108-9179-937a7b794118 - - - - - -] [instance: 6c35e937-504d-41a7-876c-b3b295904a3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
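[editor's note] The "VM Stopped (Lifecycle Event)" above is delivered to nova-compute by libvirt's event loop. A minimal sketch of the underlying registration with libvirt-python:

```python
import libvirt

libvirt.virEventRegisterDefaultImpl()
conn = libvirt.open('qemu:///system')

def on_lifecycle(conn, dom, event, detail, opaque):
    # Nova maps VIR_DOMAIN_EVENT_STOPPED to the "VM Stopped
    # (Lifecycle Event)" log line above.
    if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
        print('stopped:', dom.UUIDString())

conn.domainEventRegisterAny(
    None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, on_lifecycle, None)
while True:
    libvirt.virEventRunDefaultImpl()
```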
Nov 22 04:17:10 compute-0 ceph-mon[75011]: pgmap v2088: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Nov 22 04:17:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Nov 22 04:17:12 compute-0 ceph-mon[75011]: pgmap v2089: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 21 op/s
Nov 22 04:17:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 596 B/s wr, 18 op/s
Nov 22 04:17:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:13 compute-0 nova_compute[253461]: 2025-11-22 04:17:13.808 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:13 compute-0 nova_compute[253461]: 2025-11-22 04:17:13.835 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:14 compute-0 ceph-mon[75011]: pgmap v2090: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 596 B/s wr, 18 op/s
Nov 22 04:17:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 693 KiB/s rd, 341 B/s wr, 16 op/s
Nov 22 04:17:15 compute-0 ceph-mon[75011]: pgmap v2091: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 693 KiB/s rd, 341 B/s wr, 16 op/s
Nov 22 04:17:16 compute-0 podman[300783]: 2025-11-22 04:17:16.404023603 +0000 UTC m=+0.078165134 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251118, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 22 04:17:16 compute-0 podman[300784]: 2025-11-22 04:17:16.438717837 +0000 UTC m=+0.117032079 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:17:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 17 op/s
Nov 22 04:17:18 compute-0 ceph-mon[75011]: pgmap v2092: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 17 op/s
Nov 22 04:17:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 22 04:17:18 compute-0 nova_compute[253461]: 2025-11-22 04:17:18.811 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:18 compute-0 nova_compute[253461]: 2025-11-22 04:17:18.837 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:20 compute-0 ceph-mon[75011]: pgmap v2093: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 22 04:17:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 21 KiB/s wr, 7 op/s
Nov 22 04:17:22 compute-0 ceph-mon[75011]: pgmap v2094: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 21 KiB/s wr, 7 op/s
Nov 22 04:17:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:17:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:23.033 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:23.033 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:23.034 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:23 compute-0 nova_compute[253461]: 2025-11-22 04:17:23.815 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:23 compute-0 nova_compute[253461]: 2025-11-22 04:17:23.839 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:24 compute-0 ceph-mon[75011]: pgmap v2095: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:17:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:17:26 compute-0 ceph-mon[75011]: pgmap v2096: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:17:26 compute-0 ovn_controller[152691]: 2025-11-22T04:17:26Z|00269|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Nov 22 04:17:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:17:27 compute-0 ceph-mon[75011]: pgmap v2097: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:17:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 22 04:17:28 compute-0 nova_compute[253461]: 2025-11-22 04:17:28.819 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:28 compute-0 nova_compute[253461]: 2025-11-22 04:17:28.841 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:29 compute-0 ceph-mon[75011]: pgmap v2098: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 22 04:17:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 22 04:17:32 compute-0 ceph-mon[75011]: pgmap v2099: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.069507) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785052069884, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 733, "num_deletes": 251, "total_data_size": 930708, "memory_usage": 944624, "flush_reason": "Manual Compaction"}
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785052086354, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 911209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41156, "largest_seqno": 41888, "table_properties": {"data_size": 907371, "index_size": 1618, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8614, "raw_average_key_size": 19, "raw_value_size": 899761, "raw_average_value_size": 2035, "num_data_blocks": 72, "num_entries": 442, "num_filter_entries": 442, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763784994, "oldest_key_time": 1763784994, "file_creation_time": 1763785052, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 16937 microseconds, and 4725 cpu microseconds.
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.086413) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 911209 bytes OK
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.086473) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.090984) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.091004) EVENT_LOG_v1 {"time_micros": 1763785052090997, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.091028) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 926934, prev total WAL file size 926934, number of live WAL files 2.
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.091849) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(889KB)], [86(10MB)]
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785052091935, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 12248746, "oldest_snapshot_seqno": -1}
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 7127 keys, 10471634 bytes, temperature: kUnknown
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785052209161, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10471634, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10419243, "index_size": 33463, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 181710, "raw_average_key_size": 25, "raw_value_size": 10286529, "raw_average_value_size": 1443, "num_data_blocks": 1319, "num_entries": 7127, "num_filter_entries": 7127, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763785052, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.209559) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10471634 bytes
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.213814) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 104.4 rd, 89.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.8 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(24.9) write-amplify(11.5) OK, records in: 7640, records dropped: 513 output_compression: NoCompression
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.213843) EVENT_LOG_v1 {"time_micros": 1763785052213828, "job": 50, "event": "compaction_finished", "compaction_time_micros": 117324, "compaction_time_cpu_micros": 35620, "output_level": 6, "num_output_files": 1, "total_output_size": 10471634, "num_input_records": 7640, "num_output_records": 7127, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785052214173, "job": 50, "event": "table_file_deletion", "file_number": 88}
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785052216886, "job": 50, "event": "table_file_deletion", "file_number": 86}
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.091688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.216953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.216959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.216961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.216963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:17:32 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:17:32.216965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:17:32 compute-0 podman[300824]: 2025-11-22 04:17:32.420498059 +0000 UTC m=+0.094940398 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:17:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 511 B/s wr, 4 op/s
Nov 22 04:17:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:33 compute-0 nova_compute[253461]: 2025-11-22 04:17:33.824 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:33 compute-0 nova_compute[253461]: 2025-11-22 04:17:33.843 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:34 compute-0 ceph-mon[75011]: pgmap v2100: 305 pgs: 305 active+clean; 88 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 511 B/s wr, 4 op/s
Nov 22 04:17:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 136 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 4.0 MiB/s wr, 9 op/s
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:17:36
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'images', 'backups', '.rgw.root', 'vms', 'default.rgw.meta', '.mgr']
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:17:36 compute-0 ceph-mon[75011]: pgmap v2101: 305 pgs: 305 active+clean; 136 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 6.0 KiB/s rd, 4.0 MiB/s wr, 9 op/s
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:17:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:17:38 compute-0 ceph-mon[75011]: pgmap v2102: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:17:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:17:38 compute-0 nova_compute[253461]: 2025-11-22 04:17:38.827 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:38 compute-0 nova_compute[253461]: 2025-11-22 04:17:38.886 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:40 compute-0 nova_compute[253461]: 2025-11-22 04:17:40.319 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:40 compute-0 nova_compute[253461]: 2025-11-22 04:17:40.320 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:40 compute-0 ceph-mon[75011]: pgmap v2103: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:17:40 compute-0 nova_compute[253461]: 2025-11-22 04:17:40.343 253465 DEBUG nova.compute.manager [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:17:40 compute-0 nova_compute[253461]: 2025-11-22 04:17:40.454 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:40 compute-0 nova_compute[253461]: 2025-11-22 04:17:40.455 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:40 compute-0 nova_compute[253461]: 2025-11-22 04:17:40.465 253465 DEBUG nova.virt.hardware [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:17:40 compute-0 nova_compute[253461]: 2025-11-22 04:17:40.466 253465 INFO nova.compute.claims [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:17:40 compute-0 nova_compute[253461]: 2025-11-22 04:17:40.606 253465 DEBUG oslo_concurrency.processutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:17:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:17:41 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:41 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1940238451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.065 253465 DEBUG oslo_concurrency.processutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.084 253465 DEBUG nova.compute.provider_tree [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.140 253465 DEBUG nova.scheduler.client.report [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.234 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.235 253465 DEBUG nova.compute.manager [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.395 253465 DEBUG nova.compute.manager [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.397 253465 DEBUG nova.network.neutron [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:17:41 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1940238451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.455 253465 INFO nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.516 253465 DEBUG nova.compute.manager [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.587 253465 INFO nova.virt.block_device [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Booting with volume 32a4109a-1490-40cb-8838-25e3b6fd4d19 at /dev/vda
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.780 253465 DEBUG os_brick.utils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.782 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.796 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.796 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[2a32d4f4-0822-4f23-a9b5-f5e8b4ab2240]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.797 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.807 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.808 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[74c4374a-e052-44ae-8a90-c11cd5f8003e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.811 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.822 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.822 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[abb63df5-3f87-4247-890a-805e4841ee87]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.823 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[fd77cefb-3fe9-40a7-855b-b10a96948ed6]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.824 253465 DEBUG oslo_concurrency.processutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.850 253465 DEBUG oslo_concurrency.processutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.855 253465 DEBUG os_brick.initiator.connectors.lightos [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.856 253465 DEBUG os_brick.initiator.connectors.lightos [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.856 253465 DEBUG os_brick.initiator.connectors.lightos [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.857 253465 DEBUG os_brick.utils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.858 253465 DEBUG nova.virt.block_device [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Updating existing volume attachment record: 7b1a13b1-aacf-4970-92af-960c2dc4ceb4 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:17:41 compute-0 nova_compute[253461]: 2025-11-22 04:17:41.965 253465 DEBUG nova.policy [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ddff25657c74403e9ed9e91ff227badd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98e4451f91104cd88f6e19dd5c53fd00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:17:42 compute-0 ceph-mon[75011]: pgmap v2104: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:17:42 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:42 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2377253432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:17:43 compute-0 nova_compute[253461]: 2025-11-22 04:17:43.037 253465 DEBUG nova.compute.manager [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:17:43 compute-0 nova_compute[253461]: 2025-11-22 04:17:43.040 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:17:43 compute-0 nova_compute[253461]: 2025-11-22 04:17:43.041 253465 INFO nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Creating image(s)
Nov 22 04:17:43 compute-0 nova_compute[253461]: 2025-11-22 04:17:43.041 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:17:43 compute-0 nova_compute[253461]: 2025-11-22 04:17:43.042 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Ensure instance console log exists: /var/lib/nova/instances/0f6fc19a-f734-4362-a5ff-785307d2b7b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:17:43 compute-0 nova_compute[253461]: 2025-11-22 04:17:43.043 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:43 compute-0 nova_compute[253461]: 2025-11-22 04:17:43.043 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:43 compute-0 nova_compute[253461]: 2025-11-22 04:17:43.044 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:43 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2377253432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:43 compute-0 nova_compute[253461]: 2025-11-22 04:17:43.830 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:43 compute-0 nova_compute[253461]: 2025-11-22 04:17:43.889 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:43 compute-0 nova_compute[253461]: 2025-11-22 04:17:43.965 253465 DEBUG nova.network.neutron [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Successfully created port: 75151d72-858e-460b-bba4-4cc8501a764a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:17:44 compute-0 ceph-mon[75011]: pgmap v2105: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:17:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:17:45 compute-0 nova_compute[253461]: 2025-11-22 04:17:45.494 253465 DEBUG nova.network.neutron [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Successfully updated port: 75151d72-858e-460b-bba4-4cc8501a764a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:17:45 compute-0 nova_compute[253461]: 2025-11-22 04:17:45.516 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "refresh_cache-0f6fc19a-f734-4362-a5ff-785307d2b7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:17:45 compute-0 nova_compute[253461]: 2025-11-22 04:17:45.516 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquired lock "refresh_cache-0f6fc19a-f734-4362-a5ff-785307d2b7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:17:45 compute-0 nova_compute[253461]: 2025-11-22 04:17:45.517 253465 DEBUG nova.network.neutron [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:17:45 compute-0 nova_compute[253461]: 2025-11-22 04:17:45.602 253465 DEBUG nova.compute.manager [req-656b3ee1-7095-469d-b42f-785a618e97d4 req-4decfabd-c9af-43b9-9115-bd88859fa955 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Received event network-changed-75151d72-858e-460b-bba4-4cc8501a764a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:17:45 compute-0 nova_compute[253461]: 2025-11-22 04:17:45.603 253465 DEBUG nova.compute.manager [req-656b3ee1-7095-469d-b42f-785a618e97d4 req-4decfabd-c9af-43b9-9115-bd88859fa955 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Refreshing instance network info cache due to event network-changed-75151d72-858e-460b-bba4-4cc8501a764a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:17:45 compute-0 nova_compute[253461]: 2025-11-22 04:17:45.603 253465 DEBUG oslo_concurrency.lockutils [req-656b3ee1-7095-469d-b42f-785a618e97d4 req-4decfabd-c9af-43b9-9115-bd88859fa955 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-0f6fc19a-f734-4362-a5ff-785307d2b7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:17:45 compute-0 nova_compute[253461]: 2025-11-22 04:17:45.926 253465 DEBUG nova.network.neutron [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.528 253465 DEBUG nova.network.neutron [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Updating instance_info_cache with network_info: [{"id": "75151d72-858e-460b-bba4-4cc8501a764a", "address": "fa:16:3e:5d:5c:4c", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75151d72-85", "ovs_interfaceid": "75151d72-858e-460b-bba4-4cc8501a764a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0021746750260467538 of space, bias 1.0, pg target 0.6524025078140261 quantized to 32 (current 32)
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.687 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Releasing lock "refresh_cache-0f6fc19a-f734-4362-a5ff-785307d2b7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:17:46 compute-0 ceph-mon[75011]: pgmap v2106: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.689 253465 DEBUG nova.compute.manager [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Instance network_info: |[{"id": "75151d72-858e-460b-bba4-4cc8501a764a", "address": "fa:16:3e:5d:5c:4c", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75151d72-85", "ovs_interfaceid": "75151d72-858e-460b-bba4-4cc8501a764a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.691 253465 DEBUG oslo_concurrency.lockutils [req-656b3ee1-7095-469d-b42f-785a618e97d4 req-4decfabd-c9af-43b9-9115-bd88859fa955 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-0f6fc19a-f734-4362-a5ff-785307d2b7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.692 253465 DEBUG nova.network.neutron [req-656b3ee1-7095-469d-b42f-785a618e97d4 req-4decfabd-c9af-43b9-9115-bd88859fa955 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Refreshing network info cache for port 75151d72-858e-460b-bba4-4cc8501a764a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.696 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Start _get_guest_xml network_info=[{"id": "75151d72-858e-460b-bba4-4cc8501a764a", "address": "fa:16:3e:5d:5c:4c", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75151d72-85", "ovs_interfaceid": "75151d72-858e-460b-bba4-4cc8501a764a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': '7b1a13b1-aacf-4970-92af-960c2dc4ceb4', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-32a4109a-1490-40cb-8838-25e3b6fd4d19', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '32a4109a-1490-40cb-8838-25e3b6fd4d19', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '0f6fc19a-f734-4362-a5ff-785307d2b7b8', 'attached_at': '', 'detached_at': '', 'volume_id': '32a4109a-1490-40cb-8838-25e3b6fd4d19', 'serial': '32a4109a-1490-40cb-8838-25e3b6fd4d19'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.703 253465 WARNING nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.713 253465 DEBUG nova.virt.libvirt.host [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.714 253465 DEBUG nova.virt.libvirt.host [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.718 253465 DEBUG nova.virt.libvirt.host [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.719 253465 DEBUG nova.virt.libvirt.host [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.719 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.719 253465 DEBUG nova.virt.hardware [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.720 253465 DEBUG nova.virt.hardware [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.720 253465 DEBUG nova.virt.hardware [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.720 253465 DEBUG nova.virt.hardware [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.720 253465 DEBUG nova.virt.hardware [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.721 253465 DEBUG nova.virt.hardware [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.721 253465 DEBUG nova.virt.hardware [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.721 253465 DEBUG nova.virt.hardware [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.721 253465 DEBUG nova.virt.hardware [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.721 253465 DEBUG nova.virt.hardware [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.722 253465 DEBUG nova.virt.hardware [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
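The nova.virt.hardware lines above walk the CPU topology selection for the 1-vCPU m1.nano flavor: with no flavor or image constraints (all limits and preferences 0:0:0), every (sockets, cores, threads) triple whose product equals the vCPU count is enumerated, and 1:1:1 is the only candidate. A rough stdlib-only sketch of that enumeration (not Nova's actual code; the 65536 defaults are the limits logged above):

    # Rough sketch of the enumeration logged above -- not Nova's actual code.
    # Every (sockets, cores, threads) triple whose product equals the vCPU
    # count and respects the logged 65536 limits is a possible topology.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches "Got 1 possible topologies"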
Nov 22 04:17:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.4 MiB/s wr, 35 op/s
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.786 253465 DEBUG nova.storage.rbd_utils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] rbd image 0f6fc19a-f734-4362-a5ff-785307d2b7b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:17:46 compute-0 nova_compute[253461]: 2025-11-22 04:17:46.790 253465 DEBUG oslo_concurrency.processutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:17:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:17:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/59029952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.245 253465 DEBUG oslo_concurrency.processutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
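The two processutils lines above bracket a 0.455 s round trip to the Ceph monitor to fetch the monmap. What the call amounts to, as a hedged Python sketch (flags copied from the logged command; "mons", "name", and "public_addr" are standard keys in `ceph mon dump --format=json` output):

    import json
    import subprocess

    # Same invocation as the logged cmd; returns the monmap as JSON.
    out = subprocess.check_output(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    monmap = json.loads(out)
    for mon in monmap.get("mons", []):
        # e.g. compute-0 192.168.122.100:6789/0
        print(mon["name"], mon.get("public_addr"))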
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.370 253465 DEBUG os_brick.encryptors [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Using volume encryption metadata '{'encryption_key_id': 'acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-32a4109a-1490-40cb-8838-25e3b6fd4d19', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '32a4109a-1490-40cb-8838-25e3b6fd4d19', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '0f6fc19a-f734-4362-a5ff-785307d2b7b8', 'attached_at': '', 'detached_at': '', 'volume_id': '32a4109a-1490-40cb-8838-25e3b6fd4d19', 'serial': '32a4109a-1490-40cb-8838-25e3b6fd4d19'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.373 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.390 253465 DEBUG barbicanclient.v1.secrets [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.390 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
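The barbicanclient calls above fetch the LUKS passphrase for the volume via the secret href logged at 04:17:47.390. A hedged sketch of the same retrieval (endpoint and href are from the log; the Keystone credentials below are placeholders, not values from this deployment):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from barbicanclient import client

    # Placeholder credentials -- substitute real service credentials.
    auth = v3.Password(auth_url="https://keystone.example:5000/v3",
                       username="nova", password="***",
                       project_name="service",
                       user_domain_name="Default",
                       project_domain_name="Default")
    barbican = client.Client(session=session.Session(auth=auth),
                             endpoint="https://barbican-internal.openstack.svc:9311")
    secret = barbican.secrets.get(
        "https://barbican-internal.openstack.svc:9311"
        "/secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1")
    passphrase = secret.payload  # handed to the libvirt secret created below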
Nov 22 04:17:47 compute-0 podman[300916]: 2025-11-22 04:17:47.400825996 +0000 UTC m=+0.071283596 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251118)
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.411 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.412 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 podman[300917]: 2025-11-22 04:17:47.42979153 +0000 UTC m=+0.100590206 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.445 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.446 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.485 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.485 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.511 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.511 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.532 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.532 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.556 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.556 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.591 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.592 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.620 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.620 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.654 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.655 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.675 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.676 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.693 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.693 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.710 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.711 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.735 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.735 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.754 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.755 253465 INFO barbicanclient.base [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/acc18dc7-6ae2-43a5-8978-fc0c1c38a2e1
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.772 253465 DEBUG barbicanclient.client [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.773 253465 DEBUG nova.virt.libvirt.host [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 22 04:17:47 compute-0 nova_compute[253461]:   <usage type="volume">
Nov 22 04:17:47 compute-0 nova_compute[253461]:     <volume>32a4109a-1490-40cb-8838-25e3b6fd4d19</volume>
Nov 22 04:17:47 compute-0 nova_compute[253461]:   </usage>
Nov 22 04:17:47 compute-0 nova_compute[253461]: </secret>
Nov 22 04:17:47 compute-0 nova_compute[253461]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
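The Secret XML above is what nova.virt.libvirt.host's create_secret feeds to libvirt so QEMU can decrypt the LUKS volume without the passphrase ever appearing in the domain XML. A minimal libvirt-python sketch of that define-and-set step (an approximation of Nova's flow, not its code; the payload is a placeholder):

    import libvirt

    SECRET_XML = """<secret ephemeral="no" private="no">
      <usage type="volume">
        <volume>32a4109a-1490-40cb-8838-25e3b6fd4d19</volume>
      </usage>
    </secret>"""

    conn = libvirt.open("qemu:///system")
    sec = conn.secretDefineXML(SECRET_XML)
    sec.setValue(b"passphrase-from-barbican")  # placeholder payload
    # The UUID printed here is what <secret type="passphrase" uuid=.../>
    # in the guest XML refers back to.
    print(sec.UUIDString())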
Nov 22 04:17:47 compute-0 ceph-mon[75011]: pgmap v2107: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 5.4 MiB/s wr, 35 op/s
Nov 22 04:17:47 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/59029952' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.964 253465 DEBUG nova.virt.libvirt.vif [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:17:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-104981922',display_name='tempest-TransferEncryptedVolumeTest-server-104981922',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-104981922',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNg2ATKJWVlmyGreKYnDCSQO/lCz0VA3LfT+2g0XAIL/EfA89Lu4gjHntRaTvYv3ssQtoWjE9SDx5lQG0mvCId2hvStMomFINnpiLPFYacZktgyZ/1N3JNIqwNfMqE81xQ==',key_name='tempest-TransferEncryptedVolumeTest-1718953926',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98e4451f91104cd88f6e19dd5c53fd00',ramdisk_id='',reservation_id='r-5fgt290f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1500496447',owner_user_name='tempest-TransferEncryptedVolumeTest-1500496447-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:17:41Z,user_data=None,user_id='ddff25657c74403e9ed9e91ff227badd',uuid=0f6fc19a-f734-4362-a5ff-785307d2b7b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75151d72-858e-460b-bba4-4cc8501a764a", "address": "fa:16:3e:5d:5c:4c", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75151d72-85", "ovs_interfaceid": "75151d72-858e-460b-bba4-4cc8501a764a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.965 253465 DEBUG nova.network.os_vif_util [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converting VIF {"id": "75151d72-858e-460b-bba4-4cc8501a764a", "address": "fa:16:3e:5d:5c:4c", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75151d72-85", "ovs_interfaceid": "75151d72-858e-460b-bba4-4cc8501a764a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.966 253465 DEBUG nova.network.os_vif_util [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:5c:4c,bridge_name='br-int',has_traffic_filtering=True,id=75151d72-858e-460b-bba4-4cc8501a764a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75151d72-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:17:47 compute-0 nova_compute[253461]: 2025-11-22 04:17:47.968 253465 DEBUG nova.objects.instance [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f6fc19a-f734-4362-a5ff-785307d2b7b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:17:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.765 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:17:48 compute-0 nova_compute[253461]:   <uuid>0f6fc19a-f734-4362-a5ff-785307d2b7b8</uuid>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   <name>instance-0000001a</name>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-104981922</nova:name>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:17:46</nova:creationTime>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:17:48 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:17:48 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:17:48 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:17:48 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:17:48 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:17:48 compute-0 nova_compute[253461]:         <nova:user uuid="ddff25657c74403e9ed9e91ff227badd">tempest-TransferEncryptedVolumeTest-1500496447-project-member</nova:user>
Nov 22 04:17:48 compute-0 nova_compute[253461]:         <nova:project uuid="98e4451f91104cd88f6e19dd5c53fd00">tempest-TransferEncryptedVolumeTest-1500496447</nova:project>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:17:48 compute-0 nova_compute[253461]:         <nova:port uuid="75151d72-858e-460b-bba4-4cc8501a764a">
Nov 22 04:17:48 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <system>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <entry name="serial">0f6fc19a-f734-4362-a5ff-785307d2b7b8</entry>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <entry name="uuid">0f6fc19a-f734-4362-a5ff-785307d2b7b8</entry>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     </system>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   <os>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   </os>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   <features>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   </features>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/0f6fc19a-f734-4362-a5ff-785307d2b7b8_disk.config">
Nov 22 04:17:48 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       </source>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:17:48 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-32a4109a-1490-40cb-8838-25e3b6fd4d19">
Nov 22 04:17:48 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       </source>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:17:48 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <serial>32a4109a-1490-40cb-8838-25e3b6fd4d19</serial>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <encryption format="luks">
Nov 22 04:17:48 compute-0 nova_compute[253461]:         <secret type="passphrase" uuid="33e9dbbc-3f01-4a4a-aa5d-859d1cf1b6cf"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       </encryption>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:5d:5c:4c"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <target dev="tap75151d72-85"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/0f6fc19a-f734-4362-a5ff-785307d2b7b8/console.log" append="off"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <video>
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     </video>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:17:48 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:17:48 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:17:48 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:17:48 compute-0 nova_compute[253461]: </domain>
Nov 22 04:17:48 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
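When reading a generated guest XML like the one above, the encryption wiring is the detail worth checking: the RBD disk carries an <encryption format="luks"> element pointing at the libvirt passphrase secret, so decryption happens front-end in QEMU, matching the 'control_location': 'front-end' metadata logged earlier. A stdlib-only audit sketch (XML trimmed to the one disk of interest):

    import xml.etree.ElementTree as ET

    # Trimmed from the <domain> XML logged above.
    DOMAIN_XML = """<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="volumes/volume-32a4109a-1490-40cb-8838-25e3b6fd4d19"/>
          <target dev="vda" bus="virtio"/>
          <encryption format="luks">
            <secret type="passphrase" uuid="33e9dbbc-3f01-4a4a-aa5d-859d1cf1b6cf"/>
          </encryption>
        </disk>
      </devices>
    </domain>"""

    root = ET.fromstring(DOMAIN_XML)
    for disk in root.findall("./devices/disk"):
        dev = disk.find("target").get("dev")
        src = disk.find("source").get("name")
        enc = disk.find("encryption")
        print(dev, src, "luks" if enc is not None else "plain")
    # -> vda volumes/volume-32a4109a-1490-40cb-8838-25e3b6fd4d19 luks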
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.765 253465 DEBUG nova.compute.manager [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Preparing to wait for external event network-vif-plugged-75151d72-858e-460b-bba4-4cc8501a764a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.766 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.766 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.766 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
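The acquire/release pair above is oslo.concurrency's named in-process lock guarding the per-instance event table while the network-vif-plugged waiter is registered. The same pattern, as a minimal sketch:

    from oslo_concurrency import lockutils

    # Same named-lock pattern as the logged acquire/release pair.
    with lockutils.lock("0f6fc19a-f734-4362-a5ff-785307d2b7b8-events"):
        pass  # create-or-get the pending "network-vif-plugged" event entry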
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.768 253465 DEBUG nova.virt.libvirt.vif [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:17:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-104981922',display_name='tempest-TransferEncryptedVolumeTest-server-104981922',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-104981922',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNg2ATKJWVlmyGreKYnDCSQO/lCz0VA3LfT+2g0XAIL/EfA89Lu4gjHntRaTvYv3ssQtoWjE9SDx5lQG0mvCId2hvStMomFINnpiLPFYacZktgyZ/1N3JNIqwNfMqE81xQ==',key_name='tempest-TransferEncryptedVolumeTest-1718953926',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98e4451f91104cd88f6e19dd5c53fd00',ramdisk_id='',reservation_id='r-5fgt290f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1500496447',owner_user_name='tempest-TransferEncryptedVolumeTest-1500496447-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:17:41Z,user_data=None,user_id='ddff25657c74403e9ed9e91ff227badd',uuid=0f6fc19a-f734-4362-a5ff-785307d2b7b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75151d72-858e-460b-bba4-4cc8501a764a", "address": "fa:16:3e:5d:5c:4c", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75151d72-85", "ovs_interfaceid": "75151d72-858e-460b-bba4-4cc8501a764a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.769 253465 DEBUG nova.network.os_vif_util [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converting VIF {"id": "75151d72-858e-460b-bba4-4cc8501a764a", "address": "fa:16:3e:5d:5c:4c", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75151d72-85", "ovs_interfaceid": "75151d72-858e-460b-bba4-4cc8501a764a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.770 253465 DEBUG nova.network.os_vif_util [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:5c:4c,bridge_name='br-int',has_traffic_filtering=True,id=75151d72-858e-460b-bba4-4cc8501a764a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75151d72-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.770 253465 DEBUG os_vif [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:5c:4c,bridge_name='br-int',has_traffic_filtering=True,id=75151d72-858e-460b-bba4-4cc8501a764a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75151d72-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.772 253465 DEBUG nova.network.neutron [req-656b3ee1-7095-469d-b42f-785a618e97d4 req-4decfabd-c9af-43b9-9115-bd88859fa955 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Updated VIF entry in instance network info cache for port 75151d72-858e-460b-bba4-4cc8501a764a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.773 253465 DEBUG nova.network.neutron [req-656b3ee1-7095-469d-b42f-785a618e97d4 req-4decfabd-c9af-43b9-9115-bd88859fa955 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Updating instance_info_cache with network_info: [{"id": "75151d72-858e-460b-bba4-4cc8501a764a", "address": "fa:16:3e:5d:5c:4c", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75151d72-85", "ovs_interfaceid": "75151d72-858e-460b-bba4-4cc8501a764a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.775 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.776 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.776 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:17:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.781 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.781 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75151d72-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.782 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75151d72-85, col_values=(('external_ids', {'iface-id': '75151d72-858e-460b-bba4-4cc8501a764a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:5c:4c', 'vm-uuid': '0f6fc19a-f734-4362-a5ff-785307d2b7b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
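The three ovsdbapp commands above (AddBridgeCommand, AddPortCommand, DbSetCommand) are os-vif plugging the tap device into br-int and stamping the Interface row with the Neutron port ID so ovn-controller can bind it. A hedged ovsdbapp sketch of the same transaction (the OVSDB socket path is an assumption; the values are from the logged commands):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock",  # assumed local OVSDB socket
        "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tap75151d72-85", may_exist=True))
        txn.add(api.db_set("Interface", "tap75151d72-85",
                           ("external_ids",
                            {"iface-id": "75151d72-858e-460b-bba4-4cc8501a764a",
                             "iface-status": "active",
                             "attached-mac": "fa:16:3e:5d:5c:4c",
                             "vm-uuid": "0f6fc19a-f734-4362-a5ff-785307d2b7b8"})))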
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.797 253465 DEBUG oslo_concurrency.lockutils [req-656b3ee1-7095-469d-b42f-785a618e97d4 req-4decfabd-c9af-43b9-9115-bd88859fa955 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-0f6fc19a-f734-4362-a5ff-785307d2b7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.817 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:48 compute-0 NetworkManager[48916]: <info>  [1763785068.8186] manager: (tap75151d72-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.820 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.824 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.824 253465 INFO os_vif [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:5c:4c,bridge_name='br-int',has_traffic_filtering=True,id=75151d72-858e-460b-bba4-4cc8501a764a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75151d72-85')
Nov 22 04:17:48 compute-0 nova_compute[253461]: 2025-11-22 04:17:48.891 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.113 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.113 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.114 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] No VIF found with MAC fa:16:3e:5d:5c:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.114 253465 INFO nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Using config drive
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.138 253465 DEBUG nova.storage.rbd_utils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] rbd image 0f6fc19a-f734-4362-a5ff-785307d2b7b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.409 253465 INFO nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Creating config drive at /var/lib/nova/instances/0f6fc19a-f734-4362-a5ff-785307d2b7b8/disk.config
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.419 253465 DEBUG oslo_concurrency.processutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f6fc19a-f734-4362-a5ff-785307d2b7b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpopzlaimp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.560 253465 DEBUG oslo_concurrency.processutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f6fc19a-f734-4362-a5ff-785307d2b7b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpopzlaimp" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.581 253465 DEBUG nova.storage.rbd_utils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] rbd image 0f6fc19a-f734-4362-a5ff-785307d2b7b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.584 253465 DEBUG oslo_concurrency.processutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f6fc19a-f734-4362-a5ff-785307d2b7b8/disk.config 0f6fc19a-f734-4362-a5ff-785307d2b7b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.750 253465 DEBUG oslo_concurrency.processutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f6fc19a-f734-4362-a5ff-785307d2b7b8/disk.config 0f6fc19a-f734-4362-a5ff-785307d2b7b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.751 253465 INFO nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Deleting local config drive /var/lib/nova/instances/0f6fc19a-f734-4362-a5ff-785307d2b7b8/disk.config because it was imported into RBD.
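
The config-drive sequence above reduces to two commands: build an ISO, import it into Ceph. A minimal sketch of the same flow (paths, pool and the 'config-2' label copied from the log; the -publisher flag is left out for brevity):

    import os
    import subprocess

    def build_and_import_config_drive(uuid, staging_dir, pool='vms'):
        iso = f'/var/lib/nova/instances/{uuid}/disk.config'
        # Build the ISO9660 config drive; the volume label must be
        # 'config-2' so cloud-init inside the guest can locate it.
        subprocess.run(['/usr/bin/mkisofs', '-o', iso, '-ldots',
                        '-allow-lowercase', '-allow-multidot', '-l',
                        '-quiet', '-J', '-r', '-V', 'config-2',
                        staging_dir], check=True)
        # Import it into the Ceph pool next to the instance disks, then
        # drop the local copy, as the log shows Nova doing.
        subprocess.run(['rbd', 'import', '--pool', pool, iso,
                        f'{uuid}_disk.config', '--image-format=2',
                        '--id', 'openstack',
                        '--conf', '/etc/ceph/ceph.conf'], check=True)
        os.remove(iso)
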
Nov 22 04:17:49 compute-0 kernel: tap75151d72-85: entered promiscuous mode
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.805 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:49 compute-0 ovn_controller[152691]: 2025-11-22T04:17:49Z|00270|binding|INFO|Claiming lport 75151d72-858e-460b-bba4-4cc8501a764a for this chassis.
Nov 22 04:17:49 compute-0 ovn_controller[152691]: 2025-11-22T04:17:49Z|00271|binding|INFO|75151d72-858e-460b-bba4-4cc8501a764a: Claiming fa:16:3e:5d:5c:4c 10.100.0.10
Nov 22 04:17:49 compute-0 NetworkManager[48916]: <info>  [1763785069.8067] manager: (tap75151d72-85): new Tun device (/org/freedesktop/NetworkManager/Devices/134)
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.815 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:5c:4c 10.100.0.10'], port_security=['fa:16:3e:5d:5c:4c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0f6fc19a-f734-4362-a5ff-785307d2b7b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-73bcc005-88ac-46b6-ad11-6207c6046246', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98e4451f91104cd88f6e19dd5c53fd00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0c3cc5e7-a78d-415a-aaf6-2d09ae975fc0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8139379-e220-4788-92e4-b495f0c34eb7, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=75151d72-858e-460b-bba4-4cc8501a764a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.817 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 75151d72-858e-460b-bba4-4cc8501a764a in datapath 73bcc005-88ac-46b6-ad11-6207c6046246 bound to our chassis
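
The PortBindingUpdatedEvent matched just above is an ovsdbapp RowEvent watching the southbound Port_Binding table. A rough sketch of such an event class, assuming ovsdbapp's RowEvent/match_fn hook; neutron's real matching logic also checks the requested chassis, this trimmed version only fires when a chassis first appears:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        """Fire once when a Port_Binding row gains a chassis."""

        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old=None):
            # Old row carried no chassis, new row does: just bound.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            # Typically registered on the southbound IDL via its notify
            # handler (watch_event); here we just log the binding.
            print(f'lport {row.logical_port} bound')
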
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.819 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.819 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 73bcc005-88ac-46b6-ad11-6207c6046246
Nov 22 04:17:49 compute-0 ovn_controller[152691]: 2025-11-22T04:17:49Z|00272|binding|INFO|Setting lport 75151d72-858e-460b-bba4-4cc8501a764a ovn-installed in OVS
Nov 22 04:17:49 compute-0 ovn_controller[152691]: 2025-11-22T04:17:49Z|00273|binding|INFO|Setting lport 75151d72-858e-460b-bba4-4cc8501a764a up in Southbound
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.822 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:49 compute-0 nova_compute[253461]: 2025-11-22 04:17:49.823 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.834 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9b60866d-a559-49d0-8497-bc7bfd0142a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.835 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap73bcc005-81 in ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:17:49 compute-0 systemd-udevd[301030]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.839 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap73bcc005-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.839 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[24de3953-4950-4db8-969d-19791ee0559b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.841 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[329d386d-a798-4096-8004-9b8008048089]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:49 compute-0 systemd-machined[215728]: New machine qemu-26-instance-0000001a.
Nov 22 04:17:49 compute-0 NetworkManager[48916]: <info>  [1763785069.8560] device (tap75151d72-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:17:49 compute-0 NetworkManager[48916]: <info>  [1763785069.8568] device (tap75151d72-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:17:49 compute-0 ceph-mon[75011]: pgmap v2108: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.855 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[0be35c57-ca7c-4bad-a25d-13bd3c244a06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:49 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-0000001a.
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.871 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9c14a3-983a-4a7a-b240-21eb7beb5150]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.900 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[01af53f9-3017-4f07-911c-10b9b76f186c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:49 compute-0 NetworkManager[48916]: <info>  [1763785069.9077] manager: (tap73bcc005-80): new Veth device (/org/freedesktop/NetworkManager/Devices/135)
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.907 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c4adc7-9441-4947-a7ff-582e249bfb49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:49 compute-0 systemd-udevd[301033]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.934 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[38a142ff-c99b-4c65-8d53-effdfc672d4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.937 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[4c42ee4a-ced5-4710-aa21-327cee09151e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:49 compute-0 NetworkManager[48916]: <info>  [1763785069.9550] device (tap73bcc005-80): carrier: link connected
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.958 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[5d06ef84-4249-48d7-beec-dcae0615f2fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.972 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[742417c3-efc3-485c-a76c-2a174e2e78d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap73bcc005-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532255, 'reachable_time': 31585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301062, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.985 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[fa2c0ca2-df1e-4fce-9350-0eae471fff7b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:1121'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 532255, 'tstamp': 532255}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301063, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:49.998 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[fbfb3856-7203-4396-8d76-adfee75ba47e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap73bcc005-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532255, 'reachable_time': 31585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301064, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:50.027 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[12644d06-eca5-4afe-9893-aa03b6ec09ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:50.079 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2f31b950-242f-4b06-ad4d-ac1bde7c2307]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
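
Behind those privsep round-trips the agent builds a veth pair and pushes one end into the ovnmeta- namespace. A rough equivalent with pyroute2 (which neutron's ip_lib wraps underneath), using the interface names from the log; MTU and address configuration are omitted:

    from pyroute2 import IPRoute, netns

    ns = 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246'
    netns.create(ns)                      # namespace the proxy will run in

    ipr = IPRoute()
    # tap73bcc005-80 stays in the root namespace (it gets plugged into
    # br-int below); tap73bcc005-81 moves into the ovnmeta- namespace.
    ipr.link('add', ifname='tap73bcc005-80', kind='veth',
             peer='tap73bcc005-81')
    idx = ipr.link_lookup(ifname='tap73bcc005-81')[0]
    ipr.link('set', index=idx, net_ns_fd=ns)
    ipr.close()
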
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:50.080 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73bcc005-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:50.080 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:50.081 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73bcc005-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:17:50 compute-0 nova_compute[253461]: 2025-11-22 04:17:50.081 253465 DEBUG nova.compute.manager [req-811a7b17-2367-4ced-a4e3-419fbed0a31d req-c8bf3d43-7805-458e-909c-65b484ae5693 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Received event network-vif-plugged-75151d72-858e-460b-bba4-4cc8501a764a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:17:50 compute-0 nova_compute[253461]: 2025-11-22 04:17:50.081 253465 DEBUG oslo_concurrency.lockutils [req-811a7b17-2367-4ced-a4e3-419fbed0a31d req-c8bf3d43-7805-458e-909c-65b484ae5693 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:50 compute-0 nova_compute[253461]: 2025-11-22 04:17:50.082 253465 DEBUG oslo_concurrency.lockutils [req-811a7b17-2367-4ced-a4e3-419fbed0a31d req-c8bf3d43-7805-458e-909c-65b484ae5693 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:50 compute-0 nova_compute[253461]: 2025-11-22 04:17:50.082 253465 DEBUG oslo_concurrency.lockutils [req-811a7b17-2367-4ced-a4e3-419fbed0a31d req-c8bf3d43-7805-458e-909c-65b484ae5693 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:50 compute-0 nova_compute[253461]: 2025-11-22 04:17:50.083 253465 DEBUG nova.compute.manager [req-811a7b17-2367-4ced-a4e3-419fbed0a31d req-c8bf3d43-7805-458e-909c-65b484ae5693 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Processing event network-vif-plugged-75151d72-858e-460b-bba4-4cc8501a764a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:17:50 compute-0 NetworkManager[48916]: <info>  [1763785070.0863] manager: (tap73bcc005-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Nov 22 04:17:50 compute-0 kernel: tap73bcc005-80: entered promiscuous mode
Nov 22 04:17:50 compute-0 nova_compute[253461]: 2025-11-22 04:17:50.088 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:50.089 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap73bcc005-80, col_values=(('external_ids', {'iface-id': 'c0be682a-2fee-4917-82d9-be22b54079b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:17:50 compute-0 ovn_controller[152691]: 2025-11-22T04:17:50Z|00274|binding|INFO|Releasing lport c0be682a-2fee-4917-82d9-be22b54079b1 from this chassis (sb_readonly=0)
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:50.091 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/73bcc005-88ac-46b6-ad11-6207c6046246.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/73bcc005-88ac-46b6-ad11-6207c6046246.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:50.092 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb6872a-a432-4892-aaa2-dcd5a91a3610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:50.093 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-73bcc005-88ac-46b6-ad11-6207c6046246
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/73bcc005-88ac-46b6-ad11-6207c6046246.pid.haproxy
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 73bcc005-88ac-46b6-ad11-6207c6046246
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:17:50 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:50.093 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'env', 'PROCESS_TAG=haproxy-73bcc005-88ac-46b6-ad11-6207c6046246', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/73bcc005-88ac-46b6-ad11-6207c6046246.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
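
A compact sketch of that launch step: haproxy run against the rendered config inside the network's ovnmeta- namespace. The agent goes through neutron-rootwrap; plain sudo here is an assumption for illustration:

    import subprocess

    def spawn_metadata_proxy(network_id, cfg_path):
        # haproxy binds 169.254.169.254:80 inside the namespace and
        # forwards requests to the metadata proxy unix socket.
        ns = f'ovnmeta-{network_id}'
        subprocess.run(['sudo', 'ip', 'netns', 'exec', ns,
                        'haproxy', '-f', cfg_path], check=True)

    spawn_metadata_proxy('73bcc005-88ac-46b6-ad11-6207c6046246',
                         '/var/lib/neutron/ovn-metadata-proxy/'
                         '73bcc005-88ac-46b6-ad11-6207c6046246.conf')
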
Nov 22 04:17:50 compute-0 nova_compute[253461]: 2025-11-22 04:17:50.102 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:50 compute-0 podman[301099]: 2025-11-22 04:17:50.501964733 +0000 UTC m=+0.067073149 container create 65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118)
Nov 22 04:17:50 compute-0 podman[301099]: 2025-11-22 04:17:50.461716928 +0000 UTC m=+0.026825424 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:17:50 compute-0 systemd[1]: Started libpod-conmon-65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428.scope.
Nov 22 04:17:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/369d6f24fdd9cb8ee42d596a4a5360abf8ee077a0975a5a1eceeed09ffcbb946/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:50 compute-0 podman[301099]: 2025-11-22 04:17:50.613806268 +0000 UTC m=+0.178914754 container init 65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 22 04:17:50 compute-0 podman[301099]: 2025-11-22 04:17:50.619949323 +0000 UTC m=+0.185057739 container start 65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:17:50 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[301150]: [NOTICE]   (301154) : New worker (301156) forked
Nov 22 04:17:50 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[301150]: [NOTICE]   (301154) : Loading success.
Nov 22 04:17:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 85 B/s wr, 2 op/s
Nov 22 04:17:51 compute-0 nova_compute[253461]: 2025-11-22 04:17:51.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:17:51 compute-0 nova_compute[253461]: 2025-11-22 04:17:51.478 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:51 compute-0 nova_compute[253461]: 2025-11-22 04:17:51.478 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:51 compute-0 nova_compute[253461]: 2025-11-22 04:17:51.479 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:51 compute-0 nova_compute[253461]: 2025-11-22 04:17:51.480 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:17:51 compute-0 nova_compute[253461]: 2025-11-22 04:17:51.480 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:17:51 compute-0 ceph-mon[75011]: pgmap v2109: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 85 B/s wr, 2 op/s
Nov 22 04:17:51 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:51 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380272448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:51 compute-0 nova_compute[253461]: 2025-11-22 04:17:51.963 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
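
The capacity probe above shells out to 'ceph df --format=json' and reads cluster totals. A minimal sketch of consuming that output; the 'stats'/'total_avail_bytes' keys follow the ceph df JSON schema in current releases, but verify them on your Ceph version:

    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    # Cluster-wide free space, roughly the "59 GiB / 60 GiB avail" above.
    print('avail GiB:', stats['total_avail_bytes'] / 1024**3)
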
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.139 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.140 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.173 253465 DEBUG nova.compute.manager [req-b33096f7-e269-42fd-8d52-c2eaae9ef4a3 req-07756e80-3a1a-4784-89bc-69251dd9308a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Received event network-vif-plugged-75151d72-858e-460b-bba4-4cc8501a764a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.174 253465 DEBUG oslo_concurrency.lockutils [req-b33096f7-e269-42fd-8d52-c2eaae9ef4a3 req-07756e80-3a1a-4784-89bc-69251dd9308a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.175 253465 DEBUG oslo_concurrency.lockutils [req-b33096f7-e269-42fd-8d52-c2eaae9ef4a3 req-07756e80-3a1a-4784-89bc-69251dd9308a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.175 253465 DEBUG oslo_concurrency.lockutils [req-b33096f7-e269-42fd-8d52-c2eaae9ef4a3 req-07756e80-3a1a-4784-89bc-69251dd9308a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.176 253465 DEBUG nova.compute.manager [req-b33096f7-e269-42fd-8d52-c2eaae9ef4a3 req-07756e80-3a1a-4784-89bc-69251dd9308a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] No waiting events found dispatching network-vif-plugged-75151d72-858e-460b-bba4-4cc8501a764a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.177 253465 WARNING nova.compute.manager [req-b33096f7-e269-42fd-8d52-c2eaae9ef4a3 req-07756e80-3a1a-4784-89bc-69251dd9308a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Received unexpected event network-vif-plugged-75151d72-858e-460b-bba4-4cc8501a764a for instance with vm_state building and task_state spawning.
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.330 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.331 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4291MB free_disk=59.988277435302734GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.332 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.332 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.393 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 0f6fc19a-f734-4362-a5ff-785307d2b7b8 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.394 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.394 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.426 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:17:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.857 253465 DEBUG nova.compute.manager [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.859 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785072.8563457, 0f6fc19a-f734-4362-a5ff-785307d2b7b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.859 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] VM Started (Lifecycle Event)
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.870 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.874 253465 INFO nova.virt.libvirt.driver [-] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Instance spawned successfully.
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.874 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:17:52 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1380272448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.899 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.908 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:17:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:17:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550423060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.915 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.919 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.920 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.920 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.921 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.922 253465 DEBUG nova.virt.libvirt.driver [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.929 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] During sync_power_state the instance has a pending task (spawning). Skip.
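
A minimal sketch of the decision logged above, using the numeric values from nova.compute.power_state (0 NOSTATE, 1 RUNNING, 3 PAUSED): a pending task such as 'spawning' short-circuits the power-state sync, which is why the DB state 0 vs VM state 1 mismatch is skipped rather than corrected here:

    NOSTATE, RUNNING, PAUSED = 0, 1, 3

    def sync_needed(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            return False          # "has a pending task (spawning). Skip."
        return db_power_state != vm_power_state

    print(sync_needed(NOSTATE, RUNNING, 'spawning'))  # False
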
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.930 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785072.858042, 0f6fc19a-f734-4362-a5ff-785307d2b7b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.930 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] VM Paused (Lifecycle Event)
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.948 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.954 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.958 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.961 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785072.8697627, 0f6fc19a-f734-4362-a5ff-785307d2b7b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.961 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] VM Resumed (Lifecycle Event)
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.976 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.989 253465 INFO nova.compute.manager [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Took 9.95 seconds to spawn the instance on the hypervisor.
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.990 253465 DEBUG nova.compute.manager [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.992 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:17:52 compute-0 nova_compute[253461]: 2025-11-22 04:17:52.998 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:17:53 compute-0 nova_compute[253461]: 2025-11-22 04:17:53.005 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:17:53 compute-0 nova_compute[253461]: 2025-11-22 04:17:53.005 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.674s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:53 compute-0 nova_compute[253461]: 2025-11-22 04:17:53.026 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:17:53 compute-0 nova_compute[253461]: 2025-11-22 04:17:53.056 253465 INFO nova.compute.manager [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Took 12.64 seconds to build instance.
Nov 22 04:17:53 compute-0 nova_compute[253461]: 2025-11-22 04:17:53.071 253465 DEBUG oslo_concurrency.lockutils [None req-cf211a50-32b3-4a36-8da9-89f2e1987bbc ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:17:53 compute-0 sudo[301215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:17:53 compute-0 sudo[301215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:53 compute-0 sudo[301215]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:53 compute-0 sudo[301240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:17:53 compute-0 sudo[301240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:53 compute-0 sudo[301240]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:53 compute-0 sudo[301265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:17:53 compute-0 sudo[301265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:53 compute-0 sudo[301265]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:53 compute-0 sudo[301290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:17:53 compute-0 sudo[301290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:53 compute-0 nova_compute[253461]: 2025-11-22 04:17:53.819 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:53 compute-0 ceph-mon[75011]: pgmap v2110: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 255 B/s wr, 3 op/s
Nov 22 04:17:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3550423060' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:17:53 compute-0 nova_compute[253461]: 2025-11-22 04:17:53.893 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:53 compute-0 sudo[301290]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:17:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:17:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:17:53 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:17:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:17:53 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:17:53 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 6a12d524-b28f-47e4-946a-1834b790c309 does not exist
Nov 22 04:17:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:17:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:17:53 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 8be7c0ad-c83a-46e1-b6e6-46fd9651550c does not exist
Nov 22 04:17:53 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 0f245a0e-fde0-423c-8598-858fb4b915e8 does not exist
Nov 22 04:17:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:17:53 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:17:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:17:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:17:54 compute-0 nova_compute[253461]: 2025-11-22 04:17:54.006 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:17:54 compute-0 sudo[301347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:17:54 compute-0 sudo[301347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:54 compute-0 sudo[301347]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:54 compute-0 sudo[301372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:17:54 compute-0 sudo[301372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:54 compute-0 sudo[301372]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:54 compute-0 sudo[301397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:17:54 compute-0 sudo[301397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:54 compute-0 sudo[301397]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:54 compute-0 sudo[301422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:17:54 compute-0 sudo[301422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:54 compute-0 nova_compute[253461]: 2025-11-22 04:17:54.424 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:17:54 compute-0 nova_compute[253461]: 2025-11-22 04:17:54.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:17:54 compute-0 nova_compute[253461]: 2025-11-22 04:17:54.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:17:54 compute-0 nova_compute[253461]: 2025-11-22 04:17:54.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:17:54 compute-0 podman[301488]: 2025-11-22 04:17:54.658675504 +0000 UTC m=+0.065590592 container create 901465211c14a62bea69590cae1cc1f951fd439140adc5bf707f855c1844f054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:17:54 compute-0 systemd[1]: Started libpod-conmon-901465211c14a62bea69590cae1cc1f951fd439140adc5bf707f855c1844f054.scope.
Nov 22 04:17:54 compute-0 podman[301488]: 2025-11-22 04:17:54.630738782 +0000 UTC m=+0.037653920 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:17:54 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:17:54 compute-0 podman[301488]: 2025-11-22 04:17:54.75592288 +0000 UTC m=+0.162837948 container init 901465211c14a62bea69590cae1cc1f951fd439140adc5bf707f855c1844f054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 04:17:54 compute-0 podman[301488]: 2025-11-22 04:17:54.763524464 +0000 UTC m=+0.170439542 container start 901465211c14a62bea69590cae1cc1f951fd439140adc5bf707f855c1844f054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Nov 22 04:17:54 compute-0 podman[301488]: 2025-11-22 04:17:54.769454142 +0000 UTC m=+0.176369210 container attach 901465211c14a62bea69590cae1cc1f951fd439140adc5bf707f855c1844f054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:17:54 compute-0 optimistic_cannon[301504]: 167 167
Nov 22 04:17:54 compute-0 podman[301488]: 2025-11-22 04:17:54.774995598 +0000 UTC m=+0.181910686 container died 901465211c14a62bea69590cae1cc1f951fd439140adc5bf707f855c1844f054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cannon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:17:54 compute-0 systemd[1]: libpod-901465211c14a62bea69590cae1cc1f951fd439140adc5bf707f855c1844f054.scope: Deactivated successfully.
Nov 22 04:17:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 776 KiB/s rd, 12 KiB/s wr, 32 op/s
Nov 22 04:17:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b59e7b6c1192d2e44285f4ca7d924557e32497c34fed9349e9841324c7cb1da-merged.mount: Deactivated successfully.
Nov 22 04:17:54 compute-0 podman[301488]: 2025-11-22 04:17:54.820137433 +0000 UTC m=+0.227052481 container remove 901465211c14a62bea69590cae1cc1f951fd439140adc5bf707f855c1844f054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cannon, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:17:54 compute-0 systemd[1]: libpod-conmon-901465211c14a62bea69590cae1cc1f951fd439140adc5bf707f855c1844f054.scope: Deactivated successfully.
Nov 22 04:17:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:17:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:17:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:17:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:17:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:17:54 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:17:54 compute-0 nova_compute[253461]: 2025-11-22 04:17:54.916 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "refresh_cache-0f6fc19a-f734-4362-a5ff-785307d2b7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:17:54 compute-0 nova_compute[253461]: 2025-11-22 04:17:54.917 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquired lock "refresh_cache-0f6fc19a-f734-4362-a5ff-785307d2b7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:17:54 compute-0 nova_compute[253461]: 2025-11-22 04:17:54.917 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 04:17:54 compute-0 nova_compute[253461]: 2025-11-22 04:17:54.917 253465 DEBUG nova.objects.instance [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0f6fc19a-f734-4362-a5ff-785307d2b7b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:17:55 compute-0 podman[301527]: 2025-11-22 04:17:55.017908758 +0000 UTC m=+0.068856655 container create b6d35a7b89e7ac8a32b2459c0b6e33dbca961c90a4b7eacd71b55ab70c998409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_herschel, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:17:55 compute-0 podman[301527]: 2025-11-22 04:17:54.987965741 +0000 UTC m=+0.038913668 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:17:55 compute-0 systemd[1]: Started libpod-conmon-b6d35a7b89e7ac8a32b2459c0b6e33dbca961c90a4b7eacd71b55ab70c998409.scope.
Nov 22 04:17:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7683f19ccf34671723d842d706d8d80fe191d54852b7d0f9a0fd35239dca257/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7683f19ccf34671723d842d706d8d80fe191d54852b7d0f9a0fd35239dca257/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7683f19ccf34671723d842d706d8d80fe191d54852b7d0f9a0fd35239dca257/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7683f19ccf34671723d842d706d8d80fe191d54852b7d0f9a0fd35239dca257/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7683f19ccf34671723d842d706d8d80fe191d54852b7d0f9a0fd35239dca257/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:55 compute-0 podman[301527]: 2025-11-22 04:17:55.152781325 +0000 UTC m=+0.203729252 container init b6d35a7b89e7ac8a32b2459c0b6e33dbca961c90a4b7eacd71b55ab70c998409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_herschel, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Nov 22 04:17:55 compute-0 podman[301527]: 2025-11-22 04:17:55.16141969 +0000 UTC m=+0.212367597 container start b6d35a7b89e7ac8a32b2459c0b6e33dbca961c90a4b7eacd71b55ab70c998409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 04:17:55 compute-0 podman[301527]: 2025-11-22 04:17:55.166296383 +0000 UTC m=+0.217244290 container attach b6d35a7b89e7ac8a32b2459c0b6e33dbca961c90a4b7eacd71b55ab70c998409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_herschel, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 04:17:55 compute-0 ceph-mon[75011]: pgmap v2111: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 776 KiB/s rd, 12 KiB/s wr, 32 op/s
Nov 22 04:17:56 compute-0 quizzical_herschel[301544]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:17:56 compute-0 quizzical_herschel[301544]: --> relative data size: 1.0
Nov 22 04:17:56 compute-0 quizzical_herschel[301544]: --> All data devices are unavailable
Nov 22 04:17:56 compute-0 systemd[1]: libpod-b6d35a7b89e7ac8a32b2459c0b6e33dbca961c90a4b7eacd71b55ab70c998409.scope: Deactivated successfully.
Nov 22 04:17:56 compute-0 systemd[1]: libpod-b6d35a7b89e7ac8a32b2459c0b6e33dbca961c90a4b7eacd71b55ab70c998409.scope: Consumed 1.050s CPU time.
Nov 22 04:17:56 compute-0 podman[301573]: 2025-11-22 04:17:56.304082368 +0000 UTC m=+0.029830107 container died b6d35a7b89e7ac8a32b2459c0b6e33dbca961c90a4b7eacd71b55ab70c998409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:17:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7683f19ccf34671723d842d706d8d80fe191d54852b7d0f9a0fd35239dca257-merged.mount: Deactivated successfully.
Nov 22 04:17:56 compute-0 podman[301573]: 2025-11-22 04:17:56.358238155 +0000 UTC m=+0.083985894 container remove b6d35a7b89e7ac8a32b2459c0b6e33dbca961c90a4b7eacd71b55ab70c998409 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_herschel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:17:56 compute-0 systemd[1]: libpod-conmon-b6d35a7b89e7ac8a32b2459c0b6e33dbca961c90a4b7eacd71b55ab70c998409.scope: Deactivated successfully.
Nov 22 04:17:56 compute-0 sudo[301422]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:56 compute-0 sudo[301588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:17:56 compute-0 sudo[301588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:56 compute-0 sudo[301588]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:56 compute-0 sudo[301613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:17:56 compute-0 sudo[301613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:56 compute-0 sudo[301613]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:56 compute-0 sudo[301638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:17:56 compute-0 sudo[301638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:56 compute-0 sudo[301638]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:56 compute-0 sudo[301663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:17:56 compute-0 sudo[301663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 22 04:17:57 compute-0 podman[301729]: 2025-11-22 04:17:57.072447181 +0000 UTC m=+0.047284364 container create 4cd36f3b5a380bdd7d1c0b04984378f9924f367ec601bd8d8e1e79c897a3d873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:17:57 compute-0 systemd[1]: Started libpod-conmon-4cd36f3b5a380bdd7d1c0b04984378f9924f367ec601bd8d8e1e79c897a3d873.scope.
Nov 22 04:17:57 compute-0 podman[301729]: 2025-11-22 04:17:57.050945239 +0000 UTC m=+0.025782452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:17:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:17:57 compute-0 podman[301729]: 2025-11-22 04:17:57.1884798 +0000 UTC m=+0.163317023 container init 4cd36f3b5a380bdd7d1c0b04984378f9924f367ec601bd8d8e1e79c897a3d873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:17:57 compute-0 podman[301729]: 2025-11-22 04:17:57.197506876 +0000 UTC m=+0.172344059 container start 4cd36f3b5a380bdd7d1c0b04984378f9924f367ec601bd8d8e1e79c897a3d873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:17:57 compute-0 podman[301729]: 2025-11-22 04:17:57.201382658 +0000 UTC m=+0.176219891 container attach 4cd36f3b5a380bdd7d1c0b04984378f9924f367ec601bd8d8e1e79c897a3d873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Nov 22 04:17:57 compute-0 serene_allen[301745]: 167 167
Nov 22 04:17:57 compute-0 systemd[1]: libpod-4cd36f3b5a380bdd7d1c0b04984378f9924f367ec601bd8d8e1e79c897a3d873.scope: Deactivated successfully.
Nov 22 04:17:57 compute-0 podman[301729]: 2025-11-22 04:17:57.206154284 +0000 UTC m=+0.180991477 container died 4cd36f3b5a380bdd7d1c0b04984378f9924f367ec601bd8d8e1e79c897a3d873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 04:17:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8049c84ace530af6377c54f7a9fd6ff7e69620d0033d968e382dd4c121b01f1-merged.mount: Deactivated successfully.
Nov 22 04:17:57 compute-0 podman[301729]: 2025-11-22 04:17:57.252348782 +0000 UTC m=+0.227185975 container remove 4cd36f3b5a380bdd7d1c0b04984378f9924f367ec601bd8d8e1e79c897a3d873 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:17:57 compute-0 systemd[1]: libpod-conmon-4cd36f3b5a380bdd7d1c0b04984378f9924f367ec601bd8d8e1e79c897a3d873.scope: Deactivated successfully.
Nov 22 04:17:57 compute-0 podman[301770]: 2025-11-22 04:17:57.528900513 +0000 UTC m=+0.117010655 container create 463516e281eac8db76cee3f7bc475d84a2e686afc6580d42c59add65fb6f8f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 04:17:57 compute-0 podman[301770]: 2025-11-22 04:17:57.450780291 +0000 UTC m=+0.038890523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:17:57 compute-0 systemd[1]: Started libpod-conmon-463516e281eac8db76cee3f7bc475d84a2e686afc6580d42c59add65fb6f8f82.scope.
Nov 22 04:17:57 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e779c17dcb69be6695a59a8c768ebf3d2217fb31012c9c44c832e2bac45880/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e779c17dcb69be6695a59a8c768ebf3d2217fb31012c9c44c832e2bac45880/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e779c17dcb69be6695a59a8c768ebf3d2217fb31012c9c44c832e2bac45880/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46e779c17dcb69be6695a59a8c768ebf3d2217fb31012c9c44c832e2bac45880/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:17:57 compute-0 podman[301770]: 2025-11-22 04:17:57.737291931 +0000 UTC m=+0.325402103 container init 463516e281eac8db76cee3f7bc475d84a2e686afc6580d42c59add65fb6f8f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:17:57 compute-0 podman[301770]: 2025-11-22 04:17:57.748831923 +0000 UTC m=+0.336942065 container start 463516e281eac8db76cee3f7bc475d84a2e686afc6580d42c59add65fb6f8f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:17:57 compute-0 podman[301770]: 2025-11-22 04:17:57.75928016 +0000 UTC m=+0.347390342 container attach 463516e281eac8db76cee3f7bc475d84a2e686afc6580d42c59add65fb6f8f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 04:17:57 compute-0 sshd-session[301069]: Invalid user admin from 27.79.46.85 port 42452
Nov 22 04:17:58 compute-0 ceph-mon[75011]: pgmap v2112: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 22 04:17:58 compute-0 nova_compute[253461]: 2025-11-22 04:17:58.162 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Updating instance_info_cache with network_info: [{"id": "75151d72-858e-460b-bba4-4cc8501a764a", "address": "fa:16:3e:5d:5c:4c", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75151d72-85", "ovs_interfaceid": "75151d72-858e-460b-bba4-4cc8501a764a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:17:58 compute-0 nova_compute[253461]: 2025-11-22 04:17:58.184 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Releasing lock "refresh_cache-0f6fc19a-f734-4362-a5ff-785307d2b7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:17:58 compute-0 nova_compute[253461]: 2025-11-22 04:17:58.185 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:17:58 compute-0 nova_compute[253461]: 2025-11-22 04:17:58.185 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:17:58 compute-0 nova_compute[253461]: 2025-11-22 04:17:58.186 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:17:58 compute-0 nova_compute[253461]: 2025-11-22 04:17:58.186 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:17:58 compute-0 nova_compute[253461]: 2025-11-22 04:17:58.431 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:17:58 compute-0 nova_compute[253461]: 2025-11-22 04:17:58.432 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:17:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:17:58 compute-0 eloquent_bose[301787]: {
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:     "0": [
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:         {
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "devices": [
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "/dev/loop3"
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             ],
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_name": "ceph_lv0",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_size": "21470642176",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "name": "ceph_lv0",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "tags": {
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.cluster_name": "ceph",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.crush_device_class": "",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.encrypted": "0",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.osd_id": "0",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.type": "block",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.vdo": "0"
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             },
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "type": "block",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "vg_name": "ceph_vg0"
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:         }
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:     ],
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:     "1": [
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:         {
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "devices": [
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "/dev/loop4"
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             ],
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_name": "ceph_lv1",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_size": "21470642176",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "name": "ceph_lv1",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "tags": {
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.cluster_name": "ceph",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.crush_device_class": "",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.encrypted": "0",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.osd_id": "1",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.type": "block",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.vdo": "0"
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             },
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "type": "block",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "vg_name": "ceph_vg1"
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:         }
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:     ],
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:     "2": [
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:         {
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "devices": [
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "/dev/loop5"
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             ],
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_name": "ceph_lv2",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_size": "21470642176",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "name": "ceph_lv2",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "tags": {
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.cluster_name": "ceph",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.crush_device_class": "",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.encrypted": "0",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.osd_id": "2",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.type": "block",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:                 "ceph.vdo": "0"
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             },
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "type": "block",
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:             "vg_name": "ceph_vg2"
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:         }
Nov 22 04:17:58 compute-0 eloquent_bose[301787]:     ]
Nov 22 04:17:58 compute-0 eloquent_bose[301787]: }
Nov 22 04:17:58 compute-0 systemd[1]: libpod-463516e281eac8db76cee3f7bc475d84a2e686afc6580d42c59add65fb6f8f82.scope: Deactivated successfully.
Nov 22 04:17:58 compute-0 podman[301770]: 2025-11-22 04:17:58.568828233 +0000 UTC m=+1.156938405 container died 463516e281eac8db76cee3f7bc475d84a2e686afc6580d42c59add65fb6f8f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:17:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-46e779c17dcb69be6695a59a8c768ebf3d2217fb31012c9c44c832e2bac45880-merged.mount: Deactivated successfully.
Nov 22 04:17:58 compute-0 podman[301770]: 2025-11-22 04:17:58.659247307 +0000 UTC m=+1.247357479 container remove 463516e281eac8db76cee3f7bc475d84a2e686afc6580d42c59add65fb6f8f82 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:17:58 compute-0 systemd[1]: libpod-conmon-463516e281eac8db76cee3f7bc475d84a2e686afc6580d42c59add65fb6f8f82.scope: Deactivated successfully.
Nov 22 04:17:58 compute-0 sudo[301663]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:58 compute-0 sudo[301808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:17:58 compute-0 sudo[301808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:58 compute-0 sudo[301808]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 22 04:17:58 compute-0 nova_compute[253461]: 2025-11-22 04:17:58.823 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:58 compute-0 sudo[301833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:17:58 compute-0 sudo[301833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:58 compute-0 sudo[301833]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:58 compute-0 nova_compute[253461]: 2025-11-22 04:17:58.894 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:58 compute-0 sudo[301858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:17:58 compute-0 sudo[301858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:58 compute-0 sudo[301858]: pam_unix(sudo:session): session closed for user root
Nov 22 04:17:58 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:58.952 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:17:58 compute-0 nova_compute[253461]: 2025-11-22 04:17:58.952 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:17:58 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:17:58.954 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:17:58 compute-0 sudo[301883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:17:59 compute-0 sudo[301883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:17:59 compute-0 nova_compute[253461]: 2025-11-22 04:17:59.091 253465 DEBUG nova.compute.manager [req-19e58f08-ce44-4581-a3d5-31bba5ef4957 req-39a3a7c3-3353-4f8c-89e6-6d4ebd3a1a38 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Received event network-changed-75151d72-858e-460b-bba4-4cc8501a764a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:17:59 compute-0 nova_compute[253461]: 2025-11-22 04:17:59.092 253465 DEBUG nova.compute.manager [req-19e58f08-ce44-4581-a3d5-31bba5ef4957 req-39a3a7c3-3353-4f8c-89e6-6d4ebd3a1a38 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Refreshing instance network info cache due to event network-changed-75151d72-858e-460b-bba4-4cc8501a764a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:17:59 compute-0 nova_compute[253461]: 2025-11-22 04:17:59.093 253465 DEBUG oslo_concurrency.lockutils [req-19e58f08-ce44-4581-a3d5-31bba5ef4957 req-39a3a7c3-3353-4f8c-89e6-6d4ebd3a1a38 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-0f6fc19a-f734-4362-a5ff-785307d2b7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:17:59 compute-0 nova_compute[253461]: 2025-11-22 04:17:59.093 253465 DEBUG oslo_concurrency.lockutils [req-19e58f08-ce44-4581-a3d5-31bba5ef4957 req-39a3a7c3-3353-4f8c-89e6-6d4ebd3a1a38 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-0f6fc19a-f734-4362-a5ff-785307d2b7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:17:59 compute-0 nova_compute[253461]: 2025-11-22 04:17:59.094 253465 DEBUG nova.network.neutron [req-19e58f08-ce44-4581-a3d5-31bba5ef4957 req-39a3a7c3-3353-4f8c-89e6-6d4ebd3a1a38 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Refreshing network info cache for port 75151d72-858e-460b-bba4-4cc8501a764a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:17:59 compute-0 sshd-session[301069]: Connection closed by invalid user admin 27.79.46.85 port 42452 [preauth]
Nov 22 04:17:59 compute-0 podman[301948]: 2025-11-22 04:17:59.377218225 +0000 UTC m=+0.051560090 container create 397755b40287b84757c2f013573b3b2191ec3689c41c28e99a58e2edabf1636f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jemison, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:17:59 compute-0 systemd[1]: Started libpod-conmon-397755b40287b84757c2f013573b3b2191ec3689c41c28e99a58e2edabf1636f.scope.
Nov 22 04:17:59 compute-0 podman[301948]: 2025-11-22 04:17:59.354586346 +0000 UTC m=+0.028928201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:17:59 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:17:59 compute-0 podman[301948]: 2025-11-22 04:17:59.484555203 +0000 UTC m=+0.158897058 container init 397755b40287b84757c2f013573b3b2191ec3689c41c28e99a58e2edabf1636f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jemison, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:17:59 compute-0 podman[301948]: 2025-11-22 04:17:59.498125446 +0000 UTC m=+0.172467301 container start 397755b40287b84757c2f013573b3b2191ec3689c41c28e99a58e2edabf1636f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:17:59 compute-0 podman[301948]: 2025-11-22 04:17:59.503024554 +0000 UTC m=+0.177366419 container attach 397755b40287b84757c2f013573b3b2191ec3689c41c28e99a58e2edabf1636f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jemison, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:17:59 compute-0 gallant_jemison[301964]: 167 167
Nov 22 04:17:59 compute-0 systemd[1]: libpod-397755b40287b84757c2f013573b3b2191ec3689c41c28e99a58e2edabf1636f.scope: Deactivated successfully.
Nov 22 04:17:59 compute-0 podman[301948]: 2025-11-22 04:17:59.507262481 +0000 UTC m=+0.181604356 container died 397755b40287b84757c2f013573b3b2191ec3689c41c28e99a58e2edabf1636f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:17:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5c5ec6f1c88767a400ef17fb2d0dcdfdd2293435aeb4775b6b09bea9d1939d9-merged.mount: Deactivated successfully.
Nov 22 04:17:59 compute-0 podman[301948]: 2025-11-22 04:17:59.582089024 +0000 UTC m=+0.256430889 container remove 397755b40287b84757c2f013573b3b2191ec3689c41c28e99a58e2edabf1636f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 04:17:59 compute-0 systemd[1]: libpod-conmon-397755b40287b84757c2f013573b3b2191ec3689c41c28e99a58e2edabf1636f.scope: Deactivated successfully.
Nov 22 04:17:59 compute-0 podman[301990]: 2025-11-22 04:17:59.816932177 +0000 UTC m=+0.048368707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:18:00 compute-0 podman[301990]: 2025-11-22 04:18:00.036734884 +0000 UTC m=+0.268171374 container create 7d89c59f5e9347eb2c3df5115d5c483be99404e867a560a5bb264fb1c7ed2d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:18:00 compute-0 ceph-mon[75011]: pgmap v2113: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 22 04:18:00 compute-0 systemd[1]: Started libpod-conmon-7d89c59f5e9347eb2c3df5115d5c483be99404e867a560a5bb264fb1c7ed2d35.scope.
Nov 22 04:18:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:18:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2985821838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:18:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:18:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2985821838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:18:00 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e985c6a3e4192b1d14a15e374e22c8872e7442a6caf41c504470e8b7a0eb1507/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e985c6a3e4192b1d14a15e374e22c8872e7442a6caf41c504470e8b7a0eb1507/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e985c6a3e4192b1d14a15e374e22c8872e7442a6caf41c504470e8b7a0eb1507/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e985c6a3e4192b1d14a15e374e22c8872e7442a6caf41c504470e8b7a0eb1507/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:00 compute-0 podman[301990]: 2025-11-22 04:18:00.483123652 +0000 UTC m=+0.714560171 container init 7d89c59f5e9347eb2c3df5115d5c483be99404e867a560a5bb264fb1c7ed2d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 04:18:00 compute-0 podman[301990]: 2025-11-22 04:18:00.493778197 +0000 UTC m=+0.725214717 container start 7d89c59f5e9347eb2c3df5115d5c483be99404e867a560a5bb264fb1c7ed2d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:18:00 compute-0 podman[301990]: 2025-11-22 04:18:00.604639937 +0000 UTC m=+0.836076417 container attach 7d89c59f5e9347eb2c3df5115d5c483be99404e867a560a5bb264fb1c7ed2d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 04:18:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:18:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2985821838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:18:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2985821838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:18:01 compute-0 nova_compute[253461]: 2025-11-22 04:18:01.321 253465 DEBUG nova.network.neutron [req-19e58f08-ce44-4581-a3d5-31bba5ef4957 req-39a3a7c3-3353-4f8c-89e6-6d4ebd3a1a38 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Updated VIF entry in instance network info cache for port 75151d72-858e-460b-bba4-4cc8501a764a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:18:01 compute-0 nova_compute[253461]: 2025-11-22 04:18:01.323 253465 DEBUG nova.network.neutron [req-19e58f08-ce44-4581-a3d5-31bba5ef4957 req-39a3a7c3-3353-4f8c-89e6-6d4ebd3a1a38 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Updating instance_info_cache with network_info: [{"id": "75151d72-858e-460b-bba4-4cc8501a764a", "address": "fa:16:3e:5d:5c:4c", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75151d72-85", "ovs_interfaceid": "75151d72-858e-460b-bba4-4cc8501a764a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:18:01 compute-0 nova_compute[253461]: 2025-11-22 04:18:01.340 253465 DEBUG oslo_concurrency.lockutils [req-19e58f08-ce44-4581-a3d5-31bba5ef4957 req-39a3a7c3-3353-4f8c-89e6-6d4ebd3a1a38 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-0f6fc19a-f734-4362-a5ff-785307d2b7b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:18:01 compute-0 sad_chatelet[302006]: {
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "osd_id": 1,
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "type": "bluestore"
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:     },
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "osd_id": 0,
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "type": "bluestore"
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:     },
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "osd_id": 2,
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:         "type": "bluestore"
Nov 22 04:18:01 compute-0 sad_chatelet[302006]:     }
Nov 22 04:18:01 compute-0 sad_chatelet[302006]: }
Nov 22 04:18:01 compute-0 systemd[1]: libpod-7d89c59f5e9347eb2c3df5115d5c483be99404e867a560a5bb264fb1c7ed2d35.scope: Deactivated successfully.
Nov 22 04:18:01 compute-0 podman[301990]: 2025-11-22 04:18:01.539963961 +0000 UTC m=+1.771400441 container died 7d89c59f5e9347eb2c3df5115d5c483be99404e867a560a5bb264fb1c7ed2d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:18:01 compute-0 systemd[1]: libpod-7d89c59f5e9347eb2c3df5115d5c483be99404e867a560a5bb264fb1c7ed2d35.scope: Consumed 1.056s CPU time.
Nov 22 04:18:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-e985c6a3e4192b1d14a15e374e22c8872e7442a6caf41c504470e8b7a0eb1507-merged.mount: Deactivated successfully.
Nov 22 04:18:01 compute-0 podman[301990]: 2025-11-22 04:18:01.61872679 +0000 UTC m=+1.850163280 container remove 7d89c59f5e9347eb2c3df5115d5c483be99404e867a560a5bb264fb1c7ed2d35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chatelet, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:18:01 compute-0 systemd[1]: libpod-conmon-7d89c59f5e9347eb2c3df5115d5c483be99404e867a560a5bb264fb1c7ed2d35.scope: Deactivated successfully.
Nov 22 04:18:01 compute-0 sudo[301883]: pam_unix(sudo:session): session closed for user root
Nov 22 04:18:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:18:01 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:18:01 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:18:01 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:18:01 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 171d96ba-726a-4144-ba94-72e649fcd0e7 does not exist
Nov 22 04:18:01 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 9ed5f961-8004-4dbe-94b1-d52e9a9b6ae9 does not exist
Nov 22 04:18:01 compute-0 sudo[302052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:18:01 compute-0 sudo[302052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:18:01 compute-0 sudo[302052]: pam_unix(sudo:session): session closed for user root
Nov 22 04:18:01 compute-0 sudo[302077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:18:01 compute-0 sudo[302077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:18:01 compute-0 sudo[302077]: pam_unix(sudo:session): session closed for user root
Nov 22 04:18:02 compute-0 ceph-mon[75011]: pgmap v2114: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:18:02 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:18:02 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:18:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Nov 22 04:18:03 compute-0 podman[302102]: 2025-11-22 04:18:03.442529659 +0000 UTC m=+0.106387466 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 22 04:18:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:03 compute-0 ceph-mon[75011]: pgmap v2115: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Nov 22 04:18:03 compute-0 nova_compute[253461]: 2025-11-22 04:18:03.828 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:03 compute-0 nova_compute[253461]: 2025-11-22 04:18:03.897 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:04 compute-0 nova_compute[253461]: 2025-11-22 04:18:04.431 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:18:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Nov 22 04:18:04 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 22 04:18:05 compute-0 ceph-mon[75011]: pgmap v2116: 305 pgs: 305 active+clean; 202 MiB data, 565 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 70 op/s
Nov 22 04:18:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:18:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:18:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:18:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:18:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:18:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:18:06 compute-0 ovn_controller[152691]: 2025-11-22T04:18:06Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:5c:4c 10.100.0.10
Nov 22 04:18:06 compute-0 ovn_controller[152691]: 2025-11-22T04:18:06Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:5c:4c 10.100.0.10
Nov 22 04:18:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 215 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 76 op/s
Nov 22 04:18:07 compute-0 ceph-mon[75011]: pgmap v2117: 305 pgs: 305 active+clean; 215 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.9 MiB/s wr, 76 op/s
Nov 22 04:18:07 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:07.956 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:18:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 215 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 1.9 MiB/s wr, 35 op/s
Nov 22 04:18:08 compute-0 nova_compute[253461]: 2025-11-22 04:18:08.832 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:08 compute-0 nova_compute[253461]: 2025-11-22 04:18:08.899 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:10 compute-0 ceph-mon[75011]: pgmap v2118: 305 pgs: 305 active+clean; 215 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 1.9 MiB/s wr, 35 op/s
Nov 22 04:18:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 247 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 565 KiB/s rd, 4.3 MiB/s wr, 62 op/s
Nov 22 04:18:12 compute-0 ceph-mon[75011]: pgmap v2119: 305 pgs: 305 active+clean; 247 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 565 KiB/s rd, 4.3 MiB/s wr, 62 op/s
Nov 22 04:18:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 270 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 542 KiB/s rd, 5.8 MiB/s wr, 75 op/s
Nov 22 04:18:12 compute-0 sshd-session[300867]: Invalid user admin from 27.79.46.85 port 42488
Nov 22 04:18:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:13 compute-0 nova_compute[253461]: 2025-11-22 04:18:13.867 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:13 compute-0 nova_compute[253461]: 2025-11-22 04:18:13.901 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:14 compute-0 ceph-mon[75011]: pgmap v2120: 305 pgs: 305 active+clean; 270 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 542 KiB/s rd, 5.8 MiB/s wr, 75 op/s
Nov 22 04:18:14 compute-0 sshd-session[300867]: Connection closed by invalid user admin 27.79.46.85 port 42488 [preauth]
Nov 22 04:18:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 270 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 542 KiB/s rd, 5.8 MiB/s wr, 75 op/s
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.128 253465 DEBUG oslo_concurrency.lockutils [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.128 253465 DEBUG oslo_concurrency.lockutils [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.129 253465 DEBUG oslo_concurrency.lockutils [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.130 253465 DEBUG oslo_concurrency.lockutils [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.130 253465 DEBUG oslo_concurrency.lockutils [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.132 253465 INFO nova.compute.manager [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Terminating instance
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.135 253465 DEBUG nova.compute.manager [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:18:15 compute-0 kernel: tap75151d72-85 (unregistering): left promiscuous mode
Nov 22 04:18:15 compute-0 NetworkManager[48916]: <info>  [1763785095.2578] device (tap75151d72-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:18:15 compute-0 ovn_controller[152691]: 2025-11-22T04:18:15Z|00275|binding|INFO|Releasing lport 75151d72-858e-460b-bba4-4cc8501a764a from this chassis (sb_readonly=0)
Nov 22 04:18:15 compute-0 ovn_controller[152691]: 2025-11-22T04:18:15Z|00276|binding|INFO|Setting lport 75151d72-858e-460b-bba4-4cc8501a764a down in Southbound
Nov 22 04:18:15 compute-0 ovn_controller[152691]: 2025-11-22T04:18:15Z|00277|binding|INFO|Removing iface tap75151d72-85 ovn-installed in OVS
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.266 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.277 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:5c:4c 10.100.0.10'], port_security=['fa:16:3e:5d:5c:4c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '0f6fc19a-f734-4362-a5ff-785307d2b7b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-73bcc005-88ac-46b6-ad11-6207c6046246', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98e4451f91104cd88f6e19dd5c53fd00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0c3cc5e7-a78d-415a-aaf6-2d09ae975fc0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8139379-e220-4788-92e4-b495f0c34eb7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=75151d72-858e-460b-bba4-4cc8501a764a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.280 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 75151d72-858e-460b-bba4-4cc8501a764a in datapath 73bcc005-88ac-46b6-ad11-6207c6046246 unbound from our chassis
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.283 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 73bcc005-88ac-46b6-ad11-6207c6046246, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.284 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1c9a6e9d-27cb-4c7f-92ce-77615f17614a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.285 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 namespace which is not needed anymore
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.302 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:15 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Nov 22 04:18:15 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Consumed 16.424s CPU time.
Nov 22 04:18:15 compute-0 systemd-machined[215728]: Machine qemu-26-instance-0000001a terminated.
Nov 22 04:18:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[301150]: [NOTICE]   (301154) : haproxy version is 2.8.14-c23fe91
Nov 22 04:18:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[301150]: [NOTICE]   (301154) : path to executable is /usr/sbin/haproxy
Nov 22 04:18:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[301150]: [WARNING]  (301154) : Exiting Master process...
Nov 22 04:18:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[301150]: [WARNING]  (301154) : Exiting Master process...
Nov 22 04:18:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[301150]: [ALERT]    (301154) : Current worker (301156) exited with code 143 (Terminated)
Nov 22 04:18:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[301150]: [WARNING]  (301154) : All workers exited. Exiting... (0)
Nov 22 04:18:15 compute-0 systemd[1]: libpod-65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428.scope: Deactivated successfully.
Nov 22 04:18:15 compute-0 podman[302147]: 2025-11-22 04:18:15.453642178 +0000 UTC m=+0.054903096 container died 65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428-userdata-shm.mount: Deactivated successfully.
Nov 22 04:18:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-369d6f24fdd9cb8ee42d596a4a5360abf8ee077a0975a5a1eceeed09ffcbb946-merged.mount: Deactivated successfully.
Nov 22 04:18:15 compute-0 podman[302147]: 2025-11-22 04:18:15.538988755 +0000 UTC m=+0.140249673 container cleanup 65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118)
Nov 22 04:18:15 compute-0 systemd[1]: libpod-conmon-65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428.scope: Deactivated successfully.
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.576 253465 INFO nova.virt.libvirt.driver [-] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Instance destroyed successfully.
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.577 253465 DEBUG nova.objects.instance [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lazy-loading 'resources' on Instance uuid 0f6fc19a-f734-4362-a5ff-785307d2b7b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.593 253465 DEBUG nova.virt.libvirt.vif [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:17:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-104981922',display_name='tempest-TransferEncryptedVolumeTest-server-104981922',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-104981922',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNg2ATKJWVlmyGreKYnDCSQO/lCz0VA3LfT+2g0XAIL/EfA89Lu4gjHntRaTvYv3ssQtoWjE9SDx5lQG0mvCId2hvStMomFINnpiLPFYacZktgyZ/1N3JNIqwNfMqE81xQ==',key_name='tempest-TransferEncryptedVolumeTest-1718953926',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:17:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98e4451f91104cd88f6e19dd5c53fd00',ramdisk_id='',reservation_id='r-5fgt290f',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1500496447',owner_user_name='tempest-TransferEncryptedVolumeTest-1500496447-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:17:53Z,user_data=None,user_id='ddff25657c74403e9ed9e91ff227badd',uuid=0f6fc19a-f734-4362-a5ff-785307d2b7b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75151d72-858e-460b-bba4-4cc8501a764a", "address": "fa:16:3e:5d:5c:4c", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75151d72-85", "ovs_interfaceid": "75151d72-858e-460b-bba4-4cc8501a764a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.594 253465 DEBUG nova.network.os_vif_util [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converting VIF {"id": "75151d72-858e-460b-bba4-4cc8501a764a", "address": "fa:16:3e:5d:5c:4c", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75151d72-85", "ovs_interfaceid": "75151d72-858e-460b-bba4-4cc8501a764a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.595 253465 DEBUG nova.network.os_vif_util [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:5c:4c,bridge_name='br-int',has_traffic_filtering=True,id=75151d72-858e-460b-bba4-4cc8501a764a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75151d72-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.595 253465 DEBUG os_vif [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:5c:4c,bridge_name='br-int',has_traffic_filtering=True,id=75151d72-858e-460b-bba4-4cc8501a764a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75151d72-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.597 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.597 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75151d72-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.601 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.604 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.607 253465 INFO os_vif [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:5c:4c,bridge_name='br-int',has_traffic_filtering=True,id=75151d72-858e-460b-bba4-4cc8501a764a,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75151d72-85')
Nov 22 04:18:15 compute-0 podman[302179]: 2025-11-22 04:18:15.621514631 +0000 UTC m=+0.055547588 container remove 65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.627 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c62a3069-6af2-4c52-9e06-2b9ff247bc2a]: (4, ('Sat Nov 22 04:18:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 (65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428)\n65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428\nSat Nov 22 04:18:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 (65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428)\n65a785aa8b0c0391312149724c6cfdd392796a4129fb63861afd3ef4bae2f428\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.629 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f152a767-f55b-43b6-8953-52ea9f6c8bf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.630 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73bcc005-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.632 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:15 compute-0 kernel: tap73bcc005-80: left promiscuous mode
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.644 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.647 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9e6f2636-8cbb-4f82-a5b5-7f4c99ae2802]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.664 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[68ff5ef8-0c87-4fba-98f4-aaf784bc3627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.666 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[42e8b3f1-e0b3-4f34-9c32-cdf6e9956bf4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.682 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[78714fcd-b5fd-42dc-9c4f-344a491da573]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 532249, 'reachable_time': 35406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302219, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d73bcc005\x2d88ac\x2d46b6\x2dad11\x2d6207c6046246.mount: Deactivated successfully.
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.685 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:18:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:15.685 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[d34456d1-9758-4dae-b0fb-77c60fb9e382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.823 253465 INFO nova.virt.libvirt.driver [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Deleting instance files /var/lib/nova/instances/0f6fc19a-f734-4362-a5ff-785307d2b7b8_del
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.823 253465 INFO nova.virt.libvirt.driver [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Deletion of /var/lib/nova/instances/0f6fc19a-f734-4362-a5ff-785307d2b7b8_del complete
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.877 253465 INFO nova.compute.manager [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Took 0.74 seconds to destroy the instance on the hypervisor.
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.878 253465 DEBUG oslo.service.loopingcall [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.878 253465 DEBUG nova.compute.manager [-] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:18:15 compute-0 nova_compute[253461]: 2025-11-22 04:18:15.879 253465 DEBUG nova.network.neutron [-] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:18:16 compute-0 nova_compute[253461]: 2025-11-22 04:18:16.052 253465 DEBUG nova.compute.manager [req-d17d7d56-1e4a-43ab-88e9-eea1dd96b972 req-b45c0ff2-6adc-48f1-820e-acef9b4fce56 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Received event network-vif-unplugged-75151d72-858e-460b-bba4-4cc8501a764a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:18:16 compute-0 nova_compute[253461]: 2025-11-22 04:18:16.053 253465 DEBUG oslo_concurrency.lockutils [req-d17d7d56-1e4a-43ab-88e9-eea1dd96b972 req-b45c0ff2-6adc-48f1-820e-acef9b4fce56 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:16 compute-0 nova_compute[253461]: 2025-11-22 04:18:16.053 253465 DEBUG oslo_concurrency.lockutils [req-d17d7d56-1e4a-43ab-88e9-eea1dd96b972 req-b45c0ff2-6adc-48f1-820e-acef9b4fce56 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:16 compute-0 nova_compute[253461]: 2025-11-22 04:18:16.054 253465 DEBUG oslo_concurrency.lockutils [req-d17d7d56-1e4a-43ab-88e9-eea1dd96b972 req-b45c0ff2-6adc-48f1-820e-acef9b4fce56 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:16 compute-0 nova_compute[253461]: 2025-11-22 04:18:16.054 253465 DEBUG nova.compute.manager [req-d17d7d56-1e4a-43ab-88e9-eea1dd96b972 req-b45c0ff2-6adc-48f1-820e-acef9b4fce56 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] No waiting events found dispatching network-vif-unplugged-75151d72-858e-460b-bba4-4cc8501a764a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:18:16 compute-0 nova_compute[253461]: 2025-11-22 04:18:16.054 253465 DEBUG nova.compute.manager [req-d17d7d56-1e4a-43ab-88e9-eea1dd96b972 req-b45c0ff2-6adc-48f1-820e-acef9b4fce56 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Received event network-vif-unplugged-75151d72-858e-460b-bba4-4cc8501a764a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:18:16 compute-0 ceph-mon[75011]: pgmap v2121: 305 pgs: 305 active+clean; 270 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 542 KiB/s rd, 5.8 MiB/s wr, 75 op/s
Nov 22 04:18:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 270 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 5.8 MiB/s wr, 78 op/s
Nov 22 04:18:17 compute-0 nova_compute[253461]: 2025-11-22 04:18:17.117 253465 DEBUG nova.network.neutron [-] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:18:17 compute-0 nova_compute[253461]: 2025-11-22 04:18:17.141 253465 INFO nova.compute.manager [-] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Took 1.26 seconds to deallocate network for instance.
Nov 22 04:18:17 compute-0 nova_compute[253461]: 2025-11-22 04:18:17.196 253465 DEBUG nova.compute.manager [req-ef61bd2c-ed06-4baf-a1ee-3b7c1d99a988 req-cce008f4-50ba-4f24-be2c-0821c4ccf53c f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Received event network-vif-deleted-75151d72-858e-460b-bba4-4cc8501a764a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:18:17 compute-0 nova_compute[253461]: 2025-11-22 04:18:17.363 253465 INFO nova.compute.manager [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Took 0.22 seconds to detach 1 volumes for instance.
Nov 22 04:18:17 compute-0 nova_compute[253461]: 2025-11-22 04:18:17.411 253465 DEBUG oslo_concurrency.lockutils [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:17 compute-0 nova_compute[253461]: 2025-11-22 04:18:17.412 253465 DEBUG oslo_concurrency.lockutils [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:17 compute-0 nova_compute[253461]: 2025-11-22 04:18:17.470 253465 DEBUG oslo_concurrency.processutils [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:17 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1002987585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:17 compute-0 nova_compute[253461]: 2025-11-22 04:18:17.923 253465 DEBUG oslo_concurrency.processutils [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:17 compute-0 nova_compute[253461]: 2025-11-22 04:18:17.929 253465 DEBUG nova.compute.provider_tree [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:18:17 compute-0 nova_compute[253461]: 2025-11-22 04:18:17.954 253465 DEBUG nova.scheduler.client.report [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:18:17 compute-0 nova_compute[253461]: 2025-11-22 04:18:17.994 253465 DEBUG oslo_concurrency.lockutils [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:18 compute-0 nova_compute[253461]: 2025-11-22 04:18:18.072 253465 INFO nova.scheduler.client.report [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Deleted allocations for instance 0f6fc19a-f734-4362-a5ff-785307d2b7b8
Nov 22 04:18:18 compute-0 nova_compute[253461]: 2025-11-22 04:18:18.157 253465 DEBUG nova.compute.manager [req-543c8aa5-90c0-4ed0-8268-cd15fc2a1a36 req-65233032-a397-4096-b14f-03b84cc5ff4d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Received event network-vif-plugged-75151d72-858e-460b-bba4-4cc8501a764a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:18:18 compute-0 nova_compute[253461]: 2025-11-22 04:18:18.158 253465 DEBUG oslo_concurrency.lockutils [req-543c8aa5-90c0-4ed0-8268-cd15fc2a1a36 req-65233032-a397-4096-b14f-03b84cc5ff4d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:18 compute-0 nova_compute[253461]: 2025-11-22 04:18:18.158 253465 DEBUG oslo_concurrency.lockutils [req-543c8aa5-90c0-4ed0-8268-cd15fc2a1a36 req-65233032-a397-4096-b14f-03b84cc5ff4d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:18 compute-0 nova_compute[253461]: 2025-11-22 04:18:18.158 253465 DEBUG oslo_concurrency.lockutils [req-543c8aa5-90c0-4ed0-8268-cd15fc2a1a36 req-65233032-a397-4096-b14f-03b84cc5ff4d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:18 compute-0 nova_compute[253461]: 2025-11-22 04:18:18.158 253465 DEBUG nova.compute.manager [req-543c8aa5-90c0-4ed0-8268-cd15fc2a1a36 req-65233032-a397-4096-b14f-03b84cc5ff4d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] No waiting events found dispatching network-vif-plugged-75151d72-858e-460b-bba4-4cc8501a764a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:18:18 compute-0 nova_compute[253461]: 2025-11-22 04:18:18.159 253465 WARNING nova.compute.manager [req-543c8aa5-90c0-4ed0-8268-cd15fc2a1a36 req-65233032-a397-4096-b14f-03b84cc5ff4d f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Received unexpected event network-vif-plugged-75151d72-858e-460b-bba4-4cc8501a764a for instance with vm_state deleted and task_state None.
Nov 22 04:18:18 compute-0 nova_compute[253461]: 2025-11-22 04:18:18.204 253465 DEBUG oslo_concurrency.lockutils [None req-cc6138e2-d413-43c2-b522-399fd5b4284b ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "0f6fc19a-f734-4362-a5ff-785307d2b7b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:18 compute-0 podman[302243]: 2025-11-22 04:18:18.404938785 +0000 UTC m=+0.075206168 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 04:18:18 compute-0 podman[302244]: 2025-11-22 04:18:18.440800634 +0000 UTC m=+0.107805022 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:18:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:18 compute-0 ceph-mon[75011]: pgmap v2122: 305 pgs: 305 active+clean; 270 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 5.8 MiB/s wr, 78 op/s
Nov 22 04:18:18 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1002987585' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 270 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 3.9 MiB/s wr, 44 op/s
Nov 22 04:18:18 compute-0 nova_compute[253461]: 2025-11-22 04:18:18.903 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:20 compute-0 ceph-mon[75011]: pgmap v2123: 305 pgs: 305 active+clean; 270 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 3.9 MiB/s wr, 44 op/s
Nov 22 04:18:20 compute-0 nova_compute[253461]: 2025-11-22 04:18:20.601 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 3.9 MiB/s wr, 56 op/s
Nov 22 04:18:22 compute-0 ceph-mon[75011]: pgmap v2124: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 3.9 MiB/s wr, 56 op/s
Nov 22 04:18:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Nov 22 04:18:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:23.034 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:23.035 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:23.035 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:23 compute-0 nova_compute[253461]: 2025-11-22 04:18:23.906 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:23 compute-0 ceph-mon[75011]: pgmap v2125: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.5 MiB/s wr, 29 op/s
Nov 22 04:18:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 14 KiB/s wr, 15 op/s
Nov 22 04:18:25 compute-0 nova_compute[253461]: 2025-11-22 04:18:25.603 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:26 compute-0 ceph-mon[75011]: pgmap v2126: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 14 KiB/s wr, 15 op/s
Nov 22 04:18:26 compute-0 nova_compute[253461]: 2025-11-22 04:18:26.275 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "6530527f-0897-45c4-83e0-8ec0e35c375d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:26 compute-0 nova_compute[253461]: 2025-11-22 04:18:26.275 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:26 compute-0 nova_compute[253461]: 2025-11-22 04:18:26.350 253465 DEBUG nova.compute.manager [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:18:26 compute-0 nova_compute[253461]: 2025-11-22 04:18:26.677 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:26 compute-0 nova_compute[253461]: 2025-11-22 04:18:26.678 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:26 compute-0 nova_compute[253461]: 2025-11-22 04:18:26.687 253465 DEBUG nova.virt.hardware [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:18:26 compute-0 nova_compute[253461]: 2025-11-22 04:18:26.687 253465 INFO nova.compute.claims [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:18:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 14 KiB/s wr, 15 op/s
Nov 22 04:18:26 compute-0 nova_compute[253461]: 2025-11-22 04:18:26.800 253465 DEBUG oslo_concurrency.processutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2065913029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.227 253465 DEBUG oslo_concurrency.processutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.234 253465 DEBUG nova.compute.provider_tree [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:18:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2065913029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.342 253465 DEBUG nova.scheduler.client.report [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.368 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.369 253465 DEBUG nova.compute.manager [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.429 253465 DEBUG nova.compute.manager [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.430 253465 DEBUG nova.network.neutron [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.457 253465 INFO nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.497 253465 DEBUG nova.compute.manager [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.606 253465 INFO nova.virt.block_device [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Booting with volume 32a4109a-1490-40cb-8838-25e3b6fd4d19 at /dev/vda
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.760 253465 DEBUG os_brick.utils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.761 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.772 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.773 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a5cea4-dc87-4e96-8abe-2a478ac0de3f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.774 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.782 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.783 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[bb86273c-6fe5-4413-bec7-4b810aa8dadf]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.785 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.795 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.795 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[94d2d615-48eb-4a83-9f86-6970b0132ff2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.797 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[1a8e4f1d-f181-4568-93f5-6127f2dcdb23]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.797 253465 DEBUG oslo_concurrency.processutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.830 253465 DEBUG oslo_concurrency.processutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.832 253465 DEBUG os_brick.initiator.connectors.lightos [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.833 253465 DEBUG os_brick.initiator.connectors.lightos [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.833 253465 DEBUG os_brick.initiator.connectors.lightos [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.833 253465 DEBUG os_brick.utils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] <== get_connector_properties: return (73ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.834 253465 DEBUG nova.virt.block_device [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Updating existing volume attachment record: d2802aea-8932-4c73-948f-82f46abc732b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:18:27 compute-0 nova_compute[253461]: 2025-11-22 04:18:27.888 253465 DEBUG nova.policy [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ddff25657c74403e9ed9e91ff227badd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '98e4451f91104cd88f6e19dd5c53fd00', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:18:28 compute-0 ceph-mon[75011]: pgmap v2127: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 14 KiB/s wr, 15 op/s
Nov 22 04:18:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3861346614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:28 compute-0 nova_compute[253461]: 2025-11-22 04:18:28.662 253465 DEBUG nova.network.neutron [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Successfully created port: fb575dd5-2b58-4784-8566-bafe4380eb14 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:18:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 12 op/s
Nov 22 04:18:28 compute-0 nova_compute[253461]: 2025-11-22 04:18:28.876 253465 DEBUG nova.compute.manager [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:18:28 compute-0 nova_compute[253461]: 2025-11-22 04:18:28.878 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:18:28 compute-0 nova_compute[253461]: 2025-11-22 04:18:28.878 253465 INFO nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Creating image(s)
Nov 22 04:18:28 compute-0 nova_compute[253461]: 2025-11-22 04:18:28.879 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:18:28 compute-0 nova_compute[253461]: 2025-11-22 04:18:28.880 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Ensure instance console log exists: /var/lib/nova/instances/6530527f-0897-45c4-83e0-8ec0e35c375d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:18:28 compute-0 nova_compute[253461]: 2025-11-22 04:18:28.880 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:28 compute-0 nova_compute[253461]: 2025-11-22 04:18:28.881 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:28 compute-0 nova_compute[253461]: 2025-11-22 04:18:28.881 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:28 compute-0 nova_compute[253461]: 2025-11-22 04:18:28.907 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3861346614' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:30 compute-0 nova_compute[253461]: 2025-11-22 04:18:30.039 253465 DEBUG nova.network.neutron [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Successfully updated port: fb575dd5-2b58-4784-8566-bafe4380eb14 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:18:30 compute-0 nova_compute[253461]: 2025-11-22 04:18:30.055 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "refresh_cache-6530527f-0897-45c4-83e0-8ec0e35c375d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:18:30 compute-0 nova_compute[253461]: 2025-11-22 04:18:30.055 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquired lock "refresh_cache-6530527f-0897-45c4-83e0-8ec0e35c375d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:18:30 compute-0 nova_compute[253461]: 2025-11-22 04:18:30.056 253465 DEBUG nova.network.neutron [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:18:30 compute-0 nova_compute[253461]: 2025-11-22 04:18:30.152 253465 DEBUG nova.compute.manager [req-c861b15a-b6dd-4e9b-880f-483d946a08e9 req-310aba4b-e011-44c9-9a8a-f9dd927b9206 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Received event network-changed-fb575dd5-2b58-4784-8566-bafe4380eb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:18:30 compute-0 nova_compute[253461]: 2025-11-22 04:18:30.152 253465 DEBUG nova.compute.manager [req-c861b15a-b6dd-4e9b-880f-483d946a08e9 req-310aba4b-e011-44c9-9a8a-f9dd927b9206 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Refreshing instance network info cache due to event network-changed-fb575dd5-2b58-4784-8566-bafe4380eb14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:18:30 compute-0 nova_compute[253461]: 2025-11-22 04:18:30.153 253465 DEBUG oslo_concurrency.lockutils [req-c861b15a-b6dd-4e9b-880f-483d946a08e9 req-310aba4b-e011-44c9-9a8a-f9dd927b9206 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-6530527f-0897-45c4-83e0-8ec0e35c375d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:18:30 compute-0 ceph-mon[75011]: pgmap v2128: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 12 op/s
Nov 22 04:18:30 compute-0 nova_compute[253461]: 2025-11-22 04:18:30.576 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763785095.574195, 0f6fc19a-f734-4362-a5ff-785307d2b7b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:18:30 compute-0 nova_compute[253461]: 2025-11-22 04:18:30.576 253465 INFO nova.compute.manager [-] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] VM Stopped (Lifecycle Event)
Nov 22 04:18:30 compute-0 nova_compute[253461]: 2025-11-22 04:18:30.594 253465 DEBUG nova.compute.manager [None req-e6bb3601-0ecb-4ec0-bb5b-6aaf6ae36816 - - - - - -] [instance: 0f6fc19a-f734-4362-a5ff-785307d2b7b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:18:30 compute-0 nova_compute[253461]: 2025-11-22 04:18:30.606 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 12 op/s
Nov 22 04:18:30 compute-0 nova_compute[253461]: 2025-11-22 04:18:30.849 253465 DEBUG nova.network.neutron [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:18:32 compute-0 ceph-mon[75011]: pgmap v2129: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 12 op/s
Nov 22 04:18:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.010 253465 DEBUG nova.network.neutron [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Updating instance_info_cache with network_info: [{"id": "fb575dd5-2b58-4784-8566-bafe4380eb14", "address": "fa:16:3e:a0:25:74", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb575dd5-2b", "ovs_interfaceid": "fb575dd5-2b58-4784-8566-bafe4380eb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
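
The instance_info_cache payload in the record above is ordinary JSON once the log framing is stripped. A stdlib-only sketch that pulls out the fields the libvirt driver uses later in this log (tap device, MAC, fixed IP, MTU); the literal below is an abbreviated copy of the logged entry, not the full payload:

    import json

    # Abbreviated from the update_instance_cache_with_nw_info record above.
    network_info = json.loads("""[{
        "id": "fb575dd5-2b58-4784-8566-bafe4380eb14",
        "address": "fa:16:3e:a0:25:74",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.12"}]}],
                    "meta": {"mtu": 1442}},
        "devname": "tapfb575dd5-2b"
    }]""")

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["devname"], vif["address"], ips,
              "mtu", vif["network"]["meta"]["mtu"])
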
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.212 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Releasing lock "refresh_cache-6530527f-0897-45c4-83e0-8ec0e35c375d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.213 253465 DEBUG nova.compute.manager [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Instance network_info: |[{"id": "fb575dd5-2b58-4784-8566-bafe4380eb14", "address": "fa:16:3e:a0:25:74", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb575dd5-2b", "ovs_interfaceid": "fb575dd5-2b58-4784-8566-bafe4380eb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.214 253465 DEBUG oslo_concurrency.lockutils [req-c861b15a-b6dd-4e9b-880f-483d946a08e9 req-310aba4b-e011-44c9-9a8a-f9dd927b9206 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-6530527f-0897-45c4-83e0-8ec0e35c375d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.214 253465 DEBUG nova.network.neutron [req-c861b15a-b6dd-4e9b-880f-483d946a08e9 req-310aba4b-e011-44c9-9a8a-f9dd927b9206 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Refreshing network info cache for port fb575dd5-2b58-4784-8566-bafe4380eb14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.218 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Start _get_guest_xml network_info=[{"id": "fb575dd5-2b58-4784-8566-bafe4380eb14", "address": "fa:16:3e:a0:25:74", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb575dd5-2b", "ovs_interfaceid": "fb575dd5-2b58-4784-8566-bafe4380eb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': 'd2802aea-8932-4c73-948f-82f46abc732b', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-32a4109a-1490-40cb-8838-25e3b6fd4d19', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '32a4109a-1490-40cb-8838-25e3b6fd4d19', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '6530527f-0897-45c4-83e0-8ec0e35c375d', 'attached_at': '', 'detached_at': '', 'volume_id': '32a4109a-1490-40cb-8838-25e3b6fd4d19', 'serial': '32a4109a-1490-40cb-8838-25e3b6fd4d19'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.223 253465 WARNING nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.235 253465 DEBUG nova.virt.libvirt.host [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.236 253465 DEBUG nova.virt.libvirt.host [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.238 253465 DEBUG nova.virt.libvirt.host [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.239 253465 DEBUG nova.virt.libvirt.host [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
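
The four host.py records above probe for a CPU controller under cgroups v1, then v2; on this host only the v2 (unified) hierarchy has one. On a unified hierarchy the available controllers are listed in /sys/fs/cgroup/cgroup.controllers, so the v2 probe reduces to roughly this sketch (a simplification, not Nova's exact code):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        """True if the unified hierarchy exposes the 'cpu' controller."""
        controllers = Path(root, "cgroup.controllers")
        if not controllers.exists():
            return False  # not a cgroups-v2 (unified) mount
        return "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())
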
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.239 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.240 253465 DEBUG nova.virt.hardware [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.240 253465 DEBUG nova.virt.hardware [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.241 253465 DEBUG nova.virt.hardware [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.241 253465 DEBUG nova.virt.hardware [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.241 253465 DEBUG nova.virt.hardware [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.242 253465 DEBUG nova.virt.hardware [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.242 253465 DEBUG nova.virt.hardware [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.242 253465 DEBUG nova.virt.hardware [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.242 253465 DEBUG nova.virt.hardware [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.243 253465 DEBUG nova.virt.hardware [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.243 253465 DEBUG nova.virt.hardware [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
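
The hardware.py records above walk from unset flavor and image limits (logged as 0:0:0) to the single viable topology for one vCPU. Conceptually the step enumerates sockets*cores*threads factorizations of the vCPU count under the 65536 caps; a simplified sketch of that enumeration (illustrative only, ignoring Nova's preference ordering):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) with sockets*cores*threads == vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            rest = vcpus // s
            for c in range(1, min(rest, max_cores) + 1):
                if rest % c:
                    continue
                t = rest // c
                if t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log
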
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.269 253465 DEBUG nova.storage.rbd_utils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] rbd image 6530527f-0897-45c4-83e0-8ec0e35c375d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.273 253465 DEBUG oslo_concurrency.processutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:18:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2734906399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.910 253465 DEBUG oslo_concurrency.processutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
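
The processutils pair above is a wrapped subprocess: run the ceph CLI, time it, log the exit code. The same call through oslo.concurrency, assuming the ceph CLI, /etc/ceph/ceph.conf and the 'openstack' keyring are present as they are on this host:

    import json

    from oslo_concurrency import processutils

    # Same command as logged; raises ProcessExecutionError on a non-zero rc.
    out, err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")

    monmap = json.loads(out)
    print([mon["name"] for mon in monmap["mons"]])
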
Nov 22 04:18:33 compute-0 nova_compute[253461]: 2025-11-22 04:18:33.911 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.088 253465 DEBUG os_brick.encryptors [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Using volume encryption metadata '{'encryption_key_id': '0651fb34-f2f3-4936-ba17-d2a2463dd6b6', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-32a4109a-1490-40cb-8838-25e3b6fd4d19', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '32a4109a-1490-40cb-8838-25e3b6fd4d19', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '6530527f-0897-45c4-83e0-8ec0e35c375d', 'attached_at': '', 'detached_at': '', 'volume_id': '32a4109a-1490-40cb-8838-25e3b6fd4d19', 'serial': '32a4109a-1490-40cb-8838-25e3b6fd4d19'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.092 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.113 253465 DEBUG barbicanclient.v1.secrets [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.114 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.150 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.151 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.184 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.185 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.220 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.221 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.248 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.249 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.281 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.282 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.331 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.332 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.357 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.358 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.383 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.383 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 podman[302355]: 2025-11-22 04:18:34.412194884 +0000 UTC m=+0.094501494 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.414 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.415 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.433 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.434 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.500 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.501 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.520 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.521 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.539 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.540 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.572 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.573 253465 INFO barbicanclient.base [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Calculated Secrets uuid ref: secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.589 253465 DEBUG barbicanclient.client [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
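
The run of barbicanclient records above is Nova fetching the LUKS key for the encrypted volume from the secret href logged at 04:18:34.113, with every GET answered 200. A minimal sketch of that client flow; building the authenticated keystoneauth session is omitted and assumed:

    from barbicanclient import client as barbican_client

    def fetch_secret_payload(sess, href):
        """sess: an authenticated keystoneauth1 Session (assumed to exist)."""
        barbican = barbican_client.Client(session=sess)
        secret = barbican.secrets.get(href)  # metadata fetch is lazy
        return secret.payload                # retrieves the decrypted value

    # href as logged above:
    # https://barbican-internal.openstack.svc:9311/secrets/0651fb34-f2f3-4936-ba17-d2a2463dd6b6
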
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.591 253465 DEBUG nova.virt.libvirt.host [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 22 04:18:34 compute-0 nova_compute[253461]:   <usage type="volume">
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <volume>32a4109a-1490-40cb-8838-25e3b6fd4d19</volume>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   </usage>
Nov 22 04:18:34 compute-0 nova_compute[253461]: </secret>
Nov 22 04:18:34 compute-0 nova_compute[253461]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
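
The Secret XML above registers a libvirt secret object keyed by volume UUID; Nova then sets its value to the passphrase it just pulled from Barbican, so the XML itself never carries key material. A sketch of the underlying libvirt-python calls, assuming libvirt-python and a local qemu:///system socket; the passphrase bytes are a placeholder:

    import libvirt

    SECRET_XML = """<secret ephemeral="no" private="no">
      <usage type="volume">
        <volume>32a4109a-1490-40cb-8838-25e3b6fd4d19</volume>
      </usage>
    </secret>"""

    conn = libvirt.open("qemu:///system")
    try:
        secret = conn.secretDefineXML(SECRET_XML)
        # Value comes from the Barbican payload fetched above (placeholder here).
        secret.setValue(b"passphrase-bytes-here")
    finally:
        conn.close()
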
Nov 22 04:18:34 compute-0 ceph-mon[75011]: pgmap v2130: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:18:34 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2734906399' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.625 253465 DEBUG nova.virt.libvirt.vif [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:18:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-513005959',display_name='tempest-TransferEncryptedVolumeTest-server-513005959',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-513005959',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNg2ATKJWVlmyGreKYnDCSQO/lCz0VA3LfT+2g0XAIL/EfA89Lu4gjHntRaTvYv3ssQtoWjE9SDx5lQG0mvCId2hvStMomFINnpiLPFYacZktgyZ/1N3JNIqwNfMqE81xQ==',key_name='tempest-TransferEncryptedVolumeTest-1718953926',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98e4451f91104cd88f6e19dd5c53fd00',ramdisk_id='',reservation_id='r-m1yl9vr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1500496447',owner_user_name='tempest-TransferEncryptedVolumeTest-1500496447-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:18:27Z,user_data=None,user_id='ddff25657c74403e9ed9e91ff227badd',uuid=6530527f-0897-45c4-83e0-8ec0e35c375d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb575dd5-2b58-4784-8566-bafe4380eb14", "address": "fa:16:3e:a0:25:74", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb575dd5-2b", "ovs_interfaceid": "fb575dd5-2b58-4784-8566-bafe4380eb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm 
Nov 22 04:18:34 compute-0 nova_compute[253461]:  get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.625 253465 DEBUG nova.network.os_vif_util [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converting VIF {"id": "fb575dd5-2b58-4784-8566-bafe4380eb14", "address": "fa:16:3e:a0:25:74", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb575dd5-2b", "ovs_interfaceid": "fb575dd5-2b58-4784-8566-bafe4380eb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.626 253465 DEBUG nova.network.os_vif_util [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:25:74,bridge_name='br-int',has_traffic_filtering=True,id=fb575dd5-2b58-4784-8566-bafe4380eb14,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb575dd5-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.628 253465 DEBUG nova.objects.instance [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6530527f-0897-45c4-83e0-8ec0e35c375d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.646 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:18:34 compute-0 nova_compute[253461]:   <uuid>6530527f-0897-45c4-83e0-8ec0e35c375d</uuid>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   <name>instance-0000001b</name>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-513005959</nova:name>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:18:33</nova:creationTime>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:18:34 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:18:34 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:18:34 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:18:34 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:18:34 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:18:34 compute-0 nova_compute[253461]:         <nova:user uuid="ddff25657c74403e9ed9e91ff227badd">tempest-TransferEncryptedVolumeTest-1500496447-project-member</nova:user>
Nov 22 04:18:34 compute-0 nova_compute[253461]:         <nova:project uuid="98e4451f91104cd88f6e19dd5c53fd00">tempest-TransferEncryptedVolumeTest-1500496447</nova:project>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:18:34 compute-0 nova_compute[253461]:         <nova:port uuid="fb575dd5-2b58-4784-8566-bafe4380eb14">
Nov 22 04:18:34 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <system>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <entry name="serial">6530527f-0897-45c4-83e0-8ec0e35c375d</entry>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <entry name="uuid">6530527f-0897-45c4-83e0-8ec0e35c375d</entry>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     </system>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   <os>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   </os>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   <features>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   </features>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/6530527f-0897-45c4-83e0-8ec0e35c375d_disk.config">
Nov 22 04:18:34 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       </source>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:18:34 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-32a4109a-1490-40cb-8838-25e3b6fd4d19">
Nov 22 04:18:34 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       </source>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:18:34 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <serial>32a4109a-1490-40cb-8838-25e3b6fd4d19</serial>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <encryption format="luks">
Nov 22 04:18:34 compute-0 nova_compute[253461]:         <secret type="passphrase" uuid="49920347-cb0c-4b5d-9a38-803b19abdc67"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       </encryption>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:a0:25:74"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <target dev="tapfb575dd5-2b"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/6530527f-0897-45c4-83e0-8ec0e35c375d/console.log" append="off"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <video>
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     </video>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:18:34 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:18:34 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:18:34 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:18:34 compute-0 nova_compute[253461]: </domain>
Nov 22 04:18:34 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
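
The domain XML dumped above is what _get_guest_xml exists to produce; from here the driver hands it to libvirt as a define-then-start pair. A sketch of those two calls, where xml would be the full document above:

    import libvirt

    def define_and_start(xml):
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(xml)  # persist the domain definition
            dom.create()               # boot it (instance-0000001b here)
            return dom.UUIDString()
        finally:
            conn.close()
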
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.646 253465 DEBUG nova.compute.manager [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Preparing to wait for external event network-vif-plugged-fb575dd5-2b58-4784-8566-bafe4380eb14 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.647 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.647 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.647 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
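
The three records above register a waiter for the external network-vif-plugged event before the guest starts; the "<uuid>-events" lock only guards the waiter table. A stdlib analogue of that prepare/deliver/wait pattern (illustrative only; Nova's implementation is eventlet-based):

    import threading

    _events = {}
    _events_lock = threading.Lock()

    def prepare(instance_uuid, event_name):
        with _events_lock:  # cf. the "<uuid>-events" lock in the log
            return _events.setdefault((instance_uuid, event_name),
                                      threading.Event())

    def deliver(instance_uuid, event_name):
        with _events_lock:
            ev = _events.get((instance_uuid, event_name))
        if ev:
            ev.set()  # wakes the waiter, as Neutron's event callback does

    waiter = prepare("6530527f-0897-45c4-83e0-8ec0e35c375d",
                     "network-vif-plugged-fb575dd5-2b58-4784-8566-bafe4380eb14")
    # ... plug the VIF; the event handler calls deliver(...), and the
    # spawning thread blocks on waiter.wait(timeout=...) until then.
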
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.648 253465 DEBUG nova.virt.libvirt.vif [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:18:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-513005959',display_name='tempest-TransferEncryptedVolumeTest-server-513005959',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-513005959',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNg2ATKJWVlmyGreKYnDCSQO/lCz0VA3LfT+2g0XAIL/EfA89Lu4gjHntRaTvYv3ssQtoWjE9SDx5lQG0mvCId2hvStMomFINnpiLPFYacZktgyZ/1N3JNIqwNfMqE81xQ==',key_name='tempest-TransferEncryptedVolumeTest-1718953926',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='98e4451f91104cd88f6e19dd5c53fd00',ramdisk_id='',reservation_id='r-m1yl9vr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1500496447',owner_user_name='tempest-TransferEncryptedVolumeTest-1500496447-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:18:27Z,user_data=None,user_id='ddff25657c74403e9ed9e91ff227badd',uuid=6530527f-0897-45c4-83e0-8ec0e35c375d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb575dd5-2b58-4784-8566-bafe4380eb14", "address": "fa:16:3e:a0:25:74", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb575dd5-2b", "ovs_interfaceid": "fb575dd5-2b58-4784-8566-bafe4380eb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug 
Nov 22 04:18:34 compute-0 nova_compute[253461]:  /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.648 253465 DEBUG nova.network.os_vif_util [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converting VIF {"id": "fb575dd5-2b58-4784-8566-bafe4380eb14", "address": "fa:16:3e:a0:25:74", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb575dd5-2b", "ovs_interfaceid": "fb575dd5-2b58-4784-8566-bafe4380eb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.649 253465 DEBUG nova.network.os_vif_util [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:25:74,bridge_name='br-int',has_traffic_filtering=True,id=fb575dd5-2b58-4784-8566-bafe4380eb14,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb575dd5-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.649 253465 DEBUG os_vif [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:25:74,bridge_name='br-int',has_traffic_filtering=True,id=fb575dd5-2b58-4784-8566-bafe4380eb14,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb575dd5-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.650 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.650 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.650 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.654 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.654 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb575dd5-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.655 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfb575dd5-2b, col_values=(('external_ids', {'iface-id': 'fb575dd5-2b58-4784-8566-bafe4380eb14', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:25:74', 'vm-uuid': '6530527f-0897-45c4-83e0-8ec0e35c375d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:18:34 compute-0 NetworkManager[48916]: <info>  [1763785114.6576] manager: (tapfb575dd5-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.661 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.666 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.667 253465 INFO os_vif [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:25:74,bridge_name='br-int',has_traffic_filtering=True,id=fb575dd5-2b58-4784-8566-bafe4380eb14,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb575dd5-2b')
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.725 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.725 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.726 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] No VIF found with MAC fa:16:3e:a0:25:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.726 253465 INFO nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Using config drive
Nov 22 04:18:34 compute-0 nova_compute[253461]: 2025-11-22 04:18:34.750 253465 DEBUG nova.storage.rbd_utils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] rbd image 6530527f-0897-45c4-83e0-8ec0e35c375d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.210 253465 INFO nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Creating config drive at /var/lib/nova/instances/6530527f-0897-45c4-83e0-8ec0e35c375d/disk.config
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.216 253465 DEBUG oslo_concurrency.processutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6530527f-0897-45c4-83e0-8ec0e35c375d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ewr5690 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.343 253465 DEBUG oslo_concurrency.processutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6530527f-0897-45c4-83e0-8ec0e35c375d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9ewr5690" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.364 253465 DEBUG nova.storage.rbd_utils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] rbd image 6530527f-0897-45c4-83e0-8ec0e35c375d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.367 253465 DEBUG oslo_concurrency.processutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6530527f-0897-45c4-83e0-8ec0e35c375d/disk.config 6530527f-0897-45c4-83e0-8ec0e35c375d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.387 253465 DEBUG nova.network.neutron [req-c861b15a-b6dd-4e9b-880f-483d946a08e9 req-310aba4b-e011-44c9-9a8a-f9dd927b9206 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Updated VIF entry in instance network info cache for port fb575dd5-2b58-4784-8566-bafe4380eb14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.388 253465 DEBUG nova.network.neutron [req-c861b15a-b6dd-4e9b-880f-483d946a08e9 req-310aba4b-e011-44c9-9a8a-f9dd927b9206 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Updating instance_info_cache with network_info: [{"id": "fb575dd5-2b58-4784-8566-bafe4380eb14", "address": "fa:16:3e:a0:25:74", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb575dd5-2b", "ovs_interfaceid": "fb575dd5-2b58-4784-8566-bafe4380eb14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.406 253465 DEBUG oslo_concurrency.lockutils [req-c861b15a-b6dd-4e9b-880f-483d946a08e9 req-310aba4b-e011-44c9-9a8a-f9dd927b9206 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-6530527f-0897-45c4-83e0-8ec0e35c375d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.516 253465 DEBUG oslo_concurrency.processutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6530527f-0897-45c4-83e0-8ec0e35c375d/disk.config 6530527f-0897-45c4-83e0-8ec0e35c375d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.517 253465 INFO nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Deleting local config drive /var/lib/nova/instances/6530527f-0897-45c4-83e0-8ec0e35c375d/disk.config because it was imported into RBD.
Nov 22 04:18:35 compute-0 kernel: tapfb575dd5-2b: entered promiscuous mode
Nov 22 04:18:35 compute-0 NetworkManager[48916]: <info>  [1763785115.5855] manager: (tapfb575dd5-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Nov 22 04:18:35 compute-0 ovn_controller[152691]: 2025-11-22T04:18:35Z|00278|binding|INFO|Claiming lport fb575dd5-2b58-4784-8566-bafe4380eb14 for this chassis.
Nov 22 04:18:35 compute-0 ovn_controller[152691]: 2025-11-22T04:18:35Z|00279|binding|INFO|fb575dd5-2b58-4784-8566-bafe4380eb14: Claiming fa:16:3e:a0:25:74 10.100.0.12
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.591 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.596 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:25:74 10.100.0.12'], port_security=['fa:16:3e:a0:25:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6530527f-0897-45c4-83e0-8ec0e35c375d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-73bcc005-88ac-46b6-ad11-6207c6046246', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98e4451f91104cd88f6e19dd5c53fd00', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0c3cc5e7-a78d-415a-aaf6-2d09ae975fc0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8139379-e220-4788-92e4-b495f0c34eb7, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=fb575dd5-2b58-4784-8566-bafe4380eb14) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.599 162689 INFO neutron.agent.ovn.metadata.agent [-] Port fb575dd5-2b58-4784-8566-bafe4380eb14 in datapath 73bcc005-88ac-46b6-ad11-6207c6046246 bound to our chassis
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.602 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 73bcc005-88ac-46b6-ad11-6207c6046246
Nov 22 04:18:35 compute-0 ovn_controller[152691]: 2025-11-22T04:18:35Z|00280|binding|INFO|Setting lport fb575dd5-2b58-4784-8566-bafe4380eb14 ovn-installed in OVS
Nov 22 04:18:35 compute-0 ovn_controller[152691]: 2025-11-22T04:18:35Z|00281|binding|INFO|Setting lport fb575dd5-2b58-4784-8566-bafe4380eb14 up in Southbound
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.612 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.616 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.620 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2e94d831-0603-40f9-9272-0d177f5887d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.621 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap73bcc005-81 in ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.623 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap73bcc005-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.623 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d78cdc38-c6f0-4f36-a852-9ea06b83928f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 systemd-udevd[302449]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:18:35 compute-0 systemd-machined[215728]: New machine qemu-27-instance-0000001b.
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.625 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[68f750af-f388-40ca-9519-980a28a01c6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-0000001b.
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.637 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[b282a8a4-78d4-4f85-8e59-ad1a12936e6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 NetworkManager[48916]: <info>  [1763785115.6425] device (tapfb575dd5-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:18:35 compute-0 NetworkManager[48916]: <info>  [1763785115.6437] device (tapfb575dd5-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.651 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[01d214e6-15cd-4a23-847d-e4eaca2a51ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.687 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[2cee16c2-ef86-4c5b-84c8-df5be4d27292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.695 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[51a51e6a-bc9a-4db7-8c34-56dc8bcc1b4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 NetworkManager[48916]: <info>  [1763785115.6967] manager: (tap73bcc005-80): new Veth device (/org/freedesktop/NetworkManager/Devices/139)
Nov 22 04:18:35 compute-0 systemd-udevd[302452]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.735 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc32580-bc58-4d11-a3a0-4c25da81b5d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.738 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab4c574-b555-48b7-a0d8-f00a6b88c126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 NetworkManager[48916]: <info>  [1763785115.7706] device (tap73bcc005-80): carrier: link connected
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.782 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[f667eebf-e5f7-46cc-992a-c5a33dd0bc64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.801 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b00c87d2-dbd9-4340-a83a-24f49792dff7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap73bcc005-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 536836, 'reachable_time': 17859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302481, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.820 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[24c4ad00-0ce6-4c24-a0e9-fb1696b3cc6c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:1121'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 536836, 'tstamp': 536836}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302493, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.841 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[deef624d-b42a-49f6-811b-1e5563927ecd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap73bcc005-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:11:21'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 536836, 'reachable_time': 17859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302501, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.870 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5538deff-9b6c-4e35-8d0d-9f81d6db3f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.918 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[59a2c229-16bc-4424-8444-d97b67b8f244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.920 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73bcc005-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.920 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.921 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73bcc005-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:18:35 compute-0 kernel: tap73bcc005-80: entered promiscuous mode
Nov 22 04:18:35 compute-0 NetworkManager[48916]: <info>  [1763785115.9244] manager: (tap73bcc005-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.923 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.927 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap73bcc005-80, col_values=(('external_ids', {'iface-id': 'c0be682a-2fee-4917-82d9-be22b54079b1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:18:35 compute-0 ovn_controller[152691]: 2025-11-22T04:18:35Z|00282|binding|INFO|Releasing lport c0be682a-2fee-4917-82d9-be22b54079b1 from this chassis (sb_readonly=0)
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.948 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.949 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/73bcc005-88ac-46b6-ad11-6207c6046246.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/73bcc005-88ac-46b6-ad11-6207c6046246.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.950 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[68e244bf-9bc1-47fa-ab5b-0d9daff9ce87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.951 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-73bcc005-88ac-46b6-ad11-6207c6046246
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/73bcc005-88ac-46b6-ad11-6207c6046246.pid.haproxy
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID 73bcc005-88ac-46b6-ad11-6207c6046246
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:18:35 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:18:35.952 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'env', 'PROCESS_TAG=haproxy-73bcc005-88ac-46b6-ad11-6207c6046246', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/73bcc005-88ac-46b6-ad11-6207c6046246.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.985 253465 DEBUG nova.compute.manager [req-affcfc61-4009-47f6-bc0f-3d81473f2689 req-a8babf6a-899e-4c94-8561-a9c1caec7cba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Received event network-vif-plugged-fb575dd5-2b58-4784-8566-bafe4380eb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.986 253465 DEBUG oslo_concurrency.lockutils [req-affcfc61-4009-47f6-bc0f-3d81473f2689 req-a8babf6a-899e-4c94-8561-a9c1caec7cba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.986 253465 DEBUG oslo_concurrency.lockutils [req-affcfc61-4009-47f6-bc0f-3d81473f2689 req-a8babf6a-899e-4c94-8561-a9c1caec7cba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.986 253465 DEBUG oslo_concurrency.lockutils [req-affcfc61-4009-47f6-bc0f-3d81473f2689 req-a8babf6a-899e-4c94-8561-a9c1caec7cba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:35 compute-0 nova_compute[253461]: 2025-11-22 04:18:35.987 253465 DEBUG nova.compute.manager [req-affcfc61-4009-47f6-bc0f-3d81473f2689 req-a8babf6a-899e-4c94-8561-a9c1caec7cba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Processing event network-vif-plugged-fb575dd5-2b58-4784-8566-bafe4380eb14 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:18:36
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'vms']
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:18:36 compute-0 podman[302549]: 2025-11-22 04:18:36.326911585 +0000 UTC m=+0.064877873 container create d8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:18:36 compute-0 podman[302549]: 2025-11-22 04:18:36.289027901 +0000 UTC m=+0.026994239 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:18:36 compute-0 systemd[1]: Started libpod-conmon-d8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed.scope.
Nov 22 04:18:36 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9813f413d87989fae8c9df6a83282e6c8fd2bdadecda18446c3b1d2eef888335/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:18:36 compute-0 podman[302549]: 2025-11-22 04:18:36.453406068 +0000 UTC m=+0.191372416 container init d8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 22 04:18:36 compute-0 podman[302549]: 2025-11-22 04:18:36.465835689 +0000 UTC m=+0.203801976 container start d8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:18:36 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[302564]: [NOTICE]   (302569) : New worker (302571) forked
Nov 22 04:18:36 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[302564]: [NOTICE]   (302569) : Loading success.
Nov 22 04:18:36 compute-0 ceph-mon[75011]: pgmap v2131: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:18:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 341 B/s wr, 1 op/s
Nov 22 04:18:37 compute-0 ceph-mon[75011]: pgmap v2132: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 341 B/s wr, 1 op/s
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.086 253465 DEBUG nova.compute.manager [req-c6dcca90-9903-4964-aeaa-2bc95c70a560 req-ee2306e8-43be-4a87-aeb6-a12d86ba40a8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Received event network-vif-plugged-fb575dd5-2b58-4784-8566-bafe4380eb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.087 253465 DEBUG oslo_concurrency.lockutils [req-c6dcca90-9903-4964-aeaa-2bc95c70a560 req-ee2306e8-43be-4a87-aeb6-a12d86ba40a8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.087 253465 DEBUG oslo_concurrency.lockutils [req-c6dcca90-9903-4964-aeaa-2bc95c70a560 req-ee2306e8-43be-4a87-aeb6-a12d86ba40a8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.087 253465 DEBUG oslo_concurrency.lockutils [req-c6dcca90-9903-4964-aeaa-2bc95c70a560 req-ee2306e8-43be-4a87-aeb6-a12d86ba40a8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.088 253465 DEBUG nova.compute.manager [req-c6dcca90-9903-4964-aeaa-2bc95c70a560 req-ee2306e8-43be-4a87-aeb6-a12d86ba40a8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] No waiting events found dispatching network-vif-plugged-fb575dd5-2b58-4784-8566-bafe4380eb14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.088 253465 WARNING nova.compute.manager [req-c6dcca90-9903-4964-aeaa-2bc95c70a560 req-ee2306e8-43be-4a87-aeb6-a12d86ba40a8 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Received unexpected event network-vif-plugged-fb575dd5-2b58-4784-8566-bafe4380eb14 for instance with vm_state building and task_state spawning.
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.331 253465 DEBUG nova.compute.manager [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.332 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785118.3312225, 6530527f-0897-45c4-83e0-8ec0e35c375d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.333 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] VM Started (Lifecycle Event)
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.336 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.342 253465 INFO nova.virt.libvirt.driver [-] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Instance spawned successfully.
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.343 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.353 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.359 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.377 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.378 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.379 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.381 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.382 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.384 253465 DEBUG nova.virt.libvirt.driver [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.391 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.392 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785118.3334095, 6530527f-0897-45c4-83e0-8ec0e35c375d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.393 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] VM Paused (Lifecycle Event)
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.451 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.456 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785118.335996, 6530527f-0897-45c4-83e0-8ec0e35c375d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.457 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] VM Resumed (Lifecycle Event)
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.467 253465 INFO nova.compute.manager [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Took 9.59 seconds to spawn the instance on the hypervisor.
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.468 253465 DEBUG nova.compute.manager [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.479 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.483 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.525 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.557 253465 INFO nova.compute.manager [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Took 11.93 seconds to build instance.
Nov 22 04:18:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.574 253465 DEBUG oslo_concurrency.lockutils [None req-d647b5d9-d471-46bb-85eb-7d12a7f7c4a7 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.299s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 341 B/s wr, 1 op/s
Nov 22 04:18:38 compute-0 nova_compute[253461]: 2025-11-22 04:18:38.912 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:39 compute-0 nova_compute[253461]: 2025-11-22 04:18:39.657 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:39 compute-0 ceph-mon[75011]: pgmap v2133: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 341 B/s wr, 1 op/s
Nov 22 04:18:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 836 KiB/s rd, 13 KiB/s wr, 35 op/s
Nov 22 04:18:41 compute-0 ceph-mon[75011]: pgmap v2134: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 836 KiB/s rd, 13 KiB/s wr, 35 op/s
Nov 22 04:18:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 54 op/s
Nov 22 04:18:43 compute-0 nova_compute[253461]: 2025-11-22 04:18:43.123 253465 DEBUG nova.compute.manager [req-862dd4a6-32cc-4e68-82bd-6d096b9b68fb req-458bca56-ee6e-4552-9e18-72821822ee4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Received event network-changed-fb575dd5-2b58-4784-8566-bafe4380eb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:18:43 compute-0 nova_compute[253461]: 2025-11-22 04:18:43.124 253465 DEBUG nova.compute.manager [req-862dd4a6-32cc-4e68-82bd-6d096b9b68fb req-458bca56-ee6e-4552-9e18-72821822ee4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Refreshing instance network info cache due to event network-changed-fb575dd5-2b58-4784-8566-bafe4380eb14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:18:43 compute-0 nova_compute[253461]: 2025-11-22 04:18:43.124 253465 DEBUG oslo_concurrency.lockutils [req-862dd4a6-32cc-4e68-82bd-6d096b9b68fb req-458bca56-ee6e-4552-9e18-72821822ee4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-6530527f-0897-45c4-83e0-8ec0e35c375d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:18:43 compute-0 nova_compute[253461]: 2025-11-22 04:18:43.124 253465 DEBUG oslo_concurrency.lockutils [req-862dd4a6-32cc-4e68-82bd-6d096b9b68fb req-458bca56-ee6e-4552-9e18-72821822ee4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-6530527f-0897-45c4-83e0-8ec0e35c375d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:18:43 compute-0 nova_compute[253461]: 2025-11-22 04:18:43.125 253465 DEBUG nova.network.neutron [req-862dd4a6-32cc-4e68-82bd-6d096b9b68fb req-458bca56-ee6e-4552-9e18-72821822ee4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Refreshing network info cache for port fb575dd5-2b58-4784-8566-bafe4380eb14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:18:43 compute-0 sshd-session[302586]: Invalid user admin from 27.79.46.85 port 51706
Nov 22 04:18:43 compute-0 sshd-session[302586]: Connection closed by invalid user admin 27.79.46.85 port 51706 [preauth]
Nov 22 04:18:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:43 compute-0 nova_compute[253461]: 2025-11-22 04:18:43.917 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:44 compute-0 ceph-mon[75011]: pgmap v2135: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 54 op/s
Nov 22 04:18:44 compute-0 nova_compute[253461]: 2025-11-22 04:18:44.659 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 22 04:18:44 compute-0 nova_compute[253461]: 2025-11-22 04:18:44.867 253465 DEBUG nova.network.neutron [req-862dd4a6-32cc-4e68-82bd-6d096b9b68fb req-458bca56-ee6e-4552-9e18-72821822ee4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Updated VIF entry in instance network info cache for port fb575dd5-2b58-4784-8566-bafe4380eb14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:18:44 compute-0 nova_compute[253461]: 2025-11-22 04:18:44.868 253465 DEBUG nova.network.neutron [req-862dd4a6-32cc-4e68-82bd-6d096b9b68fb req-458bca56-ee6e-4552-9e18-72821822ee4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Updating instance_info_cache with network_info: [{"id": "fb575dd5-2b58-4784-8566-bafe4380eb14", "address": "fa:16:3e:a0:25:74", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb575dd5-2b", "ovs_interfaceid": "fb575dd5-2b58-4784-8566-bafe4380eb14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:18:44 compute-0 nova_compute[253461]: 2025-11-22 04:18:44.991 253465 DEBUG oslo_concurrency.lockutils [req-862dd4a6-32cc-4e68-82bd-6d096b9b68fb req-458bca56-ee6e-4552-9e18-72821822ee4a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-6530527f-0897-45c4-83e0-8ec0e35c375d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:18:46 compute-0 ceph-mon[75011]: pgmap v2136: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.480037605000977e-06 of space, bias 1.0, pg target 0.0007440112815002931 quantized to 32 (current 32)
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0028901340797356256 of space, bias 1.0, pg target 0.8670402239206877 quantized to 32 (current 32)
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:18:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 22 04:18:48 compute-0 ceph-mon[75011]: pgmap v2137: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Nov 22 04:18:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 22 04:18:48 compute-0 nova_compute[253461]: 2025-11-22 04:18:48.916 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:49 compute-0 podman[302588]: 2025-11-22 04:18:49.417730981 +0000 UTC m=+0.093759802 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:18:49 compute-0 podman[302589]: 2025-11-22 04:18:49.473263828 +0000 UTC m=+0.146538523 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 04:18:49 compute-0 nova_compute[253461]: 2025-11-22 04:18:49.661 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:50 compute-0 ceph-mon[75011]: pgmap v2138: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 22 04:18:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 79 op/s
Nov 22 04:18:50 compute-0 ovn_controller[152691]: 2025-11-22T04:18:50Z|00064|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.12
Nov 22 04:18:50 compute-0 ovn_controller[152691]: 2025-11-22T04:18:50Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:a0:25:74 10.100.0.12
Nov 22 04:18:51 compute-0 nova_compute[253461]: 2025-11-22 04:18:51.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:18:51 compute-0 nova_compute[253461]: 2025-11-22 04:18:51.475 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:51 compute-0 nova_compute[253461]: 2025-11-22 04:18:51.475 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:51 compute-0 nova_compute[253461]: 2025-11-22 04:18:51.476 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:51 compute-0 nova_compute[253461]: 2025-11-22 04:18:51.476 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:18:51 compute-0 nova_compute[253461]: 2025-11-22 04:18:51.476 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:52 compute-0 ceph-mon[75011]: pgmap v2139: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 79 op/s
Nov 22 04:18:52 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:52 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1524541242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:52 compute-0 nova_compute[253461]: 2025-11-22 04:18:52.222 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.745s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:52 compute-0 nova_compute[253461]: 2025-11-22 04:18:52.470 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:18:52 compute-0 nova_compute[253461]: 2025-11-22 04:18:52.470 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:18:52 compute-0 nova_compute[253461]: 2025-11-22 04:18:52.633 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:18:52 compute-0 nova_compute[253461]: 2025-11-22 04:18:52.634 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4192MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:18:52 compute-0 nova_compute[253461]: 2025-11-22 04:18:52.635 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:18:52 compute-0 nova_compute[253461]: 2025-11-22 04:18:52.635 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:18:52 compute-0 nova_compute[253461]: 2025-11-22 04:18:52.713 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 6530527f-0897-45c4-83e0-8ec0e35c375d actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:18:52 compute-0 nova_compute[253461]: 2025-11-22 04:18:52.713 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:18:52 compute-0 nova_compute[253461]: 2025-11-22 04:18:52.714 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:18:52 compute-0 nova_compute[253461]: 2025-11-22 04:18:52.758 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:18:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.3 KiB/s wr, 58 op/s
Nov 22 04:18:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1524541242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:18:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4290609074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:53 compute-0 nova_compute[253461]: 2025-11-22 04:18:53.252 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:18:53 compute-0 nova_compute[253461]: 2025-11-22 04:18:53.259 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:18:53 compute-0 nova_compute[253461]: 2025-11-22 04:18:53.281 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:18:53 compute-0 nova_compute[253461]: 2025-11-22 04:18:53.314 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:18:53 compute-0 nova_compute[253461]: 2025-11-22 04:18:53.315 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:18:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:53 compute-0 nova_compute[253461]: 2025-11-22 04:18:53.918 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:54 compute-0 ceph-mon[75011]: pgmap v2140: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 7.3 KiB/s wr, 58 op/s
Nov 22 04:18:54 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4290609074' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:18:54 compute-0 nova_compute[253461]: 2025-11-22 04:18:54.664 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 7.4 KiB/s wr, 59 op/s
Nov 22 04:18:54 compute-0 ovn_controller[152691]: 2025-11-22T04:18:54Z|00066|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.12
Nov 22 04:18:54 compute-0 ovn_controller[152691]: 2025-11-22T04:18:54Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:a0:25:74 10.100.0.12
Nov 22 04:18:55 compute-0 ovn_controller[152691]: 2025-11-22T04:18:55Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a0:25:74 10.100.0.12
Nov 22 04:18:55 compute-0 ovn_controller[152691]: 2025-11-22T04:18:55Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a0:25:74 10.100.0.12
Nov 22 04:18:56 compute-0 ceph-mon[75011]: pgmap v2141: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 7.4 KiB/s wr, 59 op/s
Nov 22 04:18:56 compute-0 nova_compute[253461]: 2025-11-22 04:18:56.316 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:18:56 compute-0 nova_compute[253461]: 2025-11-22 04:18:56.317 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:18:56 compute-0 nova_compute[253461]: 2025-11-22 04:18:56.317 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:18:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Nov 22 04:18:57 compute-0 nova_compute[253461]: 2025-11-22 04:18:57.060 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "refresh_cache-6530527f-0897-45c4-83e0-8ec0e35c375d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:18:57 compute-0 nova_compute[253461]: 2025-11-22 04:18:57.061 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquired lock "refresh_cache-6530527f-0897-45c4-83e0-8ec0e35c375d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:18:57 compute-0 nova_compute[253461]: 2025-11-22 04:18:57.061 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 04:18:57 compute-0 nova_compute[253461]: 2025-11-22 04:18:57.062 253465 DEBUG nova.objects.instance [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6530527f-0897-45c4-83e0-8ec0e35c375d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:18:58 compute-0 ceph-mon[75011]: pgmap v2142: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Nov 22 04:18:58 compute-0 nova_compute[253461]: 2025-11-22 04:18:58.550 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Updating instance_info_cache with network_info: [{"id": "fb575dd5-2b58-4784-8566-bafe4380eb14", "address": "fa:16:3e:a0:25:74", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb575dd5-2b", "ovs_interfaceid": "fb575dd5-2b58-4784-8566-bafe4380eb14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:18:58 compute-0 nova_compute[253461]: 2025-11-22 04:18:58.567 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Releasing lock "refresh_cache-6530527f-0897-45c4-83e0-8ec0e35c375d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:18:58 compute-0 nova_compute[253461]: 2025-11-22 04:18:58.568 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:18:58 compute-0 nova_compute[253461]: 2025-11-22 04:18:58.569 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:18:58 compute-0 nova_compute[253461]: 2025-11-22 04:18:58.569 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:18:58 compute-0 nova_compute[253461]: 2025-11-22 04:18:58.570 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:18:58 compute-0 nova_compute[253461]: 2025-11-22 04:18:58.570 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:18:58 compute-0 nova_compute[253461]: 2025-11-22 04:18:58.571 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:18:58 compute-0 nova_compute[253461]: 2025-11-22 04:18:58.571 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:18:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:18:58 compute-0 nova_compute[253461]: 2025-11-22 04:18:58.679 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:18:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Nov 22 04:18:58 compute-0 nova_compute[253461]: 2025-11-22 04:18:58.921 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:18:59 compute-0 nova_compute[253461]: 2025-11-22 04:18:59.666 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:00 compute-0 ceph-mon[75011]: pgmap v2143: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Nov 22 04:19:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:19:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4174678964' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:19:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:19:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4174678964' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:19:00 compute-0 nova_compute[253461]: 2025-11-22 04:19:00.425 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Nov 22 04:19:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4174678964' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:19:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/4174678964' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:19:01 compute-0 sudo[302679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:19:01 compute-0 sudo[302679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:01 compute-0 sudo[302679]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:01 compute-0 sudo[302704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:19:01 compute-0 sudo[302704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:01 compute-0 sudo[302704]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:02 compute-0 sudo[302729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:19:02 compute-0 sudo[302729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:02 compute-0 sudo[302729]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:02 compute-0 ceph-mon[75011]: pgmap v2144: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Nov 22 04:19:02 compute-0 sudo[302754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:19:02 compute-0 sudo[302754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:02 compute-0 sudo[302754]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Nov 22 04:19:02 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:19:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:19:02 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:19:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:19:02 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:19:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:19:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 515 KiB/s rd, 20 KiB/s wr, 37 op/s
Nov 22 04:19:02 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:19:02 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 386a0aaf-d1fa-45f9-9a9b-210ca50782a9 does not exist
Nov 22 04:19:02 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 7d533570-3791-4166-88a9-c297457d835a does not exist
Nov 22 04:19:02 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev dea55231-598c-4a4c-9b0d-57be8f60a989 does not exist
Nov 22 04:19:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:19:02 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:19:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:19:02 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:19:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:19:02 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:19:03 compute-0 sudo[302811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:19:03 compute-0 sudo[302811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:03 compute-0 sudo[302811]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:03 compute-0 sudo[302836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:19:03 compute-0 sudo[302836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:03 compute-0 sudo[302836]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:03 compute-0 sudo[302861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:19:03 compute-0 sudo[302861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:03 compute-0 sudo[302861]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:03 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Nov 22 04:19:03 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:19:03 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:19:03 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:19:03 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:19:03 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:19:03 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:19:03 compute-0 sudo[302886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:19:03 compute-0 sudo[302886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:03 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:03 compute-0 podman[302951]: 2025-11-22 04:19:03.598650485 +0000 UTC m=+0.021163778 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:19:03 compute-0 podman[302951]: 2025-11-22 04:19:03.836299083 +0000 UTC m=+0.258812376 container create 01adc2105f3bd67da2b03c4ae1cd5f75ba4b456b82307e4367a04599db24f132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 04:19:03 compute-0 nova_compute[253461]: 2025-11-22 04:19:03.924 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:04 compute-0 systemd[1]: Started libpod-conmon-01adc2105f3bd67da2b03c4ae1cd5f75ba4b456b82307e4367a04599db24f132.scope.
Nov 22 04:19:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:19:04 compute-0 podman[302951]: 2025-11-22 04:19:04.144165525 +0000 UTC m=+0.566678828 container init 01adc2105f3bd67da2b03c4ae1cd5f75ba4b456b82307e4367a04599db24f132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:19:04 compute-0 podman[302951]: 2025-11-22 04:19:04.154192974 +0000 UTC m=+0.576706247 container start 01adc2105f3bd67da2b03c4ae1cd5f75ba4b456b82307e4367a04599db24f132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Nov 22 04:19:04 compute-0 podman[302951]: 2025-11-22 04:19:04.158357314 +0000 UTC m=+0.580870577 container attach 01adc2105f3bd67da2b03c4ae1cd5f75ba4b456b82307e4367a04599db24f132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:19:04 compute-0 youthful_dijkstra[302967]: 167 167
Nov 22 04:19:04 compute-0 systemd[1]: libpod-01adc2105f3bd67da2b03c4ae1cd5f75ba4b456b82307e4367a04599db24f132.scope: Deactivated successfully.
Nov 22 04:19:04 compute-0 podman[302951]: 2025-11-22 04:19:04.162936341 +0000 UTC m=+0.585449654 container died 01adc2105f3bd67da2b03c4ae1cd5f75ba4b456b82307e4367a04599db24f132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 04:19:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-0336f0e5e2b450d3d4d454e3cb687783d966bea6672a3de151bbc830fecbf8a8-merged.mount: Deactivated successfully.
Nov 22 04:19:04 compute-0 podman[302951]: 2025-11-22 04:19:04.239179488 +0000 UTC m=+0.661692781 container remove 01adc2105f3bd67da2b03c4ae1cd5f75ba4b456b82307e4367a04599db24f132 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_dijkstra, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:19:04 compute-0 systemd[1]: libpod-conmon-01adc2105f3bd67da2b03c4ae1cd5f75ba4b456b82307e4367a04599db24f132.scope: Deactivated successfully.
Nov 22 04:19:04 compute-0 ceph-mon[75011]: pgmap v2145: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 515 KiB/s rd, 20 KiB/s wr, 37 op/s
Nov 22 04:19:04 compute-0 nova_compute[253461]: 2025-11-22 04:19:04.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:04 compute-0 podman[302990]: 2025-11-22 04:19:04.441201994 +0000 UTC m=+0.052283972 container create 75ad204aa5f851d2ebb19ae203f99c42bbf5ab540b8bd845662d858bdcba3bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Nov 22 04:19:04 compute-0 systemd[1]: Started libpod-conmon-75ad204aa5f851d2ebb19ae203f99c42bbf5ab540b8bd845662d858bdcba3bb0.scope.
Nov 22 04:19:04 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1bcb4818cddca002195ce36f54953e88072b3bfce4e72a5e71b10f7b51ac8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1bcb4818cddca002195ce36f54953e88072b3bfce4e72a5e71b10f7b51ac8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1bcb4818cddca002195ce36f54953e88072b3bfce4e72a5e71b10f7b51ac8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1bcb4818cddca002195ce36f54953e88072b3bfce4e72a5e71b10f7b51ac8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f1bcb4818cddca002195ce36f54953e88072b3bfce4e72a5e71b10f7b51ac8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:04 compute-0 podman[302990]: 2025-11-22 04:19:04.413480907 +0000 UTC m=+0.024562895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:19:04 compute-0 podman[302990]: 2025-11-22 04:19:04.522039038 +0000 UTC m=+0.133121056 container init 75ad204aa5f851d2ebb19ae203f99c42bbf5ab540b8bd845662d858bdcba3bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:19:04 compute-0 podman[302990]: 2025-11-22 04:19:04.529888599 +0000 UTC m=+0.140970577 container start 75ad204aa5f851d2ebb19ae203f99c42bbf5ab540b8bd845662d858bdcba3bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Nov 22 04:19:04 compute-0 podman[302990]: 2025-11-22 04:19:04.534247154 +0000 UTC m=+0.145329182 container attach 75ad204aa5f851d2ebb19ae203f99c42bbf5ab540b8bd845662d858bdcba3bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Nov 22 04:19:04 compute-0 podman[303004]: 2025-11-22 04:19:04.537040101 +0000 UTC m=+0.061181343 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd)
Nov 22 04:19:04 compute-0 nova_compute[253461]: 2025-11-22 04:19:04.668 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 282 KiB/s rd, 14 KiB/s wr, 24 op/s
Nov 22 04:19:05 compute-0 modest_germain[303012]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:19:05 compute-0 modest_germain[303012]: --> relative data size: 1.0
Nov 22 04:19:05 compute-0 modest_germain[303012]: --> All data devices are unavailable
Nov 22 04:19:05 compute-0 systemd[1]: libpod-75ad204aa5f851d2ebb19ae203f99c42bbf5ab540b8bd845662d858bdcba3bb0.scope: Deactivated successfully.
Nov 22 04:19:05 compute-0 systemd[1]: libpod-75ad204aa5f851d2ebb19ae203f99c42bbf5ab540b8bd845662d858bdcba3bb0.scope: Consumed 1.111s CPU time.
Nov 22 04:19:05 compute-0 podman[302990]: 2025-11-22 04:19:05.692693296 +0000 UTC m=+1.303775334 container died 75ad204aa5f851d2ebb19ae203f99c42bbf5ab540b8bd845662d858bdcba3bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 04:19:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f1bcb4818cddca002195ce36f54953e88072b3bfce4e72a5e71b10f7b51ac8f-merged.mount: Deactivated successfully.
Nov 22 04:19:05 compute-0 podman[302990]: 2025-11-22 04:19:05.767690049 +0000 UTC m=+1.378772027 container remove 75ad204aa5f851d2ebb19ae203f99c42bbf5ab540b8bd845662d858bdcba3bb0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_germain, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Nov 22 04:19:05 compute-0 systemd[1]: libpod-conmon-75ad204aa5f851d2ebb19ae203f99c42bbf5ab540b8bd845662d858bdcba3bb0.scope: Deactivated successfully.
Nov 22 04:19:05 compute-0 sudo[302886]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:05 compute-0 sudo[303070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:19:05 compute-0 sudo[303070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:05 compute-0 sudo[303070]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:05 compute-0 sudo[303095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:19:05 compute-0 sudo[303095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:05 compute-0 sudo[303095]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:06 compute-0 sudo[303120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:19:06 compute-0 sudo[303120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:06 compute-0 sudo[303120]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:06 compute-0 sudo[303145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:19:06 compute-0 sudo[303145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:19:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:19:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:19:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:19:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:19:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:19:06 compute-0 ceph-mon[75011]: pgmap v2146: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 282 KiB/s rd, 14 KiB/s wr, 24 op/s
Nov 22 04:19:06 compute-0 podman[303209]: 2025-11-22 04:19:06.480193173 +0000 UTC m=+0.045045463 container create ff8cee103af58d001c65da78976bd76ab16133304581a15b953366ced2764bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_austin, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:19:06 compute-0 systemd[1]: Started libpod-conmon-ff8cee103af58d001c65da78976bd76ab16133304581a15b953366ced2764bfd.scope.
Nov 22 04:19:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:19:06 compute-0 podman[303209]: 2025-11-22 04:19:06.462106889 +0000 UTC m=+0.026959179 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:19:06 compute-0 podman[303209]: 2025-11-22 04:19:06.568932957 +0000 UTC m=+0.133785267 container init ff8cee103af58d001c65da78976bd76ab16133304581a15b953366ced2764bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Nov 22 04:19:06 compute-0 podman[303209]: 2025-11-22 04:19:06.575212036 +0000 UTC m=+0.140064326 container start ff8cee103af58d001c65da78976bd76ab16133304581a15b953366ced2764bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:19:06 compute-0 podman[303209]: 2025-11-22 04:19:06.57844521 +0000 UTC m=+0.143297530 container attach ff8cee103af58d001c65da78976bd76ab16133304581a15b953366ced2764bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_austin, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:19:06 compute-0 brave_austin[303225]: 167 167
Nov 22 04:19:06 compute-0 systemd[1]: libpod-ff8cee103af58d001c65da78976bd76ab16133304581a15b953366ced2764bfd.scope: Deactivated successfully.
Nov 22 04:19:06 compute-0 podman[303209]: 2025-11-22 04:19:06.584036486 +0000 UTC m=+0.148888776 container died ff8cee103af58d001c65da78976bd76ab16133304581a15b953366ced2764bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_austin, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:19:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c897380405b06d938fc8abbf0ea2fd8b1a6b1002089c046e7e082ea22d7e23ad-merged.mount: Deactivated successfully.
Nov 22 04:19:06 compute-0 podman[303209]: 2025-11-22 04:19:06.634603793 +0000 UTC m=+0.199456083 container remove ff8cee103af58d001c65da78976bd76ab16133304581a15b953366ced2764bfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 04:19:06 compute-0 systemd[1]: libpod-conmon-ff8cee103af58d001c65da78976bd76ab16133304581a15b953366ced2764bfd.scope: Deactivated successfully.
Nov 22 04:19:06 compute-0 podman[303249]: 2025-11-22 04:19:06.79921143 +0000 UTC m=+0.047413264 container create 2db9099f51fca60c6e11230dc75b82a46fa8ad6668c8f193e1c713b744819fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Nov 22 04:19:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 26 KiB/s wr, 6 op/s
Nov 22 04:19:06 compute-0 systemd[1]: Started libpod-conmon-2db9099f51fca60c6e11230dc75b82a46fa8ad6668c8f193e1c713b744819fea.scope.
Nov 22 04:19:06 compute-0 podman[303249]: 2025-11-22 04:19:06.776996246 +0000 UTC m=+0.025198060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:19:06 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5920a20ce72209b58aa3f2be180478eb8753480d060bf50c51e71b617289a217/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5920a20ce72209b58aa3f2be180478eb8753480d060bf50c51e71b617289a217/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5920a20ce72209b58aa3f2be180478eb8753480d060bf50c51e71b617289a217/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5920a20ce72209b58aa3f2be180478eb8753480d060bf50c51e71b617289a217/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:06 compute-0 podman[303249]: 2025-11-22 04:19:06.914769269 +0000 UTC m=+0.162971113 container init 2db9099f51fca60c6e11230dc75b82a46fa8ad6668c8f193e1c713b744819fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:19:06 compute-0 podman[303249]: 2025-11-22 04:19:06.93002161 +0000 UTC m=+0.178223414 container start 2db9099f51fca60c6e11230dc75b82a46fa8ad6668c8f193e1c713b744819fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:19:06 compute-0 podman[303249]: 2025-11-22 04:19:06.933630486 +0000 UTC m=+0.181832320 container attach 2db9099f51fca60c6e11230dc75b82a46fa8ad6668c8f193e1c713b744819fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:19:07 compute-0 happy_clarke[303265]: {
Nov 22 04:19:07 compute-0 happy_clarke[303265]:     "0": [
Nov 22 04:19:07 compute-0 happy_clarke[303265]:         {
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "devices": [
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "/dev/loop3"
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             ],
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_name": "ceph_lv0",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_size": "21470642176",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "name": "ceph_lv0",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "tags": {
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.cluster_name": "ceph",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.crush_device_class": "",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.encrypted": "0",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.osd_id": "0",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.type": "block",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.vdo": "0"
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             },
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "type": "block",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "vg_name": "ceph_vg0"
Nov 22 04:19:07 compute-0 happy_clarke[303265]:         }
Nov 22 04:19:07 compute-0 happy_clarke[303265]:     ],
Nov 22 04:19:07 compute-0 happy_clarke[303265]:     "1": [
Nov 22 04:19:07 compute-0 happy_clarke[303265]:         {
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "devices": [
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "/dev/loop4"
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             ],
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_name": "ceph_lv1",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_size": "21470642176",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "name": "ceph_lv1",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "tags": {
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.cluster_name": "ceph",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.crush_device_class": "",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.encrypted": "0",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.osd_id": "1",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.type": "block",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.vdo": "0"
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             },
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "type": "block",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "vg_name": "ceph_vg1"
Nov 22 04:19:07 compute-0 happy_clarke[303265]:         }
Nov 22 04:19:07 compute-0 happy_clarke[303265]:     ],
Nov 22 04:19:07 compute-0 happy_clarke[303265]:     "2": [
Nov 22 04:19:07 compute-0 happy_clarke[303265]:         {
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "devices": [
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "/dev/loop5"
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             ],
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_name": "ceph_lv2",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_size": "21470642176",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "name": "ceph_lv2",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "tags": {
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.cluster_name": "ceph",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.crush_device_class": "",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.encrypted": "0",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.osd_id": "2",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.type": "block",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:                 "ceph.vdo": "0"
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             },
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "type": "block",
Nov 22 04:19:07 compute-0 happy_clarke[303265]:             "vg_name": "ceph_vg2"
Nov 22 04:19:07 compute-0 happy_clarke[303265]:         }
Nov 22 04:19:07 compute-0 happy_clarke[303265]:     ]
Nov 22 04:19:07 compute-0 happy_clarke[303265]: }
Nov 22 04:19:07 compute-0 systemd[1]: libpod-2db9099f51fca60c6e11230dc75b82a46fa8ad6668c8f193e1c713b744819fea.scope: Deactivated successfully.
Nov 22 04:19:07 compute-0 podman[303274]: 2025-11-22 04:19:07.750575448 +0000 UTC m=+0.023861478 container died 2db9099f51fca60c6e11230dc75b82a46fa8ad6668c8f193e1c713b744819fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:19:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5920a20ce72209b58aa3f2be180478eb8753480d060bf50c51e71b617289a217-merged.mount: Deactivated successfully.
Nov 22 04:19:07 compute-0 podman[303274]: 2025-11-22 04:19:07.80754093 +0000 UTC m=+0.080826960 container remove 2db9099f51fca60c6e11230dc75b82a46fa8ad6668c8f193e1c713b744819fea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_clarke, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:19:07 compute-0 systemd[1]: libpod-conmon-2db9099f51fca60c6e11230dc75b82a46fa8ad6668c8f193e1c713b744819fea.scope: Deactivated successfully.
Nov 22 04:19:07 compute-0 sudo[303145]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:07 compute-0 sudo[303289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:19:07 compute-0 sudo[303289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:07 compute-0 sudo[303289]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:07 compute-0 sudo[303314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:19:07 compute-0 sudo[303314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:07 compute-0 sudo[303314]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:08 compute-0 sudo[303339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:19:08 compute-0 sudo[303339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:08 compute-0 sudo[303339]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:08 compute-0 sudo[303364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:19:08 compute-0 sudo[303364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:08 compute-0 ceph-mon[75011]: pgmap v2147: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 26 KiB/s wr, 6 op/s
Nov 22 04:19:08 compute-0 podman[303428]: 2025-11-22 04:19:08.454046468 +0000 UTC m=+0.045090547 container create 073be1709e0926ada83b730a0b14b99a0b9023690db204f858726e32fc2d387f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:19:08 compute-0 systemd[1]: Started libpod-conmon-073be1709e0926ada83b730a0b14b99a0b9023690db204f858726e32fc2d387f.scope.
Nov 22 04:19:08 compute-0 podman[303428]: 2025-11-22 04:19:08.433497265 +0000 UTC m=+0.024541324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:19:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:19:08 compute-0 podman[303428]: 2025-11-22 04:19:08.551832084 +0000 UTC m=+0.142876203 container init 073be1709e0926ada83b730a0b14b99a0b9023690db204f858726e32fc2d387f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rhodes, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 04:19:08 compute-0 podman[303428]: 2025-11-22 04:19:08.559625734 +0000 UTC m=+0.150669773 container start 073be1709e0926ada83b730a0b14b99a0b9023690db204f858726e32fc2d387f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Nov 22 04:19:08 compute-0 podman[303428]: 2025-11-22 04:19:08.563600384 +0000 UTC m=+0.154644463 container attach 073be1709e0926ada83b730a0b14b99a0b9023690db204f858726e32fc2d387f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rhodes, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Nov 22 04:19:08 compute-0 sad_rhodes[303444]: 167 167
Nov 22 04:19:08 compute-0 systemd[1]: libpod-073be1709e0926ada83b730a0b14b99a0b9023690db204f858726e32fc2d387f.scope: Deactivated successfully.
Nov 22 04:19:08 compute-0 podman[303428]: 2025-11-22 04:19:08.56863029 +0000 UTC m=+0.159674359 container died 073be1709e0926ada83b730a0b14b99a0b9023690db204f858726e32fc2d387f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rhodes, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Nov 22 04:19:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-15bf97a9dc4dbc742453d109e9f49dac7071acf875a8f9efd6dcc3c038fd74e4-merged.mount: Deactivated successfully.
Nov 22 04:19:08 compute-0 podman[303428]: 2025-11-22 04:19:08.614036057 +0000 UTC m=+0.205080096 container remove 073be1709e0926ada83b730a0b14b99a0b9023690db204f858726e32fc2d387f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_rhodes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:19:08 compute-0 systemd[1]: libpod-conmon-073be1709e0926ada83b730a0b14b99a0b9023690db204f858726e32fc2d387f.scope: Deactivated successfully.
Nov 22 04:19:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 1 op/s
Nov 22 04:19:08 compute-0 podman[303469]: 2025-11-22 04:19:08.818702033 +0000 UTC m=+0.059441441 container create 74797739f46c55dfdf18738c69e77d47cb21222e57acb574a36b7cb0dc62d4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:19:08 compute-0 systemd[1]: Started libpod-conmon-74797739f46c55dfdf18738c69e77d47cb21222e57acb574a36b7cb0dc62d4d0.scope.
Nov 22 04:19:08 compute-0 podman[303469]: 2025-11-22 04:19:08.795892944 +0000 UTC m=+0.036632392 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:19:08 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfe44b5430ffae7913b24334bf28101e1367bc6beb1d4777805a80f985169db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfe44b5430ffae7913b24334bf28101e1367bc6beb1d4777805a80f985169db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfe44b5430ffae7913b24334bf28101e1367bc6beb1d4777805a80f985169db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6dfe44b5430ffae7913b24334bf28101e1367bc6beb1d4777805a80f985169db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:19:08 compute-0 nova_compute[253461]: 2025-11-22 04:19:08.925 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:08 compute-0 podman[303469]: 2025-11-22 04:19:08.929299047 +0000 UTC m=+0.170038465 container init 74797739f46c55dfdf18738c69e77d47cb21222e57acb574a36b7cb0dc62d4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swartz, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Nov 22 04:19:08 compute-0 podman[303469]: 2025-11-22 04:19:08.936673499 +0000 UTC m=+0.177412917 container start 74797739f46c55dfdf18738c69e77d47cb21222e57acb574a36b7cb0dc62d4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 04:19:08 compute-0 podman[303469]: 2025-11-22 04:19:08.941230155 +0000 UTC m=+0.181969583 container attach 74797739f46c55dfdf18738c69e77d47cb21222e57acb574a36b7cb0dc62d4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swartz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 04:19:09 compute-0 nova_compute[253461]: 2025-11-22 04:19:09.670 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:09 compute-0 sweet_swartz[303485]: {
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "osd_id": 1,
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "type": "bluestore"
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:     },
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "osd_id": 0,
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "type": "bluestore"
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:     },
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "osd_id": 2,
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:         "type": "bluestore"
Nov 22 04:19:09 compute-0 sweet_swartz[303485]:     }
Nov 22 04:19:09 compute-0 sweet_swartz[303485]: }
Nov 22 04:19:09 compute-0 systemd[1]: libpod-74797739f46c55dfdf18738c69e77d47cb21222e57acb574a36b7cb0dc62d4d0.scope: Deactivated successfully.
Nov 22 04:19:09 compute-0 podman[303469]: 2025-11-22 04:19:09.956664061 +0000 UTC m=+1.197403519 container died 74797739f46c55dfdf18738c69e77d47cb21222e57acb574a36b7cb0dc62d4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Nov 22 04:19:09 compute-0 systemd[1]: libpod-74797739f46c55dfdf18738c69e77d47cb21222e57acb574a36b7cb0dc62d4d0.scope: Consumed 1.028s CPU time.
Nov 22 04:19:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6dfe44b5430ffae7913b24334bf28101e1367bc6beb1d4777805a80f985169db-merged.mount: Deactivated successfully.
Nov 22 04:19:10 compute-0 podman[303469]: 2025-11-22 04:19:10.027571548 +0000 UTC m=+1.268310986 container remove 74797739f46c55dfdf18738c69e77d47cb21222e57acb574a36b7cb0dc62d4d0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 22 04:19:10 compute-0 systemd[1]: libpod-conmon-74797739f46c55dfdf18738c69e77d47cb21222e57acb574a36b7cb0dc62d4d0.scope: Deactivated successfully.
Nov 22 04:19:10 compute-0 sudo[303364]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:19:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:19:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:19:10 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:19:10 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ef73bfc9-d83c-4de5-8105-ca003abc7d83 does not exist
Nov 22 04:19:10 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 61080b02-d66e-4aa8-acee-10289bc7f388 does not exist
Nov 22 04:19:10 compute-0 sudo[303531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:19:10 compute-0 sudo[303531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:10 compute-0 sudo[303531]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:10 compute-0 sudo[303556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:19:10 compute-0 sudo[303556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:19:10 compute-0 sudo[303556]: pam_unix(sudo:session): session closed for user root
Nov 22 04:19:10 compute-0 ceph-mon[75011]: pgmap v2148: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 1 op/s
Nov 22 04:19:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:19:10 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:19:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Nov 22 04:19:12 compute-0 ovn_controller[152691]: 2025-11-22T04:19:12Z|00283|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 22 04:19:12 compute-0 ceph-mon[75011]: pgmap v2149: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Nov 22 04:19:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Nov 22 04:19:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:13 compute-0 nova_compute[253461]: 2025-11-22 04:19:13.928 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:14 compute-0 ceph-mon[75011]: pgmap v2150: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Nov 22 04:19:14 compute-0 nova_compute[253461]: 2025-11-22 04:19:14.672 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.230 253465 DEBUG oslo_concurrency.lockutils [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "6530527f-0897-45c4-83e0-8ec0e35c375d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.231 253465 DEBUG oslo_concurrency.lockutils [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.231 253465 DEBUG oslo_concurrency.lockutils [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.231 253465 DEBUG oslo_concurrency.lockutils [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.232 253465 DEBUG oslo_concurrency.lockutils [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
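
The five lockutils lines above show Nova's usual serialization pattern: a per-instance lock keyed by UUID, with a shorter-lived "<uuid>-events" lock nested inside it for event bookkeeping. A minimal sketch of the same pattern with oslo.concurrency (lock names taken from the log; the bodies are placeholders):

    from oslo_concurrency import lockutils

    uuid = "6530527f-0897-45c4-83e0-8ec0e35c375d"
    with lockutils.lock(uuid):                # logged as do_terminate_instance's lock
        with lockutils.lock(uuid + "-events"):
            pass                              # clear_events_for_instance work runs here
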
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.233 253465 INFO nova.compute.manager [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Terminating instance
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.234 253465 DEBUG nova.compute.manager [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:19:15 compute-0 sshd-session[303581]: Invalid user admin from 27.79.46.85 port 54346
Nov 22 04:19:15 compute-0 kernel: tapfb575dd5-2b (unregistering): left promiscuous mode
Nov 22 04:19:15 compute-0 NetworkManager[48916]: <info>  [1763785155.3067] device (tapfb575dd5-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:19:15 compute-0 ovn_controller[152691]: 2025-11-22T04:19:15Z|00284|binding|INFO|Releasing lport fb575dd5-2b58-4784-8566-bafe4380eb14 from this chassis (sb_readonly=0)
Nov 22 04:19:15 compute-0 ovn_controller[152691]: 2025-11-22T04:19:15Z|00285|binding|INFO|Setting lport fb575dd5-2b58-4784-8566-bafe4380eb14 down in Southbound
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.311 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:15 compute-0 ovn_controller[152691]: 2025-11-22T04:19:15Z|00286|binding|INFO|Removing iface tapfb575dd5-2b ovn-installed in OVS
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.313 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.317 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:25:74 10.100.0.12'], port_security=['fa:16:3e:a0:25:74 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6530527f-0897-45c4-83e0-8ec0e35c375d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-73bcc005-88ac-46b6-ad11-6207c6046246', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '98e4451f91104cd88f6e19dd5c53fd00', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0c3cc5e7-a78d-415a-aaf6-2d09ae975fc0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8139379-e220-4788-92e4-b495f0c34eb7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=fb575dd5-2b58-4784-8566-bafe4380eb14) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.319 162689 INFO neutron.agent.ovn.metadata.agent [-] Port fb575dd5-2b58-4784-8566-bafe4380eb14 in datapath 73bcc005-88ac-46b6-ad11-6207c6046246 unbound from our chassis
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.320 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 73bcc005-88ac-46b6-ad11-6207c6046246, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.321 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c09caac8-34de-4047-a8b8-321e67b20200]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.322 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 namespace which is not needed anymore
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.333 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:15 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Nov 22 04:19:15 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Consumed 16.590s CPU time.
Nov 22 04:19:15 compute-0 systemd-machined[215728]: Machine qemu-27-instance-0000001b terminated.
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.478 253465 INFO nova.virt.libvirt.driver [-] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Instance destroyed successfully.
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.479 253465 DEBUG nova.objects.instance [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lazy-loading 'resources' on Instance uuid 6530527f-0897-45c4-83e0-8ec0e35c375d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:19:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[302564]: [NOTICE]   (302569) : haproxy version is 2.8.14-c23fe91
Nov 22 04:19:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[302564]: [NOTICE]   (302569) : path to executable is /usr/sbin/haproxy
Nov 22 04:19:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[302564]: [WARNING]  (302569) : Exiting Master process...
Nov 22 04:19:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[302564]: [WARNING]  (302569) : Exiting Master process...
Nov 22 04:19:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[302564]: [ALERT]    (302569) : Current worker (302571) exited with code 143 (Terminated)
Nov 22 04:19:15 compute-0 neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246[302564]: [WARNING]  (302569) : All workers exited. Exiting... (0)
Nov 22 04:19:15 compute-0 systemd[1]: libpod-d8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed.scope: Deactivated successfully.
Nov 22 04:19:15 compute-0 conmon[302564]: conmon d8624b9098dd8864235e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed.scope/container/memory.events
Nov 22 04:19:15 compute-0 sshd-session[303581]: Connection closed by invalid user admin 27.79.46.85 port 54346 [preauth]
Nov 22 04:19:15 compute-0 podman[303608]: 2025-11-22 04:19:15.495327028 +0000 UTC m=+0.056970162 container died d8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3)
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.495 253465 DEBUG nova.virt.libvirt.vif [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:18:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-513005959',display_name='tempest-TransferEncryptedVolumeTest-server-513005959',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-513005959',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNg2ATKJWVlmyGreKYnDCSQO/lCz0VA3LfT+2g0XAIL/EfA89Lu4gjHntRaTvYv3ssQtoWjE9SDx5lQG0mvCId2hvStMomFINnpiLPFYacZktgyZ/1N3JNIqwNfMqE81xQ==',key_name='tempest-TransferEncryptedVolumeTest-1718953926',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:18:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='98e4451f91104cd88f6e19dd5c53fd00',ramdisk_id='',reservation_id='r-m1yl9vr1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1500496447',owner_user_name='tempest-TransferEncryptedVolumeTest-1500496447-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:18:38Z,user_data=None,user_id='ddff25657c74403e9ed9e91ff227badd',uuid=6530527f-0897-45c4-83e0-8ec0e35c375d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb575dd5-2b58-4784-8566-bafe4380eb14", "address": "fa:16:3e:a0:25:74", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb575dd5-2b", "ovs_interfaceid": "fb575dd5-2b58-4784-8566-bafe4380eb14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.496 253465 DEBUG nova.network.os_vif_util [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converting VIF {"id": "fb575dd5-2b58-4784-8566-bafe4380eb14", "address": "fa:16:3e:a0:25:74", "network": {"id": "73bcc005-88ac-46b6-ad11-6207c6046246", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-1096743600-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "98e4451f91104cd88f6e19dd5c53fd00", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb575dd5-2b", "ovs_interfaceid": "fb575dd5-2b58-4784-8566-bafe4380eb14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.496 253465 DEBUG nova.network.os_vif_util [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a0:25:74,bridge_name='br-int',has_traffic_filtering=True,id=fb575dd5-2b58-4784-8566-bafe4380eb14,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb575dd5-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.497 253465 DEBUG os_vif [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a0:25:74,bridge_name='br-int',has_traffic_filtering=True,id=fb575dd5-2b58-4784-8566-bafe4380eb14,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb575dd5-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.498 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.499 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb575dd5-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.500 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.502 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.505 253465 INFO os_vif [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a0:25:74,bridge_name='br-int',has_traffic_filtering=True,id=fb575dd5-2b58-4784-8566-bafe4380eb14,network=Network(73bcc005-88ac-46b6-ad11-6207c6046246),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb575dd5-2b')
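
os-vif unplugged the tap device by submitting a DelPortCommand through ovsdbapp (the transaction logged at 04:19:15.499). A minimal standalone sketch of the same call, assuming a local OVS unix socket at the usual path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Mirrors DelPortCommand(port=tapfb575dd5-2b, bridge=br-int, if_exists=True)
    api.del_port("tapfb575dd5-2b", bridge="br-int", if_exists=True).execute(
        check_error=True)
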
Nov 22 04:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed-userdata-shm.mount: Deactivated successfully.
Nov 22 04:19:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-9813f413d87989fae8c9df6a83282e6c8fd2bdadecda18446c3b1d2eef888335-merged.mount: Deactivated successfully.
Nov 22 04:19:15 compute-0 podman[303608]: 2025-11-22 04:19:15.547961495 +0000 UTC m=+0.109604609 container cleanup d8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:19:15 compute-0 systemd[1]: libpod-conmon-d8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed.scope: Deactivated successfully.
Nov 22 04:19:15 compute-0 podman[303668]: 2025-11-22 04:19:15.626084885 +0000 UTC m=+0.052565977 container remove d8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.632 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8e796714-2354-47f2-a1d5-0a5171f8ed19]: (4, ('Sat Nov 22 04:19:15 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 (d8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed)\nd8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed\nSat Nov 22 04:19:15 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 (d8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed)\nd8624b9098dd8864235e8f9f04e3e124a6cabd141da51a0f0b965a1f9b1460ed\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.634 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9d46e4d4-769f-488c-ab0d-5b53cb17af4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.637 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73bcc005-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.640 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:15 compute-0 kernel: tap73bcc005-80: left promiscuous mode
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.666 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.670 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd67b00-4e79-448d-8302-da29c85f5e22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.687 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[f91b4a61-9d1e-4632-8956-284cebf94da6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.689 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[731cdabc-f2a0-4647-beb0-5ad82b346d35]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.714 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2b4c1b59-2e5f-4e65-9e6f-bbda612e33c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 536828, 'reachable_time': 43466, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303683, 'error': None, 'target': 'ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:19:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d73bcc005\x2d88ac\x2d46b6\x2dad11\x2d6207c6046246.mount: Deactivated successfully.
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.719 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:19:15 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:15.720 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[87ff39bc-ba35-4ec2-bdb6-01e9514dbdcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
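
With the last VIF gone, the metadata agent removed the ovnmeta- namespace via privsep; neutron's privileged ip_lib drives pyroute2 underneath. A small sketch of verifying the cleanup, assuming pyroute2 is importable on the host:

    from pyroute2 import netns

    # The namespace deleted at 04:19:15.719 should no longer be listed.
    assert "ovnmeta-73bcc005-88ac-46b6-ad11-6207c6046246" not in netns.listnetns()
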
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.728 253465 INFO nova.virt.libvirt.driver [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Deleting instance files /var/lib/nova/instances/6530527f-0897-45c4-83e0-8ec0e35c375d_del
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.729 253465 INFO nova.virt.libvirt.driver [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Deletion of /var/lib/nova/instances/6530527f-0897-45c4-83e0-8ec0e35c375d_del complete
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.800 253465 INFO nova.compute.manager [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Took 0.57 seconds to destroy the instance on the hypervisor.
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.800 253465 DEBUG oslo.service.loopingcall [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.801 253465 DEBUG nova.compute.manager [-] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:19:15 compute-0 nova_compute[253461]: 2025-11-22 04:19:15.801 253465 DEBUG nova.network.neutron [-] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:19:16 compute-0 nova_compute[253461]: 2025-11-22 04:19:16.220 253465 DEBUG nova.compute.manager [req-47902588-589e-41f8-a68b-97cbccee0d5c req-762b8d2a-0284-4b9b-9d75-fd9f4870b07a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Received event network-vif-unplugged-fb575dd5-2b58-4784-8566-bafe4380eb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:19:16 compute-0 nova_compute[253461]: 2025-11-22 04:19:16.221 253465 DEBUG oslo_concurrency.lockutils [req-47902588-589e-41f8-a68b-97cbccee0d5c req-762b8d2a-0284-4b9b-9d75-fd9f4870b07a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:16 compute-0 nova_compute[253461]: 2025-11-22 04:19:16.222 253465 DEBUG oslo_concurrency.lockutils [req-47902588-589e-41f8-a68b-97cbccee0d5c req-762b8d2a-0284-4b9b-9d75-fd9f4870b07a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:16 compute-0 nova_compute[253461]: 2025-11-22 04:19:16.223 253465 DEBUG oslo_concurrency.lockutils [req-47902588-589e-41f8-a68b-97cbccee0d5c req-762b8d2a-0284-4b9b-9d75-fd9f4870b07a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:16 compute-0 nova_compute[253461]: 2025-11-22 04:19:16.223 253465 DEBUG nova.compute.manager [req-47902588-589e-41f8-a68b-97cbccee0d5c req-762b8d2a-0284-4b9b-9d75-fd9f4870b07a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] No waiting events found dispatching network-vif-unplugged-fb575dd5-2b58-4784-8566-bafe4380eb14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:19:16 compute-0 nova_compute[253461]: 2025-11-22 04:19:16.224 253465 DEBUG nova.compute.manager [req-47902588-589e-41f8-a68b-97cbccee0d5c req-762b8d2a-0284-4b9b-9d75-fd9f4870b07a f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Received event network-vif-unplugged-fb575dd5-2b58-4784-8566-bafe4380eb14 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:19:16 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:16.388 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:19:16 compute-0 nova_compute[253461]: 2025-11-22 04:19:16.389 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:16 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:16.389 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:19:16 compute-0 ceph-mon[75011]: pgmap v2151: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Nov 22 04:19:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 14 KiB/s wr, 9 op/s
Nov 22 04:19:16 compute-0 nova_compute[253461]: 2025-11-22 04:19:16.857 253465 DEBUG nova.network.neutron [-] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:19:16 compute-0 nova_compute[253461]: 2025-11-22 04:19:16.895 253465 INFO nova.compute.manager [-] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Took 1.09 seconds to deallocate network for instance.
Nov 22 04:19:16 compute-0 nova_compute[253461]: 2025-11-22 04:19:16.963 253465 DEBUG nova.compute.manager [req-6a0d1478-98a9-4543-87ca-ffde857e1451 req-dd3b9a7f-6c76-419c-91bb-8c8e8832a38b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Received event network-vif-deleted-fb575dd5-2b58-4784-8566-bafe4380eb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:19:17 compute-0 nova_compute[253461]: 2025-11-22 04:19:17.147 253465 INFO nova.compute.manager [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Took 0.25 seconds to detach 1 volumes for instance.
Nov 22 04:19:17 compute-0 nova_compute[253461]: 2025-11-22 04:19:17.333 253465 DEBUG oslo_concurrency.lockutils [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:17 compute-0 nova_compute[253461]: 2025-11-22 04:19:17.334 253465 DEBUG oslo_concurrency.lockutils [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:17 compute-0 nova_compute[253461]: 2025-11-22 04:19:17.392 253465 DEBUG oslo_concurrency.processutils [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:19:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:17 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1685527783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:17 compute-0 nova_compute[253461]: 2025-11-22 04:19:17.850 253465 DEBUG oslo_concurrency.processutils [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
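
The resource tracker refreshes RBD pool capacity by shelling out to ceph df, exactly as logged above. A sketch that reruns the logged command and pulls out the cluster totals, assuming the openstack client keyring is readable; the key names follow ceph df's JSON output:

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"] // 2**30, "GiB total,",
          stats["total_avail_bytes"] // 2**30, "GiB avail")
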
Nov 22 04:19:17 compute-0 nova_compute[253461]: 2025-11-22 04:19:17.859 253465 DEBUG nova.compute.provider_tree [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:19:18 compute-0 nova_compute[253461]: 2025-11-22 04:19:18.003 253465 DEBUG nova.scheduler.client.report [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
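
Placement derives effective capacity from such an inventory as (total - reserved) * allocation_ratio, so the figures above imply 32 schedulable VCPUs, 7167 MB of RAM, and about 52.2 GB of disk. A quick check of that arithmetic (not Nova's code):

    inventory = {"VCPU": (8, 0, 4.0),
                 "MEMORY_MB": (7679, 512, 1.0),
                 "DISK_GB": (59, 1, 0.9)}
    for rc, (total, reserved, ratio) in inventory.items():
        # effective capacity = (total - reserved) * allocation_ratio
        print(rc, round((total - reserved) * ratio, 1))  # 32.0, 7167.0, 52.2
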
Nov 22 04:19:18 compute-0 nova_compute[253461]: 2025-11-22 04:19:18.242 253465 DEBUG oslo_concurrency.lockutils [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:18 compute-0 nova_compute[253461]: 2025-11-22 04:19:18.327 253465 INFO nova.scheduler.client.report [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Deleted allocations for instance 6530527f-0897-45c4-83e0-8ec0e35c375d
Nov 22 04:19:18 compute-0 nova_compute[253461]: 2025-11-22 04:19:18.421 253465 DEBUG nova.compute.manager [req-6c80a4ce-a1df-4fca-9d17-7deeb9563f25 req-a623571d-2672-4778-9df2-04860f986aba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Received event network-vif-plugged-fb575dd5-2b58-4784-8566-bafe4380eb14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:19:18 compute-0 nova_compute[253461]: 2025-11-22 04:19:18.421 253465 DEBUG oslo_concurrency.lockutils [req-6c80a4ce-a1df-4fca-9d17-7deeb9563f25 req-a623571d-2672-4778-9df2-04860f986aba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:18 compute-0 nova_compute[253461]: 2025-11-22 04:19:18.422 253465 DEBUG oslo_concurrency.lockutils [req-6c80a4ce-a1df-4fca-9d17-7deeb9563f25 req-a623571d-2672-4778-9df2-04860f986aba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:18 compute-0 nova_compute[253461]: 2025-11-22 04:19:18.423 253465 DEBUG oslo_concurrency.lockutils [req-6c80a4ce-a1df-4fca-9d17-7deeb9563f25 req-a623571d-2672-4778-9df2-04860f986aba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:18 compute-0 nova_compute[253461]: 2025-11-22 04:19:18.423 253465 DEBUG nova.compute.manager [req-6c80a4ce-a1df-4fca-9d17-7deeb9563f25 req-a623571d-2672-4778-9df2-04860f986aba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] No waiting events found dispatching network-vif-plugged-fb575dd5-2b58-4784-8566-bafe4380eb14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:19:18 compute-0 nova_compute[253461]: 2025-11-22 04:19:18.424 253465 WARNING nova.compute.manager [req-6c80a4ce-a1df-4fca-9d17-7deeb9563f25 req-a623571d-2672-4778-9df2-04860f986aba f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Received unexpected event network-vif-plugged-fb575dd5-2b58-4784-8566-bafe4380eb14 for instance with vm_state deleted and task_state None.
Nov 22 04:19:18 compute-0 nova_compute[253461]: 2025-11-22 04:19:18.450 253465 DEBUG oslo_concurrency.lockutils [None req-d3a78dd5-27b8-4b69-b2a1-7abd370ee225 ddff25657c74403e9ed9e91ff227badd 98e4451f91104cd88f6e19dd5c53fd00 - - default default] Lock "6530527f-0897-45c4-83e0-8ec0e35c375d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:18 compute-0 ceph-mon[75011]: pgmap v2152: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 14 KiB/s wr, 9 op/s
Nov 22 04:19:18 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1685527783' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 2.3 KiB/s wr, 8 op/s
Nov 22 04:19:18 compute-0 nova_compute[253461]: 2025-11-22 04:19:18.929 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:20 compute-0 nova_compute[253461]: 2025-11-22 04:19:20.502 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:20 compute-0 ceph-mon[75011]: pgmap v2153: 305 pgs: 305 active+clean; 270 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 2.3 KiB/s wr, 8 op/s
Nov 22 04:19:20 compute-0 podman[303707]: 2025-11-22 04:19:20.546133423 +0000 UTC m=+0.214117798 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 22 04:19:20 compute-0 podman[303708]: 2025-11-22 04:19:20.615114733 +0000 UTC m=+0.281899609 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
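
The health_status=healthy entries above come from podman's scheduled healthchecks; per the config_data, the configured test is /openstack/healthcheck inside each container. The same check can be triggered on demand; a sketch using the container names from the log:

    import subprocess

    for name in ("ovn_metadata_agent", "ovn_controller"):
        # Exit status 0 means the container's configured healthcheck passed.
        subprocess.run(["podman", "healthcheck", "run", name], check=True)
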
Nov 22 04:19:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 2.6 KiB/s wr, 18 op/s
Nov 22 04:19:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:19:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3080875609' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:19:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:19:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3080875609' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:19:22 compute-0 ceph-mon[75011]: pgmap v2154: 305 pgs: 305 active+clean; 270 MiB data, 652 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 2.6 KiB/s wr, 18 op/s
Nov 22 04:19:22 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3080875609' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:19:22 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3080875609' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:19:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 199 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Nov 22 04:19:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:23.035 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:23.035 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:23.035 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:19:23.392 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:19:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:23 compute-0 nova_compute[253461]: 2025-11-22 04:19:23.932 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:24 compute-0 ceph-mon[75011]: pgmap v2155: 305 pgs: 305 active+clean; 199 MiB data, 593 MiB used, 59 GiB / 60 GiB avail; 222 KiB/s rd, 1.2 KiB/s wr, 23 op/s
Nov 22 04:19:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 134 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 36 op/s
Nov 22 04:19:25 compute-0 nova_compute[253461]: 2025-11-22 04:19:25.506 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:26 compute-0 ceph-mon[75011]: pgmap v2156: 305 pgs: 305 active+clean; 134 MiB data, 532 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 36 op/s
Nov 22 04:19:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Nov 22 04:19:27 compute-0 nova_compute[253461]: 2025-11-22 04:19:27.218 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:27 compute-0 nova_compute[253461]: 2025-11-22 04:19:27.420 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:28 compute-0 ceph-mon[75011]: pgmap v2157: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Nov 22 04:19:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 852 B/s wr, 29 op/s
Nov 22 04:19:28 compute-0 nova_compute[253461]: 2025-11-22 04:19:28.933 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:30 compute-0 nova_compute[253461]: 2025-11-22 04:19:30.476 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763785155.4748278, 6530527f-0897-45c4-83e0-8ec0e35c375d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:19:30 compute-0 nova_compute[253461]: 2025-11-22 04:19:30.476 253465 INFO nova.compute.manager [-] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] VM Stopped (Lifecycle Event)
Nov 22 04:19:30 compute-0 nova_compute[253461]: 2025-11-22 04:19:30.509 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:30 compute-0 nova_compute[253461]: 2025-11-22 04:19:30.575 253465 DEBUG nova.compute.manager [None req-cb6c1faa-6565-4175-86bd-08247226750e - - - - - -] [instance: 6530527f-0897-45c4-83e0-8ec0e35c375d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:19:30 compute-0 ceph-mon[75011]: pgmap v2158: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 852 B/s wr, 29 op/s
Nov 22 04:19:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 852 B/s wr, 29 op/s
Nov 22 04:19:31 compute-0 ceph-mon[75011]: pgmap v2159: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 852 B/s wr, 29 op/s
Nov 22 04:19:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Nov 22 04:19:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:33 compute-0 ceph-mon[75011]: pgmap v2160: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Nov 22 04:19:33 compute-0 nova_compute[253461]: 2025-11-22 04:19:33.936 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:19:35 compute-0 podman[303752]: 2025-11-22 04:19:35.365346821 +0000 UTC m=+0.055201045 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:19:35 compute-0 nova_compute[253461]: 2025-11-22 04:19:35.511 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:35 compute-0 ceph-mon[75011]: pgmap v2161: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 13 op/s
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:19:36
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'volumes', 'backups', 'default.rgw.meta', '.mgr', '.rgw.root']
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
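At 04:19:36 the mgr balancer ran its periodic pass: it opened plan auto_2025-11-22_04:19:36 in upmap mode with a 5% max-misplaced ceiling, walked the eleven pools listed, and prepared 0 of an allowed 10 changes, i.e. placement is already even. A rough sketch of the misplaced-ratio guard implied by the "max misplaced 0.050000" line; this is an assumption about the logic, not the mgr balancer module's actual code:

```python
# Assumed guard: skip a balancing round while too much data is already moving.
MAX_MISPLACED = 0.05  # matches "max misplaced 0.050000" in the log

def may_balance(misplaced_objects: int, total_objects: int) -> bool:
    if total_objects == 0:
        return True
    return misplaced_objects / total_objects <= MAX_MISPLACED
```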
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:19:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 1 op/s
Nov 22 04:19:37 compute-0 ceph-mon[75011]: pgmap v2162: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 1 op/s
Nov 22 04:19:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:38 compute-0 nova_compute[253461]: 2025-11-22 04:19:38.937 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:39 compute-0 ceph-mon[75011]: pgmap v2163: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:40 compute-0 nova_compute[253461]: 2025-11-22 04:19:40.514 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:41 compute-0 ceph-mon[75011]: pgmap v2164: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:43 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:43 compute-0 ceph-mon[75011]: pgmap v2165: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:43 compute-0 nova_compute[253461]: 2025-11-22 04:19:43.939 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:45 compute-0 nova_compute[253461]: 2025-11-22 04:19:45.518 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:46 compute-0 ceph-mon[75011]: pgmap v2166: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00034720526470013676 of space, bias 1.0, pg target 0.10416157941004103 quantized to 32 (current 32)
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
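The autoscaler numbers above are internally consistent: each pool's ideal PG count is its fraction of raw capacity times its bias times a cluster-wide budget of target PGs per OSD times the OSD count. With 3 OSDs and the default mon_target_pg_per_osd of 100 the budget is 300, which reproduces every logged value, e.g. 0.00034720526470013676 × 1.0 × 300 = 0.10416157941004103 for 'volumes'. The ideal is then quantized to a power of two, and a pool stays at its current pg_num unless the ideal differs by a large factor (a 3x threshold by default, as an assumption about the reef-era autoscaler). A short check against the logged values:

```python
# Reproduce the pg_autoscaler arithmetic from the log above.
# Assumption: budget = mon_target_pg_per_osd (default 100) * number of OSDs (3).
PG_BUDGET = 100 * 3

def pg_target(usage_ratio: float, bias: float) -> float:
    return usage_ratio * bias * PG_BUDGET

# usage ratios and biases copied verbatim from the log:
print(pg_target(7.185749983720779e-06, 1.0))   # .mgr               -> 0.0021557249951162337
print(pg_target(0.00034720526470013676, 1.0))  # volumes            -> 0.10416157941004103
print(pg_target(5.087256625643029e-07, 4.0))   # cephfs.cephfs.meta -> 0.0006104707950771635
```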
Nov 22 04:19:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:49 compute-0 nova_compute[253461]: 2025-11-22 04:19:49.110 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:49 compute-0 ceph-mon[75011]: pgmap v2167: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:50 compute-0 ceph-mon[75011]: pgmap v2168: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:50 compute-0 nova_compute[253461]: 2025-11-22 04:19:50.521 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:51 compute-0 ceph-mon[75011]: pgmap v2169: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:51 compute-0 podman[303773]: 2025-11-22 04:19:51.388484816 +0000 UTC m=+0.064261704 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, managed_by=edpm_ansible)
Nov 22 04:19:51 compute-0 podman[303774]: 2025-11-22 04:19:51.432038586 +0000 UTC m=+0.103856813 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_controller)
Nov 22 04:19:52 compute-0 nova_compute[253461]: 2025-11-22 04:19:52.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:52 compute-0 nova_compute[253461]: 2025-11-22 04:19:52.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 04:19:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:53 compute-0 nova_compute[253461]: 2025-11-22 04:19:53.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:53 compute-0 nova_compute[253461]: 2025-11-22 04:19:53.460 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:53 compute-0 nova_compute[253461]: 2025-11-22 04:19:53.460 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:53 compute-0 nova_compute[253461]: 2025-11-22 04:19:53.460 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:53 compute-0 nova_compute[253461]: 2025-11-22 04:19:53.461 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:19:53 compute-0 nova_compute[253461]: 2025-11-22 04:19:53.461 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:19:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3215317976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:53 compute-0 nova_compute[253461]: 2025-11-22 04:19:53.865 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:19:53 compute-0 ceph-mon[75011]: pgmap v2170: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3215317976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
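The 04:19:53 sequence shows nova's resource tracker shelling out to the ceph CLI: processutils runs "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf", the mon logs the resulting {"prefix": "df", "format": "json"} dispatch from client.openstack, and the subprocess returns 0 in 0.403s. The same query can be issued in-process through the python3-rados bindings; a minimal sketch, assuming the same client identity and conf file, with field names as in reef's ceph df JSON:

```python
import json
import rados

# Connect as the identity the audit log shows ('client.openstack').
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
cluster.connect()
try:
    # In-process equivalent of: ceph df --format=json
    ret, outbuf, outs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    if ret != 0:
        raise RuntimeError(outs)
    df = json.loads(outbuf)
    print(df["stats"]["total_avail_bytes"])  # raw free bytes across the cluster
finally:
    cluster.shutdown()
```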
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.037 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.038 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4375MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.039 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.039 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.112 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.221 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.222 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.272 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:19:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:19:54 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3057142959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.747 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.755 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.781 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
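The inventory dict above encodes effective rather than raw capacity: for each resource class, placement lets consumers allocate up to (total - reserved) * allocation_ratio (my reading of the placement usage model, worth verifying against the placement docs). Applied to the logged figures, this host can schedule 32 vCPUs, 7167 MB of RAM and about 52.2 GB of disk:

```python
# Effective schedulable capacity implied by the inventory logged above.
# Assumed formula from the placement usage model:
#   capacity = (total - reserved) * allocation_ratio
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity}")  # VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: ~52.2
```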
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.822 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.823 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:19:54 compute-0 nova_compute[253461]: 2025-11-22 04:19:54.824 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:54 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3057142959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:19:55 compute-0 nova_compute[253461]: 2025-11-22 04:19:55.524 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:55 compute-0 nova_compute[253461]: 2025-11-22 04:19:55.839 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:55 compute-0 nova_compute[253461]: 2025-11-22 04:19:55.840 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:19:55 compute-0 nova_compute[253461]: 2025-11-22 04:19:55.840 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:19:55 compute-0 ceph-mon[75011]: pgmap v2171: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:55 compute-0 nova_compute[253461]: 2025-11-22 04:19:55.984 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:19:55 compute-0 nova_compute[253461]: 2025-11-22 04:19:55.985 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:56 compute-0 nova_compute[253461]: 2025-11-22 04:19:56.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:57 compute-0 nova_compute[253461]: 2025-11-22 04:19:57.425 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:57 compute-0 ceph-mon[75011]: pgmap v2172: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:58 compute-0 nova_compute[253461]: 2025-11-22 04:19:58.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:58 compute-0 nova_compute[253461]: 2025-11-22 04:19:58.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:58 compute-0 nova_compute[253461]: 2025-11-22 04:19:58.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:19:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:19:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:19:59 compute-0 nova_compute[253461]: 2025-11-22 04:19:59.148 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:19:59 compute-0 nova_compute[253461]: 2025-11-22 04:19:59.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:19:59 compute-0 ceph-mon[75011]: pgmap v2173: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:20:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:20:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/150063502' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:20:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:20:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/150063502' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:20:00 compute-0 nova_compute[253461]: 2025-11-22 04:20:00.527 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:20:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/150063502' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:20:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/150063502' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:20:02 compute-0 ceph-mon[75011]: pgmap v2174: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:20:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:20:03 compute-0 nova_compute[253461]: 2025-11-22 04:20:03.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:20:03 compute-0 nova_compute[253461]: 2025-11-22 04:20:03.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 04:20:03 compute-0 nova_compute[253461]: 2025-11-22 04:20:03.455 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 22 04:20:04 compute-0 ceph-mon[75011]: pgmap v2175: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:20:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:04 compute-0 nova_compute[253461]: 2025-11-22 04:20:04.149 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:20:05 compute-0 nova_compute[253461]: 2025-11-22 04:20:05.530 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:06 compute-0 ceph-mon[75011]: pgmap v2176: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail
Nov 22 04:20:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:20:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:20:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:20:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:20:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:20:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:20:06 compute-0 podman[303864]: 2025-11-22 04:20:06.388907419 +0000 UTC m=+0.070672809 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3)
Nov 22 04:20:06 compute-0 nova_compute[253461]: 2025-11-22 04:20:06.455 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:20:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 170 B/s wr, 2 op/s
Nov 22 04:20:08 compute-0 ceph-mon[75011]: pgmap v2177: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 170 B/s wr, 2 op/s
Nov 22 04:20:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 170 B/s wr, 2 op/s
Nov 22 04:20:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:09 compute-0 nova_compute[253461]: 2025-11-22 04:20:09.152 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e522 do_prune osdmap full prune enabled
Nov 22 04:20:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 e523: 3 total, 3 up, 3 in
Nov 22 04:20:10 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e523: 3 total, 3 up, 3 in
Nov 22 04:20:10 compute-0 ceph-mon[75011]: pgmap v2178: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 170 B/s wr, 2 op/s
Nov 22 04:20:10 compute-0 sudo[303884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:20:10 compute-0 sudo[303884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:10 compute-0 sudo[303884]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:10 compute-0 sudo[303909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:20:10 compute-0 sudo[303909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:10 compute-0 sudo[303909]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:10 compute-0 sudo[303934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:20:10 compute-0 sudo[303934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:10 compute-0 sudo[303934]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:10 compute-0 nova_compute[253461]: 2025-11-22 04:20:10.533 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:10 compute-0 sudo[303959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ls
Nov 22 04:20:10 compute-0 sudo[303959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 27 KiB/s wr, 9 op/s
Nov 22 04:20:11 compute-0 ceph-mon[75011]: osdmap e523: 3 total, 3 up, 3 in
Nov 22 04:20:11 compute-0 podman[304057]: 2025-11-22 04:20:11.187136615 +0000 UTC m=+0.082051395 container exec ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:20:11 compute-0 podman[304057]: 2025-11-22 04:20:11.306872052 +0000 UTC m=+0.201786813 container exec_died ae4dd69c2f8051d95e6a2f8e2b10c600a1d75428bc6f425d29f45f28b22e6000 (image=quay.io/ceph/ceph:v18, name=ceph-7adcc38b-6484-5de6-b879-33a0309153df-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:20:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:20:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1327056003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:12 compute-0 sudo[303959]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:20:12 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:20:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:20:12 compute-0 ceph-mon[75011]: pgmap v2180: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 27 KiB/s wr, 9 op/s
Nov 22 04:20:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1327056003' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:12 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:20:12 compute-0 sudo[304217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:20:12 compute-0 sudo[304217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:12 compute-0 sudo[304217]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:12 compute-0 sudo[304242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:20:12 compute-0 sudo[304242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:12 compute-0 sudo[304242]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:12 compute-0 sudo[304267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:20:12 compute-0 sudo[304267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:12 compute-0 sudo[304267]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:12 compute-0 sudo[304292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:20:12 compute-0 sudo[304292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 27 KiB/s wr, 24 op/s
Nov 22 04:20:12 compute-0 sudo[304292]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:20:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:20:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:20:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:20:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:20:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:20:13 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 2377afa6-7141-4dc9-b1e7-bdb2131f8675 does not exist
Nov 22 04:20:13 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ae12ede8-b43c-453f-940a-bf7386ea60d1 does not exist
Nov 22 04:20:13 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 47fb3f25-b671-4093-b7d0-08055614169e does not exist
Nov 22 04:20:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:20:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:20:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:20:13 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:20:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:20:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:20:13 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:20:13 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:20:13 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:20:13 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:20:13 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:20:13 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:20:13 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:20:13 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:20:13 compute-0 sudo[304348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:20:13 compute-0 sudo[304348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:13 compute-0 sudo[304348]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:13 compute-0 sudo[304373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:20:13 compute-0 sudo[304373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:13 compute-0 sudo[304373]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:13 compute-0 sudo[304398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:20:13 compute-0 sudo[304398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:13 compute-0 sudo[304398]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:13 compute-0 sudo[304423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:20:13 compute-0 sudo[304423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:13 compute-0 podman[304488]: 2025-11-22 04:20:13.775085294 +0000 UTC m=+0.045316074 container create 6ec4e0c3b5cebef320118a1e9efa2c9d5e5f846af692cc44ca0b555ad3606b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:20:13 compute-0 systemd[1]: Started libpod-conmon-6ec4e0c3b5cebef320118a1e9efa2c9d5e5f846af692cc44ca0b555ad3606b01.scope.
Nov 22 04:20:13 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:20:13 compute-0 podman[304488]: 2025-11-22 04:20:13.760094147 +0000 UTC m=+0.030325017 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:20:13 compute-0 podman[304488]: 2025-11-22 04:20:13.877108934 +0000 UTC m=+0.147339804 container init 6ec4e0c3b5cebef320118a1e9efa2c9d5e5f846af692cc44ca0b555ad3606b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_turing, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:20:13 compute-0 podman[304488]: 2025-11-22 04:20:13.888660715 +0000 UTC m=+0.158891505 container start 6ec4e0c3b5cebef320118a1e9efa2c9d5e5f846af692cc44ca0b555ad3606b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_turing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:20:13 compute-0 podman[304488]: 2025-11-22 04:20:13.892966447 +0000 UTC m=+0.163197267 container attach 6ec4e0c3b5cebef320118a1e9efa2c9d5e5f846af692cc44ca0b555ad3606b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_turing, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:20:13 compute-0 infallible_turing[304504]: 167 167
Nov 22 04:20:13 compute-0 systemd[1]: libpod-6ec4e0c3b5cebef320118a1e9efa2c9d5e5f846af692cc44ca0b555ad3606b01.scope: Deactivated successfully.
Nov 22 04:20:13 compute-0 podman[304488]: 2025-11-22 04:20:13.897032916 +0000 UTC m=+0.167263737 container died 6ec4e0c3b5cebef320118a1e9efa2c9d5e5f846af692cc44ca0b555ad3606b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:20:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-efd40c1af942832b779b50a352a274853b7bad51c6fe60fc2559e9dc8e236976-merged.mount: Deactivated successfully.
Nov 22 04:20:13 compute-0 podman[304488]: 2025-11-22 04:20:13.961788245 +0000 UTC m=+0.232019035 container remove 6ec4e0c3b5cebef320118a1e9efa2c9d5e5f846af692cc44ca0b555ad3606b01 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_turing, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:20:13 compute-0 systemd[1]: libpod-conmon-6ec4e0c3b5cebef320118a1e9efa2c9d5e5f846af692cc44ca0b555ad3606b01.scope: Deactivated successfully.
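The podman and systemd lines above trace one complete lifecycle (create, init, start, attach, died, remove, plus the conmon and mount scopes) of a throwaway wrapper container, run evidently just to read the ceph uid/gid (the "167 167" output from infallible_turing). Note that journald does not log the events in the order they happened: the "image pull" record carries a smaller monotonic offset (m=+0.030...) than "container create" (m=+0.045...) yet appears after it. A small sketch for re-ordering such events by that offset, assuming the podman timestamp format shown above:

    import re

    # Podman event lines above embed a monotonic offset like "m=+0.045316074".
    EVENT_RE = re.compile(r"m=\+(?P<m>[\d.]+) (?P<event>image pull|container \w+)")

    def order_events(lines):
        hits = [(float(m['m']), m['event'])
                for m in map(EVENT_RE.search, lines) if m]
        return sorted(hits)   # chronological: pull, create, init, start, ...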
Nov 22 04:20:14 compute-0 ceph-mon[75011]: pgmap v2181: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 27 KiB/s wr, 24 op/s
Nov 22 04:20:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:14 compute-0 nova_compute[253461]: 2025-11-22 04:20:14.155 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:14 compute-0 podman[304529]: 2025-11-22 04:20:14.235084563 +0000 UTC m=+0.090776906 container create 9a7188c6da0208fa59a9a551d49ba875ce2aab54c56d171c3d366fe6db963f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:20:14 compute-0 podman[304529]: 2025-11-22 04:20:14.188273198 +0000 UTC m=+0.043965591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:20:14 compute-0 systemd[1]: Started libpod-conmon-9a7188c6da0208fa59a9a551d49ba875ce2aab54c56d171c3d366fe6db963f72.scope.
Nov 22 04:20:14 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d18e3941328358acd5176bdc56295c70313e40f17ccd00efe8385943c68d0ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d18e3941328358acd5176bdc56295c70313e40f17ccd00efe8385943c68d0ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d18e3941328358acd5176bdc56295c70313e40f17ccd00efe8385943c68d0ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d18e3941328358acd5176bdc56295c70313e40f17ccd00efe8385943c68d0ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d18e3941328358acd5176bdc56295c70313e40f17ccd00efe8385943c68d0ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
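The xfs warnings above fire whenever a filesystem without the bigtime feature is remounted for a container bind mount; 0x7fffffff is the 32-bit time_t ceiling that pre-bigtime xfs timestamps share. The cutoff date it encodes, as a one-liner:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch is the classic year-2038 limit.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00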
Nov 22 04:20:14 compute-0 podman[304529]: 2025-11-22 04:20:14.388322474 +0000 UTC m=+0.244014837 container init 9a7188c6da0208fa59a9a551d49ba875ce2aab54c56d171c3d366fe6db963f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:20:14 compute-0 podman[304529]: 2025-11-22 04:20:14.397577863 +0000 UTC m=+0.253270206 container start 9a7188c6da0208fa59a9a551d49ba875ce2aab54c56d171c3d366fe6db963f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:20:14 compute-0 podman[304529]: 2025-11-22 04:20:14.401554892 +0000 UTC m=+0.257247275 container attach 9a7188c6da0208fa59a9a551d49ba875ce2aab54c56d171c3d366fe6db963f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 04:20:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 28 KiB/s wr, 28 op/s
Nov 22 04:20:15 compute-0 ceph-mon[75011]: pgmap v2182: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 28 KiB/s wr, 28 op/s
Nov 22 04:20:15 compute-0 competent_ptolemy[304546]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:20:15 compute-0 competent_ptolemy[304546]: --> relative data size: 1.0
Nov 22 04:20:15 compute-0 competent_ptolemy[304546]: --> All data devices are unavailable
Nov 22 04:20:15 compute-0 systemd[1]: libpod-9a7188c6da0208fa59a9a551d49ba875ce2aab54c56d171c3d366fe6db963f72.scope: Deactivated successfully.
Nov 22 04:20:15 compute-0 podman[304529]: 2025-11-22 04:20:15.490771747 +0000 UTC m=+1.346464090 container died 9a7188c6da0208fa59a9a551d49ba875ce2aab54c56d171c3d366fe6db963f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:20:15 compute-0 systemd[1]: libpod-9a7188c6da0208fa59a9a551d49ba875ce2aab54c56d171c3d366fe6db963f72.scope: Consumed 1.037s CPU time.
Nov 22 04:20:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d18e3941328358acd5176bdc56295c70313e40f17ccd00efe8385943c68d0ce-merged.mount: Deactivated successfully.
Nov 22 04:20:15 compute-0 nova_compute[253461]: 2025-11-22 04:20:15.535 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:15 compute-0 podman[304529]: 2025-11-22 04:20:15.561291253 +0000 UTC m=+1.416983556 container remove 9a7188c6da0208fa59a9a551d49ba875ce2aab54c56d171c3d366fe6db963f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 04:20:15 compute-0 systemd[1]: libpod-conmon-9a7188c6da0208fa59a9a551d49ba875ce2aab54c56d171c3d366fe6db963f72.scope: Deactivated successfully.
Nov 22 04:20:15 compute-0 sudo[304423]: pam_unix(sudo:session): session closed for user root
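competent_ptolemy is the actual "lvm batch" run: it sees 3 LVM data devices but reports all of them unavailable, plausibly because the LVs were already consumed as OSDs in an earlier pass (the lvm list output further down shows ceph.osd_id tags 0-2 on exactly these LVs), so batch has nothing left to create and exits cleanly. A sketch of the same check done directly with lvm2 on the host, using the LV paths from the batch command above; the helper name is hypothetical:

    import json, subprocess

    LVS = ["ceph_vg0/ceph_lv0", "ceph_vg1/ceph_lv1", "ceph_vg2/ceph_lv2"]

    def already_prepared(lv: str) -> bool:
        # lvm2's JSON report; an LV that ceph-volume has prepared carries
        # ceph.* tags such as ceph.osd_id in its lv_tags field.
        out = subprocess.run(
            ["lvs", "--reportformat", "json", "-o", "lv_tags", lv],
            capture_output=True, text=True, check=True,
        ).stdout
        tags = json.loads(out)["report"][0]["lv"][0]["lv_tags"]
        return "ceph.osd_id" in tags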
Nov 22 04:20:15 compute-0 sudo[304589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:20:15 compute-0 sudo[304589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:15 compute-0 sudo[304589]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:15 compute-0 sudo[304614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:20:15 compute-0 sudo[304614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:15 compute-0 sudo[304614]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:15 compute-0 sudo[304639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:20:15 compute-0 sudo[304639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:15 compute-0 sudo[304639]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:15 compute-0 sudo[304664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:20:15 compute-0 sudo[304664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:16 compute-0 podman[304730]: 2025-11-22 04:20:16.386889664 +0000 UTC m=+0.053999615 container create da94bbfa3f0205f71f9d73315b4efed54843a68b447e6d8f453857922ab6cad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galois, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Nov 22 04:20:16 compute-0 systemd[1]: Started libpod-conmon-da94bbfa3f0205f71f9d73315b4efed54843a68b447e6d8f453857922ab6cad8.scope.
Nov 22 04:20:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:20:16 compute-0 podman[304730]: 2025-11-22 04:20:16.367584481 +0000 UTC m=+0.034694442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:20:16 compute-0 podman[304730]: 2025-11-22 04:20:16.48055147 +0000 UTC m=+0.147661441 container init da94bbfa3f0205f71f9d73315b4efed54843a68b447e6d8f453857922ab6cad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:20:16 compute-0 podman[304730]: 2025-11-22 04:20:16.492082477 +0000 UTC m=+0.159192428 container start da94bbfa3f0205f71f9d73315b4efed54843a68b447e6d8f453857922ab6cad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:20:16 compute-0 podman[304730]: 2025-11-22 04:20:16.49670017 +0000 UTC m=+0.163810222 container attach da94bbfa3f0205f71f9d73315b4efed54843a68b447e6d8f453857922ab6cad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galois, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:20:16 compute-0 flamboyant_galois[304747]: 167 167
Nov 22 04:20:16 compute-0 systemd[1]: libpod-da94bbfa3f0205f71f9d73315b4efed54843a68b447e6d8f453857922ab6cad8.scope: Deactivated successfully.
Nov 22 04:20:16 compute-0 podman[304730]: 2025-11-22 04:20:16.499417148 +0000 UTC m=+0.166527089 container died da94bbfa3f0205f71f9d73315b4efed54843a68b447e6d8f453857922ab6cad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:20:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-385cc8bc0f86041f95d00ccb8aa22055beef7e9024baa6f47b34db7ff7288e0e-merged.mount: Deactivated successfully.
Nov 22 04:20:16 compute-0 podman[304730]: 2025-11-22 04:20:16.547317168 +0000 UTC m=+0.214427119 container remove da94bbfa3f0205f71f9d73315b4efed54843a68b447e6d8f453857922ab6cad8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_galois, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:20:16 compute-0 systemd[1]: libpod-conmon-da94bbfa3f0205f71f9d73315b4efed54843a68b447e6d8f453857922ab6cad8.scope: Deactivated successfully.
Nov 22 04:20:16 compute-0 podman[304771]: 2025-11-22 04:20:16.758616642 +0000 UTC m=+0.068792489 container create 4c916cfa4d53d23975b920d26ed5d7b34f52e36e1c930c8cce38d5d93ac3bd47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Nov 22 04:20:16 compute-0 systemd[1]: Started libpod-conmon-4c916cfa4d53d23975b920d26ed5d7b34f52e36e1c930c8cce38d5d93ac3bd47.scope.
Nov 22 04:20:16 compute-0 podman[304771]: 2025-11-22 04:20:16.728724028 +0000 UTC m=+0.038899925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:20:16 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:20:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00778b2d6d94bd83b3a042feb5daaaa44cd6e8593e94e4eea881dd21765f9237/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00778b2d6d94bd83b3a042feb5daaaa44cd6e8593e94e4eea881dd21765f9237/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00778b2d6d94bd83b3a042feb5daaaa44cd6e8593e94e4eea881dd21765f9237/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00778b2d6d94bd83b3a042feb5daaaa44cd6e8593e94e4eea881dd21765f9237/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 28 KiB/s wr, 31 op/s
Nov 22 04:20:16 compute-0 podman[304771]: 2025-11-22 04:20:16.859888444 +0000 UTC m=+0.170064301 container init 4c916cfa4d53d23975b920d26ed5d7b34f52e36e1c930c8cce38d5d93ac3bd47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:20:16 compute-0 podman[304771]: 2025-11-22 04:20:16.873984655 +0000 UTC m=+0.184160502 container start 4c916cfa4d53d23975b920d26ed5d7b34f52e36e1c930c8cce38d5d93ac3bd47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 04:20:16 compute-0 podman[304771]: 2025-11-22 04:20:16.878254863 +0000 UTC m=+0.188430720 container attach 4c916cfa4d53d23975b920d26ed5d7b34f52e36e1c930c8cce38d5d93ac3bd47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]: {
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:     "0": [
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:         {
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "devices": [
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "/dev/loop3"
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             ],
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_name": "ceph_lv0",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_size": "21470642176",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "name": "ceph_lv0",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "tags": {
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.cluster_name": "ceph",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.crush_device_class": "",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.encrypted": "0",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.osd_id": "0",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.type": "block",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.vdo": "0"
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             },
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "type": "block",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "vg_name": "ceph_vg0"
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:         }
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:     ],
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:     "1": [
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:         {
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "devices": [
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "/dev/loop4"
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             ],
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_name": "ceph_lv1",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_size": "21470642176",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "name": "ceph_lv1",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "tags": {
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.cluster_name": "ceph",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.crush_device_class": "",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.encrypted": "0",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.osd_id": "1",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.type": "block",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.vdo": "0"
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             },
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "type": "block",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "vg_name": "ceph_vg1"
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:         }
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:     ],
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:     "2": [
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:         {
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "devices": [
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "/dev/loop5"
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             ],
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_name": "ceph_lv2",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_size": "21470642176",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "name": "ceph_lv2",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "tags": {
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.cluster_name": "ceph",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.crush_device_class": "",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.encrypted": "0",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.osd_id": "2",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.type": "block",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:                 "ceph.vdo": "0"
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             },
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "type": "block",
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:             "vg_name": "ceph_vg2"
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:         }
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]:     ]
Nov 22 04:20:17 compute-0 jolly_keldysh[304787]: }
Nov 22 04:20:17 compute-0 systemd[1]: libpod-4c916cfa4d53d23975b920d26ed5d7b34f52e36e1c930c8cce38d5d93ac3bd47.scope: Deactivated successfully.
Nov 22 04:20:17 compute-0 podman[304771]: 2025-11-22 04:20:17.641771284 +0000 UTC m=+0.951947121 container died 4c916cfa4d53d23975b920d26ed5d7b34f52e36e1c930c8cce38d5d93ac3bd47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 04:20:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-00778b2d6d94bd83b3a042feb5daaaa44cd6e8593e94e4eea881dd21765f9237-merged.mount: Deactivated successfully.
Nov 22 04:20:17 compute-0 podman[304771]: 2025-11-22 04:20:17.704641358 +0000 UTC m=+1.014817175 container remove 4c916cfa4d53d23975b920d26ed5d7b34f52e36e1c930c8cce38d5d93ac3bd47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 04:20:17 compute-0 systemd[1]: libpod-conmon-4c916cfa4d53d23975b920d26ed5d7b34f52e36e1c930c8cce38d5d93ac3bd47.scope: Deactivated successfully.
Nov 22 04:20:17 compute-0 sudo[304664]: pam_unix(sudo:session): session closed for user root
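The JSON that jolly_keldysh printed is the full "ceph-volume lvm list --format json" inventory: a map of OSD id to LV records tying each logical volume to its backing device (/dev/loop3..5 here), the cluster fsid, and the per-OSD fsid. Each lv_size of 21470642176 bytes is just under 20 GiB, so the three OSDs together account for the "60 GiB / 60 GiB avail" reported by the pgmap lines. A sketch that condenses that structure into one line per OSD:

    import json

    def summarize(lvm_list: dict) -> None:
        # lvm_list is the parsed JSON printed by the container above:
        # {"0": [{...}], "1": [{...}], "2": [{...}]}
        for osd_id, entries in sorted(lvm_list.items(), key=lambda kv: int(kv[0])):
            lv = entries[0]
            size_gib = int(lv["lv_size"]) / 2**30   # 21470642176 -> ~20 GiB
            print(f"osd.{osd_id}: {lv['lv_path']} on {lv['devices'][0]} "
                  f"({size_gib:.0f} GiB, osd_fsid {lv['tags']['ceph.osd_fsid']})")

    # Usage: summarize(json.loads(captured_json)), where captured_json is the
    # text the jolly_keldysh container printed above.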
Nov 22 04:20:17 compute-0 sudo[304810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:20:17 compute-0 sudo[304810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:17 compute-0 sudo[304810]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:17 compute-0 sudo[304835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:20:17 compute-0 sudo[304835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:17 compute-0 sudo[304835]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:17 compute-0 ceph-mon[75011]: pgmap v2183: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 28 KiB/s wr, 31 op/s
Nov 22 04:20:17 compute-0 sudo[304861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:20:17 compute-0 sudo[304861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:17 compute-0 sudo[304861]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:18 compute-0 sudo[304886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:20:18 compute-0 sudo[304886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:18 compute-0 podman[304951]: 2025-11-22 04:20:18.396250555 +0000 UTC m=+0.045402043 container create 4c2ec885dc741cf7c3022b1f2a90b29edafbd9e7d7d7284b44e4e34174e50055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mcclintock, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 22 04:20:18 compute-0 systemd[1]: Started libpod-conmon-4c2ec885dc741cf7c3022b1f2a90b29edafbd9e7d7d7284b44e4e34174e50055.scope.
Nov 22 04:20:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:20:18 compute-0 podman[304951]: 2025-11-22 04:20:18.377133697 +0000 UTC m=+0.026285175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:20:18 compute-0 podman[304951]: 2025-11-22 04:20:18.471743433 +0000 UTC m=+0.120894961 container init 4c2ec885dc741cf7c3022b1f2a90b29edafbd9e7d7d7284b44e4e34174e50055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Nov 22 04:20:18 compute-0 podman[304951]: 2025-11-22 04:20:18.481895479 +0000 UTC m=+0.131046937 container start 4c2ec885dc741cf7c3022b1f2a90b29edafbd9e7d7d7284b44e4e34174e50055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mcclintock, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:20:18 compute-0 podman[304951]: 2025-11-22 04:20:18.486311685 +0000 UTC m=+0.135463153 container attach 4c2ec885dc741cf7c3022b1f2a90b29edafbd9e7d7d7284b44e4e34174e50055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mcclintock, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 04:20:18 compute-0 inspiring_mcclintock[304967]: 167 167
Nov 22 04:20:18 compute-0 systemd[1]: libpod-4c2ec885dc741cf7c3022b1f2a90b29edafbd9e7d7d7284b44e4e34174e50055.scope: Deactivated successfully.
Nov 22 04:20:18 compute-0 podman[304951]: 2025-11-22 04:20:18.488930256 +0000 UTC m=+0.138081724 container died 4c2ec885dc741cf7c3022b1f2a90b29edafbd9e7d7d7284b44e4e34174e50055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Nov 22 04:20:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5a612247c3f707cc79cc56e4082a5912eee7c7946fb9405c540d41d142814ce-merged.mount: Deactivated successfully.
Nov 22 04:20:18 compute-0 nova_compute[253461]: 2025-11-22 04:20:18.527 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "a8949f48-8745-4610-8b5a-55b807d8796a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:18 compute-0 nova_compute[253461]: 2025-11-22 04:20:18.530 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:18 compute-0 podman[304951]: 2025-11-22 04:20:18.541417048 +0000 UTC m=+0.190568506 container remove 4c2ec885dc741cf7c3022b1f2a90b29edafbd9e7d7d7284b44e4e34174e50055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mcclintock, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 04:20:18 compute-0 nova_compute[253461]: 2025-11-22 04:20:18.546 253465 DEBUG nova.compute.manager [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:20:18 compute-0 systemd[1]: libpod-conmon-4c2ec885dc741cf7c3022b1f2a90b29edafbd9e7d7d7284b44e4e34174e50055.scope: Deactivated successfully.
Nov 22 04:20:18 compute-0 nova_compute[253461]: 2025-11-22 04:20:18.621 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:18 compute-0 nova_compute[253461]: 2025-11-22 04:20:18.622 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:18 compute-0 nova_compute[253461]: 2025-11-22 04:20:18.634 253465 DEBUG nova.virt.hardware [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:20:18 compute-0 nova_compute[253461]: 2025-11-22 04:20:18.635 253465 INFO nova.compute.claims [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:20:18 compute-0 podman[304988]: 2025-11-22 04:20:18.736796317 +0000 UTC m=+0.050461794 container create cfbfd3dd0a5d7cdf80f3b72407e9cc444394b32b959d41bc8fcbf001aa7bbd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wright, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:20:18 compute-0 nova_compute[253461]: 2025-11-22 04:20:18.744 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:18 compute-0 ovn_controller[152691]: 2025-11-22T04:20:18Z|00287|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 22 04:20:18 compute-0 systemd[1]: Started libpod-conmon-cfbfd3dd0a5d7cdf80f3b72407e9cc444394b32b959d41bc8fcbf001aa7bbd94.scope.
Nov 22 04:20:18 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:20:18 compute-0 podman[304988]: 2025-11-22 04:20:18.714344665 +0000 UTC m=+0.028010172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1475541b9b13b09c55543b5bac9241be16686989f698da541a9a7b725efb5ae3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1475541b9b13b09c55543b5bac9241be16686989f698da541a9a7b725efb5ae3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1475541b9b13b09c55543b5bac9241be16686989f698da541a9a7b725efb5ae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1475541b9b13b09c55543b5bac9241be16686989f698da541a9a7b725efb5ae3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:18 compute-0 podman[304988]: 2025-11-22 04:20:18.832225597 +0000 UTC m=+0.145891114 container init cfbfd3dd0a5d7cdf80f3b72407e9cc444394b32b959d41bc8fcbf001aa7bbd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wright, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:20:18 compute-0 podman[304988]: 2025-11-22 04:20:18.845263674 +0000 UTC m=+0.158929161 container start cfbfd3dd0a5d7cdf80f3b72407e9cc444394b32b959d41bc8fcbf001aa7bbd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wright, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 04:20:18 compute-0 podman[304988]: 2025-11-22 04:20:18.849756792 +0000 UTC m=+0.163422279 container attach cfbfd3dd0a5d7cdf80f3b72407e9cc444394b32b959d41bc8fcbf001aa7bbd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:20:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 28 KiB/s wr, 31 op/s
Nov 22 04:20:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.157 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:20:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/52732838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.201 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
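Interleaved with the cephadm activity, nova-compute is booting instance a8949f48-8745-4610-8b5a-55b807d8796a: its RBD backend shells out to "ceph df --format=json --id openstack", which the mon logs on the audit channel as a client.openstack dispatch before the command returns in 0.457s. A sketch of the same capacity probe, assuming the client.openstack keyring and /etc/ceph/ceph.conf from the log are usable on the host:

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(out)["stats"]          # cluster-wide totals block
    print(stats["total_bytes"], stats["total_avail_bytes"])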
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.209 253465 DEBUG nova.compute.provider_tree [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:20:19 compute-0 sshd-session[304792]: Invalid user user from 27.79.46.85 port 59902
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.234 253465 DEBUG nova.scheduler.client.report [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.264 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.265 253465 DEBUG nova.compute.manager [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.316 253465 DEBUG nova.compute.manager [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.317 253465 DEBUG nova.network.neutron [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.340 253465 INFO nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.357 253465 DEBUG nova.compute.manager [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.453 253465 DEBUG nova.compute.manager [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.454 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.455 253465 INFO nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Creating image(s)
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.478 253465 DEBUG nova.storage.rbd_utils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] rbd image a8949f48-8745-4610-8b5a-55b807d8796a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.505 253465 DEBUG nova.storage.rbd_utils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] rbd image a8949f48-8745-4610-8b5a-55b807d8796a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.530 253465 DEBUG nova.storage.rbd_utils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] rbd image a8949f48-8745-4610-8b5a-55b807d8796a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.534 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:19 compute-0 sshd-session[304792]: Connection closed by invalid user user 27.79.46.85 port 59902 [preauth]
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.622 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.623 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "25c5d46187abbddf047b2aea949ae06d253afe2d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.624 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.625 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "25c5d46187abbddf047b2aea949ae06d253afe2d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.645 253465 DEBUG nova.storage.rbd_utils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] rbd image a8949f48-8745-4610-8b5a-55b807d8796a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:19 compute-0 nova_compute[253461]: 2025-11-22 04:20:19.649 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d a8949f48-8745-4610-8b5a-55b807d8796a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:19 compute-0 bold_wright[305006]: {
Nov 22 04:20:19 compute-0 bold_wright[305006]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "osd_id": 1,
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "type": "bluestore"
Nov 22 04:20:19 compute-0 bold_wright[305006]:     },
Nov 22 04:20:19 compute-0 bold_wright[305006]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "osd_id": 0,
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "type": "bluestore"
Nov 22 04:20:19 compute-0 bold_wright[305006]:     },
Nov 22 04:20:19 compute-0 bold_wright[305006]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "osd_id": 2,
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:20:19 compute-0 bold_wright[305006]:         "type": "bluestore"
Nov 22 04:20:19 compute-0 bold_wright[305006]:     }
Nov 22 04:20:19 compute-0 bold_wright[305006]: }
Nov 22 04:20:19 compute-0 systemd[1]: libpod-cfbfd3dd0a5d7cdf80f3b72407e9cc444394b32b959d41bc8fcbf001aa7bbd94.scope: Deactivated successfully.
Nov 22 04:20:19 compute-0 systemd[1]: libpod-cfbfd3dd0a5d7cdf80f3b72407e9cc444394b32b959d41bc8fcbf001aa7bbd94.scope: Consumed 1.031s CPU time.
Nov 22 04:20:19 compute-0 podman[304988]: 2025-11-22 04:20:19.884702417 +0000 UTC m=+1.198367914 container died cfbfd3dd0a5d7cdf80f3b72407e9cc444394b32b959d41bc8fcbf001aa7bbd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 04:20:20 compute-0 ceph-mon[75011]: pgmap v2184: 305 pgs: 305 active+clean; 88 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 28 KiB/s wr, 31 op/s
Nov 22 04:20:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/52732838' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1475541b9b13b09c55543b5bac9241be16686989f698da541a9a7b725efb5ae3-merged.mount: Deactivated successfully.
Nov 22 04:20:20 compute-0 podman[304988]: 2025-11-22 04:20:20.127744966 +0000 UTC m=+1.441410473 container remove cfbfd3dd0a5d7cdf80f3b72407e9cc444394b32b959d41bc8fcbf001aa7bbd94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 22 04:20:20 compute-0 nova_compute[253461]: 2025-11-22 04:20:20.134 253465 DEBUG nova.policy [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '26147ad59e2d4763b8edc27d80567b09', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c9d01ebd7e4f4251b66172a246b8a08f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:20:20 compute-0 systemd[1]: libpod-conmon-cfbfd3dd0a5d7cdf80f3b72407e9cc444394b32b959d41bc8fcbf001aa7bbd94.scope: Deactivated successfully.
Nov 22 04:20:20 compute-0 nova_compute[253461]: 2025-11-22 04:20:20.140 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d a8949f48-8745-4610-8b5a-55b807d8796a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:20 compute-0 sudo[304886]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:20:20 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:20:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:20:20 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:20:20 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev dc8fad6f-7ef6-42fc-8271-910e6babcae2 does not exist
Nov 22 04:20:20 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 77e70e37-4529-4f8b-95bf-529385d20e5a does not exist
Nov 22 04:20:20 compute-0 nova_compute[253461]: 2025-11-22 04:20:20.240 253465 DEBUG nova.storage.rbd_utils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] resizing rbd image a8949f48-8745-4610-8b5a-55b807d8796a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Nov 22 04:20:20 compute-0 sudo[305206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:20:20 compute-0 sudo[305206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:20 compute-0 sudo[305206]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:20 compute-0 sudo[305249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:20:20 compute-0 sudo[305249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:20:20 compute-0 sudo[305249]: pam_unix(sudo:session): session closed for user root
Nov 22 04:20:20 compute-0 nova_compute[253461]: 2025-11-22 04:20:20.432 253465 DEBUG nova.objects.instance [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lazy-loading 'migration_context' on Instance uuid a8949f48-8745-4610-8b5a-55b807d8796a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:20:20 compute-0 nova_compute[253461]: 2025-11-22 04:20:20.471 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 22 04:20:20 compute-0 nova_compute[253461]: 2025-11-22 04:20:20.472 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Ensure instance console log exists: /var/lib/nova/instances/a8949f48-8745-4610-8b5a-55b807d8796a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:20:20 compute-0 nova_compute[253461]: 2025-11-22 04:20:20.474 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:20 compute-0 nova_compute[253461]: 2025-11-22 04:20:20.474 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:20 compute-0 nova_compute[253461]: 2025-11-22 04:20:20.475 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:20 compute-0 nova_compute[253461]: 2025-11-22 04:20:20.539 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 102 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 821 KiB/s wr, 28 op/s
Nov 22 04:20:21 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:20:21 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:20:21 compute-0 ceph-mon[75011]: pgmap v2185: 305 pgs: 305 active+clean; 102 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 821 KiB/s wr, 28 op/s
Nov 22 04:20:22 compute-0 podman[305292]: 2025-11-22 04:20:22.434744702 +0000 UTC m=+0.087606471 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:20:22 compute-0 podman[305293]: 2025-11-22 04:20:22.486849506 +0000 UTC m=+0.139453381 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 22 04:20:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 113 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 836 KiB/s wr, 35 op/s
Nov 22 04:20:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:23.037 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:23.037 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:23.037 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:23.165 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:20:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:23.166 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:20:23 compute-0 nova_compute[253461]: 2025-11-22 04:20:23.168 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:23 compute-0 nova_compute[253461]: 2025-11-22 04:20:23.269 253465 DEBUG nova.network.neutron [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Successfully created port: c3496b0e-592b-474e-a238-1017d28773e8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:20:23 compute-0 ceph-mon[75011]: pgmap v2186: 305 pgs: 305 active+clean; 113 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 836 KiB/s wr, 35 op/s
Nov 22 04:20:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:24 compute-0 nova_compute[253461]: 2025-11-22 04:20:24.216 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 22 04:20:25 compute-0 ceph-mon[75011]: pgmap v2187: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Nov 22 04:20:25 compute-0 nova_compute[253461]: 2025-11-22 04:20:25.542 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:25 compute-0 nova_compute[253461]: 2025-11-22 04:20:25.605 253465 DEBUG nova.network.neutron [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Successfully updated port: c3496b0e-592b-474e-a238-1017d28773e8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:20:25 compute-0 nova_compute[253461]: 2025-11-22 04:20:25.624 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "refresh_cache-a8949f48-8745-4610-8b5a-55b807d8796a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:20:25 compute-0 nova_compute[253461]: 2025-11-22 04:20:25.624 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquired lock "refresh_cache-a8949f48-8745-4610-8b5a-55b807d8796a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:20:25 compute-0 nova_compute[253461]: 2025-11-22 04:20:25.624 253465 DEBUG nova.network.neutron [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:20:25 compute-0 nova_compute[253461]: 2025-11-22 04:20:25.819 253465 DEBUG nova.compute.manager [req-5cb175c1-e8c5-4170-80cf-fcbe2fa32b51 req-4b758be3-15b9-472d-bc5c-34df8135892b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Received event network-changed-c3496b0e-592b-474e-a238-1017d28773e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:20:25 compute-0 nova_compute[253461]: 2025-11-22 04:20:25.819 253465 DEBUG nova.compute.manager [req-5cb175c1-e8c5-4170-80cf-fcbe2fa32b51 req-4b758be3-15b9-472d-bc5c-34df8135892b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Refreshing instance network info cache due to event network-changed-c3496b0e-592b-474e-a238-1017d28773e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:20:25 compute-0 nova_compute[253461]: 2025-11-22 04:20:25.820 253465 DEBUG oslo_concurrency.lockutils [req-5cb175c1-e8c5-4170-80cf-fcbe2fa32b51 req-4b758be3-15b9-472d-bc5c-34df8135892b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-a8949f48-8745-4610-8b5a-55b807d8796a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:20:25 compute-0 nova_compute[253461]: 2025-11-22 04:20:25.831 253465 DEBUG nova.network.neutron [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:20:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.104 253465 DEBUG nova.network.neutron [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Updating instance_info_cache with network_info: [{"id": "c3496b0e-592b-474e-a238-1017d28773e8", "address": "fa:16:3e:2a:b7:07", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3496b0e-59", "ovs_interfaceid": "c3496b0e-592b-474e-a238-1017d28773e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.141 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Releasing lock "refresh_cache-a8949f48-8745-4610-8b5a-55b807d8796a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.141 253465 DEBUG nova.compute.manager [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Instance network_info: |[{"id": "c3496b0e-592b-474e-a238-1017d28773e8", "address": "fa:16:3e:2a:b7:07", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3496b0e-59", "ovs_interfaceid": "c3496b0e-592b-474e-a238-1017d28773e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.142 253465 DEBUG oslo_concurrency.lockutils [req-5cb175c1-e8c5-4170-80cf-fcbe2fa32b51 req-4b758be3-15b9-472d-bc5c-34df8135892b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-a8949f48-8745-4610-8b5a-55b807d8796a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.142 253465 DEBUG nova.network.neutron [req-5cb175c1-e8c5-4170-80cf-fcbe2fa32b51 req-4b758be3-15b9-472d-bc5c-34df8135892b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Refreshing network info cache for port c3496b0e-592b-474e-a238-1017d28773e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.146 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Start _get_guest_xml network_info=[{"id": "c3496b0e-592b-474e-a238-1017d28773e8", "address": "fa:16:3e:2a:b7:07", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3496b0e-59", "ovs_interfaceid": "c3496b0e-592b-474e-a238-1017d28773e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'boot_index': 0, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'guest_format': None, 'image_id': 'feac2ecd-89f4-4e45-b9fb-68cb0cf353c3'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.151 253465 WARNING nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.156 253465 DEBUG nova.virt.libvirt.host [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.157 253465 DEBUG nova.virt.libvirt.host [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.165 253465 DEBUG nova.virt.libvirt.host [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.165 253465 DEBUG nova.virt.libvirt.host [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.166 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.166 253465 DEBUG nova.virt.hardware [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-22T03:49:24Z,direct_url=<?>,disk_format='qcow2',id=feac2ecd-89f4-4e45-b9fb-68cb0cf353c3,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='206f6077ca594450bfe783c9e9646000',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-22T03:49:27Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.167 253465 DEBUG nova.virt.hardware [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.167 253465 DEBUG nova.virt.hardware [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.168 253465 DEBUG nova.virt.hardware [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.168 253465 DEBUG nova.virt.hardware [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.168 253465 DEBUG nova.virt.hardware [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.169 253465 DEBUG nova.virt.hardware [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.169 253465 DEBUG nova.virt.hardware [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.170 253465 DEBUG nova.virt.hardware [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.170 253465 DEBUG nova.virt.hardware [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.170 253465 DEBUG nova.virt.hardware [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.174 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:20:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3007939835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.643 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.682 253465 DEBUG nova.storage.rbd_utils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] rbd image a8949f48-8745-4610-8b5a-55b807d8796a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:27 compute-0 nova_compute[253461]: 2025-11-22 04:20:27.685 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:27 compute-0 ceph-mon[75011]: pgmap v2188: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Nov 22 04:20:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3007939835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:28 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:20:28 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1845888410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.150 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.152 253465 DEBUG nova.virt.libvirt.vif [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:20:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1735314438',display_name='tempest-TestEncryptedCinderVolumes-server-1735314438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1735314438',id=28,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMUrIBsKqqcoDmVRtGnTWH8PLWNzOIsa3Tj7T5v4loxUeDYjLK5BES1B3rVlsNh95K2CrCjjEL/5+EhRw79dznGjCC78b/ZBOyNqE4QBUsnhkjQgIOdXQH847JR66Wvmqw==',key_name='tempest-keypair-145789610',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c9d01ebd7e4f4251b66172a246b8a08f',ramdisk_id='',reservation_id='r-cqcpxd7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-230639986',owner_user_name='tempest-TestEncryptedCinderVolumes-230639986-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:20:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='26147ad59e2d4763b8edc27d80567b09',uuid=a8949f48-8745-4610-8b5a-55b807d8796a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c3496b0e-592b-474e-a238-1017d28773e8", "address": "fa:16:3e:2a:b7:07", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3496b0e-59", "ovs_interfaceid": "c3496b0e-592b-474e-a238-1017d28773e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.152 253465 DEBUG nova.network.os_vif_util [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converting VIF {"id": "c3496b0e-592b-474e-a238-1017d28773e8", "address": "fa:16:3e:2a:b7:07", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3496b0e-59", "ovs_interfaceid": "c3496b0e-592b-474e-a238-1017d28773e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.153 253465 DEBUG nova.network.os_vif_util [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:b7:07,bridge_name='br-int',has_traffic_filtering=True,id=c3496b0e-592b-474e-a238-1017d28773e8,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3496b0e-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.154 253465 DEBUG nova.objects.instance [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lazy-loading 'pci_devices' on Instance uuid a8949f48-8745-4610-8b5a-55b807d8796a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.199 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:20:28 compute-0 nova_compute[253461]:   <uuid>a8949f48-8745-4610-8b5a-55b807d8796a</uuid>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   <name>instance-0000001c</name>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1735314438</nova:name>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:20:27</nova:creationTime>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:20:28 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:20:28 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:20:28 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:20:28 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:20:28 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:20:28 compute-0 nova_compute[253461]:         <nova:user uuid="26147ad59e2d4763b8edc27d80567b09">tempest-TestEncryptedCinderVolumes-230639986-project-member</nova:user>
Nov 22 04:20:28 compute-0 nova_compute[253461]:         <nova:project uuid="c9d01ebd7e4f4251b66172a246b8a08f">tempest-TestEncryptedCinderVolumes-230639986</nova:project>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <nova:root type="image" uuid="feac2ecd-89f4-4e45-b9fb-68cb0cf353c3"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:20:28 compute-0 nova_compute[253461]:         <nova:port uuid="c3496b0e-592b-474e-a238-1017d28773e8">
Nov 22 04:20:28 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <system>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <entry name="serial">a8949f48-8745-4610-8b5a-55b807d8796a</entry>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <entry name="uuid">a8949f48-8745-4610-8b5a-55b807d8796a</entry>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     </system>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   <os>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   </os>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   <features>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   </features>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/a8949f48-8745-4610-8b5a-55b807d8796a_disk">
Nov 22 04:20:28 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       </source>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:20:28 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/a8949f48-8745-4610-8b5a-55b807d8796a_disk.config">
Nov 22 04:20:28 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       </source>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:20:28 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:2a:b7:07"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <target dev="tapc3496b0e-59"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/a8949f48-8745-4610-8b5a-55b807d8796a/console.log" append="off"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <video>
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     </video>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:20:28 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:20:28 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:20:28 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:20:28 compute-0 nova_compute[253461]: </domain>
Nov 22 04:20:28 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
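[annotation] The guest XML dump for instance-0000001c ends here. A stdlib-only sketch for pulling the basics back out of such a dump, e.g. for a test assertion; it assumes the XML was saved by hand to a hypothetical /tmp/instance-0000001c.xml:

    import xml.etree.ElementTree as ET

    # The nova metadata block uses this namespace (see the dump above).
    NOVA_NS = {'nova': 'http://openstack.org/xmlns/libvirt/nova/1.1'}

    root = ET.parse('/tmp/instance-0000001c.xml').getroot()
    print(root.findtext('name'))    # instance-0000001c
    print(root.findtext('memory'))  # 131072 KiB == the flavor's 128 MiB
    flavor = root.find('.//nova:flavor', NOVA_NS)
    print(flavor.get('name'))       # m1.nano
    for disk in root.findall('./devices/disk'):
        target = disk.find('target')
        print(disk.get('device'), target.get('dev'), target.get('bus'))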
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.200 253465 DEBUG nova.compute.manager [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Preparing to wait for external event network-vif-plugged-c3496b0e-592b-474e-a238-1017d28773e8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.200 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.200 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.201 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
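[annotation] The three lockutils entries above are the standard oslo.concurrency pattern guarding nova's per-instance event dict. A minimal sketch of the same API, with the lock name copied from the log and a placeholder critical section:

    from oslo_concurrency import lockutils

    # Context-manager form, as used around _create_or_get_event above.
    with lockutils.lock('a8949f48-8745-4610-8b5a-55b807d8796a-events'):
        pass  # mutate the per-instance event dict here

    # Decorator form, for wrapping a whole helper in the same lock.
    @lockutils.synchronized('a8949f48-8745-4610-8b5a-55b807d8796a-events')
    def _create_or_get_event():
        pass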
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.201 253465 DEBUG nova.virt.libvirt.vif [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:20:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1735314438',display_name='tempest-TestEncryptedCinderVolumes-server-1735314438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1735314438',id=28,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMUrIBsKqqcoDmVRtGnTWH8PLWNzOIsa3Tj7T5v4loxUeDYjLK5BES1B3rVlsNh95K2CrCjjEL/5+EhRw79dznGjCC78b/ZBOyNqE4QBUsnhkjQgIOdXQH847JR66Wvmqw==',key_name='tempest-keypair-145789610',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c9d01ebd7e4f4251b66172a246b8a08f',ramdisk_id='',reservation_id='r-cqcpxd7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-230639986',owner_user_name='tempest-TestEncryptedCinderVolumes-230639986-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:20:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='26147ad59e2d4763b8edc27d80567b09',uuid=a8949f48-8745-4610-8b5a-55b807d8796a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c3496b0e-592b-474e-a238-1017d28773e8", "address": "fa:16:3e:2a:b7:07", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3496b0e-59", "ovs_interfaceid": "c3496b0e-592b-474e-a238-1017d28773e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.201 253465 DEBUG nova.network.os_vif_util [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converting VIF {"id": "c3496b0e-592b-474e-a238-1017d28773e8", "address": "fa:16:3e:2a:b7:07", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3496b0e-59", "ovs_interfaceid": "c3496b0e-592b-474e-a238-1017d28773e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.202 253465 DEBUG nova.network.os_vif_util [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:b7:07,bridge_name='br-int',has_traffic_filtering=True,id=c3496b0e-592b-474e-a238-1017d28773e8,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3496b0e-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.202 253465 DEBUG os_vif [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:b7:07,bridge_name='br-int',has_traffic_filtering=True,id=c3496b0e-592b-474e-a238-1017d28773e8,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3496b0e-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.203 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.203 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.203 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.206 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.206 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3496b0e-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.207 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc3496b0e-59, col_values=(('external_ids', {'iface-id': 'c3496b0e-592b-474e-a238-1017d28773e8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:b7:07', 'vm-uuid': 'a8949f48-8745-4610-8b5a-55b807d8796a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.208 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
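[annotation] The AddBridgeCommand/AddPortCommand/DbSetCommand sequence above is how the os-vif ovs plugin programs Open vSwitch through ovsdbapp. A sketch of the same transaction; it assumes api is an already-connected ovsdbapp.schema.open_vswitch.impl_idl.OvsdbIdl instance (building the underlying OVSDB connection is omitted):

    # external_ids copied from the DbSetCommand logged above.
    external_ids = {
        'iface-id': 'c3496b0e-592b-474e-a238-1017d28773e8',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:2a:b7:07',
        'vm-uuid': 'a8949f48-8745-4610-8b5a-55b807d8796a',
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapc3496b0e-59', may_exist=True))
        txn.add(api.db_set('Interface', 'tapc3496b0e-59',
                           ('external_ids', external_ids)))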
Nov 22 04:20:28 compute-0 NetworkManager[48916]: <info>  [1763785228.2093] manager: (tapc3496b0e-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/141)
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.210 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.214 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.216 253465 INFO os_vif [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:b7:07,bridge_name='br-int',has_traffic_filtering=True,id=c3496b0e-592b-474e-a238-1017d28773e8,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3496b0e-59')
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.452 253465 DEBUG nova.network.neutron [req-5cb175c1-e8c5-4170-80cf-fcbe2fa32b51 req-4b758be3-15b9-472d-bc5c-34df8135892b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Updated VIF entry in instance network info cache for port c3496b0e-592b-474e-a238-1017d28773e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.452 253465 DEBUG nova.network.neutron [req-5cb175c1-e8c5-4170-80cf-fcbe2fa32b51 req-4b758be3-15b9-472d-bc5c-34df8135892b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Updating instance_info_cache with network_info: [{"id": "c3496b0e-592b-474e-a238-1017d28773e8", "address": "fa:16:3e:2a:b7:07", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3496b0e-59", "ovs_interfaceid": "c3496b0e-592b-474e-a238-1017d28773e8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.457 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.458 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.458 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] No VIF found with MAC fa:16:3e:2a:b7:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.459 253465 INFO nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Using config drive
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.492 253465 DEBUG nova.storage.rbd_utils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] rbd image a8949f48-8745-4610-8b5a-55b807d8796a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:28 compute-0 nova_compute[253461]: 2025-11-22 04:20:28.501 253465 DEBUG oslo_concurrency.lockutils [req-5cb175c1-e8c5-4170-80cf-fcbe2fa32b51 req-4b758be3-15b9-472d-bc5c-34df8135892b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-a8949f48-8745-4610-8b5a-55b807d8796a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:20:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:20:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1845888410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:29 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:29.169 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:29 compute-0 nova_compute[253461]: 2025-11-22 04:20:29.216 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:29 compute-0 nova_compute[253461]: 2025-11-22 04:20:29.227 253465 INFO nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Creating config drive at /var/lib/nova/instances/a8949f48-8745-4610-8b5a-55b807d8796a/disk.config
Nov 22 04:20:29 compute-0 nova_compute[253461]: 2025-11-22 04:20:29.235 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8949f48-8745-4610-8b5a-55b807d8796a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpemzd7pf4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:29 compute-0 nova_compute[253461]: 2025-11-22 04:20:29.383 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8949f48-8745-4610-8b5a-55b807d8796a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpemzd7pf4" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:29 compute-0 nova_compute[253461]: 2025-11-22 04:20:29.414 253465 DEBUG nova.storage.rbd_utils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] rbd image a8949f48-8745-4610-8b5a-55b807d8796a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:20:29 compute-0 nova_compute[253461]: 2025-11-22 04:20:29.418 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8949f48-8745-4610-8b5a-55b807d8796a/disk.config a8949f48-8745-4610-8b5a-55b807d8796a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:29 compute-0 nova_compute[253461]: 2025-11-22 04:20:29.905 253465 DEBUG oslo_concurrency.processutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8949f48-8745-4610-8b5a-55b807d8796a/disk.config a8949f48-8745-4610-8b5a-55b807d8796a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:29 compute-0 nova_compute[253461]: 2025-11-22 04:20:29.906 253465 INFO nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Deleting local config drive /var/lib/nova/instances/a8949f48-8745-4610-8b5a-55b807d8796a/disk.config because it was imported into RBD.
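[annotation] The config drive is built locally with mkisofs, imported into the Ceph vms pool with rbd import, and the local ISO then deleted (lines above). A subprocess sketch replaying the two commands exactly as logged; the paths and the /tmp staging directory are specific to this run:

    import subprocess

    iso = ('/var/lib/nova/instances/'
           'a8949f48-8745-4610-8b5a-55b807d8796a/disk.config')
    # Arguments copied verbatim from the two CMD lines above.
    subprocess.run(
        ['/usr/bin/mkisofs', '-o', iso, '-ldots', '-allow-lowercase',
         '-allow-multidot', '-l', '-publisher',
         'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
         '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpemzd7pf4'],
        check=True)
    subprocess.run(
        ['rbd', 'import', '--pool', 'vms', iso,
         'a8949f48-8745-4610-8b5a-55b807d8796a_disk.config',
         '--image-format=2', '--id', 'openstack',
         '--conf', '/etc/ceph/ceph.conf'],
        check=True)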
Nov 22 04:20:29 compute-0 kernel: tapc3496b0e-59: entered promiscuous mode
Nov 22 04:20:29 compute-0 ovn_controller[152691]: 2025-11-22T04:20:29Z|00288|binding|INFO|Claiming lport c3496b0e-592b-474e-a238-1017d28773e8 for this chassis.
Nov 22 04:20:29 compute-0 ovn_controller[152691]: 2025-11-22T04:20:29Z|00289|binding|INFO|c3496b0e-592b-474e-a238-1017d28773e8: Claiming fa:16:3e:2a:b7:07 10.100.0.14
Nov 22 04:20:29 compute-0 NetworkManager[48916]: <info>  [1763785229.9727] manager: (tapc3496b0e-59): new Tun device (/org/freedesktop/NetworkManager/Devices/142)
Nov 22 04:20:29 compute-0 nova_compute[253461]: 2025-11-22 04:20:29.971 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:29 compute-0 nova_compute[253461]: 2025-11-22 04:20:29.976 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:29 compute-0 nova_compute[253461]: 2025-11-22 04:20:29.980 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:29 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:29.992 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:b7:07 10.100.0.14'], port_security=['fa:16:3e:2a:b7:07 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a8949f48-8745-4610-8b5a-55b807d8796a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9d01ebd7e4f4251b66172a246b8a08f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '60266484-e60a-46d3-b144-27318949b1bf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5d096b3-2344-4434-a488-92084cb46974, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=c3496b0e-592b-474e-a238-1017d28773e8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:20:29 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:29.993 162689 INFO neutron.agent.ovn.metadata.agent [-] Port c3496b0e-592b-474e-a238-1017d28773e8 in datapath bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 bound to our chassis
Nov 22 04:20:29 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:29.994 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07
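[annotation] The metadata agent reacts to the Port_Binding update above through an ovsdbapp row event. A sketch of the event-class shape only, not neutron's exact implementation; it assumes a southbound IDL connection registers the handler elsewhere:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the port just got a chassis, and it is ours;
            # compare old=Port_Binding(chassis=[]) in the match logged above.
            return (bool(row.chassis) and hasattr(old, 'chassis')
                    and row.chassis[0].name == self.chassis_name)

        def run(self, event, row, old):
            print('port %s bound to our chassis' % row.logical_port)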
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.006 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d0712751-4e3a-433c-b5e6-75e529136ad0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.007 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbd550fd2-d1 in ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.009 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbd550fd2-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.009 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e4b255c6-fabe-4a61-b155-f5a02c38ea90]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 systemd-machined[215728]: New machine qemu-28-instance-0000001c.
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.010 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[192bf3bc-267d-4552-b6ae-3bb07727fd6d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.020 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe1d9c9-abe0-484c-a131-f6e0d62fdbaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-0000001c.
Nov 22 04:20:30 compute-0 ceph-mon[75011]: pgmap v2189: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.049 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[0b4b070e-9dd8-4bc6-be7c-440cf64e4a94]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.053 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:30 compute-0 ovn_controller[152691]: 2025-11-22T04:20:30Z|00290|binding|INFO|Setting lport c3496b0e-592b-474e-a238-1017d28773e8 ovn-installed in OVS
Nov 22 04:20:30 compute-0 ovn_controller[152691]: 2025-11-22T04:20:30Z|00291|binding|INFO|Setting lport c3496b0e-592b-474e-a238-1017d28773e8 up in Southbound
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.060 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:30 compute-0 systemd-udevd[305482]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.079 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[67df9408-0a1f-4892-a2ec-2dd357e1ce7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 NetworkManager[48916]: <info>  [1763785230.0844] device (tapc3496b0e-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:20:30 compute-0 NetworkManager[48916]: <info>  [1763785230.0853] device (tapc3496b0e-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:20:30 compute-0 NetworkManager[48916]: <info>  [1763785230.0867] manager: (tapbd550fd2-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/143)
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.086 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4b49b240-9946-49d0-bbe1-cd6d17394e21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 systemd-udevd[305484]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.113 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c97323-6512-4328-9ff3-c104223c44f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.117 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[f6574113-2013-4a2e-810a-bc961ab7e4a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 NetworkManager[48916]: <info>  [1763785230.1358] device (tapbd550fd2-d0): carrier: link connected
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.141 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[6abc5764-ace2-44fc-9efe-7c400707b4f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.159 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[33710de2-c404-4400-b3c0-b38c38146310]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd550fd2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:cb:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548273, 'reachable_time': 26463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305505, 'error': None, 'target': 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.179 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[621a6afe-f495-4ea5-a4b6-7b63e5a932c5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:cb6a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 548273, 'tstamp': 548273}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305506, 'error': None, 'target': 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.199 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[5e59e042-ec94-4254-be90-820b628e40ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd550fd2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:cb:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548273, 'reachable_time': 26463, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305507, 'error': None, 'target': 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.243 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[207d7acf-a749-4822-a82c-c058996dad4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.312 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[91b61e7f-21bd-4b3c-be20-b25cdbad5b8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.314 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd550fd2-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.314 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.315 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd550fd2-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:30 compute-0 NetworkManager[48916]: <info>  [1763785230.3177] manager: (tapbd550fd2-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.317 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:30 compute-0 kernel: tapbd550fd2-d0: entered promiscuous mode
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.320 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.322 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd550fd2-d0, col_values=(('external_ids', {'iface-id': '1cfe38fd-445a-4e2d-9728-1f7ee0085422'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.323 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:30 compute-0 ovn_controller[152691]: 2025-11-22T04:20:30Z|00292|binding|INFO|Releasing lport 1cfe38fd-445a-4e2d-9728-1f7ee0085422 from this chassis (sb_readonly=0)
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.344 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.346 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.347 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[93280b8b-d246-47db-898b-95bbcfc8b9d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.348 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07.pid.haproxy
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:20:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:30.348 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'env', 'PROCESS_TAG=haproxy-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
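[annotation] The agent renders the haproxy configuration printed above from a template and then launches haproxy inside the ovnmeta- namespace via rootwrap. An illustrative, abridged rendering of a comparable config with string.Template; this is not neutron's actual template, and only the substituted values come from the log:

    from string import Template

    # Abridged: only the fields that vary per network are shown here.
    CFG = Template(
        'global\n'
        '    log-tag haproxy-metadata-proxy-$network_id\n'
        '    pidfile $pidfile\n'
        '    daemon\n'
        '\n'
        'listen listener\n'
        '    bind $bind_ip:80\n'
        '    server metadata $socket_path\n'
        '    http-request add-header X-OVN-Network-ID $network_id\n')

    net = 'bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07'
    print(CFG.substitute(
        network_id=net,
        pidfile='/var/lib/neutron/external/pids/%s.pid.haproxy' % net,
        bind_ip='169.254.169.254',
        socket_path='/var/lib/neutron/metadata_proxy'))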
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.709 253465 DEBUG nova.compute.manager [req-92ccce3e-ba10-4c2c-9815-9714ff8be619 req-2dea4c54-bf19-4ed7-86d4-5cd231324039 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Received event network-vif-plugged-c3496b0e-592b-474e-a238-1017d28773e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.710 253465 DEBUG oslo_concurrency.lockutils [req-92ccce3e-ba10-4c2c-9815-9714ff8be619 req-2dea4c54-bf19-4ed7-86d4-5cd231324039 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.710 253465 DEBUG oslo_concurrency.lockutils [req-92ccce3e-ba10-4c2c-9815-9714ff8be619 req-2dea4c54-bf19-4ed7-86d4-5cd231324039 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.710 253465 DEBUG oslo_concurrency.lockutils [req-92ccce3e-ba10-4c2c-9815-9714ff8be619 req-2dea4c54-bf19-4ed7-86d4-5cd231324039 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.711 253465 DEBUG nova.compute.manager [req-92ccce3e-ba10-4c2c-9815-9714ff8be619 req-2dea4c54-bf19-4ed7-86d4-5cd231324039 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Processing event network-vif-plugged-c3496b0e-592b-474e-a238-1017d28773e8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
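The Acquiring/acquired/released triple around pop_instance_event is the standard oslo.concurrency pattern: a named lock, "<instance-uuid>-events", serializes all event bookkeeping for one instance. A minimal sketch of the same shape (heavily simplified; the real structure lives in nova.compute.manager.InstanceEvents), which also shows why a pop with no registered waiter produces the "No waiting events found dispatching ..." debug line seen further down:

    from oslo_concurrency import lockutils

    _events = {}  # {instance_uuid: {event_name: waiter}}

    def pop_instance_event(instance_uuid, event_name):
        # Same named-lock convention as the log: "<uuid>-events".
        with lockutils.lock("%s-events" % instance_uuid):
            waiters = _events.get(instance_uuid, {})
            # Returns None when nothing is waiting on this event, in which
            # case the manager logs "No waiting events found" and, if the
            # instance is already active, the "unexpected event" warning.
            return waiters.pop(event_name, None)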
Nov 22 04:20:30 compute-0 podman[305575]: 2025-11-22 04:20:30.755566334 +0000 UTC m=+0.052036829 container create 2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 22 04:20:30 compute-0 systemd[1]: Started libpod-conmon-2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd.scope.
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.806 253465 DEBUG nova.compute.manager [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.807 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785230.8058114, a8949f48-8745-4610-8b5a-55b807d8796a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.807 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] VM Started (Lifecycle Event)
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.810 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:20:30 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.815 253465 INFO nova.virt.libvirt.driver [-] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Instance spawned successfully.
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.816 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:20:30 compute-0 podman[305575]: 2025-11-22 04:20:30.722986954 +0000 UTC m=+0.019457429 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:20:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89370dcfb7c85ab98aa21d157b163a1268fc011ad665cfa43117c03146916392/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.836 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.845 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:20:30 compute-0 podman[305575]: 2025-11-22 04:20:30.84729186 +0000 UTC m=+0.143762305 container init 2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.847 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.847 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.848 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.848 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.849 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.849 253465 DEBUG nova.virt.libvirt.driver [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
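Right after spawn the libvirt driver records the bus and model defaults it actually chose (sata cdrom, virtio disk/video/vif, usb input, usbtablet pointer) so later operations, such as device attach, keep using them even if the global defaults change across upgrades. A toy sketch of that registration loop, with hypothetical names:

    # Hypothetical sketch: fill a default only where the image metadata left
    # the property unset, mirroring _register_undefined_instance_details.
    DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_defaults(image_props):
        resolved = dict(image_props)
        for prop, default in DEFAULTS.items():
            # Each hit corresponds to a "Found default for <prop> of <value>"
            # debug line in the log above.
            resolved.setdefault(prop, default)
        return resolved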
Nov 22 04:20:30 compute-0 podman[305575]: 2025-11-22 04:20:30.853638152 +0000 UTC m=+0.150108597 container start 2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Nov 22 04:20:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:20:30 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[305596]: [NOTICE]   (305600) : New worker (305602) forked
Nov 22 04:20:30 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[305596]: [NOTICE]   (305600) : Loading success.
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.881 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.882 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785230.8068902, a8949f48-8745-4610-8b5a-55b807d8796a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.882 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] VM Paused (Lifecycle Event)
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.909 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.913 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785230.8103883, a8949f48-8745-4610-8b5a-55b807d8796a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.913 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] VM Resumed (Lifecycle Event)
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.922 253465 INFO nova.compute.manager [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Took 11.47 seconds to spawn the instance on the hypervisor.
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.922 253465 DEBUG nova.compute.manager [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.932 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.936 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:20:30 compute-0 nova_compute[253461]: 2025-11-22 04:20:30.974 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] During sync_power_state the instance has a pending task (spawning). Skip.
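The lifecycle handler compares the DB power state (0, NOSTATE) against the hypervisor's (1, RUNNING), but defers whenever a task is still pending, which is exactly the "pending task (spawning). Skip." lines above: syncing mid-build would race the spawn. A compact sketch of that decision, under assumed names:

    # Power-state constants as nova encodes them (0 = NOSTATE, 1 = RUNNING).
    NOSTATE, RUNNING = 0, 1

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            # "During sync_power_state the instance has a pending task
            # (spawning). Skip."
            return "skip"
        if db_power_state != vm_power_state:
            return "update-db"   # reconcile the DB with the hypervisor
        return "in-sync"

    assert sync_power_state(NOSTATE, RUNNING, "spawning") == "skip"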
Nov 22 04:20:31 compute-0 nova_compute[253461]: 2025-11-22 04:20:31.016 253465 INFO nova.compute.manager [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Took 12.42 seconds to build instance.
Nov 22 04:20:31 compute-0 nova_compute[253461]: 2025-11-22 04:20:31.046 253465 DEBUG oslo_concurrency.lockutils [None req-44d4dcdd-236a-4843-87a2-90e134b1b118 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.516s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:32 compute-0 ceph-mon[75011]: pgmap v2190: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Nov 22 04:20:32 compute-0 nova_compute[253461]: 2025-11-22 04:20:32.843 253465 DEBUG nova.compute.manager [req-7e6086e0-2298-426c-a3cb-0fd2907df836 req-e9e8ab94-94bc-4912-9bce-0064996ee443 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Received event network-vif-plugged-c3496b0e-592b-474e-a238-1017d28773e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:20:32 compute-0 nova_compute[253461]: 2025-11-22 04:20:32.844 253465 DEBUG oslo_concurrency.lockutils [req-7e6086e0-2298-426c-a3cb-0fd2907df836 req-e9e8ab94-94bc-4912-9bce-0064996ee443 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:32 compute-0 nova_compute[253461]: 2025-11-22 04:20:32.844 253465 DEBUG oslo_concurrency.lockutils [req-7e6086e0-2298-426c-a3cb-0fd2907df836 req-e9e8ab94-94bc-4912-9bce-0064996ee443 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:32 compute-0 nova_compute[253461]: 2025-11-22 04:20:32.845 253465 DEBUG oslo_concurrency.lockutils [req-7e6086e0-2298-426c-a3cb-0fd2907df836 req-e9e8ab94-94bc-4912-9bce-0064996ee443 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:32 compute-0 nova_compute[253461]: 2025-11-22 04:20:32.845 253465 DEBUG nova.compute.manager [req-7e6086e0-2298-426c-a3cb-0fd2907df836 req-e9e8ab94-94bc-4912-9bce-0064996ee443 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] No waiting events found dispatching network-vif-plugged-c3496b0e-592b-474e-a238-1017d28773e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:20:32 compute-0 nova_compute[253461]: 2025-11-22 04:20:32.846 253465 WARNING nova.compute.manager [req-7e6086e0-2298-426c-a3cb-0fd2907df836 req-e9e8ab94-94bc-4912-9bce-0064996ee443 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Received unexpected event network-vif-plugged-c3496b0e-592b-474e-a238-1017d28773e8 for instance with vm_state active and task_state None.
Nov 22 04:20:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 1.1 MiB/s wr, 31 op/s
Nov 22 04:20:33 compute-0 nova_compute[253461]: 2025-11-22 04:20:33.243 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:34 compute-0 ceph-mon[75011]: pgmap v2191: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 122 KiB/s rd, 1.1 MiB/s wr, 31 op/s
Nov 22 04:20:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:34 compute-0 nova_compute[253461]: 2025-11-22 04:20:34.222 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 994 KiB/s wr, 55 op/s
Nov 22 04:20:35 compute-0 NetworkManager[48916]: <info>  [1763785235.4225] manager: (patch-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/145)
Nov 22 04:20:35 compute-0 NetworkManager[48916]: <info>  [1763785235.4232] manager: (patch-br-int-to-provnet-44994419-adcd-4ac1-96f2-273d73ef38c9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Nov 22 04:20:35 compute-0 nova_compute[253461]: 2025-11-22 04:20:35.422 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:35 compute-0 ovn_controller[152691]: 2025-11-22T04:20:35Z|00293|binding|INFO|Releasing lport 1cfe38fd-445a-4e2d-9728-1f7ee0085422 from this chassis (sb_readonly=0)
Nov 22 04:20:35 compute-0 nova_compute[253461]: 2025-11-22 04:20:35.551 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:35 compute-0 ovn_controller[152691]: 2025-11-22T04:20:35Z|00294|binding|INFO|Releasing lport 1cfe38fd-445a-4e2d-9728-1f7ee0085422 from this chassis (sb_readonly=0)
Nov 22 04:20:35 compute-0 nova_compute[253461]: 2025-11-22 04:20:35.702 253465 DEBUG nova.compute.manager [req-c88544af-4521-444e-9299-fea9d7c88668 req-ed8400af-b72e-40bb-a8f5-ef1fa42c0028 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Received event network-changed-c3496b0e-592b-474e-a238-1017d28773e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:20:35 compute-0 nova_compute[253461]: 2025-11-22 04:20:35.704 253465 DEBUG nova.compute.manager [req-c88544af-4521-444e-9299-fea9d7c88668 req-ed8400af-b72e-40bb-a8f5-ef1fa42c0028 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Refreshing instance network info cache due to event network-changed-c3496b0e-592b-474e-a238-1017d28773e8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:20:35 compute-0 nova_compute[253461]: 2025-11-22 04:20:35.707 253465 DEBUG oslo_concurrency.lockutils [req-c88544af-4521-444e-9299-fea9d7c88668 req-ed8400af-b72e-40bb-a8f5-ef1fa42c0028 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-a8949f48-8745-4610-8b5a-55b807d8796a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:20:35 compute-0 nova_compute[253461]: 2025-11-22 04:20:35.707 253465 DEBUG oslo_concurrency.lockutils [req-c88544af-4521-444e-9299-fea9d7c88668 req-ed8400af-b72e-40bb-a8f5-ef1fa42c0028 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-a8949f48-8745-4610-8b5a-55b807d8796a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:20:35 compute-0 nova_compute[253461]: 2025-11-22 04:20:35.707 253465 DEBUG nova.network.neutron [req-c88544af-4521-444e-9299-fea9d7c88668 req-ed8400af-b72e-40bb-a8f5-ef1fa42c0028 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Refreshing network info cache for port c3496b0e-592b-474e-a238-1017d28773e8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:20:36 compute-0 ceph-mon[75011]: pgmap v2192: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 994 KiB/s wr, 55 op/s
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:20:36
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'backups', 'images', 'vms', 'default.rgw.control']
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:20:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 22 04:20:37 compute-0 nova_compute[253461]: 2025-11-22 04:20:37.268 253465 DEBUG nova.network.neutron [req-c88544af-4521-444e-9299-fea9d7c88668 req-ed8400af-b72e-40bb-a8f5-ef1fa42c0028 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Updated VIF entry in instance network info cache for port c3496b0e-592b-474e-a238-1017d28773e8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:20:37 compute-0 nova_compute[253461]: 2025-11-22 04:20:37.269 253465 DEBUG nova.network.neutron [req-c88544af-4521-444e-9299-fea9d7c88668 req-ed8400af-b72e-40bb-a8f5-ef1fa42c0028 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Updating instance_info_cache with network_info: [{"id": "c3496b0e-592b-474e-a238-1017d28773e8", "address": "fa:16:3e:2a:b7:07", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3496b0e-59", "ovs_interfaceid": "c3496b0e-592b-474e-a238-1017d28773e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:20:37 compute-0 nova_compute[253461]: 2025-11-22 04:20:37.297 253465 DEBUG oslo_concurrency.lockutils [req-c88544af-4521-444e-9299-fea9d7c88668 req-ed8400af-b72e-40bb-a8f5-ef1fa42c0028 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-a8949f48-8745-4610-8b5a-55b807d8796a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
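The network_info blob written to the instance cache above is plain JSON, so pulling out the fixed address (10.100.0.14) and its floating IP (192.168.122.194) is a straightforward dictionary walk. A small sketch, assuming the blob is available as a JSON string:

    import json

    def addresses(network_info_json):
        results = []
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    results.append(ip["address"])          # fixed: 10.100.0.14
                    for fip in ip.get("floating_ips", []):
                        results.append(fip["address"])     # floating: 192.168.122.194
        return results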
Nov 22 04:20:37 compute-0 podman[305612]: 2025-11-22 04:20:37.408154892 +0000 UTC m=+0.087606991 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:20:38 compute-0 ceph-mon[75011]: pgmap v2193: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 22 04:20:38 compute-0 nova_compute[253461]: 2025-11-22 04:20:38.249 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 22 04:20:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:39 compute-0 nova_compute[253461]: 2025-11-22 04:20:39.226 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:40 compute-0 ceph-mon[75011]: pgmap v2194: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 22 04:20:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 22 04:20:42 compute-0 ceph-mon[75011]: pgmap v2195: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 22 04:20:42 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 22 04:20:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 22 04:20:43 compute-0 nova_compute[253461]: 2025-11-22 04:20:43.251 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:43 compute-0 ovn_controller[152691]: 2025-11-22T04:20:43Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2a:b7:07 10.100.0.14
Nov 22 04:20:43 compute-0 ovn_controller[152691]: 2025-11-22T04:20:43Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:b7:07 10.100.0.14
Nov 22 04:20:44 compute-0 ceph-mon[75011]: pgmap v2196: 305 pgs: 305 active+clean; 134 MiB data, 518 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Nov 22 04:20:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:44 compute-0 nova_compute[253461]: 2025-11-22 04:20:44.227 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 148 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 589 KiB/s wr, 86 op/s
Nov 22 04:20:46 compute-0 ceph-mon[75011]: pgmap v2197: 305 pgs: 305 active+clean; 148 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 589 KiB/s wr, 86 op/s
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0004605874967441558 of space, bias 1.0, pg target 0.13817624902324674 quantized to 32 (current 32)
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00035159302353975384 of space, bias 1.0, pg target 0.10547790706192615 quantized to 32 (current 32)
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
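Each pg_autoscaler line multiplies the pool's share of raw space by its bias and a cluster-wide PG budget, then snaps the result to a power of two with a per-pool floor; that floor is why a raw target of 0.138 still lands on 32 (and why cephfs.cephfs.meta, with a different floor, lands on 16). A worked sketch: the budget of 300 is inferred from the logged ratios (0.1381762 / 0.0004605875 ≈ 300) and the floor value is an assumption for illustration, not the autoscaler's exact rule:

    def quantize_pg_target(usage_ratio, bias, total_pg_budget, pg_floor=32):
        raw = usage_ratio * bias * total_pg_budget
        # Snap up to the next power of two, never below the pool's floor.
        target = pg_floor
        while target < raw:
            target *= 2
        return target

    # Pool 'vms' from the log: raw target ~0.138 -> clamped to the floor, 32.
    print(quantize_pg_target(0.0004605874967441558, 1.0, 300))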
Nov 22 04:20:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 22 04:20:47 compute-0 ceph-mon[75011]: pgmap v2198: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 94 op/s
Nov 22 04:20:48 compute-0 nova_compute[253461]: 2025-11-22 04:20:48.255 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:20:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:49 compute-0 nova_compute[253461]: 2025-11-22 04:20:49.229 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:49 compute-0 ceph-mon[75011]: pgmap v2199: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:20:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:20:52 compute-0 ceph-mon[75011]: pgmap v2200: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:20:52 compute-0 nova_compute[253461]: 2025-11-22 04:20:52.531 253465 DEBUG oslo_concurrency.lockutils [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "a8949f48-8745-4610-8b5a-55b807d8796a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:52 compute-0 nova_compute[253461]: 2025-11-22 04:20:52.532 253465 DEBUG oslo_concurrency.lockutils [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:52 compute-0 nova_compute[253461]: 2025-11-22 04:20:52.551 253465 DEBUG nova.objects.instance [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lazy-loading 'flavor' on Instance uuid a8949f48-8745-4610-8b5a-55b807d8796a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:20:52 compute-0 nova_compute[253461]: 2025-11-22 04:20:52.594 253465 DEBUG oslo_concurrency.lockutils [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:52 compute-0 nova_compute[253461]: 2025-11-22 04:20:52.827 253465 DEBUG oslo_concurrency.lockutils [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "a8949f48-8745-4610-8b5a-55b807d8796a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:52 compute-0 nova_compute[253461]: 2025-11-22 04:20:52.827 253465 DEBUG oslo_concurrency.lockutils [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:52 compute-0 nova_compute[253461]: 2025-11-22 04:20:52.828 253465 INFO nova.compute.manager [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Attaching volume 46939baa-b83b-4c12-8a43-045be7cd9191 to /dev/vdb
Nov 22 04:20:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:20:52 compute-0 nova_compute[253461]: 2025-11-22 04:20:52.984 253465 DEBUG os_brick.utils [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:20:52 compute-0 nova_compute[253461]: 2025-11-22 04:20:52.985 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.004 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.005 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[a4f2cc45-fa7c-491b-961f-88c34250ae03]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.007 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.023 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.023 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[adc3be01-bc74-4b53-a881-e014fd05dc9c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.026 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.043 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.044 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[f8538dd3-2d2e-46bc-8914-bf5df5b220e4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.045 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbf1656-c06b-41fc-9213-6466db7ad039]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
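The three probes above (multipathd status, the iSCSI initiator name, the root filesystem source) are run for the volume connector through the privsep daemon via oslo.concurrency's processutils, which returns an (stdout, stderr) pair and raises ProcessExecutionError on a non-zero exit. A minimal standalone equivalent, without the privsep hop:

    from oslo_concurrency import processutils

    # Each call corresponds to a "Running cmd (subprocess)" /
    # 'CMD "..." returned: 0' pair in the log.
    status, _ = processutils.execute("multipathd", "show", "status")
    initiator, _ = processutils.execute("cat", "/etc/iscsi/initiatorname.iscsi")
    rootfs, _ = processutils.execute("findmnt", "-v", "/", "-n", "-o", "SOURCE")
    print(status, initiator, rootfs)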
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.046 253465 DEBUG oslo_concurrency.processutils [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.080 253465 DEBUG oslo_concurrency.processutils [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.084 253465 DEBUG os_brick.initiator.connectors.lightos [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.085 253465 DEBUG os_brick.initiator.connectors.lightos [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.086 253465 DEBUG os_brick.initiator.connectors.lightos [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.086 253465 DEBUG os_brick.utils [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] <== get_connector_properties: return (101ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
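get_connector_properties is a public os-brick entry point, so the call traced above can be reproduced directly; its return value is the dict in the '<== get_connector_properties' line (initiator IQN, host NQN/ID, multipath flags) that Nova passes to Cinder when creating the attachment. A sketch with the same arguments as the log:

    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.100",
        multipath=True,
        enforce_multipath=True,
        host="compute-0.ctlplane.example.com",
    )
    # e.g. iqn.1994-05.com.redhat:... and nqn.2014-08.org.nvmexpress:uuid:...
    print(props["initiator"], props["nqn"])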
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.087 253465 DEBUG nova.virt.block_device [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Updating existing volume attachment record: f49c7059-181c-4b94-a302-87ab3a908ddd _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:20:53 compute-0 nova_compute[253461]: 2025-11-22 04:20:53.306 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:53 compute-0 podman[305642]: 2025-11-22 04:20:53.455325087 +0000 UTC m=+0.112040514 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 22 04:20:53 compute-0 podman[305643]: 2025-11-22 04:20:53.483394248 +0000 UTC m=+0.135011007 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 22 04:20:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:20:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/301091665' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.030 253465 DEBUG os_brick.encryptors [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Using volume encryption metadata '{'encryption_key_id': '64d417d2-8644-436b-83a7-2ea0db0a7feb', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-46939baa-b83b-4c12-8a43-045be7cd9191', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '46939baa-b83b-4c12-8a43-045be7cd9191', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'a8949f48-8745-4610-8b5a-55b807d8796a', 'attached_at': '', 'detached_at': '', 'volume_id': '46939baa-b83b-4c12-8a43-045be7cd9191', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.039 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.060 253465 DEBUG barbicanclient.v1.secrets [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.061 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.102 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.103 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 ceph-mon[75011]: pgmap v2201: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:20:54 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/301091665' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:20:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.151 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.152 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.186 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.186 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.225 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.226 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.231 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.266 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.267 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.295 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.296 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.328 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.329 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.353 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.353 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.377 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.377 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.396 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.396 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.414 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.415 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.440 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.441 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.452 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.453 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.453 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.453 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.454 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.485 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.486 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.513 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.513 253465 INFO barbicanclient.base [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/64d417d2-8644-436b-83a7-2ea0db0a7feb
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.539 253465 DEBUG barbicanclient.client [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.541 253465 DEBUG nova.virt.libvirt.host [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 22 04:20:54 compute-0 nova_compute[253461]:   <usage type="volume">
Nov 22 04:20:54 compute-0 nova_compute[253461]:     <volume>46939baa-b83b-4c12-8a43-045be7cd9191</volume>
Nov 22 04:20:54 compute-0 nova_compute[253461]:   </usage>
Nov 22 04:20:54 compute-0 nova_compute[253461]: </secret>
Nov 22 04:20:54 compute-0 nova_compute[253461]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
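[editor's note] create_secret at host.py:1131 registers the key fetched from Barbican with libvirt so QEMU can open the LUKS layer at attach time. A sketch of the equivalent libvirt-python calls, assuming the passphrase bytes are already in hand; the connection URI and value are placeholders:

    # Sketch only: define the volume secret logged above and set its value.
    import libvirt

    SECRET_XML = """<secret ephemeral="no" private="no">
      <usage type="volume">
        <volume>46939baa-b83b-4c12-8a43-045be7cd9191</volume>
      </usage>
    </secret>"""

    conn = libvirt.open('qemu:///system')
    secret = conn.secretDefineXML(SECRET_XML)
    secret.setValue(b'passphrase-from-barbican')  # placeholder bytes
    conn.close()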
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.555 253465 DEBUG nova.objects.instance [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lazy-loading 'flavor' on Instance uuid a8949f48-8745-4610-8b5a-55b807d8796a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.578 253465 DEBUG nova.virt.libvirt.driver [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Attempting to attach volume 46939baa-b83b-4c12-8a43-045be7cd9191 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.581 253465 DEBUG nova.virt.libvirt.guest [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] attach device xml: <disk type="network" device="disk">
Nov 22 04:20:54 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:20:54 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-46939baa-b83b-4c12-8a43-045be7cd9191">
Nov 22 04:20:54 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:20:54 compute-0 nova_compute[253461]:   </source>
Nov 22 04:20:54 compute-0 nova_compute[253461]:   <auth username="openstack">
Nov 22 04:20:54 compute-0 nova_compute[253461]:     <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:20:54 compute-0 nova_compute[253461]:   </auth>
Nov 22 04:20:54 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:20:54 compute-0 nova_compute[253461]:   <serial>46939baa-b83b-4c12-8a43-045be7cd9191</serial>
Nov 22 04:20:54 compute-0 nova_compute[253461]:   <encryption format="luks">
Nov 22 04:20:54 compute-0 nova_compute[253461]:     <secret type="passphrase" uuid="dbda03a8-f92c-484b-8dc1-fa651514feac"/>
Nov 22 04:20:54 compute-0 nova_compute[253461]:   </encryption>
Nov 22 04:20:54 compute-0 nova_compute[253461]: </disk>
Nov 22 04:20:54 compute-0 nova_compute[253461]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
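[editor's note] The <disk> element above is handed to libvirt with both the live and persistent flags, which is why the later detach at 04:20:58 has to touch both configs. A minimal libvirt-python sketch of the attach; the domain name is taken from the instance-0000001c entries elsewhere in this log, the URI is a placeholder:

    # Sketch only: attach the logged RBD disk to the running guest,
    # affecting both the live and the persistent domain definition.
    import libvirt

    disk_xml = """<disk type="network" device="disk">
      <driver name="qemu" type="raw" cache="none" discard="unmap"/>
      <source protocol="rbd" name="volumes/volume-46939baa-b83b-4c12-8a43-045be7cd9191">
        <host name="192.168.122.100" port="6789"/>
      </source>
      <auth username="openstack">
        <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
      </auth>
      <target dev="vdb" bus="virtio"/>
      <serial>46939baa-b83b-4c12-8a43-045be7cd9191</serial>
      <encryption format="luks">
        <secret type="passphrase" uuid="dbda03a8-f92c-484b-8dc1-fa651514feac"/>
      </encryption>
    </disk>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-0000001c')
    dom.attachDeviceFlags(disk_xml,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE |
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()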
Nov 22 04:20:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:20:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:20:54 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3290678099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:54 compute-0 nova_compute[253461]: 2025-11-22 04:20:54.958 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
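[editor's note] update_available_resource shells out to `ceph df --format=json` to size the RBD-backed disk pool (0.504s here). A sketch reproducing that probe; the JSON key names are as found in current Ceph releases and should be treated as an assumption:

    # Sketch: reproduce the resource tracker's pool-capacity probe.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print('avail: %.1f GiB' % (stats['total_avail_bytes'] / 1024 ** 3))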
Nov 22 04:20:55 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3290678099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:56 compute-0 ceph-mon[75011]: pgmap v2202: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Nov 22 04:20:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 1.6 MiB/s wr, 51 op/s
Nov 22 04:20:56 compute-0 nova_compute[253461]: 2025-11-22 04:20:56.993 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:20:56 compute-0 nova_compute[253461]: 2025-11-22 04:20:56.994 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:20:56 compute-0 nova_compute[253461]: 2025-11-22 04:20:56.995 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000001c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.046 253465 DEBUG nova.virt.libvirt.driver [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.047 253465 DEBUG nova.virt.libvirt.driver [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.048 253465 DEBUG nova.virt.libvirt.driver [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.048 253465 DEBUG nova.virt.libvirt.driver [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] No VIF found with MAC fa:16:3e:2a:b7:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:20:57 compute-0 ceph-mon[75011]: pgmap v2203: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 1.6 MiB/s wr, 51 op/s
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.276 253465 DEBUG oslo_concurrency.lockutils [None req-6e5180e1-9645-4bb7-88ec-604c619ab704 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.296 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.297 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4138MB free_disk=59.942718505859375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.298 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.298 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.372 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance a8949f48-8745-4610-8b5a-55b807d8796a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.373 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.373 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.498 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing inventories for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.522 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating ProviderTree inventory for provider 62e18608-eaef-4f09-8e92-06d41e51d580 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.523 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Updating inventory in ProviderTree for provider 62e18608-eaef-4f09-8e92-06d41e51d580 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
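[editor's note] The inventory pushed to placement above sets schedulable capacity per resource class as (total - reserved) * allocation_ratio. Worked out for this host:

    # Placement's usable-capacity rule applied to the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2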
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.541 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing aggregate associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.580 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Refreshing trait associations for resource provider 62e18608-eaef-4f09-8e92-06d41e51d580, traits: COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SVM,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_USB,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_ABM,HW_CPU_X86_AMD_SVM,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE4A,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_SSSE3,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_AESNI,COMPUTE_ACCELERATORS,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AVX,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE,HW_CPU_X86_SSE41,COMPUTE_DEVICE_TAGGING,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AVX2 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.645 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.906 253465 DEBUG oslo_concurrency.lockutils [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "a8949f48-8745-4610-8b5a-55b807d8796a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.907 253465 DEBUG oslo_concurrency.lockutils [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:57 compute-0 nova_compute[253461]: 2025-11-22 04:20:57.934 253465 INFO nova.compute.manager [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Detaching volume 46939baa-b83b-4c12-8a43-045be7cd9191
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.099 253465 INFO nova.virt.block_device [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Attempting to driver detach volume 46939baa-b83b-4c12-8a43-045be7cd9191 from mountpoint /dev/vdb
Nov 22 04:20:58 compute-0 sshd-session[305729]: Invalid user admin from 27.79.46.85 port 34486
Nov 22 04:20:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:20:58 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/774905147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.126 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.134 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.150 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.169 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.169 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/774905147' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.271 253465 DEBUG os_brick.encryptors [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Using volume encryption metadata '{'encryption_key_id': '64d417d2-8644-436b-83a7-2ea0db0a7feb', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-46939baa-b83b-4c12-8a43-045be7cd9191', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '46939baa-b83b-4c12-8a43-045be7cd9191', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'a8949f48-8745-4610-8b5a-55b807d8796a', 'attached_at': '', 'detached_at': '', 'volume_id': '46939baa-b83b-4c12-8a43-045be7cd9191', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.280 253465 DEBUG nova.virt.libvirt.driver [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Attempting to detach device vdb from instance a8949f48-8745-4610-8b5a-55b807d8796a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.281 253465 DEBUG nova.virt.libvirt.guest [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:20:58 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-46939baa-b83b-4c12-8a43-045be7cd9191">
Nov 22 04:20:58 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   </source>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   <serial>46939baa-b83b-4c12-8a43-045be7cd9191</serial>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   <encryption format="luks">
Nov 22 04:20:58 compute-0 nova_compute[253461]:     <secret type="passphrase" uuid="dbda03a8-f92c-484b-8dc1-fa651514feac"/>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   </encryption>
Nov 22 04:20:58 compute-0 nova_compute[253461]: </disk>
Nov 22 04:20:58 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.290 253465 INFO nova.virt.libvirt.driver [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Successfully detached device vdb from instance a8949f48-8745-4610-8b5a-55b807d8796a from the persistent domain config.
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.291 253465 DEBUG nova.virt.libvirt.driver [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance a8949f48-8745-4610-8b5a-55b807d8796a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.292 253465 DEBUG nova.virt.libvirt.guest [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] detach device xml: <disk type="network" device="disk">
Nov 22 04:20:58 compute-0 nova_compute[253461]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   <source protocol="rbd" name="volumes/volume-46939baa-b83b-4c12-8a43-045be7cd9191">
Nov 22 04:20:58 compute-0 nova_compute[253461]:     <host name="192.168.122.100" port="6789"/>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   </source>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   <target dev="vdb" bus="virtio"/>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   <serial>46939baa-b83b-4c12-8a43-045be7cd9191</serial>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   <encryption format="luks">
Nov 22 04:20:58 compute-0 nova_compute[253461]:     <secret type="passphrase" uuid="dbda03a8-f92c-484b-8dc1-fa651514feac"/>
Nov 22 04:20:58 compute-0 nova_compute[253461]:   </encryption>
Nov 22 04:20:58 compute-0 nova_compute[253461]: </disk>
Nov 22 04:20:58 compute-0 nova_compute[253461]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.361 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.423 253465 DEBUG nova.virt.libvirt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Received event <DeviceRemovedEvent: 1763785258.4228616, a8949f48-8745-4610-8b5a-55b807d8796a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.425 253465 DEBUG nova.virt.libvirt.driver [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance a8949f48-8745-4610-8b5a-55b807d8796a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.428 253465 INFO nova.virt.libvirt.driver [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Successfully detached device vdb from instance a8949f48-8745-4610-8b5a-55b807d8796a from the live domain config.
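[editor's note] The detach above is two-phase — persistent config first, then live — and the live side is only declared successful once libvirt delivers the device-removed event (req-e3b867c3 dispatched it at 04:20:58.423). Nova registers a VIR_DOMAIN_EVENT_ID_DEVICE_REMOVED callback for that; the sketch below substitutes a simplified polling wait:

    # Sketch only: two-phase detach with a polling wait standing in
    # for nova's device-removed event subscription.
    import time
    import libvirt

    def detach_and_wait(dom, disk_xml, target_dev='vdb', timeout=20.0):
        dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
        dom.detachDeviceFlags(disk_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if '<target dev="%s"' % target_dev not in dom.XMLDesc(0):
                return True   # device gone from the live domain XML
            time.sleep(0.5)
        return False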
Nov 22 04:20:58 compute-0 sshd-session[305729]: Connection closed by invalid user admin 27.79.46.85 port 34486 [preauth]
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.790 253465 DEBUG nova.objects.instance [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lazy-loading 'flavor' on Instance uuid a8949f48-8745-4610-8b5a-55b807d8796a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:20:58 compute-0 nova_compute[253461]: 2025-11-22 04:20:58.838 253465 DEBUG oslo_concurrency.lockutils [None req-8eab9801-341e-4d78-892e-7b011e17d8d6 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 13 KiB/s wr, 5 op/s
Nov 22 04:20:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:20:59 compute-0 ceph-mon[75011]: pgmap v2204: 305 pgs: 305 active+clean; 167 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 13 KiB/s wr, 5 op/s
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.234 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.605 253465 DEBUG oslo_concurrency.lockutils [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "a8949f48-8745-4610-8b5a-55b807d8796a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.606 253465 DEBUG oslo_concurrency.lockutils [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.606 253465 DEBUG oslo_concurrency.lockutils [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.607 253465 DEBUG oslo_concurrency.lockutils [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.608 253465 DEBUG oslo_concurrency.lockutils [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.609 253465 INFO nova.compute.manager [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Terminating instance
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.612 253465 DEBUG nova.compute.manager [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:20:59 compute-0 kernel: tapc3496b0e-59 (unregistering): left promiscuous mode
Nov 22 04:20:59 compute-0 NetworkManager[48916]: <info>  [1763785259.6779] device (tapc3496b0e-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.689 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:59 compute-0 ovn_controller[152691]: 2025-11-22T04:20:59Z|00295|binding|INFO|Releasing lport c3496b0e-592b-474e-a238-1017d28773e8 from this chassis (sb_readonly=0)
Nov 22 04:20:59 compute-0 ovn_controller[152691]: 2025-11-22T04:20:59Z|00296|binding|INFO|Setting lport c3496b0e-592b-474e-a238-1017d28773e8 down in Southbound
Nov 22 04:20:59 compute-0 ovn_controller[152691]: 2025-11-22T04:20:59Z|00297|binding|INFO|Removing iface tapc3496b0e-59 ovn-installed in OVS
Nov 22 04:20:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:59.699 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:b7:07 10.100.0.14'], port_security=['fa:16:3e:2a:b7:07 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a8949f48-8745-4610-8b5a-55b807d8796a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9d01ebd7e4f4251b66172a246b8a08f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '60266484-e60a-46d3-b144-27318949b1bf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.194'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5d096b3-2344-4434-a488-92084cb46974, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=c3496b0e-592b-474e-a238-1017d28773e8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:20:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:59.702 162689 INFO neutron.agent.ovn.metadata.agent [-] Port c3496b0e-592b-474e-a238-1017d28773e8 in datapath bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 unbound from our chassis
Nov 22 04:20:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:59.704 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:20:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:59.706 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c61d472b-9fc7-4cbc-ac01-7f7a0df2a27b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:20:59 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:20:59.707 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 namespace which is not needed anymore
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.721 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:59 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Nov 22 04:20:59 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Consumed 16.655s CPU time.
Nov 22 04:20:59 compute-0 systemd-machined[215728]: Machine qemu-28-instance-0000001c terminated.
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.838 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.848 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.851 253465 INFO nova.virt.libvirt.driver [-] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Instance destroyed successfully.
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.851 253465 DEBUG nova.objects.instance [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lazy-loading 'resources' on Instance uuid a8949f48-8745-4610-8b5a-55b807d8796a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.863 253465 DEBUG nova.virt.libvirt.vif [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:20:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1735314438',display_name='tempest-TestEncryptedCinderVolumes-server-1735314438',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1735314438',id=28,image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMUrIBsKqqcoDmVRtGnTWH8PLWNzOIsa3Tj7T5v4loxUeDYjLK5BES1B3rVlsNh95K2CrCjjEL/5+EhRw79dznGjCC78b/ZBOyNqE4QBUsnhkjQgIOdXQH847JR66Wvmqw==',key_name='tempest-keypair-145789610',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:20:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c9d01ebd7e4f4251b66172a246b8a08f',ramdisk_id='',reservation_id='r-cqcpxd7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='feac2ecd-89f4-4e45-b9fb-68cb0cf353c3',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-230639986',owner_user_name='tempest-TestEncryptedCinderVolumes-230639986-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:20:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='26147ad59e2d4763b8edc27d80567b09',uuid=a8949f48-8745-4610-8b5a-55b807d8796a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c3496b0e-592b-474e-a238-1017d28773e8", "address": "fa:16:3e:2a:b7:07", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3496b0e-59", "ovs_interfaceid": "c3496b0e-592b-474e-a238-1017d28773e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.863 253465 DEBUG nova.network.os_vif_util [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converting VIF {"id": "c3496b0e-592b-474e-a238-1017d28773e8", "address": "fa:16:3e:2a:b7:07", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc3496b0e-59", "ovs_interfaceid": "c3496b0e-592b-474e-a238-1017d28773e8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.864 253465 DEBUG nova.network.os_vif_util [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2a:b7:07,bridge_name='br-int',has_traffic_filtering=True,id=c3496b0e-592b-474e-a238-1017d28773e8,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3496b0e-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.864 253465 DEBUG os_vif [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:b7:07,bridge_name='br-int',has_traffic_filtering=True,id=c3496b0e-592b-474e-a238-1017d28773e8,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3496b0e-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.866 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.866 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3496b0e-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.867 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.869 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:20:59 compute-0 nova_compute[253461]: 2025-11-22 04:20:59.872 253465 INFO os_vif [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2a:b7:07,bridge_name='br-int',has_traffic_filtering=True,id=c3496b0e-592b-474e-a238-1017d28773e8,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc3496b0e-59')
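[editor's note] os_vif's unplug reduces to the single DelPortCommand transaction logged at 04:20:59.866. The ovs-vsctl equivalent of that command, as a sketch:

    # Sketch: CLI equivalent of DelPortCommand(port=tapc3496b0e-59,
    # bridge=br-int, if_exists=True) executed by os_vif above.
    import subprocess

    subprocess.check_call(
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tapc3496b0e-59'])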
Nov 22 04:20:59 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[305596]: [NOTICE]   (305600) : haproxy version is 2.8.14-c23fe91
Nov 22 04:20:59 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[305596]: [NOTICE]   (305600) : path to executable is /usr/sbin/haproxy
Nov 22 04:20:59 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[305596]: [WARNING]  (305600) : Exiting Master process...
Nov 22 04:20:59 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[305596]: [ALERT]    (305600) : Current worker (305602) exited with code 143 (Terminated)
Nov 22 04:20:59 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[305596]: [WARNING]  (305600) : All workers exited. Exiting... (0)
Nov 22 04:20:59 compute-0 systemd[1]: libpod-2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd.scope: Deactivated successfully.
Nov 22 04:20:59 compute-0 podman[305780]: 2025-11-22 04:20:59.897553836 +0000 UTC m=+0.069469328 container died 2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 04:20:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd-userdata-shm.mount: Deactivated successfully.
Nov 22 04:20:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-89370dcfb7c85ab98aa21d157b163a1268fc011ad665cfa43117c03146916392-merged.mount: Deactivated successfully.
Nov 22 04:20:59 compute-0 podman[305780]: 2025-11-22 04:20:59.941309004 +0000 UTC m=+0.113224466 container cleanup 2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 22 04:20:59 compute-0 systemd[1]: libpod-conmon-2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd.scope: Deactivated successfully.
Nov 22 04:21:00 compute-0 podman[305835]: 2025-11-22 04:21:00.035973908 +0000 UTC m=+0.064846681 container remove 2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:21:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:00.046 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3e3537-5420-480d-8384-23bcbe87d74d]: (4, ('Sat Nov 22 04:20:59 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 (2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd)\n2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd\nSat Nov 22 04:20:59 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 (2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd)\n2c7918cc77436ef114b4e7465db3c7959efadc2def41a7ed537c415f2b1f52bd\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
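The haproxy sidecar teardown above runs through privsep: the agent's root helper executes a wrapper script that first stops and then deletes the per-network container, which is exactly what the reply text echoes ("Stopping container ... Deleting container ..."). A rough equivalent of those two steps, assuming podman is on PATH; this is a sketch, not the agent's actual helper script:

import subprocess

NAME = 'neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07'

# Stop first (the worker exits with code 143, i.e. SIGTERM), then remove.
subprocess.run(['podman', 'stop', NAME], check=True)
subprocess.run(['podman', 'rm', NAME], check=True)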
Nov 22 04:21:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:00.048 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a78150-5768-4cd6-bf08-d6fe1e232b10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:00.050 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd550fd2-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.052 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:00 compute-0 kernel: tapbd550fd2-d0: left promiscuous mode
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.070 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.071 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:00.073 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[a6108361-4313-4adb-aa9c-87b51ec3c561]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:00.088 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[567dde65-e846-4e42-9237-5aa34a5a8d58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:00.089 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4b31d952-0f7c-4f22-9fb8-3cc178dbd4bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:00.119 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[99e1e9cc-3a08-4b4d-b12a-2ac00eea4417]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 548267, 'reachable_time': 35207, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305850, 'error': None, 'target': 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501

Nov 22 04:21:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:00.122 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:21:00 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:00.122 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[b4c7a432-7190-4325-858a-70d079956e4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:00 compute-0 systemd[1]: run-netns-ovnmeta\x2dbd550fd2\x2dd0e4\x2d4f32\x2d84d1\x2db7eca9fc7e07.mount: Deactivated successfully.
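With the container gone, the agent deletes the ovnmeta- namespace itself (the remove_netns call above) and systemd reaps the bind mount under /run/netns. Neutron performs this via privsep on top of pyroute2; a minimal sketch of the equivalent operation:

from pyroute2 import netns

# Removing a named namespace unmounts and unlinks /run/netns/<name>;
# it raises OSError while a process still holds the namespace open.
netns.remove('ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07')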
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.171 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.171 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.172 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.172 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.192 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.193 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.194 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.194 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.195 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.195 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.196 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
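The _check_instance_build_time, _heal_instance_info_cache, and related entries above are oslo.service periodic tasks: decorated methods on the compute manager, driven by run_periodic_tasks. A toy manager showing the pattern; the body is illustrative, not Nova's real implementation:

from oslo_config import cfg
from oslo_service import periodic_task


class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _heal_instance_info_cache(self, context):
        # Nova rebuilds its per-instance network info cache here;
        # this stub only marks that the task ran.
        print('healing instance info cache')


mgr = Manager(cfg.CONF)
mgr.run_periodic_tasks(context=None)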
Nov 22 04:21:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:21:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/673587294' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:21:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:21:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/673587294' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.304 253465 INFO nova.virt.libvirt.driver [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Deleting instance files /var/lib/nova/instances/a8949f48-8745-4610-8b5a-55b807d8796a_del
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.305 253465 INFO nova.virt.libvirt.driver [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Deletion of /var/lib/nova/instances/a8949f48-8745-4610-8b5a-55b807d8796a_del complete
Nov 22 04:21:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/673587294' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:21:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/673587294' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.372 253465 INFO nova.compute.manager [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Took 0.76 seconds to destroy the instance on the hypervisor.
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.373 253465 DEBUG oslo.service.loopingcall [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.373 253465 DEBUG nova.compute.manager [-] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.374 253465 DEBUG nova.network.neutron [-] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.403 253465 DEBUG nova.compute.manager [req-1df5e833-3acc-4b73-b5d4-876a2180440e req-cdd6a1dd-12f7-44f8-9f3c-55a3c41b47d0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Received event network-vif-unplugged-c3496b0e-592b-474e-a238-1017d28773e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.403 253465 DEBUG oslo_concurrency.lockutils [req-1df5e833-3acc-4b73-b5d4-876a2180440e req-cdd6a1dd-12f7-44f8-9f3c-55a3c41b47d0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.404 253465 DEBUG oslo_concurrency.lockutils [req-1df5e833-3acc-4b73-b5d4-876a2180440e req-cdd6a1dd-12f7-44f8-9f3c-55a3c41b47d0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.404 253465 DEBUG oslo_concurrency.lockutils [req-1df5e833-3acc-4b73-b5d4-876a2180440e req-cdd6a1dd-12f7-44f8-9f3c-55a3c41b47d0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.404 253465 DEBUG nova.compute.manager [req-1df5e833-3acc-4b73-b5d4-876a2180440e req-cdd6a1dd-12f7-44f8-9f3c-55a3c41b47d0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] No waiting events found dispatching network-vif-unplugged-c3496b0e-592b-474e-a238-1017d28773e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:21:00 compute-0 nova_compute[253461]: 2025-11-22 04:21:00.405 253465 DEBUG nova.compute.manager [req-1df5e833-3acc-4b73-b5d4-876a2180440e req-cdd6a1dd-12f7-44f8-9f3c-55a3c41b47d0 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Received event network-vif-unplugged-c3496b0e-592b-474e-a238-1017d28773e8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:21:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 144 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 14 KiB/s wr, 7 op/s
Nov 22 04:21:01 compute-0 ceph-mon[75011]: pgmap v2205: 305 pgs: 305 active+clean; 144 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 14 KiB/s wr, 7 op/s
Nov 22 04:21:01 compute-0 nova_compute[253461]: 2025-11-22 04:21:01.658 253465 DEBUG nova.network.neutron [-] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:21:01 compute-0 nova_compute[253461]: 2025-11-22 04:21:01.675 253465 INFO nova.compute.manager [-] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Took 1.30 seconds to deallocate network for instance.
Nov 22 04:21:01 compute-0 nova_compute[253461]: 2025-11-22 04:21:01.744 253465 DEBUG oslo_concurrency.lockutils [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:21:01 compute-0 nova_compute[253461]: 2025-11-22 04:21:01.744 253465 DEBUG oslo_concurrency.lockutils [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:21:01 compute-0 nova_compute[253461]: 2025-11-22 04:21:01.769 253465 DEBUG nova.compute.manager [req-7075ffd9-24b3-4197-ab2a-71e88c497ccc req-767cce74-3003-4971-86f9-de0912c46fa7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Received event network-vif-deleted-c3496b0e-592b-474e-a238-1017d28773e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:21:01 compute-0 nova_compute[253461]: 2025-11-22 04:21:01.795 253465 DEBUG oslo_concurrency.processutils [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:21:02 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:21:02 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3429226555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:21:02 compute-0 nova_compute[253461]: 2025-11-22 04:21:02.332 253465 DEBUG oslo_concurrency.processutils [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
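Nova gathers Ceph pool capacity here by shelling out, as the two lines above record: oslo.concurrency's processutils wraps the subprocess, logs the command, and times it (0.537s in this run). The equivalent call:

from oslo_concurrency import processutils

# Returns (stdout, stderr); raises ProcessExecutionError on non-zero exit.
out, _err = processutils.execute(
    'ceph', 'df', '--format=json', '--id', 'openstack',
    '--conf', '/etc/ceph/ceph.conf')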
Nov 22 04:21:02 compute-0 nova_compute[253461]: 2025-11-22 04:21:02.339 253465 DEBUG nova.compute.provider_tree [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:21:02 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3429226555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:21:02 compute-0 nova_compute[253461]: 2025-11-22 04:21:02.354 253465 DEBUG nova.scheduler.client.report [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:21:02 compute-0 nova_compute[253461]: 2025-11-22 04:21:02.378 253465 DEBUG oslo_concurrency.lockutils [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:21:02 compute-0 nova_compute[253461]: 2025-11-22 04:21:02.404 253465 INFO nova.scheduler.client.report [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Deleted allocations for instance a8949f48-8745-4610-8b5a-55b807d8796a
Nov 22 04:21:02 compute-0 nova_compute[253461]: 2025-11-22 04:21:02.503 253465 DEBUG nova.compute.manager [req-294f82e4-2e8e-4f69-99c1-4060f5d07921 req-de823013-8165-4678-99b8-b91f097cc3e2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Received event network-vif-plugged-c3496b0e-592b-474e-a238-1017d28773e8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:21:02 compute-0 nova_compute[253461]: 2025-11-22 04:21:02.503 253465 DEBUG oslo_concurrency.lockutils [req-294f82e4-2e8e-4f69-99c1-4060f5d07921 req-de823013-8165-4678-99b8-b91f097cc3e2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:21:02 compute-0 nova_compute[253461]: 2025-11-22 04:21:02.504 253465 DEBUG oslo_concurrency.lockutils [req-294f82e4-2e8e-4f69-99c1-4060f5d07921 req-de823013-8165-4678-99b8-b91f097cc3e2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:21:02 compute-0 nova_compute[253461]: 2025-11-22 04:21:02.504 253465 DEBUG oslo_concurrency.lockutils [req-294f82e4-2e8e-4f69-99c1-4060f5d07921 req-de823013-8165-4678-99b8-b91f097cc3e2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:21:02 compute-0 nova_compute[253461]: 2025-11-22 04:21:02.505 253465 DEBUG nova.compute.manager [req-294f82e4-2e8e-4f69-99c1-4060f5d07921 req-de823013-8165-4678-99b8-b91f097cc3e2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] No waiting events found dispatching network-vif-plugged-c3496b0e-592b-474e-a238-1017d28773e8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:21:02 compute-0 nova_compute[253461]: 2025-11-22 04:21:02.505 253465 WARNING nova.compute.manager [req-294f82e4-2e8e-4f69-99c1-4060f5d07921 req-de823013-8165-4678-99b8-b91f097cc3e2 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Received unexpected event network-vif-plugged-c3496b0e-592b-474e-a238-1017d28773e8 for instance with vm_state deleted and task_state None.
Nov 22 04:21:02 compute-0 nova_compute[253461]: 2025-11-22 04:21:02.531 253465 DEBUG oslo_concurrency.lockutils [None req-37d270c0-cbee-4d4a-b217-502ea83618f0 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "a8949f48-8745-4610-8b5a-55b807d8796a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
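The teardown above is serialized by three oslo.concurrency locks: the per-instance "<uuid>-events" lock around pop_instance_event, "compute_resources" around the resource tracker update (held 0.633s), and the outer instance lock around do_terminate_instance (held 2.926s). The same primitive in its context-manager form, a sketch using the events lock name from the log:

from oslo_concurrency import lockutils

# lockutils.lock() is the context-manager counterpart of the decorator
# pattern logged above; the body runs while the named lock is held.
with lockutils.lock('a8949f48-8745-4610-8b5a-55b807d8796a-events'):
    pass  # pop or queue the network-vif event under the lock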
Nov 22 04:21:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 123 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 3.7 KiB/s wr, 15 op/s
Nov 22 04:21:03 compute-0 ceph-mon[75011]: pgmap v2206: 305 pgs: 305 active+clean; 123 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 3.7 KiB/s wr, 15 op/s
Nov 22 04:21:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:21:04 compute-0 nova_compute[253461]: 2025-11-22 04:21:04.236 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 88 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 2.8 KiB/s wr, 33 op/s
Nov 22 04:21:04 compute-0 nova_compute[253461]: 2025-11-22 04:21:04.907 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:05 compute-0 nova_compute[253461]: 2025-11-22 04:21:05.450 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:21:05 compute-0 ceph-mon[75011]: pgmap v2207: 305 pgs: 305 active+clean; 88 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 2.8 KiB/s wr, 33 op/s
Nov 22 04:21:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:21:06 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3803159043' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:21:06 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:21:06 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3803159043' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:21:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:21:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:21:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:21:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:21:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:21:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:21:06 compute-0 nova_compute[253461]: 2025-11-22 04:21:06.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:21:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.2 KiB/s wr, 38 op/s
Nov 22 04:21:07 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3803159043' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:21:07 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3803159043' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:21:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:21:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/150387292' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:21:07 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:21:07 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/150387292' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:21:08 compute-0 ceph-mon[75011]: pgmap v2208: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.2 KiB/s wr, 38 op/s
Nov 22 04:21:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/150387292' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:21:08 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/150387292' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
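Each of these audit entries is a mon_command dispatched by client.openstack, likely a periodic capacity poll against the "volumes" pool. Issued directly through the rados binding it would look roughly like this; a sketch under the assumption that the client keyring and ceph.conf are readable, not the caller's actual code:

import json

import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()
try:
    # The same two commands the monitor logs: "df" and "osd pool get-quota".
    for cmd in ({'prefix': 'df', 'format': 'json'},
                {'prefix': 'osd pool get-quota', 'pool': 'volumes',
                 'format': 'json'}):
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
finally:
    cluster.shutdown()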
Nov 22 04:21:08 compute-0 podman[305874]: 2025-11-22 04:21:08.425852897 +0000 UTC m=+0.097508267 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 22 04:21:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.125155) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785269125237, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2060, "num_deletes": 251, "total_data_size": 3324490, "memory_usage": 3387744, "flush_reason": "Manual Compaction"}
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Nov 22 04:21:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785269152266, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 3258317, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41889, "largest_seqno": 43948, "table_properties": {"data_size": 3249057, "index_size": 5818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19102, "raw_average_key_size": 20, "raw_value_size": 3230413, "raw_average_value_size": 3411, "num_data_blocks": 257, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763785053, "oldest_key_time": 1763785053, "file_creation_time": 1763785269, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 27157 microseconds, and 7276 cpu microseconds.
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.152320) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 3258317 bytes OK
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.152342) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.154181) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.154193) EVENT_LOG_v1 {"time_micros": 1763785269154189, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.154209) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3315855, prev total WAL file size 3315855, number of live WAL files 2.
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.155161) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(3181KB)], [89(10226KB)]
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785269155195, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 13729951, "oldest_snapshot_seqno": -1}
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 7553 keys, 11989337 bytes, temperature: kUnknown
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785269235232, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 11989337, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11932907, "index_size": 36414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18949, "raw_key_size": 191204, "raw_average_key_size": 25, "raw_value_size": 11791611, "raw_average_value_size": 1561, "num_data_blocks": 1439, "num_entries": 7553, "num_filter_entries": 7553, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763785269, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.235950) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11989337 bytes
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.237974) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.8 rd, 149.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 10.0 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 8074, records dropped: 521 output_compression: NoCompression
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.238022) EVENT_LOG_v1 {"time_micros": 1763785269237999, "job": 52, "event": "compaction_finished", "compaction_time_micros": 80389, "compaction_time_cpu_micros": 26577, "output_level": 6, "num_output_files": 1, "total_output_size": 11989337, "num_input_records": 8074, "num_output_records": 7553, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:21:09 compute-0 nova_compute[253461]: 2025-11-22 04:21:09.238 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785269240602, "job": 52, "event": "table_file_deletion", "file_number": 91}
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785269246997, "job": 52, "event": "table_file_deletion", "file_number": 89}
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.155057) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.247189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.247196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.247198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.247200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:21:09 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:21:09.247201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:21:09 compute-0 nova_compute[253461]: 2025-11-22 04:21:09.910 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:10 compute-0 ceph-mon[75011]: pgmap v2209: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Nov 22 04:21:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.4 KiB/s wr, 47 op/s
Nov 22 04:21:12 compute-0 ceph-mon[75011]: pgmap v2210: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 3.4 KiB/s wr, 47 op/s
Nov 22 04:21:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.7 KiB/s wr, 51 op/s
Nov 22 04:21:13 compute-0 ceph-mon[75011]: pgmap v2211: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.7 KiB/s wr, 51 op/s
Nov 22 04:21:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:21:14 compute-0 sshd-session[305895]: Connection closed by authenticating user ftp 27.79.46.85 port 55364 [preauth]
Nov 22 04:21:14 compute-0 nova_compute[253461]: 2025-11-22 04:21:14.851 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763785259.849571, a8949f48-8745-4610-8b5a-55b807d8796a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:21:14 compute-0 nova_compute[253461]: 2025-11-22 04:21:14.851 253465 INFO nova.compute.manager [-] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] VM Stopped (Lifecycle Event)
Nov 22 04:21:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.1 KiB/s wr, 43 op/s
Nov 22 04:21:14 compute-0 nova_compute[253461]: 2025-11-22 04:21:14.879 253465 DEBUG nova.compute.manager [None req-dc52b6d7-38b7-4a4a-a2ef-8d446ceb02ff - - - - - -] [instance: a8949f48-8745-4610-8b5a-55b807d8796a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:21:14 compute-0 nova_compute[253461]: 2025-11-22 04:21:14.911 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:15 compute-0 ceph-mon[75011]: pgmap v2212: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.1 KiB/s wr, 43 op/s
Nov 22 04:21:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 597 B/s wr, 25 op/s
Nov 22 04:21:17 compute-0 ceph-mon[75011]: pgmap v2213: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 597 B/s wr, 25 op/s
Nov 22 04:21:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 170 B/s wr, 20 op/s
Nov 22 04:21:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:21:19 compute-0 nova_compute[253461]: 2025-11-22 04:21:19.241 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:19 compute-0 nova_compute[253461]: 2025-11-22 04:21:19.914 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:20 compute-0 ceph-mon[75011]: pgmap v2214: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 170 B/s wr, 20 op/s
Nov 22 04:21:20 compute-0 sudo[305897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:21:20 compute-0 sudo[305897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:20 compute-0 sudo[305897]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:20 compute-0 sudo[305922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:21:20 compute-0 sudo[305922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:20 compute-0 sudo[305922]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:20 compute-0 sudo[305947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:21:20 compute-0 sudo[305947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:20 compute-0 sudo[305947]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:20 compute-0 sudo[305972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:21:20 compute-0 sudo[305972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 255 B/s wr, 25 op/s
Nov 22 04:21:21 compute-0 sudo[305972]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:21:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:21:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:21:21 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:21:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:21:21 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:21:21 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 2a5adb0b-378b-4957-b77b-a1ed705c02e4 does not exist
Nov 22 04:21:21 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev ef374a07-b4e9-4252-9580-a4ffd058663e does not exist
Nov 22 04:21:21 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 5e1a7b69-e2d3-4c29-a97a-732259581087 does not exist
Nov 22 04:21:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:21:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:21:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:21:21 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:21:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:21:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:21:21 compute-0 sudo[306028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:21:21 compute-0 sudo[306028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:21 compute-0 sudo[306028]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:21 compute-0 sudo[306053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:21:21 compute-0 sudo[306053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:21 compute-0 sudo[306053]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:21 compute-0 sudo[306078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:21:21 compute-0 sudo[306078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:21 compute-0 sudo[306078]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:21 compute-0 sudo[306103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:21:21 compute-0 sudo[306103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
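[annotation] The sudo[306103] command above is the OSD-apply step: cephadm feeds a config payload on stdin (--config-json -, assembled from the generate-minimal-conf and auth get calls logged just before) and runs ceph-volume lvm batch over the three pre-created LVs inside the ceph container. A sketch of the same invocation; the {"config": ..., "keyring": ...} payload shape is an assumption, and the secrets never appear in the journal, so they stay elided here:

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    FSID = "7adcc38b-6484-5de6-b879-33a0309153df"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

    # Payload shape is an assumption; values elided because the log never
    # shows them.
    config_json = json.dumps({"config": "<minimal ceph.conf>",
                              "keyring": "<bootstrap-osd keyring>"})

    subprocess.run(
        ["sudo", "python3", CEPHADM,
         "--env", "CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group",
         "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--config-json", "-",
         "--", "lvm", "batch", "--no-auto",
         "/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
         "/dev/ceph_vg2/ceph_lv2", "--yes", "--no-systemd"],
        input=config_json, text=True, check=True,
    )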
Nov 22 04:21:22 compute-0 podman[306168]: 2025-11-22 04:21:22.008713323 +0000 UTC m=+0.050057185 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:21:22 compute-0 ceph-mon[75011]: pgmap v2215: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 255 B/s wr, 25 op/s
Nov 22 04:21:22 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:21:22 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:21:22 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:21:22 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:21:22 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:21:22 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:21:22 compute-0 podman[306168]: 2025-11-22 04:21:22.105054444 +0000 UTC m=+0.146398246 container create ffeaef9ad43c269d3af439723cd2b71ece41af8f34690c993e755d8bf3512b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:21:22 compute-0 systemd[1]: Started libpod-conmon-ffeaef9ad43c269d3af439723cd2b71ece41af8f34690c993e755d8bf3512b4a.scope.
Nov 22 04:21:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:21:22 compute-0 podman[306168]: 2025-11-22 04:21:22.227028942 +0000 UTC m=+0.268372724 container init ffeaef9ad43c269d3af439723cd2b71ece41af8f34690c993e755d8bf3512b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Nov 22 04:21:22 compute-0 podman[306168]: 2025-11-22 04:21:22.23547131 +0000 UTC m=+0.276815082 container start ffeaef9ad43c269d3af439723cd2b71ece41af8f34690c993e755d8bf3512b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:21:22 compute-0 podman[306168]: 2025-11-22 04:21:22.239284355 +0000 UTC m=+0.280628147 container attach ffeaef9ad43c269d3af439723cd2b71ece41af8f34690c993e755d8bf3512b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 04:21:22 compute-0 compassionate_chatterjee[306185]: 167 167
Nov 22 04:21:22 compute-0 systemd[1]: libpod-ffeaef9ad43c269d3af439723cd2b71ece41af8f34690c993e755d8bf3512b4a.scope: Deactivated successfully.
Nov 22 04:21:22 compute-0 conmon[306185]: conmon ffeaef9ad43c269d3af4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ffeaef9ad43c269d3af439723cd2b71ece41af8f34690c993e755d8bf3512b4a.scope/container/memory.events
Nov 22 04:21:22 compute-0 podman[306168]: 2025-11-22 04:21:22.243158162 +0000 UTC m=+0.284501924 container died ffeaef9ad43c269d3af439723cd2b71ece41af8f34690c993e755d8bf3512b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Nov 22 04:21:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-44c69f202b6b1dc739bf9ba08c67b89a85be8b2a4b2384b54c0a44d18e04f906-merged.mount: Deactivated successfully.
Nov 22 04:21:22 compute-0 podman[306168]: 2025-11-22 04:21:22.566232242 +0000 UTC m=+0.607576004 container remove ffeaef9ad43c269d3af439723cd2b71ece41af8f34690c993e755d8bf3512b4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:21:22 compute-0 systemd[1]: libpod-conmon-ffeaef9ad43c269d3af439723cd2b71ece41af8f34690c993e755d8bf3512b4a.scope: Deactivated successfully.
Nov 22 04:21:22 compute-0 podman[306209]: 2025-11-22 04:21:22.735900508 +0000 UTC m=+0.047949694 container create 46f7872b7ad647ee8aa130395056eae9b591612f30449cc3d106b0f05f7cf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hofstadter, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:21:22 compute-0 systemd[1]: Started libpod-conmon-46f7872b7ad647ee8aa130395056eae9b591612f30449cc3d106b0f05f7cf922.scope.
Nov 22 04:21:22 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb132cc0b00c7c21e52f61752f25ef92770924d8d1699490370ad465765c01d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb132cc0b00c7c21e52f61752f25ef92770924d8d1699490370ad465765c01d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb132cc0b00c7c21e52f61752f25ef92770924d8d1699490370ad465765c01d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb132cc0b00c7c21e52f61752f25ef92770924d8d1699490370ad465765c01d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb132cc0b00c7c21e52f61752f25ef92770924d8d1699490370ad465765c01d4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:22 compute-0 podman[306209]: 2025-11-22 04:21:22.71525155 +0000 UTC m=+0.027300766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:21:22 compute-0 podman[306209]: 2025-11-22 04:21:22.826562167 +0000 UTC m=+0.138611423 container init 46f7872b7ad647ee8aa130395056eae9b591612f30449cc3d106b0f05f7cf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hofstadter, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:21:22 compute-0 podman[306209]: 2025-11-22 04:21:22.8341762 +0000 UTC m=+0.146225366 container start 46f7872b7ad647ee8aa130395056eae9b591612f30449cc3d106b0f05f7cf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 04:21:22 compute-0 podman[306209]: 2025-11-22 04:21:22.838099651 +0000 UTC m=+0.150148917 container attach 46f7872b7ad647ee8aa130395056eae9b591612f30449cc3d106b0f05f7cf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:21:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 12 op/s
Nov 22 04:21:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:23.039 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:21:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:23.040 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:21:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:23.041 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:21:23 compute-0 jolly_hofstadter[306226]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:21:23 compute-0 jolly_hofstadter[306226]: --> relative data size: 1.0
Nov 22 04:21:23 compute-0 jolly_hofstadter[306226]: --> All data devices are unavailable
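[annotation] The batch run therefore creates nothing: ceph-volume counts the three LVs as passed-in data devices but rejects all of them as unavailable, which fits LVs that already carry ceph.* tags from the original prepare (the lvm list output further down shows osd 0-2 bound to exactly these LVs), so this re-apply of default_drive_group is a no-op. One way to see the same evidence locally, a sketch assuming stock lvm2 on the host:

    import json
    import subprocess

    # lvs --reportformat json is standard lvm2; an LV tagged with ceph.osd_id
    # is one ceph-volume will not hand out again as a fresh data device.
    report = json.loads(subprocess.run(
        ["sudo", "lvs", "-o", "lv_name,vg_name,lv_tags",
         "--reportformat", "json"],
        check=True, capture_output=True, text=True,
    ).stdout)
    for lv in report["report"][0]["lv"]:
        if "ceph.osd_id=" in lv["lv_tags"]:
            print(f'{lv["vg_name"]}/{lv["lv_name"]} already prepared')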
Nov 22 04:21:23 compute-0 systemd[1]: libpod-46f7872b7ad647ee8aa130395056eae9b591612f30449cc3d106b0f05f7cf922.scope: Deactivated successfully.
Nov 22 04:21:23 compute-0 systemd[1]: libpod-46f7872b7ad647ee8aa130395056eae9b591612f30449cc3d106b0f05f7cf922.scope: Consumed 1.041s CPU time.
Nov 22 04:21:23 compute-0 podman[306209]: 2025-11-22 04:21:23.925367011 +0000 UTC m=+1.237416207 container died 46f7872b7ad647ee8aa130395056eae9b591612f30449cc3d106b0f05f7cf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:21:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb132cc0b00c7c21e52f61752f25ef92770924d8d1699490370ad465765c01d4-merged.mount: Deactivated successfully.
Nov 22 04:21:24 compute-0 podman[306209]: 2025-11-22 04:21:24.019052378 +0000 UTC m=+1.331101584 container remove 46f7872b7ad647ee8aa130395056eae9b591612f30449cc3d106b0f05f7cf922 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_hofstadter, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:21:24 compute-0 systemd[1]: libpod-conmon-46f7872b7ad647ee8aa130395056eae9b591612f30449cc3d106b0f05f7cf922.scope: Deactivated successfully.
Nov 22 04:21:24 compute-0 sudo[306103]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:24 compute-0 podman[306256]: 2025-11-22 04:21:24.066769872 +0000 UTC m=+0.101689998 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, tcib_managed=true)
Nov 22 04:21:24 compute-0 podman[306259]: 2025-11-22 04:21:24.082062203 +0000 UTC m=+0.117142079 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118)
Nov 22 04:21:24 compute-0 ceph-mon[75011]: pgmap v2216: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 85 B/s wr, 12 op/s
Nov 22 04:21:24 compute-0 sudo[306309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:21:24 compute-0 sudo[306309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:24 compute-0 sudo[306309]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
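[annotation] The _set_new_cache_sizes line above is the mon's periodic cache autotuning; going by the field names, the three allocations (incremental osdmaps, full osdmaps, kv cache) are carved out of cache_size, and the figures in the line check out arithmetically:

    # Figures copied from the _set_new_cache_sizes line above.
    cache_size = 1020054731
    inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104

    total = inc_alloc + full_alloc + kv_alloc
    # 1010827264 bytes, i.e. ~99.1% of cache_size: the split consumes just
    # under the target.
    print(total, f"{total / cache_size:.1%} of cache_size")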
Nov 22 04:21:24 compute-0 sudo[306338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:21:24 compute-0 sudo[306338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:24 compute-0 sudo[306338]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:24 compute-0 nova_compute[253461]: 2025-11-22 04:21:24.242 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:24 compute-0 sudo[306363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:21:24 compute-0 sudo[306363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:24 compute-0 sudo[306363]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:24 compute-0 sudo[306388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:21:24 compute-0 sudo[306388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:24 compute-0 podman[306453]: 2025-11-22 04:21:24.829753111 +0000 UTC m=+0.066483052 container create b4d0242578d5ea96fa0cc8c3c06f6ad17c1ea7e532ef1896eb60a2161b6bebd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jang, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:21:24 compute-0 systemd[1]: Started libpod-conmon-b4d0242578d5ea96fa0cc8c3c06f6ad17c1ea7e532ef1896eb60a2161b6bebd8.scope.
Nov 22 04:21:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 22 04:21:24 compute-0 podman[306453]: 2025-11-22 04:21:24.802276081 +0000 UTC m=+0.039006072 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:21:24 compute-0 nova_compute[253461]: 2025-11-22 04:21:24.915 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:24 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:21:24 compute-0 podman[306453]: 2025-11-22 04:21:24.943150367 +0000 UTC m=+0.179880398 container init b4d0242578d5ea96fa0cc8c3c06f6ad17c1ea7e532ef1896eb60a2161b6bebd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:21:24 compute-0 podman[306453]: 2025-11-22 04:21:24.953824932 +0000 UTC m=+0.190554863 container start b4d0242578d5ea96fa0cc8c3c06f6ad17c1ea7e532ef1896eb60a2161b6bebd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Nov 22 04:21:24 compute-0 podman[306453]: 2025-11-22 04:21:24.957315523 +0000 UTC m=+0.194045504 container attach b4d0242578d5ea96fa0cc8c3c06f6ad17c1ea7e532ef1896eb60a2161b6bebd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jang, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:21:24 compute-0 thirsty_jang[306470]: 167 167
Nov 22 04:21:24 compute-0 systemd[1]: libpod-b4d0242578d5ea96fa0cc8c3c06f6ad17c1ea7e532ef1896eb60a2161b6bebd8.scope: Deactivated successfully.
Nov 22 04:21:24 compute-0 conmon[306470]: conmon b4d0242578d5ea96fa0c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4d0242578d5ea96fa0cc8c3c06f6ad17c1ea7e532ef1896eb60a2161b6bebd8.scope/container/memory.events
Nov 22 04:21:25 compute-0 podman[306475]: 2025-11-22 04:21:25.033926246 +0000 UTC m=+0.046536859 container died b4d0242578d5ea96fa0cc8c3c06f6ad17c1ea7e532ef1896eb60a2161b6bebd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jang, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Nov 22 04:21:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ffc2474a3b7f3fa2ddd97f3ecab5583925d7c80ccecce625635016838af580d-merged.mount: Deactivated successfully.
Nov 22 04:21:25 compute-0 podman[306475]: 2025-11-22 04:21:25.073686292 +0000 UTC m=+0.086296905 container remove b4d0242578d5ea96fa0cc8c3c06f6ad17c1ea7e532ef1896eb60a2161b6bebd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_jang, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:21:25 compute-0 systemd[1]: libpod-conmon-b4d0242578d5ea96fa0cc8c3c06f6ad17c1ea7e532ef1896eb60a2161b6bebd8.scope: Deactivated successfully.
Nov 22 04:21:25 compute-0 podman[306497]: 2025-11-22 04:21:25.28365159 +0000 UTC m=+0.054733468 container create faf4c2f22ecfa916bb597ed297db46b4e3c37c6665661220d1b69b5436c3417c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:21:25 compute-0 systemd[1]: Started libpod-conmon-faf4c2f22ecfa916bb597ed297db46b4e3c37c6665661220d1b69b5436c3417c.scope.
Nov 22 04:21:25 compute-0 podman[306497]: 2025-11-22 04:21:25.256019978 +0000 UTC m=+0.027101947 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:21:25 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71dcf52a18a0ee1002df48105ed2808ba87d8dea59bc420bab4e7b37c14a24f7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71dcf52a18a0ee1002df48105ed2808ba87d8dea59bc420bab4e7b37c14a24f7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71dcf52a18a0ee1002df48105ed2808ba87d8dea59bc420bab4e7b37c14a24f7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71dcf52a18a0ee1002df48105ed2808ba87d8dea59bc420bab4e7b37c14a24f7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:25 compute-0 podman[306497]: 2025-11-22 04:21:25.39568226 +0000 UTC m=+0.166764219 container init faf4c2f22ecfa916bb597ed297db46b4e3c37c6665661220d1b69b5436c3417c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:21:25 compute-0 podman[306497]: 2025-11-22 04:21:25.403741333 +0000 UTC m=+0.174823241 container start faf4c2f22ecfa916bb597ed297db46b4e3c37c6665661220d1b69b5436c3417c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_dirac, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:21:25 compute-0 podman[306497]: 2025-11-22 04:21:25.409784883 +0000 UTC m=+0.180866821 container attach faf4c2f22ecfa916bb597ed297db46b4e3c37c6665661220d1b69b5436c3417c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_dirac, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:21:26 compute-0 ceph-mon[75011]: pgmap v2217: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Nov 22 04:21:26 compute-0 serene_dirac[306514]: {
Nov 22 04:21:26 compute-0 serene_dirac[306514]:     "0": [
Nov 22 04:21:26 compute-0 serene_dirac[306514]:         {
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "devices": [
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "/dev/loop3"
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             ],
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_name": "ceph_lv0",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_size": "21470642176",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "name": "ceph_lv0",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "tags": {
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.cluster_name": "ceph",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.crush_device_class": "",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.encrypted": "0",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.osd_id": "0",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.type": "block",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.vdo": "0"
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             },
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "type": "block",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "vg_name": "ceph_vg0"
Nov 22 04:21:26 compute-0 serene_dirac[306514]:         }
Nov 22 04:21:26 compute-0 serene_dirac[306514]:     ],
Nov 22 04:21:26 compute-0 serene_dirac[306514]:     "1": [
Nov 22 04:21:26 compute-0 serene_dirac[306514]:         {
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "devices": [
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "/dev/loop4"
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             ],
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_name": "ceph_lv1",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_size": "21470642176",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "name": "ceph_lv1",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "tags": {
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.cluster_name": "ceph",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.crush_device_class": "",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.encrypted": "0",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.osd_id": "1",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.type": "block",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.vdo": "0"
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             },
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "type": "block",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "vg_name": "ceph_vg1"
Nov 22 04:21:26 compute-0 serene_dirac[306514]:         }
Nov 22 04:21:26 compute-0 serene_dirac[306514]:     ],
Nov 22 04:21:26 compute-0 serene_dirac[306514]:     "2": [
Nov 22 04:21:26 compute-0 serene_dirac[306514]:         {
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "devices": [
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "/dev/loop5"
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             ],
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_name": "ceph_lv2",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_size": "21470642176",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "name": "ceph_lv2",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "tags": {
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.cluster_name": "ceph",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.crush_device_class": "",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.encrypted": "0",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.osd_id": "2",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.type": "block",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:                 "ceph.vdo": "0"
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             },
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "type": "block",
Nov 22 04:21:26 compute-0 serene_dirac[306514]:             "vg_name": "ceph_vg2"
Nov 22 04:21:26 compute-0 serene_dirac[306514]:         }
Nov 22 04:21:26 compute-0 serene_dirac[306514]:     ]
Nov 22 04:21:26 compute-0 serene_dirac[306514]: }
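[annotation] The JSON document printed by serene_dirac above is keyed by OSD id, one list entry per LV, with "tags" duplicating lv_tags in parsed form. A minimal sketch of reducing it to an osd_id / osd_fsid / backing-device view, assuming the document has been saved to a local file (the filename is illustrative):

    import json

    # Hypothetical capture of the ceph-volume lvm list output shown above.
    with open("lvm_list.json") as f:
        listing = json.load(f)

    # Top-level keys are OSD ids ("0", "1", "2"); each maps to a list of LVs.
    for osd_id, lvs in listing.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f'osd.{osd_id}: fsid={tags["ceph.osd_fsid"]} '
                  f'lv={lv["lv_path"]} device={lv["devices"][0]}')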
Nov 22 04:21:26 compute-0 systemd[1]: libpod-faf4c2f22ecfa916bb597ed297db46b4e3c37c6665661220d1b69b5436c3417c.scope: Deactivated successfully.
Nov 22 04:21:26 compute-0 podman[306497]: 2025-11-22 04:21:26.252639419 +0000 UTC m=+1.023721348 container died faf4c2f22ecfa916bb597ed297db46b4e3c37c6665661220d1b69b5436c3417c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_dirac, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:21:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-71dcf52a18a0ee1002df48105ed2808ba87d8dea59bc420bab4e7b37c14a24f7-merged.mount: Deactivated successfully.
Nov 22 04:21:26 compute-0 podman[306497]: 2025-11-22 04:21:26.336118515 +0000 UTC m=+1.107200393 container remove faf4c2f22ecfa916bb597ed297db46b4e3c37c6665661220d1b69b5436c3417c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:21:26 compute-0 systemd[1]: libpod-conmon-faf4c2f22ecfa916bb597ed297db46b4e3c37c6665661220d1b69b5436c3417c.scope: Deactivated successfully.
Nov 22 04:21:26 compute-0 sudo[306388]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:26 compute-0 sudo[306536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:21:26 compute-0 sudo[306536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:26 compute-0 sudo[306536]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:26 compute-0 sudo[306561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:21:26 compute-0 sudo[306561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:26 compute-0 sudo[306561]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:26 compute-0 sudo[306586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:21:26 compute-0 sudo[306586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:26 compute-0 sudo[306586]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:26 compute-0 sudo[306611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:21:26 compute-0 sudo[306611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
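[annotation] After the LVM inventory, cephadm repeats the scan with raw list, which looks for BlueStore labels on block devices rather than LVM metadata; the two listings together cover both OSD deployment styles. A sketch of replaying the logged command, assuming (as in the serene_dirac output above) that the JSON document is the only thing on stdout:

    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/"
               "cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d")
    FSID = "7adcc38b-6484-5de6-b879-33a0309153df"

    out = subprocess.run(
        ["sudo", "python3", CEPHADM, "--timeout", "895", "ceph-volume",
         "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    # Keying by OSD fsid is how recent ceph-volume releases shape this
    # document; that detail is an assumption here.
    print(json.loads(out))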
Nov 22 04:21:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 8 op/s
Nov 22 04:21:27 compute-0 podman[306679]: 2025-11-22 04:21:27.172996312 +0000 UTC m=+0.081108040 container create 312933fbdd433f16c3b8559a0155b490e7de98f8a3ed2fbad11aa2017841c475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Nov 22 04:21:27 compute-0 podman[306679]: 2025-11-22 04:21:27.127980742 +0000 UTC m=+0.036092540 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:21:27 compute-0 ceph-mon[75011]: pgmap v2218: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 8 op/s
Nov 22 04:21:27 compute-0 systemd[1]: Started libpod-conmon-312933fbdd433f16c3b8559a0155b490e7de98f8a3ed2fbad11aa2017841c475.scope.
Nov 22 04:21:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:21:27 compute-0 podman[306679]: 2025-11-22 04:21:27.296352908 +0000 UTC m=+0.204464676 container init 312933fbdd433f16c3b8559a0155b490e7de98f8a3ed2fbad11aa2017841c475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:21:27 compute-0 podman[306679]: 2025-11-22 04:21:27.30402641 +0000 UTC m=+0.212138138 container start 312933fbdd433f16c3b8559a0155b490e7de98f8a3ed2fbad11aa2017841c475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:21:27 compute-0 hopeful_shtern[306696]: 167 167
Nov 22 04:21:27 compute-0 systemd[1]: libpod-312933fbdd433f16c3b8559a0155b490e7de98f8a3ed2fbad11aa2017841c475.scope: Deactivated successfully.
Nov 22 04:21:27 compute-0 podman[306679]: 2025-11-22 04:21:27.318851309 +0000 UTC m=+0.226963097 container attach 312933fbdd433f16c3b8559a0155b490e7de98f8a3ed2fbad11aa2017841c475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shtern, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Nov 22 04:21:27 compute-0 podman[306679]: 2025-11-22 04:21:27.320306558 +0000 UTC m=+0.228418296 container died 312933fbdd433f16c3b8559a0155b490e7de98f8a3ed2fbad11aa2017841c475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shtern, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:21:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-934ac1a5fbd0da65dba9d16415428735e72ceb8cce7f760958a4730c66de05fe-merged.mount: Deactivated successfully.
Nov 22 04:21:27 compute-0 podman[306679]: 2025-11-22 04:21:27.490700385 +0000 UTC m=+0.398812093 container remove 312933fbdd433f16c3b8559a0155b490e7de98f8a3ed2fbad11aa2017841c475 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 04:21:27 compute-0 systemd[1]: libpod-conmon-312933fbdd433f16c3b8559a0155b490e7de98f8a3ed2fbad11aa2017841c475.scope: Deactivated successfully.
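[editor's note] The short-lived hopeful_shtern container above (created, started, attached, died, and removed within ~0.3 s) prints only "167 167" before exiting. That pattern matches cephadm's uid/gid probe, which stats /var/lib/ceph inside the Ceph image to learn which uid/gid daemon files should be owned by (167 is the ceph user and group in the official images). A minimal sketch of such a probe, assuming the image digest from the log; this is an illustration, not cephadm's code:

```python
import subprocess

# Probe the Ceph image for the uid/gid of /var/lib/ceph, the way the
# one-shot container above appears to ("167 167" in the log).
IMAGE = ("quay.io/ceph/ceph@sha256:"
         "1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0")

out = subprocess.run(
    ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
     "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.strip()
uid, gid = out.split()
print(uid, gid)  # expected: 167 167
```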
Nov 22 04:21:27 compute-0 podman[306722]: 2025-11-22 04:21:27.730775824 +0000 UTC m=+0.066120446 container create 13ed29bf792d18ecc05c94219cab066e25ab9e0390e82fd5d169aeb1c5ad56c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 04:21:27 compute-0 podman[306722]: 2025-11-22 04:21:27.696106807 +0000 UTC m=+0.031451469 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:21:27 compute-0 systemd[1]: Started libpod-conmon-13ed29bf792d18ecc05c94219cab066e25ab9e0390e82fd5d169aeb1c5ad56c8.scope.
Nov 22 04:21:27 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21c45940d90aaa791316a12ce5813e3d6c34fba876133917250dcc3b9783fb22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21c45940d90aaa791316a12ce5813e3d6c34fba876133917250dcc3b9783fb22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21c45940d90aaa791316a12ce5813e3d6c34fba876133917250dcc3b9783fb22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21c45940d90aaa791316a12ce5813e3d6c34fba876133917250dcc3b9783fb22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:27 compute-0 podman[306722]: 2025-11-22 04:21:27.878922298 +0000 UTC m=+0.214266940 container init 13ed29bf792d18ecc05c94219cab066e25ab9e0390e82fd5d169aeb1c5ad56c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Nov 22 04:21:27 compute-0 podman[306722]: 2025-11-22 04:21:27.890394784 +0000 UTC m=+0.225739446 container start 13ed29bf792d18ecc05c94219cab066e25ab9e0390e82fd5d169aeb1c5ad56c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Nov 22 04:21:27 compute-0 podman[306722]: 2025-11-22 04:21:27.907141283 +0000 UTC m=+0.242485935 container attach 13ed29bf792d18ecc05c94219cab066e25ab9e0390e82fd5d169aeb1c5ad56c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 22 04:21:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 8 op/s
Nov 22 04:21:28 compute-0 tender_carson[306738]: {
Nov 22 04:21:28 compute-0 tender_carson[306738]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "osd_id": 1,
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "type": "bluestore"
Nov 22 04:21:28 compute-0 tender_carson[306738]:     },
Nov 22 04:21:28 compute-0 tender_carson[306738]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "osd_id": 0,
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "type": "bluestore"
Nov 22 04:21:28 compute-0 tender_carson[306738]:     },
Nov 22 04:21:28 compute-0 tender_carson[306738]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "osd_id": 2,
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:21:28 compute-0 tender_carson[306738]:         "type": "bluestore"
Nov 22 04:21:28 compute-0 tender_carson[306738]:     }
Nov 22 04:21:28 compute-0 tender_carson[306738]: }
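[editor's note] The tender_carson output above is a JSON map keyed by osd_uuid, the shape `ceph-volume raw list` emits: three BlueStore OSDs (osd.0 through osd.2) on LVM devices, all in the same cluster fsid. A minimal consumer of that blob, abbreviated to two of the three logged entries so it stays short; hypothetical helper code, not part of the deployment:

```python
import json

# Two of the three OSD records logged above, verbatim.
raw = """
{
    "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
        "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
        "osd_id": 1,
        "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
        "type": "bluestore"
    },
    "8bea6992-7a26-4e04-a61e-1d348ad79289": {
        "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
        "osd_id": 0,
        "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
        "type": "bluestore"
    }
}
"""

# Print one line per OSD, ordered by osd_id.
for osd in sorted(json.loads(raw).values(), key=lambda o: o["osd_id"]):
    print(f"osd.{osd['osd_id']}: {osd['device']} "
          f"({osd['type']}, fsid {osd['ceph_fsid']})")
```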
Nov 22 04:21:28 compute-0 systemd[1]: libpod-13ed29bf792d18ecc05c94219cab066e25ab9e0390e82fd5d169aeb1c5ad56c8.scope: Deactivated successfully.
Nov 22 04:21:28 compute-0 systemd[1]: libpod-13ed29bf792d18ecc05c94219cab066e25ab9e0390e82fd5d169aeb1c5ad56c8.scope: Consumed 1.106s CPU time.
Nov 22 04:21:28 compute-0 podman[306722]: 2025-11-22 04:21:28.998689793 +0000 UTC m=+1.334034425 container died 13ed29bf792d18ecc05c94219cab066e25ab9e0390e82fd5d169aeb1c5ad56c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Nov 22 04:21:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-21c45940d90aaa791316a12ce5813e3d6c34fba876133917250dcc3b9783fb22-merged.mount: Deactivated successfully.
Nov 22 04:21:29 compute-0 podman[306722]: 2025-11-22 04:21:29.070840878 +0000 UTC m=+1.406185490 container remove 13ed29bf792d18ecc05c94219cab066e25ab9e0390e82fd5d169aeb1c5ad56c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_carson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:21:29 compute-0 systemd[1]: libpod-conmon-13ed29bf792d18ecc05c94219cab066e25ab9e0390e82fd5d169aeb1c5ad56c8.scope: Deactivated successfully.
Nov 22 04:21:29 compute-0 sudo[306611]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:21:29 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:21:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:21:29 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:21:29 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev caf6f546-60ac-49ad-bc86-d138ca37cef0 does not exist
Nov 22 04:21:29 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 76565406-def3-4458-a358-7b2171ae9147 does not exist
Nov 22 04:21:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:21:29 compute-0 sudo[306783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:21:29 compute-0 sudo[306783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:29 compute-0 sudo[306783]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:29 compute-0 nova_compute[253461]: 2025-11-22 04:21:29.244 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:29 compute-0 sudo[306808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:21:29 compute-0 sudo[306808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:21:29 compute-0 sudo[306808]: pam_unix(sudo:session): session closed for user root
Nov 22 04:21:29 compute-0 nova_compute[253461]: 2025-11-22 04:21:29.918 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:30 compute-0 ceph-mon[75011]: pgmap v2219: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 426 B/s wr, 8 op/s
Nov 22 04:21:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:21:30 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:21:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:21:31 compute-0 ovn_controller[152691]: 2025-11-22T04:21:31Z|00298|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Nov 22 04:21:32 compute-0 ceph-mon[75011]: pgmap v2220: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Nov 22 04:21:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 6 op/s
Nov 22 04:21:33 compute-0 ceph-mon[75011]: pgmap v2221: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 6 op/s
Nov 22 04:21:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:21:34 compute-0 nova_compute[253461]: 2025-11-22 04:21:34.246 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 22 KiB/s wr, 5 op/s
Nov 22 04:21:34 compute-0 nova_compute[253461]: 2025-11-22 04:21:34.920 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:36 compute-0 ceph-mon[75011]: pgmap v2222: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 22 KiB/s wr, 5 op/s
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:21:36
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.log', 'backups']
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:21:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 22 04:21:38 compute-0 ceph-mon[75011]: pgmap v2223: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Nov 22 04:21:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 21 KiB/s wr, 3 op/s
Nov 22 04:21:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:21:39 compute-0 nova_compute[253461]: 2025-11-22 04:21:39.248 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:39 compute-0 podman[306833]: 2025-11-22 04:21:39.424376707 +0000 UTC m=+0.097835019 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251118, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=multipathd, container_name=multipathd)
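[editor's note] The health_status=healthy event above is podman's periodic healthcheck for the multipathd container; its config_data mounts /var/lib/openstack/healthchecks/multipathd into the container and runs '/openstack/healthcheck' as the test. The same check can be driven and read back by hand; a sketch, hedged because the inspect key name has varied across podman releases:

```python
import json
import subprocess

def health_status(name: str = "multipathd") -> str:
    # `podman healthcheck run` executes the container's configured test once
    # (here: /openstack/healthcheck) and exits non-zero on failure.
    subprocess.run(["podman", "healthcheck", "run", name], check=False)
    raw = subprocess.run(["podman", "inspect", name],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(raw)[0]["State"]
    # Older podman reports "Healthcheck", newer "Health"; accept either.
    health = state.get("Health") or state.get("Healthcheck") or {}
    return health.get("Status", "unknown")

if __name__ == "__main__":
    print(health_status())  # expected: "healthy", matching the log event
```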
Nov 22 04:21:39 compute-0 nova_compute[253461]: 2025-11-22 04:21:39.923 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:40 compute-0 ceph-mon[75011]: pgmap v2224: 305 pgs: 305 active+clean; 88 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.4 KiB/s rd, 21 KiB/s wr, 3 op/s
Nov 22 04:21:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 116 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.2 MiB/s wr, 31 op/s
Nov 22 04:21:42 compute-0 ceph-mon[75011]: pgmap v2225: 305 pgs: 305 active+clean; 116 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.2 MiB/s wr, 31 op/s
Nov 22 04:21:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 144 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 4.5 MiB/s wr, 34 op/s
Nov 22 04:21:44 compute-0 ceph-mon[75011]: pgmap v2226: 305 pgs: 305 active+clean; 144 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 4.5 MiB/s wr, 34 op/s
Nov 22 04:21:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:21:44 compute-0 nova_compute[253461]: 2025-11-22 04:21:44.250 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 192 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 8.5 MiB/s wr, 36 op/s
Nov 22 04:21:44 compute-0 nova_compute[253461]: 2025-11-22 04:21:44.961 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:46 compute-0 ceph-mon[75011]: pgmap v2227: 305 pgs: 305 active+clean; 192 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 8.5 MiB/s wr, 36 op/s
Nov 22 04:21:46 compute-0 nova_compute[253461]: 2025-11-22 04:21:46.609 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:21:46 compute-0 nova_compute[253461]: 2025-11-22 04:21:46.609 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:21:46 compute-0 nova_compute[253461]: 2025-11-22 04:21:46.626 253465 DEBUG nova.compute.manager [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002016588526404897 of space, bias 1.0, pg target 0.6049765579214691 quantized to 32 (current 32)
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:21:46 compute-0 nova_compute[253461]: 2025-11-22 04:21:46.733 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:21:46 compute-0 nova_compute[253461]: 2025-11-22 04:21:46.734 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
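[editor's note] The pg_autoscaler lines above carry a reproducible computation: each pool's "pg target" is its usage ratio times its bias times a constant 300, consistent with 3 OSDs at the default mon_target_pg_per_osd of 100 (an inference from the numbers, not stated in the log). The result is then quantized, leaving pg_num at its current value when the ideal is not far enough away. A sketch of just the multiplication:

```python
# Reproduce the "pg target" arithmetic from the pg_autoscaler log lines.
# Assumption: the 300 multiplier is 3 OSDs x mon_target_pg_per_osd=100;
# the real module additionally rounds to a power of two and only changes
# pg_num when the ideal differs substantially from the current value.
def pg_target(usage_ratio: float, bias: float,
              osds: int = 3, target_pg_per_osd: int = 100) -> float:
    return usage_ratio * bias * osds * target_pg_per_osd

# "Pool 'volumes' ... using 0.002016... bias 1.0, pg target 0.60497..."
print(pg_target(0.002016588526404897, 1.0))   # 0.6049765579214691
# "Pool 'cephfs.cephfs.meta' ... bias 4.0, pg target 0.00061047..."
print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635
```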
Nov 22 04:21:46 compute-0 nova_compute[253461]: 2025-11-22 04:21:46.746 253465 DEBUG nova.virt.hardware [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:21:46 compute-0 nova_compute[253461]: 2025-11-22 04:21:46.747 253465 INFO nova.compute.claims [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:21:46 compute-0 nova_compute[253461]: 2025-11-22 04:21:46.885 253465 DEBUG oslo_concurrency.processutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:21:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:21:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:21:47 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/476229709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.366 253465 DEBUG oslo_concurrency.processutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.378 253465 DEBUG nova.compute.provider_tree [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.394 253465 DEBUG nova.scheduler.client.report [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
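[editor's note] The inventory dict in the line above is what the resource tracker reports to Placement; schedulable capacity for each resource class is (total - reserved) * allocation_ratio. Worked out for this host (a sketch of the formula, not Placement's code):

```python
# Capacity per resource class as Placement derives it from the inventory
# logged above: (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```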
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.422 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.423 253465 DEBUG nova.compute.manager [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.469 253465 DEBUG nova.compute.manager [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.470 253465 DEBUG nova.network.neutron [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.489 253465 INFO nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.506 253465 DEBUG nova.compute.manager [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.552 253465 INFO nova.virt.block_device [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Booting with volume b094100f-6218-4825-9b5a-871bcc060d02 at /dev/vda
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.674 253465 DEBUG os_brick.utils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.676 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.686 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.686 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd71354-5b7e-45d3-830c-e83109651555]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.688 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.698 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.698 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[3624974d-39b0-452e-925b-68cd18b0987e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.700 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.714 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.714 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[30b24458-9396-4e1f-bb4f-e9c3b5777c31]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.716 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[046ac6b6-cf4a-4dfb-a84b-0a669a31b3a3]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.717 253465 DEBUG oslo_concurrency.processutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.741 253465 DEBUG oslo_concurrency.processutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.744 253465 DEBUG os_brick.initiator.connectors.lightos [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.744 253465 DEBUG os_brick.initiator.connectors.lightos [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.744 253465 DEBUG os_brick.initiator.connectors.lightos [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.745 253465 DEBUG os_brick.utils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] <== get_connector_properties: return (70ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Nov 22 04:21:47 compute-0 nova_compute[253461]: 2025-11-22 04:21:47.745 253465 DEBUG nova.virt.block_device [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Updating existing volume attachment record: 9de7de7f-1919-4b7f-8a44-c88b6e874735 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:21:48 compute-0 nova_compute[253461]: 2025-11-22 04:21:48.098 253465 DEBUG nova.policy [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '26147ad59e2d4763b8edc27d80567b09', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c9d01ebd7e4f4251b66172a246b8a08f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:21:48 compute-0 ceph-mon[75011]: pgmap v2228: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:21:48 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/476229709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:21:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:21:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1224953672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:21:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:48.813 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:21:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:48.814 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:21:48 compute-0 nova_compute[253461]: 2025-11-22 04:21:48.817 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:21:48 compute-0 nova_compute[253461]: 2025-11-22 04:21:48.928 253465 DEBUG nova.network.neutron [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Successfully created port: 3cbea320-13ce-4711-bb63-09fb1a3cf673 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:21:49 compute-0 nova_compute[253461]: 2025-11-22 04:21:49.048 253465 DEBUG nova.compute.manager [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:21:49 compute-0 nova_compute[253461]: 2025-11-22 04:21:49.050 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:21:49 compute-0 nova_compute[253461]: 2025-11-22 04:21:49.050 253465 INFO nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Creating image(s)
Nov 22 04:21:49 compute-0 nova_compute[253461]: 2025-11-22 04:21:49.051 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:21:49 compute-0 nova_compute[253461]: 2025-11-22 04:21:49.051 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Ensure instance console log exists: /var/lib/nova/instances/1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:21:49 compute-0 nova_compute[253461]: 2025-11-22 04:21:49.052 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:21:49 compute-0 nova_compute[253461]: 2025-11-22 04:21:49.052 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:21:49 compute-0 nova_compute[253461]: 2025-11-22 04:21:49.052 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:21:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:21:49 compute-0 nova_compute[253461]: 2025-11-22 04:21:49.252 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:49 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1224953672' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:21:49 compute-0 ceph-mon[75011]: pgmap v2229: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:21:49 compute-0 nova_compute[253461]: 2025-11-22 04:21:49.963 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:21:51 compute-0 nova_compute[253461]: 2025-11-22 04:21:51.263 253465 DEBUG nova.network.neutron [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Successfully updated port: 3cbea320-13ce-4711-bb63-09fb1a3cf673 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:21:51 compute-0 nova_compute[253461]: 2025-11-22 04:21:51.369 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "refresh_cache-1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:21:51 compute-0 nova_compute[253461]: 2025-11-22 04:21:51.370 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquired lock "refresh_cache-1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:21:51 compute-0 nova_compute[253461]: 2025-11-22 04:21:51.370 253465 DEBUG nova.network.neutron [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:21:51 compute-0 nova_compute[253461]: 2025-11-22 04:21:51.406 253465 DEBUG nova.compute.manager [req-39d9f221-04bb-4567-9485-b853d2b81ae0 req-a8be1224-0455-499e-b090-9a0023e17d75 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Received event network-changed-3cbea320-13ce-4711-bb63-09fb1a3cf673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:21:51 compute-0 nova_compute[253461]: 2025-11-22 04:21:51.406 253465 DEBUG nova.compute.manager [req-39d9f221-04bb-4567-9485-b853d2b81ae0 req-a8be1224-0455-499e-b090-9a0023e17d75 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Refreshing instance network info cache due to event network-changed-3cbea320-13ce-4711-bb63-09fb1a3cf673. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:21:51 compute-0 nova_compute[253461]: 2025-11-22 04:21:51.407 253465 DEBUG oslo_concurrency.lockutils [req-39d9f221-04bb-4567-9485-b853d2b81ae0 req-a8be1224-0455-499e-b090-9a0023e17d75 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:21:51 compute-0 nova_compute[253461]: 2025-11-22 04:21:51.536 253465 DEBUG nova.network.neutron [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:21:51 compute-0 ceph-mon[75011]: pgmap v2230: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.690 253465 DEBUG nova.network.neutron [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Updating instance_info_cache with network_info: [{"id": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "address": "fa:16:3e:0f:bd:00", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbea320-13", "ovs_interfaceid": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.713 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Releasing lock "refresh_cache-1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.713 253465 DEBUG nova.compute.manager [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Instance network_info: |[{"id": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "address": "fa:16:3e:0f:bd:00", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbea320-13", "ovs_interfaceid": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.714 253465 DEBUG oslo_concurrency.lockutils [req-39d9f221-04bb-4567-9485-b853d2b81ae0 req-a8be1224-0455-499e-b090-9a0023e17d75 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.715 253465 DEBUG nova.network.neutron [req-39d9f221-04bb-4567-9485-b853d2b81ae0 req-a8be1224-0455-499e-b090-9a0023e17d75 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Refreshing network info cache for port 3cbea320-13ce-4711-bb63-09fb1a3cf673 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.722 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Start _get_guest_xml network_info=[{"id": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "address": "fa:16:3e:0f:bd:00", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbea320-13", "ovs_interfaceid": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': '9de7de7f-1919-4b7f-8a44-c88b6e874735', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b094100f-6218-4825-9b5a-871bcc060d02', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b094100f-6218-4825-9b5a-871bcc060d02', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c', 'attached_at': '', 'detached_at': '', 'volume_id': 'b094100f-6218-4825-9b5a-871bcc060d02', 'serial': 'b094100f-6218-4825-9b5a-871bcc060d02'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
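[editor's note] The block_device_mapping in the line above carries an rbd connection_info (driver_volume_type='rbd', one mon host at 192.168.122.100:6789, cephx auth as client.openstack, encrypted=True with the secret UUID masked as '***'). From this, the libvirt driver emits a network disk element for the guest. A sketch of roughly what that element looks like, with a placeholder where the log masks the secret; illustrative only, not nova's code:

```python
import xml.etree.ElementTree as ET

# Build a libvirt <disk> element of the shape nova produces for the rbd
# connection_info logged above. SECRET_UUID is a placeholder: the log
# masks the real value as '***'.
SECRET_UUID = "00000000-0000-0000-0000-000000000000"

disk = ET.Element("disk", type="network", device="disk")
ET.SubElement(disk, "driver", name="qemu", type="raw", discard="unmap")
src = ET.SubElement(
    disk, "source", protocol="rbd",
    name="volumes/volume-b094100f-6218-4825-9b5a-871bcc060d02")
ET.SubElement(src, "host", name="192.168.122.100", port="6789")
auth = ET.SubElement(disk, "auth", username="openstack")
ET.SubElement(auth, "secret", type="ceph", uuid=SECRET_UUID)
ET.SubElement(disk, "target", dev="vda", bus="virtio")
print(ET.tostring(disk, encoding="unicode"))
```

Because the volume is encrypted, the real XML would additionally carry an <encryption format='luks'> child referencing a second libvirt secret; that part is omitted here.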
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.729 253465 WARNING nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.736 253465 DEBUG nova.virt.libvirt.host [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.737 253465 DEBUG nova.virt.libvirt.host [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.741 253465 DEBUG nova.virt.libvirt.host [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.741 253465 DEBUG nova.virt.libvirt.host [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
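[annotation] The cgroups v1 probe comes up empty and the v2 probe succeeds. On a cgroups-v2 host that check reduces to looking for "cpu" in the unified hierarchy's controller list; a sketch of the idea (kernel cgroup-v2 convention, not nova's exact code):

    def has_cgroupsv2_cpu_controller(path="/sys/fs/cgroup/cgroup.controllers"):
        # cgroup.controllers holds a space-separated list, e.g.
        # "cpuset cpu io memory hugetlb pids misc".
        try:
            with open(path) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # unified hierarchy not mounted (cgroups v1 host)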
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.742 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.743 253465 DEBUG nova.virt.hardware [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.743 253465 DEBUG nova.virt.hardware [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.743 253465 DEBUG nova.virt.hardware [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.744 253465 DEBUG nova.virt.hardware [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.744 253465 DEBUG nova.virt.hardware [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.744 253465 DEBUG nova.virt.hardware [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.745 253465 DEBUG nova.virt.hardware [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.745 253465 DEBUG nova.virt.hardware [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.745 253465 DEBUG nova.virt.hardware [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.745 253465 DEBUG nova.virt.hardware [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.746 253465 DEBUG nova.virt.hardware [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
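[annotation] With flavor and image limits all 0:0:0, the only topology that covers one vCPU is 1 socket x 1 core x 1 thread, which is exactly what the driver picks above. A toy enumeration that reproduces the result (simplified; nova's real rules also weigh preferences and NUMA placement):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield (sockets, cores, threads) triples whose product equals vcpus.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log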
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.776 253465 DEBUG nova.storage.rbd_utils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] rbd image 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:21:52 compute-0 nova_compute[253461]: 2025-11-22 04:21:52.781 253465 DEBUG oslo_concurrency.processutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:21:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 7.2 MiB/s wr, 16 op/s
Nov 22 04:21:53 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:21:53 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/669593095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.316 253465 DEBUG oslo_concurrency.processutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
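[annotation] The monmap fetch above is a plain subprocess call through oslo.concurrency. An equivalent sketch, reusing the logged flags and assuming the standard `ceph mon dump` JSON layout:

    import json

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    monmap = json.loads(out)
    # Collect one address per monitor in the map.
    mons = [m["addr"] for m in monmap.get("mons", [])]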
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.434 253465 DEBUG os_brick.encryptors [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Using volume encryption metadata '{'encryption_key_id': '2b567ad6-4e79-463d-9d48-c551566deeae', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-b094100f-6218-4825-9b5a-871bcc060d02', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'b094100f-6218-4825-9b5a-871bcc060d02', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c', 'attached_at': '', 'detached_at': '', 'volume_id': 'b094100f-6218-4825-9b5a-871bcc060d02', 'serial': 'b094100f-6218-4825-9b5a-871bcc060d02'} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.436 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.450 253465 DEBUG barbicanclient.v1.secrets [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/2b567ad6-4e79-463d-9d48-c551566deeae get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.451 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.483 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.484 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.509 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.510 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.533 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.533 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.557 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.557 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.589 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.590 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.621 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.622 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.647 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.648 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.672 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.673 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.698 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.699 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.725 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.726 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.817 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.818 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.855 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.855 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.895 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.896 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.920 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.921 253465 INFO barbicanclient.base [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/2b567ad6-4e79-463d-9d48-c551566deeae
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.948 253465 DEBUG barbicanclient.client [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
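[annotation] The burst of GETs above is the compute service pulling the LUKS passphrase record for key 2b567ad6-4e79-463d-9d48-c551566deeae out of Barbican. A minimal sketch with python-barbicanclient; the keystoneauth credentials and auth URL are assumptions, not values from this log:

    from barbicanclient import client
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url="https://keystone.example.com/v3",  # assumed
                       username="nova", password="secret",          # assumed
                       project_name="service",
                       user_domain_name="Default",
                       project_domain_name="Default")
    sess = session.Session(auth=auth)
    barbican = client.Client(
        session=sess,
        endpoint="https://barbican-internal.openstack.svc:9311")
    secret = barbican.secrets.get(
        "https://barbican-internal.openstack.svc:9311/secrets/"
        "2b567ad6-4e79-463d-9d48-c551566deeae")
    passphrase = secret.payload  # fetching the payload issues GETs like those above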
Nov 22 04:21:53 compute-0 nova_compute[253461]: 2025-11-22 04:21:53.949 253465 DEBUG nova.virt.libvirt.host [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 22 04:21:53 compute-0 nova_compute[253461]:   <usage type="volume">
Nov 22 04:21:53 compute-0 nova_compute[253461]:     <volume>b094100f-6218-4825-9b5a-871bcc060d02</volume>
Nov 22 04:21:53 compute-0 nova_compute[253461]:   </usage>
Nov 22 04:21:53 compute-0 nova_compute[253461]: </secret>
Nov 22 04:21:53 compute-0 nova_compute[253461]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
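[annotation] Once the passphrase is in hand, nova registers it with libvirt under the volume-scoped secret XML just logged, so QEMU can unlock the LUKS layer at boot. A sketch with libvirt-python (the connection URI is the usual system socket; `passphrase` stands in for the bytes fetched from Barbican):

    import libvirt

    SECRET_XML = """<secret ephemeral="no" private="no">
      <usage type="volume">
        <volume>b094100f-6218-4825-9b5a-871bcc060d02</volume>
      </usage>
    </secret>"""

    conn = libvirt.open("qemu:///system")
    secret = conn.secretDefineXML(SECRET_XML)
    secret.setValue(passphrase)  # passphrase: bytes of the LUKS key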
Nov 22 04:21:53 compute-0 ceph-mon[75011]: pgmap v2231: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 7.2 MiB/s wr, 16 op/s
Nov 22 04:21:53 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/669593095' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.010 253465 DEBUG nova.virt.libvirt.vif [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:21:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1766491265',display_name='tempest-TestEncryptedCinderVolumes-server-1766491265',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1766491265',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9UKFMy9ycYw2Hm5en4L27DNwg0er6Lb0HRsD7AiMiSCvtvdx7izIV74D1MmE18lnPG59cKz/vp+1MZkJaUaik+lgJpk8hBjE03Y+JB1nMXTfCi52N8aZdJUG/KDhiYrQ==',key_name='tempest-TestEncryptedCinderVolumes-5346949',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c9d01ebd7e4f4251b66172a246b8a08f',ramdisk_id='',reservation_id='r-hei4n0yp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-230639986',owner_user_name='tempest-TestEncryptedCinderVolumes-230639986-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:21:47Z,user_data=None,user_id='26147ad59e2d4763b8edc27d80567b09',uuid=1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "address": "fa:16:3e:0f:bd:00", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbea320-13", "ovs_interfaceid": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.011 253465 DEBUG nova.network.os_vif_util [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converting VIF {"id": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "address": "fa:16:3e:0f:bd:00", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbea320-13", "ovs_interfaceid": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.012 253465 DEBUG nova.network.os_vif_util [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:bd:00,bridge_name='br-int',has_traffic_filtering=True,id=3cbea320-13ce-4711-bb63-09fb1a3cf673,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbea320-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
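[annotation] The conversion above turns nova's JSON network_info into a typed os-vif object before plugging. A reduced sketch of building the same VIFOpenVSwitch by hand (field names per os_vif.objects as I understand them; values copied from the log):

    from os_vif.objects import vif as vif_obj

    vif = vif_obj.VIFOpenVSwitch(
        id="3cbea320-13ce-4711-bb63-09fb1a3cf673",
        address="fa:16:3e:0f:bd:00",
        bridge_name="br-int",
        vif_name="tap3cbea320-13",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)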
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.014 253465 DEBUG nova.objects.instance [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lazy-loading 'pci_devices' on Instance uuid 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.034 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:21:54 compute-0 nova_compute[253461]:   <uuid>1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c</uuid>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   <name>instance-0000001d</name>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1766491265</nova:name>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:21:52</nova:creationTime>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:21:54 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:21:54 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:21:54 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:21:54 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:21:54 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:21:54 compute-0 nova_compute[253461]:         <nova:user uuid="26147ad59e2d4763b8edc27d80567b09">tempest-TestEncryptedCinderVolumes-230639986-project-member</nova:user>
Nov 22 04:21:54 compute-0 nova_compute[253461]:         <nova:project uuid="c9d01ebd7e4f4251b66172a246b8a08f">tempest-TestEncryptedCinderVolumes-230639986</nova:project>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:21:54 compute-0 nova_compute[253461]:         <nova:port uuid="3cbea320-13ce-4711-bb63-09fb1a3cf673">
Nov 22 04:21:54 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <system>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <entry name="serial">1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c</entry>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <entry name="uuid">1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c</entry>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     </system>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   <os>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   </os>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   <features>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   </features>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c_disk.config">
Nov 22 04:21:54 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       </source>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:21:54 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-b094100f-6218-4825-9b5a-871bcc060d02">
Nov 22 04:21:54 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       </source>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:21:54 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <serial>b094100f-6218-4825-9b5a-871bcc060d02</serial>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <encryption format="luks">
Nov 22 04:21:54 compute-0 nova_compute[253461]:         <secret type="passphrase" uuid="eb8ae9c4-8e5f-4f51-8ae6-2667a3092b1d"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       </encryption>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:0f:bd:00"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <target dev="tap3cbea320-13"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c/console.log" append="off"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <video>
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     </video>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:21:54 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:21:54 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:21:54 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:21:54 compute-0 nova_compute[253461]: </domain>
Nov 22 04:21:54 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
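[annotation] That <domain> document is what eventually gets handed to libvirt to define and boot the guest. A minimal sketch of that final step (not nova's exact call path, which also wires up volumes, events, and power-state handling):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(domain_xml)  # domain_xml: the <domain> text above
    dom.create()                      # boots instance-0000001d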
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.036 253465 DEBUG nova.compute.manager [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Preparing to wait for external event network-vif-plugged-3cbea320-13ce-4711-bb63-09fb1a3cf673 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.037 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.037 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.038 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
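[annotation] Before plugging the VIF, the manager registers a waiter keyed on the instance so spawn can block until Neutron sends network-vif-plugged. The shape of that pattern, reduced to stdlib threading (nova itself uses eventlet and a richer event registry; all names here are illustrative):

    import threading

    _events = {}                 # (instance_uuid, event_name) -> Event
    _events_lock = threading.Lock()

    def prepare_for_instance_event(instance_uuid, event_name):
        # Mirrors the "<uuid>-events" lock acquire/release in the log.
        with _events_lock:
            return _events.setdefault((instance_uuid, event_name),
                                      threading.Event())

    def wait_for_instance_event(instance_uuid, event_name, timeout=300):
        ev = prepare_for_instance_event(instance_uuid, event_name)
        if not ev.wait(timeout):
            raise TimeoutError("network-vif-plugged never arrived")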
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.040 253465 DEBUG nova.virt.libvirt.vif [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:21:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1766491265',display_name='tempest-TestEncryptedCinderVolumes-server-1766491265',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1766491265',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9UKFMy9ycYw2Hm5en4L27DNwg0er6Lb0HRsD7AiMiSCvtvdx7izIV74D1MmE18lnPG59cKz/vp+1MZkJaUaik+lgJpk8hBjE03Y+JB1nMXTfCi52N8aZdJUG/KDhiYrQ==',key_name='tempest-TestEncryptedCinderVolumes-5346949',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c9d01ebd7e4f4251b66172a246b8a08f',ramdisk_id='',reservation_id='r-hei4n0yp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-230639986',owner_user_name='tempest-TestEncryptedCinderVolumes-230639986-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:21:47Z,user_data=None,user_id='26147ad59e2d4763b8edc27d80567b09',uuid=1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "address": "fa:16:3e:0f:bd:00", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbea320-13", "ovs_interfaceid": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.040 253465 DEBUG nova.network.os_vif_util [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converting VIF {"id": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "address": "fa:16:3e:0f:bd:00", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbea320-13", "ovs_interfaceid": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.042 253465 DEBUG nova.network.os_vif_util [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0f:bd:00,bridge_name='br-int',has_traffic_filtering=True,id=3cbea320-13ce-4711-bb63-09fb1a3cf673,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbea320-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.043 253465 DEBUG os_vif [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:bd:00,bridge_name='br-int',has_traffic_filtering=True,id=3cbea320-13ce-4711-bb63-09fb1a3cf673,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbea320-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.044 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.045 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.046 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.052 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.053 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cbea320-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.053 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3cbea320-13, col_values=(('external_ids', {'iface-id': '3cbea320-13ce-4711-bb63-09fb1a3cf673', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0f:bd:00', 'vm-uuid': '1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.056 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:54 compute-0 NetworkManager[48916]: <info>  [1763785314.0576] manager: (tap3cbea320-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.060 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.070 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.072 253465 INFO os_vif [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0f:bd:00,bridge_name='br-int',has_traffic_filtering=True,id=3cbea320-13ce-4711-bb63-09fb1a3cf673,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbea320-13')
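[annotation] The plug itself is two OVSDB transactions: ensure br-int exists, then add the tap device and stamp its Interface row with the Neutron port ID. A sketch with ovsdbapp that mirrors the AddBridgeCommand, AddPortCommand, and DbSetCommand entries above (the OVSDB socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")  # assumed endpoint
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    ovs.add_br("br-int", may_exist=True, datapath_type="system").execute()
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.add_port("br-int", "tap3cbea320-13", may_exist=True))
        txn.add(ovs.db_set(
            "Interface", "tap3cbea320-13",
            ("external_ids",
             {"iface-id": "3cbea320-13ce-4711-bb63-09fb1a3cf673",
              "iface-status": "active",
              "attached-mac": "fa:16:3e:0f:bd:00",
              "vm-uuid": "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c"})))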
Nov 22 04:21:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.173 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.173 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.173 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] No VIF found with MAC fa:16:3e:0f:bd:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.174 253465 INFO nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Using config drive
Nov 22 04:21:54 compute-0 podman[306927]: 2025-11-22 04:21:54.175851099 +0000 UTC m=+0.063511912 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2)
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.199 253465 DEBUG nova.storage.rbd_utils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] rbd image 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:21:54 compute-0 podman[306929]: 2025-11-22 04:21:54.253650321 +0000 UTC m=+0.133771664 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251118, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.264 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.293 253465 DEBUG nova.network.neutron [req-39d9f221-04bb-4567-9485-b853d2b81ae0 req-a8be1224-0455-499e-b090-9a0023e17d75 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Updated VIF entry in instance network info cache for port 3cbea320-13ce-4711-bb63-09fb1a3cf673. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.294 253465 DEBUG nova.network.neutron [req-39d9f221-04bb-4567-9485-b853d2b81ae0 req-a8be1224-0455-499e-b090-9a0023e17d75 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Updating instance_info_cache with network_info: [{"id": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "address": "fa:16:3e:0f:bd:00", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbea320-13", "ovs_interfaceid": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.333 253465 DEBUG oslo_concurrency.lockutils [req-39d9f221-04bb-4567-9485-b853d2b81ae0 req-a8be1224-0455-499e-b090-9a0023e17d75 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
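The network_info entry cached above is plain JSON. A minimal sketch (not Nova's implementation) of pulling out the fields a VIF-plug step actually consumes, using values taken from the logged entry:

# Minimal sketch: extract plug-relevant fields from a cached network_info
# entry shaped like the one logged above (values copied from the log).
vif = {
    "id": "3cbea320-13ce-4711-bb63-09fb1a3cf673",
    "address": "fa:16:3e:0f:bd:00",
    "devname": "tap3cbea320-13",
    "type": "ovs",
    "details": {"bridge_name": "br-int", "bound_drivers": {"0": "ovn"}},
    "network": {"subnets": [{"cidr": "10.100.0.0/28",
                             "ips": [{"address": "10.100.0.4"}]}]},
}
fixed_ips = [ip["address"]
             for subnet in vif["network"]["subnets"]
             for ip in subnet["ips"]]
print(vif["devname"], vif["address"], fixed_ips)
# -> tap3cbea320-13 fa:16:3e:0f:bd:00 ['10.100.0.4']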
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.505 253465 INFO nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Creating config drive at /var/lib/nova/instances/1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c/disk.config
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.512 253465 DEBUG oslo_concurrency.processutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptd02ws05 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.658 253465 DEBUG oslo_concurrency.processutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptd02ws05" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.685 253465 DEBUG nova.storage.rbd_utils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] rbd image 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:21:54 compute-0 nova_compute[253461]: 2025-11-22 04:21:54.688 253465 DEBUG oslo_concurrency.processutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c/disk.config 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:21:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 4.8 MiB/s wr, 10 op/s
Nov 22 04:21:55 compute-0 ceph-mon[75011]: pgmap v2232: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 6.8 KiB/s rd, 4.8 MiB/s wr, 10 op/s
Nov 22 04:21:55 compute-0 nova_compute[253461]: 2025-11-22 04:21:55.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:21:55 compute-0 nova_compute[253461]: 2025-11-22 04:21:55.761 253465 DEBUG oslo_concurrency.processutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c/disk.config 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:21:55 compute-0 nova_compute[253461]: 2025-11-22 04:21:55.762 253465 INFO nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Deleting local config drive /var/lib/nova/instances/1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c/disk.config because it was imported into RBD.
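The config-drive sequence above (mkisofs build, rbd import into the vms pool, then removal of the now-redundant local file) reduces to two subprocess calls. A sketch under the assumption that the metadata tree is already staged; the staging path below is an illustrative stand-in for the temporary directory Nova generates:

import subprocess

inst = "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c"
local = f"/var/lib/nova/instances/{inst}/disk.config"
staging = "/tmp/configdrive-staging"  # illustrative; Nova uses a mkdtemp() dir

# Build the ISO9660 config drive (same flags as the logged command).
subprocess.run(["/usr/bin/mkisofs", "-o", local, "-ldots", "-allow-lowercase",
                "-allow-multidot", "-l", "-quiet", "-J", "-r",
                "-V", "config-2", staging], check=True)

# Import it into the Ceph 'vms' pool; the local copy can then be deleted.
subprocess.run(["rbd", "import", "--pool", "vms", local,
                f"{inst}_disk.config", "--image-format=2",
                "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
               check=True)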
Nov 22 04:21:55 compute-0 kernel: tap3cbea320-13: entered promiscuous mode
Nov 22 04:21:55 compute-0 NetworkManager[48916]: <info>  [1763785315.8390] manager: (tap3cbea320-13): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Nov 22 04:21:55 compute-0 ovn_controller[152691]: 2025-11-22T04:21:55Z|00299|binding|INFO|Claiming lport 3cbea320-13ce-4711-bb63-09fb1a3cf673 for this chassis.
Nov 22 04:21:55 compute-0 ovn_controller[152691]: 2025-11-22T04:21:55Z|00300|binding|INFO|3cbea320-13ce-4711-bb63-09fb1a3cf673: Claiming fa:16:3e:0f:bd:00 10.100.0.4
Nov 22 04:21:55 compute-0 nova_compute[253461]: 2025-11-22 04:21:55.840 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:55 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:55.869 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:bd:00 10.100.0.4'], port_security=['fa:16:3e:0f:bd:00 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9d01ebd7e4f4251b66172a246b8a08f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '395e72d5-eb27-430c-9253-638d58d94891', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5d096b3-2344-4434-a488-92084cb46974, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=3cbea320-13ce-4711-bb63-09fb1a3cf673) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:21:55 compute-0 ovn_controller[152691]: 2025-11-22T04:21:55Z|00301|binding|INFO|Setting lport 3cbea320-13ce-4711-bb63-09fb1a3cf673 ovn-installed in OVS
Nov 22 04:21:55 compute-0 ovn_controller[152691]: 2025-11-22T04:21:55Z|00302|binding|INFO|Setting lport 3cbea320-13ce-4711-bb63-09fb1a3cf673 up in Southbound
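A claim/up transition like the one ovn_controller logs here can be verified against the Southbound database directly; an illustrative check (not agent code) using the standard ovn-sbctl find command:

import subprocess

# Print the Port_Binding row for the logical port just claimed; the
# 'chassis' and 'up' columns should match the two log lines above.
subprocess.run(["ovn-sbctl", "find", "Port_Binding",
                "logical_port=3cbea320-13ce-4711-bb63-09fb1a3cf673"],
               check=True)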
Nov 22 04:21:55 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:55.872 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 3cbea320-13ce-4711-bb63-09fb1a3cf673 in datapath bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 bound to our chassis
Nov 22 04:21:55 compute-0 nova_compute[253461]: 2025-11-22 04:21:55.874 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:55 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:55.875 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07
Nov 22 04:21:55 compute-0 systemd-machined[215728]: New machine qemu-29-instance-0000001d.
Nov 22 04:21:55 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:55.891 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9935ea09-4aef-42e3-b639-95531c9a8a81]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:55 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:55.893 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbd550fd2-d1 in ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:21:55 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:55.896 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbd550fd2-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:21:55 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:55.897 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[793d4290-d9dd-4be3-b55a-ee95c3a0479d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:55 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:55.898 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[7f2f6709-6b1e-4438-8c9e-5fd6674d616c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:55 compute-0 systemd-udevd[307043]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:21:55 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-0000001d.
Nov 22 04:21:55 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:55.912 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc5920d-bcea-48ec-9573-88b9fd3d8209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:55 compute-0 NetworkManager[48916]: <info>  [1763785315.9240] device (tap3cbea320-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:21:55 compute-0 NetworkManager[48916]: <info>  [1763785315.9253] device (tap3cbea320-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:21:55 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:55.926 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb4e070-be01-4a90-b767-09bc3e29afdd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:55 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:55.956 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[43464e60-9caa-4a99-b2e7-2ca1c68b9fb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:55 compute-0 systemd-udevd[307047]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:21:55 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:55.964 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[367425e5-2bd2-4c37-9d20-c06acebd695f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:55 compute-0 NetworkManager[48916]: <info>  [1763785315.9651] manager: (tapbd550fd2-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/149)
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:55.999 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[1f9ca003-7ac7-412f-b653-acca8e84fe37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.004 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[e70e1aa4-625e-402c-96d5-c2f1ceb32b54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:56 compute-0 NetworkManager[48916]: <info>  [1763785316.0336] device (tapbd550fd2-d0): carrier: link connected
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.045 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[70ce16c1-fa70-4f73-80cf-76531c123e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.070 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8b27f613-aec7-4811-8b2f-b111e7f10901]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd550fd2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:cb:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556863, 'reachable_time': 42906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307075, 'error': None, 'target': 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.095 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b48ae0e8-0ac5-4bf5-a746-0d47cb8299b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:cb6a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 556863, 'tstamp': 556863}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307076, 'error': None, 'target': 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.119 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[73c515f6-4fee-4399-90fb-7fbc85103383]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd550fd2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:cb:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556863, 'reachable_time': 42906, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307077, 'error': None, 'target': 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.171 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[83205e0e-31c7-4ef8-a125-16233436d020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.250 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[ec07efc2-5bea-4e80-b6c2-3767341a9837]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.252 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd550fd2-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.252 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.253 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd550fd2-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.256 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:56 compute-0 NetworkManager[48916]: <info>  [1763785316.2578] manager: (tapbd550fd2-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Nov 22 04:21:56 compute-0 kernel: tapbd550fd2-d0: entered promiscuous mode
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.260 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.262 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd550fd2-d0, col_values=(('external_ids', {'iface-id': '1cfe38fd-445a-4e2d-9728-1f7ee0085422'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
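The two ovsdbapp transactions above (AddPortCommand with may_exist, then DbSetCommand on external_ids) correspond to standard ovs-vsctl operations; an illustrative equivalent, not the agent's actual code path:

import subprocess

# Idempotent add of the veth leg to br-int (may_exist=True in the log).
subprocess.run(["ovs-vsctl", "--may-exist", "add-port",
                "br-int", "tapbd550fd2-d0"], check=True)

# Tag the interface so ovn-controller can match it to its Port_Binding.
subprocess.run(["ovs-vsctl", "set", "Interface", "tapbd550fd2-d0",
                "external_ids:iface-id=1cfe38fd-445a-4e2d-9728-1f7ee0085422"],
               check=True)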
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.263 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:56 compute-0 ovn_controller[152691]: 2025-11-22T04:21:56Z|00303|binding|INFO|Releasing lport 1cfe38fd-445a-4e2d-9728-1f7ee0085422 from this chassis (sb_readonly=0)
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.289 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.292 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.293 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.295 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ad8abf-77fb-46c7-af60-a176ecc943ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.296 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07.pid.haproxy
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.298 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'env', 'PROCESS_TAG=haproxy-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
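Stripped of the sudo/neutron-rootwrap and PROCESS_TAG plumbing shown in the logged command, the launch reduces to running haproxy against the generated config inside the ovnmeta namespace; a minimal sketch:

import subprocess

net = "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07"
cfg = f"/var/lib/neutron/ovn-metadata-proxy/{net}.conf"

# Assumes the haproxy_cfg text dumped above has already been written to cfg;
# the real agent runs this through rootwrap, as the log line shows.
subprocess.run(["ip", "netns", "exec", f"ovnmeta-{net}",
                "haproxy", "-f", cfg], check=True)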
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.481 253465 DEBUG nova.compute.manager [req-468f838b-ef52-45a5-b259-914962036da1 req-3f40036f-16ef-488b-9aeb-168beb08c3b7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Received event network-vif-plugged-3cbea320-13ce-4711-bb63-09fb1a3cf673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.483 253465 DEBUG oslo_concurrency.lockutils [req-468f838b-ef52-45a5-b259-914962036da1 req-3f40036f-16ef-488b-9aeb-168beb08c3b7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.483 253465 DEBUG oslo_concurrency.lockutils [req-468f838b-ef52-45a5-b259-914962036da1 req-3f40036f-16ef-488b-9aeb-168beb08c3b7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.484 253465 DEBUG oslo_concurrency.lockutils [req-468f838b-ef52-45a5-b259-914962036da1 req-3f40036f-16ef-488b-9aeb-168beb08c3b7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.484 253465 DEBUG nova.compute.manager [req-468f838b-ef52-45a5-b259-914962036da1 req-3f40036f-16ef-488b-9aeb-168beb08c3b7 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Processing event network-vif-plugged-3cbea320-13ce-4711-bb63-09fb1a3cf673 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.549 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.550 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.550 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.551 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:21:56 compute-0 nova_compute[253461]: 2025-11-22 04:21:56.552 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:21:56 compute-0 podman[307110]: 2025-11-22 04:21:56.754143659 +0000 UTC m=+0.091687232 container create 2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 22 04:21:56 compute-0 podman[307110]: 2025-11-22 04:21:56.690791108 +0000 UTC m=+0.028334691 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:21:56 compute-0 systemd[1]: Started libpod-conmon-2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe.scope.
Nov 22 04:21:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:21:56.816 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:21:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:21:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7015dc348172e310bd5805b2e6894230a713c60e79205fedd2bec7977812c31e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:21:56 compute-0 podman[307110]: 2025-11-22 04:21:56.880575561 +0000 UTC m=+0.218119134 container init 2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:21:56 compute-0 podman[307110]: 2025-11-22 04:21:56.888293269 +0000 UTC m=+0.225836832 container start 2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 04:21:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 852 KiB/s wr, 9 op/s
Nov 22 04:21:56 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[307177]: [NOTICE]   (307184) : New worker (307186) forked
Nov 22 04:21:56 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[307177]: [NOTICE]   (307184) : Loading success.
Nov 22 04:21:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:21:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2701209449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.056 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
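The resource audit shells out to ceph df for cluster capacity. A sketch of the same call and the arithmetic behind figures like "59 GiB / 60 GiB avail"; the JSON field names are an assumption based on recent Ceph releases:

import json
import subprocess

out = subprocess.run(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    capture_output=True, text=True, check=True).stdout

stats = json.loads(out)["stats"]
avail_gib = stats["total_avail_bytes"] / 1024**3  # assumed field names
total_gib = stats["total_bytes"] / 1024**3
print(f"{avail_gib:.0f} GiB / {total_gib:.0f} GiB avail")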
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.181 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.182 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.365 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.367 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4293MB free_disk=59.9882698059082GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.367 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.368 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.470 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.470 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.471 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.517 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:21:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:21:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1494711746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.948 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.956 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:21:57 compute-0 ceph-mon[75011]: pgmap v2233: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 852 KiB/s wr, 9 op/s
Nov 22 04:21:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2701209449' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:21:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1494711746' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:21:57 compute-0 nova_compute[253461]: 2025-11-22 04:21:57.975 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
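Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio, which is why the 8 physical vCPUs above can back more than 8 guest vCPUs. Reproducing the arithmetic from the logged inventory data:

# Capacity math for the inventory logged above:
# capacity = (total - reserved) * allocation_ratio
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)
# -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2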
Nov 22 04:21:58 compute-0 nova_compute[253461]: 2025-11-22 04:21:58.007 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:21:58 compute-0 nova_compute[253461]: 2025-11-22 04:21:58.007 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:21:58 compute-0 nova_compute[253461]: 2025-11-22 04:21:58.556 253465 DEBUG nova.compute.manager [req-fe89cdaa-18f7-4887-8985-d4366a13c6f5 req-c4b11810-31f1-4c49-80a8-7cb43832aa22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Received event network-vif-plugged-3cbea320-13ce-4711-bb63-09fb1a3cf673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:21:58 compute-0 nova_compute[253461]: 2025-11-22 04:21:58.557 253465 DEBUG oslo_concurrency.lockutils [req-fe89cdaa-18f7-4887-8985-d4366a13c6f5 req-c4b11810-31f1-4c49-80a8-7cb43832aa22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:21:58 compute-0 nova_compute[253461]: 2025-11-22 04:21:58.558 253465 DEBUG oslo_concurrency.lockutils [req-fe89cdaa-18f7-4887-8985-d4366a13c6f5 req-c4b11810-31f1-4c49-80a8-7cb43832aa22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:21:58 compute-0 nova_compute[253461]: 2025-11-22 04:21:58.558 253465 DEBUG oslo_concurrency.lockutils [req-fe89cdaa-18f7-4887-8985-d4366a13c6f5 req-c4b11810-31f1-4c49-80a8-7cb43832aa22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:21:58 compute-0 nova_compute[253461]: 2025-11-22 04:21:58.558 253465 DEBUG nova.compute.manager [req-fe89cdaa-18f7-4887-8985-d4366a13c6f5 req-c4b11810-31f1-4c49-80a8-7cb43832aa22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] No waiting events found dispatching network-vif-plugged-3cbea320-13ce-4711-bb63-09fb1a3cf673 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:21:58 compute-0 nova_compute[253461]: 2025-11-22 04:21:58.559 253465 WARNING nova.compute.manager [req-fe89cdaa-18f7-4887-8985-d4366a13c6f5 req-c4b11810-31f1-4c49-80a8-7cb43832aa22 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Received unexpected event network-vif-plugged-3cbea320-13ce-4711-bb63-09fb1a3cf673 for instance with vm_state building and task_state spawning.
Nov 22 04:21:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.058 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.265 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.317 253465 DEBUG nova.compute.manager [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.319 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785319.3168504, 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.319 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] VM Started (Lifecycle Event)
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.322 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.326 253465 INFO nova.virt.libvirt.driver [-] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Instance spawned successfully.
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.327 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.346 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.354 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.359 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.360 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.360 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.361 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.362 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.362 253465 DEBUG nova.virt.libvirt.driver [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
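The six "Found default for ..." entries record the image-property defaults the libvirt driver persisted for this guest. A minimal stdlib sketch, with the regex shaped to these exact messages, that collects them into a dict:

import re

DEFAULT_RE = re.compile(r"Found default for (?P<prop>hw_\w+) of (?P<value>\w+)")

def collect_defaults(lines):
    """Map image property -> registered default across journal lines."""
    return {m.group("prop"): m.group("value")
            for line in lines if (m := DEFAULT_RE.search(line))}

# Against the six entries above this yields:
# {'hw_cdrom_bus': 'sata', 'hw_disk_bus': 'virtio', 'hw_input_bus': 'usb',
#  'hw_pointer_model': 'usbtablet', 'hw_video_model': 'virtio',
#  'hw_vif_model': 'virtio'}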
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.389 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.390 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785319.3170009, 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.390 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] VM Paused (Lifecycle Event)
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.437 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.444 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785319.3202753, 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.444 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] VM Resumed (Lifecycle Event)
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.448 253465 INFO nova.compute.manager [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Took 10.40 seconds to spawn the instance on the hypervisor.
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.449 253465 DEBUG nova.compute.manager [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.505 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.509 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.529 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.538 253465 INFO nova.compute.manager [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Took 12.84 seconds to build instance.
Nov 22 04:21:59 compute-0 nova_compute[253461]: 2025-11-22 04:21:59.552 253465 DEBUG oslo_concurrency.lockutils [None req-5b242820-8ea9-4f2a-94b8-31f3b48fc0ed 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
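That release closes out the build: the same request logged 10.40 s to spawn on the hypervisor, 12.84 s for the whole build, and a lock hold of 12.943 s. A small stdlib sketch for pulling such durations out of a journal excerpt, with the pattern shaped to these exact messages:

import re

TOOK_RE = re.compile(
    r"\[instance: (?P<uuid>[0-9a-f-]{36})\] Took (?P<secs>[\d.]+) seconds "
    r"to (?P<phase>spawn the instance on the hypervisor|build instance)")

def build_timings(lines):
    """Yield (instance_uuid, phase, seconds) for each matching line."""
    for line in lines:
        if m := TOOK_RE.search(line):
            yield m.group("uuid"), m.group("phase"), float(m.group("secs"))

# The two INFO entries above yield 10.40 (spawn) and 12.84 (build) for
# instance 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c.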
Nov 22 04:21:59 compute-0 ceph-mon[75011]: pgmap v2234: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Nov 22 04:22:00 compute-0 nova_compute[253461]: 2025-11-22 04:22:00.008 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:22:00 compute-0 nova_compute[253461]: 2025-11-22 04:22:00.009 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:22:00 compute-0 nova_compute[253461]: 2025-11-22 04:22:00.010 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:22:00 compute-0 nova_compute[253461]: 2025-11-22 04:22:00.010 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:22:00 compute-0 nova_compute[253461]: 2025-11-22 04:22:00.261 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "refresh_cache-1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:22:00 compute-0 nova_compute[253461]: 2025-11-22 04:22:00.261 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquired lock "refresh_cache-1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:22:00 compute-0 nova_compute[253461]: 2025-11-22 04:22:00.262 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 04:22:00 compute-0 nova_compute[253461]: 2025-11-22 04:22:00.263 253465 DEBUG nova.objects.instance [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:22:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:22:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3189514745' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:22:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:22:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3189514745' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:22:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 449 KiB/s rd, 12 KiB/s wr, 23 op/s
Nov 22 04:22:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3189514745' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:22:00 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3189514745' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
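The audit entries above show client.openstack polling "df" and "osd pool get-quota" on the volumes pool, consistent with the Cinder RBD driver's capacity polling. A hedged sketch of the equivalent call through the python-rados binding; the conffile path and client name are assumptions for this deployment:

import json
from rados import Rados

cluster = Rados(conffile='/etc/ceph/ceph.conf', name='client.openstack')
cluster.connect()
try:
    # Same JSON command the audit log shows being dispatched to the mon.
    ret, outbuf, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b'')
    if ret == 0:
        df = json.loads(outbuf)
        print(df["stats"]["total_avail_bytes"])
finally:
    cluster.shutdown()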
Nov 22 04:22:02 compute-0 ceph-mon[75011]: pgmap v2235: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 449 KiB/s rd, 12 KiB/s wr, 23 op/s
Nov 22 04:22:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 12 KiB/s wr, 39 op/s
Nov 22 04:22:03 compute-0 nova_compute[253461]: 2025-11-22 04:22:03.420 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Updating instance_info_cache with network_info: [{"id": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "address": "fa:16:3e:0f:bd:00", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbea320-13", "ovs_interfaceid": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:22:03 compute-0 nova_compute[253461]: 2025-11-22 04:22:03.527 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Releasing lock "refresh_cache-1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:22:03 compute-0 nova_compute[253461]: 2025-11-22 04:22:03.527 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:22:03 compute-0 nova_compute[253461]: 2025-11-22 04:22:03.528 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:22:03 compute-0 nova_compute[253461]: 2025-11-22 04:22:03.528 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:22:03 compute-0 nova_compute[253461]: 2025-11-22 04:22:03.529 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:22:03 compute-0 nova_compute[253461]: 2025-11-22 04:22:03.529 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:22:03 compute-0 nova_compute[253461]: 2025-11-22 04:22:03.529 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
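The run of "Running periodic task ComputeManager._*" entries under one request ID comes from oslo_service's periodic task machinery. A minimal sketch of that pattern, with a made-up manager and task name standing in for nova's; only the oslo plumbing is the real API:

from oslo_config import cfg
from oslo_service import periodic_task

class DemoManager(periodic_task.PeriodicTasks):
    """Hypothetical stand-in for nova's ComputeManager."""
    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=60)
    def _demo_task(self, context):
        # Each invocation produces a "Running periodic task ..." DEBUG
        # line like the ones above; nova's real tasks do their work here.
        pass

DemoManager().run_periodic_tasks(None)  # nova passes a RequestContext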
Nov 22 04:22:04 compute-0 nova_compute[253461]: 2025-11-22 04:22:04.062 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:04 compute-0 ceph-mon[75011]: pgmap v2236: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 964 KiB/s rd, 12 KiB/s wr, 39 op/s
Nov 22 04:22:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:22:04 compute-0 nova_compute[253461]: 2025-11-22 04:22:04.268 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 47 op/s
Nov 22 04:22:05 compute-0 ceph-mon[75011]: pgmap v2237: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 12 KiB/s wr, 47 op/s
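The interleaved ceph-mgr/ceph-mon pgmap summaries carry the cluster's PG health, capacity, and client I/O rates. A stdlib sketch that parses the capacity fields out of one of these lines, with the regex shaped to this exact format:

import re

PGMAP_RE = re.compile(
    r"pgmap v(?P<ver>\d+): (?P<pgs>\d+) pgs: .*?; "
    r"(?P<data>\S+ \wiB) data, (?P<used>\S+ \wiB) used, "
    r"(?P<avail>\S+ \wiB) / (?P<total>\S+ \wiB) avail")

def parse_pgmap(line):
    """Return the pgmap capacity fields as a dict, or None if no match."""
    m = PGMAP_RE.search(line)
    return m.groupdict() if m else None

# For the line above: {'ver': '2237', 'pgs': '305', 'data': '202 MiB',
#  'used': '610 MiB', 'avail': '59 GiB', 'total': '60 GiB'}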
Nov 22 04:22:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:22:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:22:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:22:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:22:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:22:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:22:06 compute-0 sshd-session[307226]: Connection closed by authenticating user operator 27.79.46.85 port 56860 [preauth]
Nov 22 04:22:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:22:07 compute-0 nova_compute[253461]: 2025-11-22 04:22:07.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:22:08 compute-0 ceph-mon[75011]: pgmap v2238: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Nov 22 04:22:08 compute-0 nova_compute[253461]: 2025-11-22 04:22:08.329 253465 DEBUG nova.compute.manager [req-da1f168b-1f94-4005-8aaa-1b8576cc8f60 req-2e11bae7-b607-4013-9022-41e6230de0f3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Received event network-changed-3cbea320-13ce-4711-bb63-09fb1a3cf673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:22:08 compute-0 nova_compute[253461]: 2025-11-22 04:22:08.330 253465 DEBUG nova.compute.manager [req-da1f168b-1f94-4005-8aaa-1b8576cc8f60 req-2e11bae7-b607-4013-9022-41e6230de0f3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Refreshing instance network info cache due to event network-changed-3cbea320-13ce-4711-bb63-09fb1a3cf673. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:22:08 compute-0 nova_compute[253461]: 2025-11-22 04:22:08.330 253465 DEBUG oslo_concurrency.lockutils [req-da1f168b-1f94-4005-8aaa-1b8576cc8f60 req-2e11bae7-b607-4013-9022-41e6230de0f3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:22:08 compute-0 nova_compute[253461]: 2025-11-22 04:22:08.330 253465 DEBUG oslo_concurrency.lockutils [req-da1f168b-1f94-4005-8aaa-1b8576cc8f60 req-2e11bae7-b607-4013-9022-41e6230de0f3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:22:08 compute-0 nova_compute[253461]: 2025-11-22 04:22:08.330 253465 DEBUG nova.network.neutron [req-da1f168b-1f94-4005-8aaa-1b8576cc8f60 req-2e11bae7-b607-4013-9022-41e6230de0f3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Refreshing network info cache for port 3cbea320-13ce-4711-bb63-09fb1a3cf673 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:22:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Nov 22 04:22:09 compute-0 nova_compute[253461]: 2025-11-22 04:22:09.064 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:22:09 compute-0 nova_compute[253461]: 2025-11-22 04:22:09.269 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:10 compute-0 ceph-mon[75011]: pgmap v2239: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Nov 22 04:22:10 compute-0 podman[307228]: 2025-11-22 04:22:10.425395288 +0000 UTC m=+0.097556010 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:22:10 compute-0 nova_compute[253461]: 2025-11-22 04:22:10.577 253465 DEBUG nova.network.neutron [req-da1f168b-1f94-4005-8aaa-1b8576cc8f60 req-2e11bae7-b607-4013-9022-41e6230de0f3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Updated VIF entry in instance network info cache for port 3cbea320-13ce-4711-bb63-09fb1a3cf673. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:22:10 compute-0 nova_compute[253461]: 2025-11-22 04:22:10.579 253465 DEBUG nova.network.neutron [req-da1f168b-1f94-4005-8aaa-1b8576cc8f60 req-2e11bae7-b607-4013-9022-41e6230de0f3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Updating instance_info_cache with network_info: [{"id": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "address": "fa:16:3e:0f:bd:00", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbea320-13", "ovs_interfaceid": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:22:10 compute-0 nova_compute[253461]: 2025-11-22 04:22:10.599 253465 DEBUG oslo_concurrency.lockutils [req-da1f168b-1f94-4005-8aaa-1b8576cc8f60 req-2e11bae7-b607-4013-9022-41e6230de0f3 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:22:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 22 04:22:11 compute-0 ovn_controller[152691]: 2025-11-22T04:22:11Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0f:bd:00 10.100.0.4
Nov 22 04:22:11 compute-0 ovn_controller[152691]: 2025-11-22T04:22:11Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0f:bd:00 10.100.0.4
Nov 22 04:22:12 compute-0 ceph-mon[75011]: pgmap v2240: 305 pgs: 305 active+clean; 202 MiB data, 610 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Nov 22 04:22:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 206 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 589 KiB/s wr, 57 op/s
Nov 22 04:22:14 compute-0 nova_compute[253461]: 2025-11-22 04:22:14.068 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:22:14 compute-0 ceph-mon[75011]: pgmap v2241: 305 pgs: 305 active+clean; 206 MiB data, 615 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 589 KiB/s wr, 57 op/s
Nov 22 04:22:14 compute-0 nova_compute[253461]: 2025-11-22 04:22:14.272 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 211 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 68 op/s
Nov 22 04:22:15 compute-0 ceph-mon[75011]: pgmap v2242: 305 pgs: 305 active+clean; 211 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 68 op/s
Nov 22 04:22:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.8 MiB/s wr, 103 op/s
Nov 22 04:22:18 compute-0 ceph-mon[75011]: pgmap v2243: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.8 MiB/s wr, 103 op/s
Nov 22 04:22:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 580 KiB/s rd, 5.8 MiB/s wr, 77 op/s
Nov 22 04:22:19 compute-0 nova_compute[253461]: 2025-11-22 04:22:19.071 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:22:19 compute-0 nova_compute[253461]: 2025-11-22 04:22:19.275 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:20 compute-0 ceph-mon[75011]: pgmap v2244: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 580 KiB/s rd, 5.8 MiB/s wr, 77 op/s
Nov 22 04:22:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 580 KiB/s rd, 5.8 MiB/s wr, 77 op/s
Nov 22 04:22:21 compute-0 ceph-mon[75011]: pgmap v2245: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 580 KiB/s rd, 5.8 MiB/s wr, 77 op/s
Nov 22 04:22:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 577 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Nov 22 04:22:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:23.040 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:23.041 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:23.042 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
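The acquire/acquired/released trio above is the standard oslo_concurrency named-lock pattern (held 0.001 s here, so no contention). A sketch of the decorator form that emits exactly these DEBUG lines; the function body is hypothetical:

from oslo_concurrency import lockutils

@lockutils.synchronized('_check_child_processes')
def _check_child_processes():
    # Runs with the named in-process lock held; oslo_concurrency logs the
    # "Acquiring", "acquired ... waited", and "released ... held" lines.
    pass

_check_child_processes()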
Nov 22 04:22:24 compute-0 nova_compute[253461]: 2025-11-22 04:22:24.075 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:22:24 compute-0 ceph-mon[75011]: pgmap v2246: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 577 KiB/s rd, 5.8 MiB/s wr, 76 op/s
Nov 22 04:22:24 compute-0 nova_compute[253461]: 2025-11-22 04:22:24.277 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:24 compute-0 podman[307249]: 2025-11-22 04:22:24.382193847 +0000 UTC m=+0.062956051 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:22:24 compute-0 podman[307250]: 2025-11-22 04:22:24.426282623 +0000 UTC m=+0.093134375 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:22:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 5.2 MiB/s wr, 70 op/s
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.253 253465 DEBUG oslo_concurrency.lockutils [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.253 253465 DEBUG oslo_concurrency.lockutils [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.254 253465 DEBUG oslo_concurrency.lockutils [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.254 253465 DEBUG oslo_concurrency.lockutils [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.254 253465 DEBUG oslo_concurrency.lockutils [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.256 253465 INFO nova.compute.manager [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Terminating instance
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.257 253465 DEBUG nova.compute.manager [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:22:25 compute-0 ceph-mon[75011]: pgmap v2247: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 5.2 MiB/s wr, 70 op/s
Nov 22 04:22:25 compute-0 kernel: tap3cbea320-13 (unregistering): left promiscuous mode
Nov 22 04:22:25 compute-0 NetworkManager[48916]: <info>  [1763785345.3118] device (tap3cbea320-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.321 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:25 compute-0 ovn_controller[152691]: 2025-11-22T04:22:25Z|00304|binding|INFO|Releasing lport 3cbea320-13ce-4711-bb63-09fb1a3cf673 from this chassis (sb_readonly=0)
Nov 22 04:22:25 compute-0 ovn_controller[152691]: 2025-11-22T04:22:25Z|00305|binding|INFO|Setting lport 3cbea320-13ce-4711-bb63-09fb1a3cf673 down in Southbound
Nov 22 04:22:25 compute-0 ovn_controller[152691]: 2025-11-22T04:22:25Z|00306|binding|INFO|Removing iface tap3cbea320-13 ovn-installed in OVS
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.324 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.328 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:bd:00 10.100.0.4'], port_security=['fa:16:3e:0f:bd:00 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9d01ebd7e4f4251b66172a246b8a08f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '395e72d5-eb27-430c-9253-638d58d94891', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.229'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5d096b3-2344-4434-a488-92084cb46974, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=3cbea320-13ce-4711-bb63-09fb1a3cf673) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.329 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 3cbea320-13ce-4711-bb63-09fb1a3cf673 in datapath bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 unbound from our chassis
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.330 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.331 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2177dfe5-447c-400c-b8e9-fb6649f75d9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.332 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 namespace which is not needed anymore
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.343 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:25 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Nov 22 04:22:25 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Consumed 16.706s CPU time.
Nov 22 04:22:25 compute-0 systemd-machined[215728]: Machine qemu-29-instance-0000001d terminated.
Nov 22 04:22:25 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[307177]: [NOTICE]   (307184) : haproxy version is 2.8.14-c23fe91
Nov 22 04:22:25 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[307177]: [NOTICE]   (307184) : path to executable is /usr/sbin/haproxy
Nov 22 04:22:25 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[307177]: [WARNING]  (307184) : Exiting Master process...
Nov 22 04:22:25 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[307177]: [WARNING]  (307184) : Exiting Master process...
Nov 22 04:22:25 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[307177]: [ALERT]    (307184) : Current worker (307186) exited with code 143 (Terminated)
Nov 22 04:22:25 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[307177]: [WARNING]  (307184) : All workers exited. Exiting... (0)
Nov 22 04:22:25 compute-0 systemd[1]: libpod-2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe.scope: Deactivated successfully.
Nov 22 04:22:25 compute-0 podman[307317]: 2025-11-22 04:22:25.485739283 +0000 UTC m=+0.051891897 container died 2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.499 253465 INFO nova.virt.libvirt.driver [-] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Instance destroyed successfully.
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.499 253465 DEBUG nova.objects.instance [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lazy-loading 'resources' on Instance uuid 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.512 253465 DEBUG nova.virt.libvirt.vif [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:21:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1766491265',display_name='tempest-TestEncryptedCinderVolumes-server-1766491265',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1766491265',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9UKFMy9ycYw2Hm5en4L27DNwg0er6Lb0HRsD7AiMiSCvtvdx7izIV74D1MmE18lnPG59cKz/vp+1MZkJaUaik+lgJpk8hBjE03Y+JB1nMXTfCi52N8aZdJUG/KDhiYrQ==',key_name='tempest-TestEncryptedCinderVolumes-5346949',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:21:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c9d01ebd7e4f4251b66172a246b8a08f',ramdisk_id='',reservation_id='r-hei4n0yp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-230639986',owner_user_name='tempest-TestEncryptedCinderVolumes-230639986-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:21:59Z,user_data=None,user_id='26147ad59e2d4763b8edc27d80567b09',uuid=1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "address": "fa:16:3e:0f:bd:00", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbea320-13", "ovs_interfaceid": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.514 253465 DEBUG nova.network.os_vif_util [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converting VIF {"id": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "address": "fa:16:3e:0f:bd:00", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbea320-13", "ovs_interfaceid": "3cbea320-13ce-4711-bb63-09fb1a3cf673", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.514 253465 DEBUG nova.network.os_vif_util [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0f:bd:00,bridge_name='br-int',has_traffic_filtering=True,id=3cbea320-13ce-4711-bb63-09fb1a3cf673,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbea320-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.515 253465 DEBUG os_vif [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:bd:00,bridge_name='br-int',has_traffic_filtering=True,id=3cbea320-13ce-4711-bb63-09fb1a3cf673,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbea320-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.517 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.517 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cbea320-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.519 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.522 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.524 253465 INFO os_vif [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0f:bd:00,bridge_name='br-int',has_traffic_filtering=True,id=3cbea320-13ce-4711-bb63-09fb1a3cf673,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbea320-13')
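The unplug path above ran a single ovsdbapp transaction (DelPortCommand against br-int) and confirmed success. A hedged sketch of the same deletion through ovsdbapp's Open_vSwitch API; the database socket path is an assumption for a local switch, and os-vif wires this plumbing up internally:

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
# Equivalent of the logged DelPortCommand(port=tap3cbea320-13,
# bridge=br-int, if_exists=True).
api.del_port('tap3cbea320-13', bridge='br-int', if_exists=True).execute(
    check_error=True)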
Nov 22 04:22:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe-userdata-shm.mount: Deactivated successfully.
Nov 22 04:22:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7015dc348172e310bd5805b2e6894230a713c60e79205fedd2bec7977812c31e-merged.mount: Deactivated successfully.
Nov 22 04:22:25 compute-0 podman[307317]: 2025-11-22 04:22:25.548747562 +0000 UTC m=+0.114900145 container cleanup 2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true)
Nov 22 04:22:25 compute-0 systemd[1]: libpod-conmon-2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe.scope: Deactivated successfully.
Nov 22 04:22:25 compute-0 podman[307372]: 2025-11-22 04:22:25.627961585 +0000 UTC m=+0.058621644 container remove 2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.636 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2c434e5d-cb44-4bf1-8f17-32fd0dff8a23]: (4, ('Sat Nov 22 04:22:25 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 (2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe)\n2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe\nSat Nov 22 04:22:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 (2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe)\n2e623370751c9558a67a1d903d4b98ebe2052f0c7101f3c7a77765cc30cd2abe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.639 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[38e6874f-97e3-4d35-9b9c-dd32a6681855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.642 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd550fd2-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:22:25 compute-0 kernel: tapbd550fd2-d0: left promiscuous mode
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.646 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.659 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.663 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[4f76f25a-586b-4ea8-a638-398f68a94c95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.681 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e53071bc-ee15-4392-b50b-29582903d706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.683 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[afbe98fb-22f9-4c2e-a22e-28e3ca4f4bdc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.706 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9a62f030-9454-4f78-9b80-1c99b8234b6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556854, 'reachable_time': 18336, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307390, 'error': None, 'target': 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.709 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:22:25 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:25.710 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[ffe21ad1-c84d-454f-a88a-b5f2e2760a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:25 compute-0 systemd[1]: run-netns-ovnmeta\x2dbd550fd2\x2dd0e4\x2d4f32\x2d84d1\x2db7eca9fc7e07.mount: Deactivated successfully.
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.778 253465 INFO nova.virt.libvirt.driver [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Deleting instance files /var/lib/nova/instances/1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c_del
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.779 253465 INFO nova.virt.libvirt.driver [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Deletion of /var/lib/nova/instances/1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c_del complete
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.830 253465 INFO nova.compute.manager [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Took 0.57 seconds to destroy the instance on the hypervisor.
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.830 253465 DEBUG oslo.service.loopingcall [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.831 253465 DEBUG nova.compute.manager [-] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:22:25 compute-0 nova_compute[253461]: 2025-11-22 04:22:25.831 253465 DEBUG nova.network.neutron [-] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:22:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 4.0 MiB/s wr, 49 op/s
Nov 22 04:22:28 compute-0 ceph-mon[75011]: pgmap v2248: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 4.0 MiB/s wr, 49 op/s
Nov 22 04:22:28 compute-0 nova_compute[253461]: 2025-11-22 04:22:28.275 253465 DEBUG nova.compute.manager [req-15e71fcb-0774-4b9a-beb7-f5caf03abc6a req-c99fe202-3a24-4e26-ba6d-06c7a1679299 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Received event network-vif-unplugged-3cbea320-13ce-4711-bb63-09fb1a3cf673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:22:28 compute-0 nova_compute[253461]: 2025-11-22 04:22:28.275 253465 DEBUG oslo_concurrency.lockutils [req-15e71fcb-0774-4b9a-beb7-f5caf03abc6a req-c99fe202-3a24-4e26-ba6d-06c7a1679299 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:28 compute-0 nova_compute[253461]: 2025-11-22 04:22:28.275 253465 DEBUG oslo_concurrency.lockutils [req-15e71fcb-0774-4b9a-beb7-f5caf03abc6a req-c99fe202-3a24-4e26-ba6d-06c7a1679299 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:28 compute-0 nova_compute[253461]: 2025-11-22 04:22:28.275 253465 DEBUG oslo_concurrency.lockutils [req-15e71fcb-0774-4b9a-beb7-f5caf03abc6a req-c99fe202-3a24-4e26-ba6d-06c7a1679299 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:22:28 compute-0 nova_compute[253461]: 2025-11-22 04:22:28.276 253465 DEBUG nova.compute.manager [req-15e71fcb-0774-4b9a-beb7-f5caf03abc6a req-c99fe202-3a24-4e26-ba6d-06c7a1679299 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] No waiting events found dispatching network-vif-unplugged-3cbea320-13ce-4711-bb63-09fb1a3cf673 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:22:28 compute-0 nova_compute[253461]: 2025-11-22 04:22:28.276 253465 DEBUG nova.compute.manager [req-15e71fcb-0774-4b9a-beb7-f5caf03abc6a req-c99fe202-3a24-4e26-ba6d-06c7a1679299 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Received event network-vif-unplugged-3cbea320-13ce-4711-bb63-09fb1a3cf673 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:22:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 16 KiB/s wr, 6 op/s
Nov 22 04:22:28 compute-0 nova_compute[253461]: 2025-11-22 04:22:28.978 253465 DEBUG nova.network.neutron [-] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:22:29 compute-0 nova_compute[253461]: 2025-11-22 04:22:29.072 253465 INFO nova.compute.manager [-] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Took 3.24 seconds to deallocate network for instance.
Nov 22 04:22:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:22:29 compute-0 nova_compute[253461]: 2025-11-22 04:22:29.279 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:29 compute-0 nova_compute[253461]: 2025-11-22 04:22:29.332 253465 INFO nova.compute.manager [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Took 0.26 seconds to detach 1 volumes for instance.
Nov 22 04:22:29 compute-0 sudo[307392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:22:29 compute-0 sudo[307392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:29 compute-0 sudo[307392]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:29 compute-0 sudo[307417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:22:29 compute-0 sudo[307417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:29 compute-0 sudo[307417]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:29 compute-0 nova_compute[253461]: 2025-11-22 04:22:29.456 253465 DEBUG oslo_concurrency.lockutils [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:29 compute-0 nova_compute[253461]: 2025-11-22 04:22:29.457 253465 DEBUG oslo_concurrency.lockutils [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:29 compute-0 nova_compute[253461]: 2025-11-22 04:22:29.503 253465 DEBUG oslo_concurrency.processutils [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:22:29 compute-0 sudo[307442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:22:29 compute-0 sudo[307442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:29 compute-0 sudo[307442]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:29 compute-0 sudo[307468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:22:29 compute-0 sudo[307468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:22:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1133834719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:22:29 compute-0 nova_compute[253461]: 2025-11-22 04:22:29.928 253465 DEBUG oslo_concurrency.processutils [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:22:29 compute-0 nova_compute[253461]: 2025-11-22 04:22:29.937 253465 DEBUG nova.compute.provider_tree [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:22:29 compute-0 nova_compute[253461]: 2025-11-22 04:22:29.991 253465 DEBUG nova.scheduler.client.report [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:22:30 compute-0 nova_compute[253461]: 2025-11-22 04:22:30.050 253465 DEBUG oslo_concurrency.lockutils [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:22:30 compute-0 ceph-mon[75011]: pgmap v2249: 305 pgs: 305 active+clean; 271 MiB data, 658 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 16 KiB/s wr, 6 op/s
Nov 22 04:22:30 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1133834719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:22:30 compute-0 nova_compute[253461]: 2025-11-22 04:22:30.118 253465 INFO nova.scheduler.client.report [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Deleted allocations for instance 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c
Nov 22 04:22:30 compute-0 sudo[307468]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:22:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:22:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:22:30 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:22:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:22:30 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:22:30 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev a2b97b8d-173e-4d80-be91-98dc4dadedbb does not exist
Nov 22 04:22:30 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev c9f11ab3-f4b4-48a3-8f6f-53d45a914bc0 does not exist
Nov 22 04:22:30 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev c536ff9a-9b6e-4342-b76d-a847fa0e143d does not exist
Nov 22 04:22:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:22:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:22:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:22:30 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:22:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:22:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:22:30 compute-0 sudo[307545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:22:30 compute-0 sudo[307545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:30 compute-0 nova_compute[253461]: 2025-11-22 04:22:30.379 253465 DEBUG nova.compute.manager [req-1e00612e-8121-48ad-b12b-61c8a1d2f70d req-32bb5040-7c0f-431c-bbbd-e7cedfb931bb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Received event network-vif-plugged-3cbea320-13ce-4711-bb63-09fb1a3cf673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:22:30 compute-0 nova_compute[253461]: 2025-11-22 04:22:30.381 253465 DEBUG oslo_concurrency.lockutils [req-1e00612e-8121-48ad-b12b-61c8a1d2f70d req-32bb5040-7c0f-431c-bbbd-e7cedfb931bb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:30 compute-0 nova_compute[253461]: 2025-11-22 04:22:30.381 253465 DEBUG oslo_concurrency.lockutils [req-1e00612e-8121-48ad-b12b-61c8a1d2f70d req-32bb5040-7c0f-431c-bbbd-e7cedfb931bb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:30 compute-0 nova_compute[253461]: 2025-11-22 04:22:30.381 253465 DEBUG oslo_concurrency.lockutils [req-1e00612e-8121-48ad-b12b-61c8a1d2f70d req-32bb5040-7c0f-431c-bbbd-e7cedfb931bb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:22:30 compute-0 nova_compute[253461]: 2025-11-22 04:22:30.381 253465 DEBUG nova.compute.manager [req-1e00612e-8121-48ad-b12b-61c8a1d2f70d req-32bb5040-7c0f-431c-bbbd-e7cedfb931bb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] No waiting events found dispatching network-vif-plugged-3cbea320-13ce-4711-bb63-09fb1a3cf673 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:22:30 compute-0 nova_compute[253461]: 2025-11-22 04:22:30.381 253465 WARNING nova.compute.manager [req-1e00612e-8121-48ad-b12b-61c8a1d2f70d req-32bb5040-7c0f-431c-bbbd-e7cedfb931bb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Received unexpected event network-vif-plugged-3cbea320-13ce-4711-bb63-09fb1a3cf673 for instance with vm_state deleted and task_state None.
Nov 22 04:22:30 compute-0 nova_compute[253461]: 2025-11-22 04:22:30.381 253465 DEBUG nova.compute.manager [req-1e00612e-8121-48ad-b12b-61c8a1d2f70d req-32bb5040-7c0f-431c-bbbd-e7cedfb931bb f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Received event network-vif-deleted-3cbea320-13ce-4711-bb63-09fb1a3cf673 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:22:30 compute-0 sudo[307545]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:30 compute-0 nova_compute[253461]: 2025-11-22 04:22:30.385 253465 DEBUG oslo_concurrency.lockutils [None req-9e462daf-4acf-4427-b44c-6c940e466e5c 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:22:30 compute-0 sudo[307570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:22:30 compute-0 sudo[307570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:30 compute-0 sudo[307570]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:30 compute-0 nova_compute[253461]: 2025-11-22 04:22:30.519 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:30 compute-0 sudo[307595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:22:30 compute-0 sudo[307595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:30 compute-0 sudo[307595]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:30 compute-0 sudo[307620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:22:30 compute-0 sudo[307620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 15 op/s
Nov 22 04:22:31 compute-0 podman[307685]: 2025-11-22 04:22:31.096384642 +0000 UTC m=+0.106476451 container create cbe1bdc7473e3dfc8fcb897d904c4e32d2fdc4e7f494f52e996b6ae8edebeb49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Nov 22 04:22:31 compute-0 podman[307685]: 2025-11-22 04:22:31.016340482 +0000 UTC m=+0.026432341 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:22:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:22:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:22:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:22:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:22:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:22:31 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:22:31 compute-0 systemd[1]: Started libpod-conmon-cbe1bdc7473e3dfc8fcb897d904c4e32d2fdc4e7f494f52e996b6ae8edebeb49.scope.
Nov 22 04:22:31 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:22:31 compute-0 podman[307685]: 2025-11-22 04:22:31.573889575 +0000 UTC m=+0.583981464 container init cbe1bdc7473e3dfc8fcb897d904c4e32d2fdc4e7f494f52e996b6ae8edebeb49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Nov 22 04:22:31 compute-0 podman[307685]: 2025-11-22 04:22:31.586657885 +0000 UTC m=+0.596749734 container start cbe1bdc7473e3dfc8fcb897d904c4e32d2fdc4e7f494f52e996b6ae8edebeb49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_engelbart, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:22:31 compute-0 kind_engelbart[307701]: 167 167
Nov 22 04:22:31 compute-0 systemd[1]: libpod-cbe1bdc7473e3dfc8fcb897d904c4e32d2fdc4e7f494f52e996b6ae8edebeb49.scope: Deactivated successfully.
Nov 22 04:22:31 compute-0 podman[307685]: 2025-11-22 04:22:31.603910213 +0000 UTC m=+0.614002042 container attach cbe1bdc7473e3dfc8fcb897d904c4e32d2fdc4e7f494f52e996b6ae8edebeb49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_engelbart, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:22:31 compute-0 podman[307685]: 2025-11-22 04:22:31.604934112 +0000 UTC m=+0.615025921 container died cbe1bdc7473e3dfc8fcb897d904c4e32d2fdc4e7f494f52e996b6ae8edebeb49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 04:22:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-13d096b4a66bca3f8f619fae43c06a8f29f24fe9721f6cb7d70a500071e38e20-merged.mount: Deactivated successfully.
Nov 22 04:22:32 compute-0 podman[307685]: 2025-11-22 04:22:32.144732564 +0000 UTC m=+1.154824383 container remove cbe1bdc7473e3dfc8fcb897d904c4e32d2fdc4e7f494f52e996b6ae8edebeb49 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_engelbart, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:22:32 compute-0 ceph-mon[75011]: pgmap v2250: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 15 op/s
Nov 22 04:22:32 compute-0 systemd[1]: libpod-conmon-cbe1bdc7473e3dfc8fcb897d904c4e32d2fdc4e7f494f52e996b6ae8edebeb49.scope: Deactivated successfully.
Nov 22 04:22:32 compute-0 podman[307726]: 2025-11-22 04:22:32.426875501 +0000 UTC m=+0.108059799 container create 4f12e7aea7fd6639005c12463113fb3a7b3568ce79ad2a90cd1af2de4d7c24c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 04:22:32 compute-0 podman[307726]: 2025-11-22 04:22:32.37227658 +0000 UTC m=+0.053460958 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:22:32 compute-0 systemd[1]: Started libpod-conmon-4f12e7aea7fd6639005c12463113fb3a7b3568ce79ad2a90cd1af2de4d7c24c3.scope.
Nov 22 04:22:32 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae1e38f67d3176bb370fc79ed9fffe0628f1ca060d2dfa9c9333e2bc01273e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae1e38f67d3176bb370fc79ed9fffe0628f1ca060d2dfa9c9333e2bc01273e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae1e38f67d3176bb370fc79ed9fffe0628f1ca060d2dfa9c9333e2bc01273e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae1e38f67d3176bb370fc79ed9fffe0628f1ca060d2dfa9c9333e2bc01273e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aae1e38f67d3176bb370fc79ed9fffe0628f1ca060d2dfa9c9333e2bc01273e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:32 compute-0 podman[307726]: 2025-11-22 04:22:32.527257023 +0000 UTC m=+0.208441381 container init 4f12e7aea7fd6639005c12463113fb3a7b3568ce79ad2a90cd1af2de4d7c24c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:22:32 compute-0 podman[307726]: 2025-11-22 04:22:32.540548734 +0000 UTC m=+0.221733052 container start 4f12e7aea7fd6639005c12463113fb3a7b3568ce79ad2a90cd1af2de4d7c24c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:22:32 compute-0 podman[307726]: 2025-11-22 04:22:32.544803502 +0000 UTC m=+0.225987830 container attach 4f12e7aea7fd6639005c12463113fb3a7b3568ce79ad2a90cd1af2de4d7c24c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Nov 22 04:22:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 15 op/s
Nov 22 04:22:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e523 do_prune osdmap full prune enabled
Nov 22 04:22:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 e524: 3 total, 3 up, 3 in
Nov 22 04:22:33 compute-0 ceph-mon[75011]: log_channel(cluster) log [DBG] : osdmap e524: 3 total, 3 up, 3 in
Nov 22 04:22:33 compute-0 ceph-mon[75011]: pgmap v2251: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 17 KiB/s wr, 15 op/s
Nov 22 04:22:33 compute-0 happy_austin[307742]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:22:33 compute-0 happy_austin[307742]: --> relative data size: 1.0
Nov 22 04:22:33 compute-0 happy_austin[307742]: --> All data devices are unavailable
Nov 22 04:22:33 compute-0 systemd[1]: libpod-4f12e7aea7fd6639005c12463113fb3a7b3568ce79ad2a90cd1af2de4d7c24c3.scope: Deactivated successfully.
Nov 22 04:22:33 compute-0 systemd[1]: libpod-4f12e7aea7fd6639005c12463113fb3a7b3568ce79ad2a90cd1af2de4d7c24c3.scope: Consumed 1.095s CPU time.
Nov 22 04:22:33 compute-0 podman[307726]: 2025-11-22 04:22:33.689952082 +0000 UTC m=+1.371136380 container died 4f12e7aea7fd6639005c12463113fb3a7b3568ce79ad2a90cd1af2de4d7c24c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:22:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-aae1e38f67d3176bb370fc79ed9fffe0628f1ca060d2dfa9c9333e2bc01273e1-merged.mount: Deactivated successfully.
Nov 22 04:22:33 compute-0 podman[307726]: 2025-11-22 04:22:33.952809921 +0000 UTC m=+1.633994209 container remove 4f12e7aea7fd6639005c12463113fb3a7b3568ce79ad2a90cd1af2de4d7c24c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_austin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Nov 22 04:22:33 compute-0 systemd[1]: libpod-conmon-4f12e7aea7fd6639005c12463113fb3a7b3568ce79ad2a90cd1af2de4d7c24c3.scope: Deactivated successfully.
Nov 22 04:22:33 compute-0 sudo[307620]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:34 compute-0 sudo[307785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:22:34 compute-0 sudo[307785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:34 compute-0 sudo[307785]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:34 compute-0 sudo[307810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:22:34 compute-0 sudo[307810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:34 compute-0 sudo[307810]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:22:34 compute-0 sudo[307835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:22:34 compute-0 sudo[307835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:34 compute-0 sudo[307835]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:34 compute-0 nova_compute[253461]: 2025-11-22 04:22:34.282 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:34 compute-0 sudo[307860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:22:34 compute-0 sudo[307860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:34 compute-0 ceph-mon[75011]: osdmap e524: 3 total, 3 up, 3 in
Nov 22 04:22:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:22:34 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3329836206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:22:34 compute-0 podman[307927]: 2025-11-22 04:22:34.737320638 +0000 UTC m=+0.042707555 container create a170d236bf9e389df14a5019f639bd2b6bc41fde56ed78560c64b694139682d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_boyd, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:22:34 compute-0 systemd[1]: Started libpod-conmon-a170d236bf9e389df14a5019f639bd2b6bc41fde56ed78560c64b694139682d2.scope.
Nov 22 04:22:34 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:22:34 compute-0 podman[307927]: 2025-11-22 04:22:34.718522112 +0000 UTC m=+0.023909059 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:22:34 compute-0 podman[307927]: 2025-11-22 04:22:34.829471097 +0000 UTC m=+0.134858064 container init a170d236bf9e389df14a5019f639bd2b6bc41fde56ed78560c64b694139682d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:22:34 compute-0 podman[307927]: 2025-11-22 04:22:34.841396738 +0000 UTC m=+0.146783685 container start a170d236bf9e389df14a5019f639bd2b6bc41fde56ed78560c64b694139682d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_boyd, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Nov 22 04:22:34 compute-0 sweet_boyd[307943]: 167 167
Nov 22 04:22:34 compute-0 systemd[1]: libpod-a170d236bf9e389df14a5019f639bd2b6bc41fde56ed78560c64b694139682d2.scope: Deactivated successfully.
Nov 22 04:22:34 compute-0 podman[307927]: 2025-11-22 04:22:34.845879683 +0000 UTC m=+0.151266620 container attach a170d236bf9e389df14a5019f639bd2b6bc41fde56ed78560c64b694139682d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:22:34 compute-0 podman[307927]: 2025-11-22 04:22:34.847207683 +0000 UTC m=+0.152594630 container died a170d236bf9e389df14a5019f639bd2b6bc41fde56ed78560c64b694139682d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_boyd, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Nov 22 04:22:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c18e296499eec2ab61a4fda24bc4c9e0913092f11fe4d5793d3ff895f0d341e2-merged.mount: Deactivated successfully.
Nov 22 04:22:34 compute-0 podman[307927]: 2025-11-22 04:22:34.909647885 +0000 UTC m=+0.215034863 container remove a170d236bf9e389df14a5019f639bd2b6bc41fde56ed78560c64b694139682d2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_boyd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Nov 22 04:22:34 compute-0 systemd[1]: libpod-conmon-a170d236bf9e389df14a5019f639bd2b6bc41fde56ed78560c64b694139682d2.scope: Deactivated successfully.
Nov 22 04:22:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 21 KiB/s wr, 19 op/s
Nov 22 04:22:35 compute-0 podman[307966]: 2025-11-22 04:22:35.20397614 +0000 UTC m=+0.121631947 container create bfd3206a5b793e86c934239943ee38d7a0300a780dfaf10b05ccfd893aac58f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:22:35 compute-0 podman[307966]: 2025-11-22 04:22:35.123295567 +0000 UTC m=+0.040951445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:22:35 compute-0 systemd[1]: Started libpod-conmon-bfd3206a5b793e86c934239943ee38d7a0300a780dfaf10b05ccfd893aac58f0.scope.
Nov 22 04:22:35 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d62fdac466610c242f3a3e5f6f7da840af737e4f0e06501e3a9a3ab47c0de2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d62fdac466610c242f3a3e5f6f7da840af737e4f0e06501e3a9a3ab47c0de2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d62fdac466610c242f3a3e5f6f7da840af737e4f0e06501e3a9a3ab47c0de2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d62fdac466610c242f3a3e5f6f7da840af737e4f0e06501e3a9a3ab47c0de2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:35 compute-0 podman[307966]: 2025-11-22 04:22:35.350083405 +0000 UTC m=+0.267739192 container init bfd3206a5b793e86c934239943ee38d7a0300a780dfaf10b05ccfd893aac58f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leavitt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Nov 22 04:22:35 compute-0 podman[307966]: 2025-11-22 04:22:35.362078296 +0000 UTC m=+0.279734103 container start bfd3206a5b793e86c934239943ee38d7a0300a780dfaf10b05ccfd893aac58f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leavitt, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Nov 22 04:22:35 compute-0 podman[307966]: 2025-11-22 04:22:35.372830796 +0000 UTC m=+0.290486604 container attach bfd3206a5b793e86c934239943ee38d7a0300a780dfaf10b05ccfd893aac58f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:22:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3329836206' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:22:35 compute-0 ceph-mon[75011]: pgmap v2253: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 21 KiB/s wr, 19 op/s
Nov 22 04:22:35 compute-0 nova_compute[253461]: 2025-11-22 04:22:35.522 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:36 compute-0 charming_leavitt[307983]: {
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:     "0": [
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:         {
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "devices": [
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "/dev/loop3"
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             ],
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_name": "ceph_lv0",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_size": "21470642176",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "name": "ceph_lv0",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "tags": {
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.cluster_name": "ceph",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.crush_device_class": "",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.encrypted": "0",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.osd_id": "0",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.type": "block",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.vdo": "0"
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             },
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "type": "block",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "vg_name": "ceph_vg0"
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:         }
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:     ],
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:     "1": [
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:         {
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "devices": [
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "/dev/loop4"
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             ],
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_name": "ceph_lv1",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_size": "21470642176",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "name": "ceph_lv1",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "tags": {
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.cluster_name": "ceph",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.crush_device_class": "",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.encrypted": "0",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.osd_id": "1",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.type": "block",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.vdo": "0"
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             },
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "type": "block",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "vg_name": "ceph_vg1"
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:         }
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:     ],
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:     "2": [
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:         {
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "devices": [
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "/dev/loop5"
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             ],
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_name": "ceph_lv2",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_size": "21470642176",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "name": "ceph_lv2",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "tags": {
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.cluster_name": "ceph",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.crush_device_class": "",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.encrypted": "0",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.osd_id": "2",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.type": "block",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:                 "ceph.vdo": "0"
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             },
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "type": "block",
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:             "vg_name": "ceph_vg2"
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:         }
Nov 22 04:22:36 compute-0 charming_leavitt[307983]:     ]
Nov 22 04:22:36 compute-0 charming_leavitt[307983]: }
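The JSON printed by charming_leavitt above matches the shape of "ceph-volume lvm list --format json" output: a map of OSD id to the logical volume(s) backing it, with the flat lv_tags string also exposed as a parsed "tags" object. A minimal sketch of extracting osd_id -> device mappings from such a dump, assuming it has been captured into a string; the helper name is illustrative, not a Ceph API:

    # Parse a ceph-volume lvm-list style dump (as printed above) into
    # osd_id -> block device info. Illustrative helper, not a Ceph API.
    import json

    def osd_block_devices(lvm_list_json: str) -> dict:
        result = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                # Only ceph.type=block entries carry the data device; a
                # separate db/wal LV would appear with type "db" or "wal".
                if lv.get("type") == "block":
                    result[osd_id] = {
                        "lv_path": lv["lv_path"],
                        "devices": lv["devices"],
                        "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                    }
        return result

    # For the dump above: osd_block_devices(dump)["0"]["devices"]
    # -> ["/dev/loop3"]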
Nov 22 04:22:36 compute-0 systemd[1]: libpod-bfd3206a5b793e86c934239943ee38d7a0300a780dfaf10b05ccfd893aac58f0.scope: Deactivated successfully.
Nov 22 04:22:36 compute-0 podman[307966]: 2025-11-22 04:22:36.149403771 +0000 UTC m=+1.067059548 container died bfd3206a5b793e86c934239943ee38d7a0300a780dfaf10b05ccfd893aac58f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leavitt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:22:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8d62fdac466610c242f3a3e5f6f7da840af737e4f0e06501e3a9a3ab47c0de2-merged.mount: Deactivated successfully.
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:22:36
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'default.rgw.log', 'vms', '.mgr', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.control']
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
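The three balancer lines above are one idle optimization pass: mode upmap, max misplaced 5%, and "prepared 0/10 changes" meaning no upmap changes were generated this round (the /10 appears to be the per-pass optimization budget, not the pool count). The same information is available on demand from the mgr rather than by scraping its log; a sketch, assuming a reachable cluster and a usable keyring:

    # Query the mgr balancer module directly instead of scraping the log.
    # "ceph balancer status" is a standard mgr command; -f json asks for
    # machine-readable output.
    import json
    import subprocess

    def balancer_status() -> dict:
        out = subprocess.run(
            ["ceph", "balancer", "status", "-f", "json"],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    # e.g. balancer_status()["mode"] -> "upmap"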
Nov 22 04:22:36 compute-0 podman[307966]: 2025-11-22 04:22:36.443522012 +0000 UTC m=+1.361177819 container remove bfd3206a5b793e86c934239943ee38d7a0300a780dfaf10b05ccfd893aac58f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_leavitt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:22:36 compute-0 systemd[1]: libpod-conmon-bfd3206a5b793e86c934239943ee38d7a0300a780dfaf10b05ccfd893aac58f0.scope: Deactivated successfully.
Nov 22 04:22:36 compute-0 sudo[307860]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:36 compute-0 sudo[308006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:22:36 compute-0 sudo[308006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:36 compute-0 sudo[308006]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:36 compute-0 sudo[308031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:22:36 compute-0 sudo[308031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:36 compute-0 sudo[308031]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:22:36 compute-0 sudo[308056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:22:36 compute-0 sudo[308056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:36 compute-0 sudo[308056]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:36 compute-0 sudo[308081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:22:36 compute-0 sudo[308081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
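The sudo COMMAND above shows cephadm's call pattern in full: a per-cluster copy of the cephadm binary under /var/lib/ceph/<fsid>/, pinned to the ceph image by digest, a 895 s timeout, and everything after "--" passed through to ceph-volume inside a throwaway podman container (the create/start/attach/died/remove sequences around it). A sketch reproducing the call with subprocess, with the fsid, image and argument order copied verbatim from the log; it assumes root on a cephadm-deployed host, and the cephadm filename hash will differ per deployment:

    # Re-issue the same cephadm-mediated ceph-volume query seen above.
    # FSID and IMAGE are copied from the log; cephadm must point at the
    # local /var/lib/ceph/<fsid>/cephadm.<hash> copy on the host.
    import json
    import subprocess

    FSID = "7adcc38b-6484-5de6-b879-33a0309153df"
    IMAGE = ("quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f"
             "1336506074267a4b47c1bd914a00fec0")

    def cephadm_raw_list(cephadm: str) -> dict:
        out = subprocess.run(
            ["sudo", "python3", cephadm,
             "--image", IMAGE, "--timeout", "895",
             "ceph-volume", "--fsid", FSID, "--",
             "raw", "list", "--format", "json"],
            check=True, capture_output=True, text=True).stdout
        # the container's JSON lands on stdout, as in the
        # youthful_wozniak output further down
        return json.loads(out)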
Nov 22 04:22:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 KiB/s wr, 38 op/s
Nov 22 04:22:37 compute-0 podman[308146]: 2025-11-22 04:22:37.231763765 +0000 UTC m=+0.042688220 container create a597cc75361d9c37e660fd0df337bc1bf9c77d45835ff2d30481048b6a546097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bose, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:22:37 compute-0 systemd[1]: Started libpod-conmon-a597cc75361d9c37e660fd0df337bc1bf9c77d45835ff2d30481048b6a546097.scope.
Nov 22 04:22:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:22:37 compute-0 podman[308146]: 2025-11-22 04:22:37.213698043 +0000 UTC m=+0.024622518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:22:37 compute-0 podman[308146]: 2025-11-22 04:22:37.315681278 +0000 UTC m=+0.126605763 container init a597cc75361d9c37e660fd0df337bc1bf9c77d45835ff2d30481048b6a546097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:22:37 compute-0 podman[308146]: 2025-11-22 04:22:37.323556823 +0000 UTC m=+0.134481278 container start a597cc75361d9c37e660fd0df337bc1bf9c77d45835ff2d30481048b6a546097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bose, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:22:37 compute-0 suspicious_bose[308162]: 167 167
Nov 22 04:22:37 compute-0 systemd[1]: libpod-a597cc75361d9c37e660fd0df337bc1bf9c77d45835ff2d30481048b6a546097.scope: Deactivated successfully.
Nov 22 04:22:37 compute-0 podman[308146]: 2025-11-22 04:22:37.329028292 +0000 UTC m=+0.139952757 container attach a597cc75361d9c37e660fd0df337bc1bf9c77d45835ff2d30481048b6a546097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bose, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:22:37 compute-0 podman[308146]: 2025-11-22 04:22:37.329758161 +0000 UTC m=+0.140682636 container died a597cc75361d9c37e660fd0df337bc1bf9c77d45835ff2d30481048b6a546097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bose, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:22:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed2ba12d9b3c0b80469947111d54dafa48b4f4c83f411698f94c013bd689c74e-merged.mount: Deactivated successfully.
Nov 22 04:22:37 compute-0 podman[308146]: 2025-11-22 04:22:37.385014314 +0000 UTC m=+0.195938809 container remove a597cc75361d9c37e660fd0df337bc1bf9c77d45835ff2d30481048b6a546097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_bose, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Nov 22 04:22:37 compute-0 systemd[1]: libpod-conmon-a597cc75361d9c37e660fd0df337bc1bf9c77d45835ff2d30481048b6a546097.scope: Deactivated successfully.
Nov 22 04:22:37 compute-0 podman[308184]: 2025-11-22 04:22:37.640607852 +0000 UTC m=+0.064420617 container create 19801d339e6aede85f58fa2ed0de7c4aa9ba9131a863050858745bad5e524260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wozniak, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:22:37 compute-0 systemd[1]: Started libpod-conmon-19801d339e6aede85f58fa2ed0de7c4aa9ba9131a863050858745bad5e524260.scope.
Nov 22 04:22:37 compute-0 podman[308184]: 2025-11-22 04:22:37.6133205 +0000 UTC m=+0.037133335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:22:37 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:22:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1721fc706ac828515bd176db23e79be80298de1fdc09017627663f8004dd507/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1721fc706ac828515bd176db23e79be80298de1fdc09017627663f8004dd507/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1721fc706ac828515bd176db23e79be80298de1fdc09017627663f8004dd507/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1721fc706ac828515bd176db23e79be80298de1fdc09017627663f8004dd507/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:37 compute-0 podman[308184]: 2025-11-22 04:22:37.745623042 +0000 UTC m=+0.169435887 container init 19801d339e6aede85f58fa2ed0de7c4aa9ba9131a863050858745bad5e524260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wozniak, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Nov 22 04:22:37 compute-0 podman[308184]: 2025-11-22 04:22:37.755345832 +0000 UTC m=+0.179158627 container start 19801d339e6aede85f58fa2ed0de7c4aa9ba9131a863050858745bad5e524260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 04:22:37 compute-0 podman[308184]: 2025-11-22 04:22:37.75915282 +0000 UTC m=+0.182965655 container attach 19801d339e6aede85f58fa2ed0de7c4aa9ba9131a863050858745bad5e524260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:22:38 compute-0 ceph-mon[75011]: pgmap v2254: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 KiB/s wr, 38 op/s
Nov 22 04:22:38 compute-0 nova_compute[253461]: 2025-11-22 04:22:38.492 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "29f4ebae-3730-44d8-99f0-08b20268863c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:38 compute-0 nova_compute[253461]: 2025-11-22 04:22:38.494 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:38 compute-0 nova_compute[253461]: 2025-11-22 04:22:38.541 253465 DEBUG nova.compute.manager [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 22 04:22:38 compute-0 nova_compute[253461]: 2025-11-22 04:22:38.618 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:38 compute-0 nova_compute[253461]: 2025-11-22 04:22:38.619 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:38 compute-0 nova_compute[253461]: 2025-11-22 04:22:38.629 253465 DEBUG nova.virt.hardware [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 22 04:22:38 compute-0 nova_compute[253461]: 2025-11-22 04:22:38.630 253465 INFO nova.compute.claims [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Claim successful on node compute-0.ctlplane.example.com
Nov 22 04:22:38 compute-0 nova_compute[253461]: 2025-11-22 04:22:38.729 253465 DEBUG oslo_concurrency.processutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]: {
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "osd_id": 1,
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "type": "bluestore"
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:     },
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "osd_id": 0,
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "type": "bluestore"
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:     },
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "osd_id": 2,
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:         "type": "bluestore"
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]:     }
Nov 22 04:22:38 compute-0 youthful_wozniak[308200]: }
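This second dump, produced by the explicit "raw list --format json" call above, is keyed by OSD uuid rather than OSD id and reports the dm-mapped device (/dev/mapper/ceph_vgN-ceph_lvN) for each bluestore OSD. A small consistency check against the expected cluster fsid, as a sketch; the helper name is mine:

    # Verify every OSD in a ceph-volume raw-list dump belongs to the
    # expected cluster. Illustrative helper, not a Ceph API.
    import json

    def foreign_osds(raw_list_json: str, expected_fsid: str) -> list:
        """Return (osd_id, device) for OSDs whose ceph_fsid mismatches."""
        return [(o["osd_id"], o["device"])
                for o in json.loads(raw_list_json).values()
                if o["ceph_fsid"] != expected_fsid]

    # For the dump above, foreign_osds(dump,
    #     "7adcc38b-6484-5de6-b879-33a0309153df") == []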
Nov 22 04:22:38 compute-0 systemd[1]: libpod-19801d339e6aede85f58fa2ed0de7c4aa9ba9131a863050858745bad5e524260.scope: Deactivated successfully.
Nov 22 04:22:38 compute-0 podman[308184]: 2025-11-22 04:22:38.804190586 +0000 UTC m=+1.228003341 container died 19801d339e6aede85f58fa2ed0de7c4aa9ba9131a863050858745bad5e524260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:22:38 compute-0 systemd[1]: libpod-19801d339e6aede85f58fa2ed0de7c4aa9ba9131a863050858745bad5e524260.scope: Consumed 1.058s CPU time.
Nov 22 04:22:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1721fc706ac828515bd176db23e79be80298de1fdc09017627663f8004dd507-merged.mount: Deactivated successfully.
Nov 22 04:22:38 compute-0 podman[308184]: 2025-11-22 04:22:38.875541537 +0000 UTC m=+1.299354292 container remove 19801d339e6aede85f58fa2ed0de7c4aa9ba9131a863050858745bad5e524260 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Nov 22 04:22:38 compute-0 systemd[1]: libpod-conmon-19801d339e6aede85f58fa2ed0de7c4aa9ba9131a863050858745bad5e524260.scope: Deactivated successfully.
Nov 22 04:22:38 compute-0 sudo[308081]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:22:38 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:22:38 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:22:38 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:22:38 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev bf0ccbf5-01a6-4c07-89b6-1518be4fe84b does not exist
Nov 22 04:22:38 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 90bd8b70-7a79-4bd5-98c1-2878305817ff does not exist
Nov 22 04:22:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 KiB/s wr, 38 op/s
Nov 22 04:22:39 compute-0 sudo[308268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:22:39 compute-0 sudo[308268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:39 compute-0 sudo[308268]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:39 compute-0 sudo[308293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:22:39 compute-0 sudo[308293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:22:39 compute-0 sudo[308293]: pam_unix(sudo:session): session closed for user root
Nov 22 04:22:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:22:39 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4222371796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.171 253465 DEBUG oslo_concurrency.processutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
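Nova's RBD image backend sizes the DISK_GB inventory it reports to placement from "ceph df", which is why the exact command and its 0.442 s runtime appear here. The same query can be replayed outside nova; a sketch using the command line from the log, assuming the client.openstack keyring is readable:

    # Replay nova's capacity probe and pull one pool's stats out of the
    # JSON. Command line is verbatim from the log above.
    import json
    import subprocess

    def pool_stats(pool: str) -> dict:
        out = subprocess.run(
            ["ceph", "df", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            check=True, capture_output=True, text=True).stdout
        for p in json.loads(out)["pools"]:
            if p["name"] == pool:
                return p["stats"]   # bytes_used, max_avail, objects, ...
        raise KeyError(pool)

    # e.g. pool_stats("vms")["max_avail"]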
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.182 253465 DEBUG nova.compute.provider_tree [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:22:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.323 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.348 253465 DEBUG nova.scheduler.client.report [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.378 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.380 253465 DEBUG nova.compute.manager [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.442 253465 DEBUG nova.compute.manager [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.443 253465 DEBUG nova.network.neutron [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.467 253465 INFO nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.487 253465 DEBUG nova.compute.manager [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.541 253465 INFO nova.virt.block_device [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Booting with volume 8ca6c420-59dc-493c-b7e0-2ab273f3c454 at /dev/vda
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.654 253465 DEBUG os_brick.utils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.655 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.673 261287 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.674 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[518b1ccf-f94e-47a6-891c-eb44207647e5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.675 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.688 261287 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.688 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[e1f37737-9b98-43e1-953e-43a3fa7ae48e]: (4, ('InitiatorName=iqn.1994-05.com.redhat:ac7b1cf28df9', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.690 261287 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.704 261287 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.705 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[ce392b8e-92fa-4289-9ee6-cbbcff8b3db9]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.706 261287 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8395b0-2727-4538-ae5e-3b3ed58f842b]: (4, 'cc28b99b-cca8-4899-a39d-03c6a80b1444') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.707 253465 DEBUG oslo_concurrency.processutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.741 253465 DEBUG oslo_concurrency.processutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.744 253465 DEBUG os_brick.initiator.connectors.lightos [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.744 253465 DEBUG os_brick.initiator.connectors.lightos [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.745 253465 DEBUG os_brick.initiator.connectors.lightos [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.745 253465 DEBUG os_brick.utils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] <== get_connector_properties: return (90ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:ac7b1cf28df9', 'do_local_attach': False, 'nvme_hostid': 'b41e453c-5c3a-4251-9262-f13d5e000e9b', 'system uuid': 'cc28b99b-cca8-4899-a39d-03c6a80b1444', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:b41e453c-5c3a-4251-9262-f13d5e000e9b', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
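The dict in the "<== get_connector_properties" return line is everything os-brick could learn about this host as a volume initiator: iSCSI IQN, NVMe host NQN and host id, multipath support, system uuid. Since the matching "==> get_connector_properties: call" trace above spells out the arguments, the call can be reproduced directly; a sketch against os-brick's public helper, which needs root (or a working privsep setup) to run the probes logged above (multipathd show status, reading initiatorname.iscsi, findmnt):

    # Gather the same initiator-side properties os-brick logs above.
    # connector.get_connector_properties() is os-brick's public entry
    # point; argument values are copied from the "==>" trace line.
    from os_brick.initiator import connector

    props = connector.get_connector_properties(
        root_helper="sudo nova-rootwrap /etc/nova/rootwrap.conf",
        my_ip="192.168.122.100",
        multipath=True,
        enforce_multipath=True,
        host="compute-0.ctlplane.example.com",
    )
    # props["initiator"], props["nqn"] and props["system uuid"] should
    # match the values in the return dict above.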
Nov 22 04:22:39 compute-0 nova_compute[253461]: 2025-11-22 04:22:39.745 253465 DEBUG nova.virt.block_device [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Updating existing volume attachment record: 979ac2e5-2bc8-4023-8b28-52c7b11696cc _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Nov 22 04:22:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:22:39 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:22:39 compute-0 ceph-mon[75011]: pgmap v2255: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 2.7 KiB/s wr, 38 op/s
Nov 22 04:22:39 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4222371796' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:22:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:22:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3432985923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:22:40 compute-0 nova_compute[253461]: 2025-11-22 04:22:40.382 253465 DEBUG nova.policy [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '26147ad59e2d4763b8edc27d80567b09', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c9d01ebd7e4f4251b66172a246b8a08f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 22 04:22:40 compute-0 nova_compute[253461]: 2025-11-22 04:22:40.497 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763785345.4968016, 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:22:40 compute-0 nova_compute[253461]: 2025-11-22 04:22:40.498 253465 INFO nova.compute.manager [-] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] VM Stopped (Lifecycle Event)
Nov 22 04:22:40 compute-0 nova_compute[253461]: 2025-11-22 04:22:40.519 253465 DEBUG nova.compute.manager [None req-a46b48de-ff33-421a-951c-50962426f7de - - - - - -] [instance: 1e6f29c3-14d2-44ea-8a1c-2727c30a1d3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:22:40 compute-0 nova_compute[253461]: 2025-11-22 04:22:40.525 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:40 compute-0 nova_compute[253461]: 2025-11-22 04:22:40.620 253465 DEBUG nova.compute.manager [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 22 04:22:40 compute-0 nova_compute[253461]: 2025-11-22 04:22:40.622 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 22 04:22:40 compute-0 nova_compute[253461]: 2025-11-22 04:22:40.622 253465 INFO nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Creating image(s)
Nov 22 04:22:40 compute-0 nova_compute[253461]: 2025-11-22 04:22:40.622 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Nov 22 04:22:40 compute-0 nova_compute[253461]: 2025-11-22 04:22:40.622 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Ensure instance console log exists: /var/lib/nova/instances/29f4ebae-3730-44d8-99f0-08b20268863c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 22 04:22:40 compute-0 nova_compute[253461]: 2025-11-22 04:22:40.623 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:40 compute-0 nova_compute[253461]: 2025-11-22 04:22:40.623 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:40 compute-0 nova_compute[253461]: 2025-11-22 04:22:40.623 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:22:40 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3432985923' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:22:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Nov 22 04:22:41 compute-0 nova_compute[253461]: 2025-11-22 04:22:41.075 253465 DEBUG nova.network.neutron [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Successfully created port: 36300144-f97d-4cb4-a77a-621c764db174 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 22 04:22:41 compute-0 podman[308327]: 2025-11-22 04:22:41.439752285 +0000 UTC m=+0.107082037 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 22 04:22:41 compute-0 ceph-mon[75011]: pgmap v2256: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Nov 22 04:22:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Nov 22 04:22:43 compute-0 nova_compute[253461]: 2025-11-22 04:22:43.465 253465 DEBUG nova.network.neutron [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Successfully updated port: 36300144-f97d-4cb4-a77a-621c764db174 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 22 04:22:43 compute-0 nova_compute[253461]: 2025-11-22 04:22:43.485 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "refresh_cache-29f4ebae-3730-44d8-99f0-08b20268863c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:22:43 compute-0 nova_compute[253461]: 2025-11-22 04:22:43.486 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquired lock "refresh_cache-29f4ebae-3730-44d8-99f0-08b20268863c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:22:43 compute-0 nova_compute[253461]: 2025-11-22 04:22:43.486 253465 DEBUG nova.network.neutron [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 22 04:22:43 compute-0 nova_compute[253461]: 2025-11-22 04:22:43.587 253465 DEBUG nova.compute.manager [req-06cc9e18-5079-4dcd-b1bc-8a38b8a0c0bb req-6bfef664-9001-4441-b1c6-db0c3947963b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Received event network-changed-36300144-f97d-4cb4-a77a-621c764db174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:22:43 compute-0 nova_compute[253461]: 2025-11-22 04:22:43.587 253465 DEBUG nova.compute.manager [req-06cc9e18-5079-4dcd-b1bc-8a38b8a0c0bb req-6bfef664-9001-4441-b1c6-db0c3947963b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Refreshing instance network info cache due to event network-changed-36300144-f97d-4cb4-a77a-621c764db174. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:22:43 compute-0 nova_compute[253461]: 2025-11-22 04:22:43.588 253465 DEBUG oslo_concurrency.lockutils [req-06cc9e18-5079-4dcd-b1bc-8a38b8a0c0bb req-6bfef664-9001-4441-b1c6-db0c3947963b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-29f4ebae-3730-44d8-99f0-08b20268863c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:22:43 compute-0 ceph-mon[75011]: pgmap v2257: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.6 KiB/s wr, 28 op/s
Nov 22 04:22:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:22:44 compute-0 nova_compute[253461]: 2025-11-22 04:22:44.333 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:44 compute-0 nova_compute[253461]: 2025-11-22 04:22:44.339 253465 DEBUG nova.network.neutron [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 22 04:22:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 22 04:22:45 compute-0 nova_compute[253461]: 2025-11-22 04:22:45.526 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:45 compute-0 ceph-mon[75011]: pgmap v2258: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.374 253465 DEBUG nova.network.neutron [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Updating instance_info_cache with network_info: [{"id": "36300144-f97d-4cb4-a77a-621c764db174", "address": "fa:16:3e:02:58:43", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36300144-f9", "ovs_interfaceid": "36300144-f97d-4cb4-a77a-621c764db174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.400 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Releasing lock "refresh_cache-29f4ebae-3730-44d8-99f0-08b20268863c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.401 253465 DEBUG nova.compute.manager [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Instance network_info: |[{"id": "36300144-f97d-4cb4-a77a-621c764db174", "address": "fa:16:3e:02:58:43", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36300144-f9", "ovs_interfaceid": "36300144-f97d-4cb4-a77a-621c764db174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
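
Annotation: the network_info blob logged above is a JSON list of VIF dicts. When reading these lines it helps to know the nesting: each VIF carries a port id, a MAC, and a network with subnets that each hold fixed IPs. A small extraction sketch against a trimmed copy of the exact structure in the log (standard-library json only):

    import json

    vif_json = '''
    [{"id": "36300144-f97d-4cb4-a77a-621c764db174",
      "address": "fa:16:3e:02:58:43",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
                               "ips": [{"address": "10.100.0.5", "type": "fixed"}]}]},
      "active": false}]
    '''

    for vif in json.loads(vif_json):
        fixed_ips = [ip["address"]
                     for subnet in vif["network"]["subnets"]
                     for ip in subnet["ips"]
                     if ip["type"] == "fixed"]
        print(vif["id"], vif["address"], fixed_ips)
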
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.402 253465 DEBUG oslo_concurrency.lockutils [req-06cc9e18-5079-4dcd-b1bc-8a38b8a0c0bb req-6bfef664-9001-4441-b1c6-db0c3947963b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-29f4ebae-3730-44d8-99f0-08b20268863c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.402 253465 DEBUG nova.network.neutron [req-06cc9e18-5079-4dcd-b1bc-8a38b8a0c0bb req-6bfef664-9001-4441-b1c6-db0c3947963b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Refreshing network info cache for port 36300144-f97d-4cb4-a77a-621c764db174 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.408 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Start _get_guest_xml network_info=[{"id": "36300144-f97d-4cb4-a77a-621c764db174", "address": "fa:16:3e:02:58:43", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36300144-f9", "ovs_interfaceid": "36300144-f97d-4cb4-a77a-621c764db174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'disk_bus': 'virtio', 'boot_index': 0, 'attachment_id': '979ac2e5-2bc8-4023-8b28-52c7b11696cc', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-8ca6c420-59dc-493c-b7e0-2ab273f3c454', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '8ca6c420-59dc-493c-b7e0-2ab273f3c454', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '29f4ebae-3730-44d8-99f0-08b20268863c', 'attached_at': '', 'detached_at': '', 'volume_id': '8ca6c420-59dc-493c-b7e0-2ab273f3c454', 'serial': '8ca6c420-59dc-493c-b7e0-2ab273f3c454'}, 'device_type': 'disk', 'mount_device': '/dev/vda', 'delete_on_termination': False, 'guest_format': None, 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.417 253465 WARNING nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.426 253465 DEBUG nova.virt.libvirt.host [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.427 253465 DEBUG nova.virt.libvirt.host [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.430 253465 DEBUG nova.virt.libvirt.host [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.431 253465 DEBUG nova.virt.libvirt.host [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.432 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.433 253465 DEBUG nova.virt.hardware [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-22T03:49:22Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='8cbd30f8-fcd6-4139-9352-6e217c8df103',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.434 253465 DEBUG nova.virt.hardware [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.434 253465 DEBUG nova.virt.hardware [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.435 253465 DEBUG nova.virt.hardware [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.435 253465 DEBUG nova.virt.hardware [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.436 253465 DEBUG nova.virt.hardware [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.436 253465 DEBUG nova.virt.hardware [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.437 253465 DEBUG nova.virt.hardware [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.437 253465 DEBUG nova.virt.hardware [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.438 253465 DEBUG nova.virt.hardware [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.438 253465 DEBUG nova.virt.hardware [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
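
Annotation: the nova.virt.hardware lines above show that neither flavor nor image constrained the topology (all limits and preferences 0:0:0, caps defaulting to 65536), so for a 1-vCPU guest the only factorization is 1 socket x 1 core x 1 thread. A simplified sketch of that enumeration, assuming only what the debug lines state (nova's real code also orders results by preference):

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate (sockets, cores, threads) whose product equals the vCPU
        # count, within the limits; mirrors "Build topologies ... 1:1:1".
        found = []
        for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                               range(1, min(vcpus, max_cores) + 1),
                               range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                found.append((s, c, t))
        return found

    print(possible_topologies(1))  # [(1, 1, 1)] -- matches the log
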
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.473 253465 DEBUG nova.storage.rbd_utils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] rbd image 29f4ebae-3730-44d8-99f0-08b20268863c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.477 253465 DEBUG oslo_concurrency.processutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894585429283063 of space, bias 1.0, pg target 0.8683756287849189 quantized to 32 (current 32)
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
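
Annotation: the pg_autoscaler figures above can be reproduced from the numbers they print: raw pg target = capacity_ratio x bias x PG budget, where the budget here works out to 300 (consistent with mon_target_pg_per_osd=100 across 3 OSDs, though the log does not state that breakdown), after which the result is quantized to a power of two subject to per-pool minimums. A worked check against the 'volumes' and 'cephfs.cephfs.meta' lines:

    def raw_pg_target(capacity_ratio, bias, pg_budget=300):
        # pg_budget = 300 is inferred from the arithmetic, not stated in the log.
        return capacity_ratio * bias * pg_budget

    # Pool 'volumes': 0.002894585429283063 * 1.0 * 300 -> 0.8683756287849189
    print(raw_pg_target(0.002894585429283063, 1.0))
    # Pool 'cephfs.cephfs.meta': 5.087256625643029e-07 * 4.0 * 300
    #   -> 0.0006104707950771635
    print(raw_pg_target(5.087256625643029e-07, 4.0))

Both outputs match the "pg target" values logged, confirming the budget inference.
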
Nov 22 04:22:46 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Nov 22 04:22:46 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/211525165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Nov 22 04:22:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 22 op/s
Nov 22 04:22:46 compute-0 nova_compute[253461]: 2025-11-22 04:22:46.959 253465 DEBUG oslo_concurrency.processutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:22:46 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/211525165' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
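
Annotation: the paired "Running cmd" / "returned: 0 in 0.482s" lines come from Nova shelling out to the ceph CLI through oslo.concurrency's processutils, and each such run also produces the mon dump dispatch entries on the ceph-mon side. A minimal sketch of that call (it needs a reachable cluster and the client.openstack keyring, so treat it as illustrative):

    import json
    from oslo_concurrency import processutils

    # Same command the driver ran; returns (stdout, stderr) and raises
    # ProcessExecutionError on a non-zero exit code.
    out, _err = processutils.execute(
        "ceph", "mon", "dump", "--format=json",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")
    mon_map = json.loads(out)
    print([mon["name"] for mon in mon_map["mons"]])
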
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.134 253465 DEBUG os_brick.encryptors [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Using volume encryption metadata '{'encryption_key_id': '19b9b8fc-4ccd-42c5-a747-c80a2cc60a86', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-8ca6c420-59dc-493c-b7e0-2ab273f3c454', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '8ca6c420-59dc-493c-b7e0-2ab273f3c454', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '29f4ebae-3730-44d8-99f0-08b20268863c', 'attached_at': '', 'detached_at': '', 'volume_id': '8ca6c420-59dc-493c-b7e0-2ab273f3c454', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.137 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.154 253465 DEBUG barbicanclient.v1.secrets [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.155 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.182 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.183 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.208 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.208 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.239 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.239 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.266 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.268 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.292 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.293 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.319 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.320 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.355 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.356 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.394 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.395 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.422 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.422 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.465 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.466 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.493 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.494 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.513 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.514 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.588 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.589 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.620 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.620 253465 INFO barbicanclient.base [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Calculated Secrets uuid ref: secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.641 253465 DEBUG barbicanclient.client [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
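
Annotation: the encryption_key_id from the os_brick metadata is resolved through python-barbicanclient; each "Getting secret" / "Response status 200" pair above is one GET against the secret href. A minimal sketch, assuming a Keystone session (the auth_url, username, and password below are placeholders; only the Barbican href is taken from the log):

    from keystoneauth1 import session as ks_session
    from keystoneauth1.identity import v3
    from barbicanclient import client as barbican_client

    # Placeholder credentials -- Nova uses its configured service credentials.
    auth = v3.Password(auth_url="https://keystone.example.com:5000/v3",
                       username="nova", password="REDACTED",
                       project_name="service",
                       user_domain_name="Default",
                       project_domain_name="Default")
    sess = ks_session.Session(auth=auth)
    barbican = barbican_client.Client(session=sess)

    ref = ("https://barbican-internal.openstack.svc:9311"
           "/secrets/19b9b8fc-4ccd-42c5-a747-c80a2cc60a86")
    secret = barbican.secrets.get(ref)   # lazy handle; metadata fetched on access
    key_bytes = secret.payload           # triggers the payload GET
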
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.642 253465 DEBUG nova.virt.libvirt.host [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Secret XML: <secret ephemeral="no" private="no">
Nov 22 04:22:47 compute-0 nova_compute[253461]:   <usage type="volume">
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <volume>8ca6c420-59dc-493c-b7e0-2ab273f3c454</volume>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   </usage>
Nov 22 04:22:47 compute-0 nova_compute[253461]: </secret>
Nov 22 04:22:47 compute-0 nova_compute[253461]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
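
Annotation: the Secret XML logged above is then defined in libvirtd and loaded with the LUKS passphrase as its value, which is what create_secret in nova.virt.libvirt.host does. A minimal sketch with libvirt-python, assuming the XML from the log; the passphrase bytes are a stand-in for the key fetched from Barbican:

    import libvirt

    SECRET_XML = """<secret ephemeral="no" private="no">
      <usage type="volume">
        <volume>8ca6c420-59dc-493c-b7e0-2ab273f3c454</volume>
      </usage>
    </secret>"""

    conn = libvirt.open("qemu:///system")
    secret = conn.secretDefineXML(SECRET_XML)
    secret.setValue(b"passphrase-bytes-from-barbican")  # stand-in value
    print(secret.UUIDString())
    conn.close()
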
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.671 253465 DEBUG nova.virt.libvirt.vif [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:22:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1287385981',display_name='tempest-TestEncryptedCinderVolumes-server-1287385981',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1287385981',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9UKFMy9ycYw2Hm5en4L27DNwg0er6Lb0HRsD7AiMiSCvtvdx7izIV74D1MmE18lnPG59cKz/vp+1MZkJaUaik+lgJpk8hBjE03Y+JB1nMXTfCi52N8aZdJUG/KDhiYrQ==',key_name='tempest-TestEncryptedCinderVolumes-5346949',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c9d01ebd7e4f4251b66172a246b8a08f',ramdisk_id='',reservation_id='r-sr01ynko',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-230639986',owner_user_name='tempest-TestEncryptedCinderVolumes-230639986-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:22:39Z,user_data=None,user_id='26147ad59e2d4763b8edc27d80567b09',uuid=29f4ebae-3730-44d8-99f0-08b20268863c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36300144-f97d-4cb4-a77a-621c764db174", "address": "fa:16:3e:02:58:43", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36300144-f9", "ovs_interfaceid": "36300144-f97d-4cb4-a77a-621c764db174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.671 253465 DEBUG nova.network.os_vif_util [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converting VIF {"id": "36300144-f97d-4cb4-a77a-621c764db174", "address": "fa:16:3e:02:58:43", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36300144-f9", "ovs_interfaceid": "36300144-f97d-4cb4-a77a-621c764db174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.672 253465 DEBUG nova.network.os_vif_util [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:58:43,bridge_name='br-int',has_traffic_filtering=True,id=36300144-f97d-4cb4-a77a-621c764db174,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36300144-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.674 253465 DEBUG nova.objects.instance [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lazy-loading 'pci_devices' on Instance uuid 29f4ebae-3730-44d8-99f0-08b20268863c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.686 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] End _get_guest_xml xml=<domain type="kvm">
Nov 22 04:22:47 compute-0 nova_compute[253461]:   <uuid>29f4ebae-3730-44d8-99f0-08b20268863c</uuid>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   <name>instance-0000001e</name>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   <memory>131072</memory>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   <vcpu>1</vcpu>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   <metadata>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1287385981</nova:name>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <nova:creationTime>2025-11-22 04:22:46</nova:creationTime>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <nova:flavor name="m1.nano">
Nov 22 04:22:47 compute-0 nova_compute[253461]:         <nova:memory>128</nova:memory>
Nov 22 04:22:47 compute-0 nova_compute[253461]:         <nova:disk>1</nova:disk>
Nov 22 04:22:47 compute-0 nova_compute[253461]:         <nova:swap>0</nova:swap>
Nov 22 04:22:47 compute-0 nova_compute[253461]:         <nova:ephemeral>0</nova:ephemeral>
Nov 22 04:22:47 compute-0 nova_compute[253461]:         <nova:vcpus>1</nova:vcpus>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       </nova:flavor>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <nova:owner>
Nov 22 04:22:47 compute-0 nova_compute[253461]:         <nova:user uuid="26147ad59e2d4763b8edc27d80567b09">tempest-TestEncryptedCinderVolumes-230639986-project-member</nova:user>
Nov 22 04:22:47 compute-0 nova_compute[253461]:         <nova:project uuid="c9d01ebd7e4f4251b66172a246b8a08f">tempest-TestEncryptedCinderVolumes-230639986</nova:project>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       </nova:owner>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <nova:ports>
Nov 22 04:22:47 compute-0 nova_compute[253461]:         <nova:port uuid="36300144-f97d-4cb4-a77a-621c764db174">
Nov 22 04:22:47 compute-0 nova_compute[253461]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:         </nova:port>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       </nova:ports>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     </nova:instance>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   </metadata>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   <sysinfo type="smbios">
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <system>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <entry name="manufacturer">RDO</entry>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <entry name="product">OpenStack Compute</entry>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <entry name="serial">29f4ebae-3730-44d8-99f0-08b20268863c</entry>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <entry name="uuid">29f4ebae-3730-44d8-99f0-08b20268863c</entry>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <entry name="family">Virtual Machine</entry>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     </system>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   </sysinfo>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   <os>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <type arch="x86_64" machine="q35">hvm</type>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <boot dev="hd"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <smbios mode="sysinfo"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   </os>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   <features>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <acpi/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <apic/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <vmcoreinfo/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   </features>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   <clock offset="utc">
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <timer name="pit" tickpolicy="delay"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <timer name="rtc" tickpolicy="catchup"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <timer name="hpet" present="no"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   </clock>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   <cpu mode="host-model" match="exact">
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <topology sockets="1" cores="1" threads="1"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   </cpu>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   <devices>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <disk type="network" device="cdrom">
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <driver type="raw" cache="none"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <source protocol="rbd" name="vms/29f4ebae-3730-44d8-99f0-08b20268863c_disk.config">
Nov 22 04:22:47 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       </source>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:22:47 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <target dev="sda" bus="sata"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <disk type="network" device="disk">
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <source protocol="rbd" name="volumes/volume-8ca6c420-59dc-493c-b7e0-2ab273f3c454">
Nov 22 04:22:47 compute-0 nova_compute[253461]:         <host name="192.168.122.100" port="6789"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       </source>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <auth username="openstack">
Nov 22 04:22:47 compute-0 nova_compute[253461]:         <secret type="ceph" uuid="7adcc38b-6484-5de6-b879-33a0309153df"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       </auth>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <target dev="vda" bus="virtio"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <serial>8ca6c420-59dc-493c-b7e0-2ab273f3c454</serial>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <encryption format="luks">
Nov 22 04:22:47 compute-0 nova_compute[253461]:         <secret type="passphrase" uuid="dc7b76a7-4890-4171-a744-404436396bd0"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       </encryption>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     </disk>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <interface type="ethernet">
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <mac address="fa:16:3e:02:58:43"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <driver name="vhost" rx_queue_size="512"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <mtu size="1442"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <target dev="tap36300144-f9"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     </interface>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <serial type="pty">
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <log file="/var/lib/nova/instances/29f4ebae-3730-44d8-99f0-08b20268863c/console.log" append="off"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     </serial>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <video>
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <model type="virtio"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     </video>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <input type="tablet" bus="usb"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <rng model="virtio">
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <backend model="random">/dev/urandom</backend>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     </rng>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="pci" model="pcie-root-port"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <controller type="usb" index="0"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     <memballoon model="virtio">
Nov 22 04:22:47 compute-0 nova_compute[253461]:       <stats period="10"/>
Nov 22 04:22:47 compute-0 nova_compute[253461]:     </memballoon>
Nov 22 04:22:47 compute-0 nova_compute[253461]:   </devices>
Nov 22 04:22:47 compute-0 nova_compute[253461]: </domain>
Nov 22 04:22:47 compute-0 nova_compute[253461]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
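[editor's note] The block above is the complete libvirt domain XML that Nova generated for instance 29f4ebae-3730-44d8-99f0-08b20268863c before defining the guest. A minimal sketch of pulling the same definition back out of libvirtd with the libvirt Python bindings (assumes libvirt-python is installed and the qemu:///system socket is reachable on the compute node):

```python
# Sketch: fetch the domain XML libvirt stores for this guest.
# The UUID is the instance UUID taken from this log.
import libvirt

INSTANCE_UUID = "29f4ebae-3730-44d8-99f0-08b20268863c"

conn = libvirt.open("qemu:///system")
try:
    dom = conn.lookupByUUIDString(INSTANCE_UUID)
    # XMLDesc(0) returns the live definition; pass
    # libvirt.VIR_DOMAIN_XML_INACTIVE for the persistent config instead.
    print(dom.XMLDesc(0))
finally:
    conn.close()
```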
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.688 253465 DEBUG nova.compute.manager [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Preparing to wait for external event network-vif-plugged-36300144-f97d-4cb4-a77a-621c764db174 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.688 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.688 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.689 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
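[editor's note] The three lockutils lines above show Nova acquiring and releasing a per-instance lock named "<uuid>-events" around its event registry while it registers a waiter for network-vif-plugged. A toy sketch of that oslo.concurrency pattern follows; the dict and helper are illustrative stand-ins, not Nova's actual InstanceEvents code:

```python
# Sketch of the named-lock pattern visible above: a lock called
# "<uuid>-events" guards a per-instance event registry.
from oslo_concurrency import lockutils

_events = {}  # instance_uuid -> {event_name: waiter}

def create_or_get_event(instance_uuid, event_name):
    # Same "<uuid>-events" naming convention as the log lines above.
    with lockutils.lock(f"{instance_uuid}-events"):
        return _events.setdefault(instance_uuid, {}).setdefault(event_name, {})
```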
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.689 253465 DEBUG nova.virt.libvirt.vif [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-22T04:22:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1287385981',display_name='tempest-TestEncryptedCinderVolumes-server-1287385981',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1287385981',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9UKFMy9ycYw2Hm5en4L27DNwg0er6Lb0HRsD7AiMiSCvtvdx7izIV74D1MmE18lnPG59cKz/vp+1MZkJaUaik+lgJpk8hBjE03Y+JB1nMXTfCi52N8aZdJUG/KDhiYrQ==',key_name='tempest-TestEncryptedCinderVolumes-5346949',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c9d01ebd7e4f4251b66172a246b8a08f',ramdisk_id='',reservation_id='r-sr01ynko',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-230639986',owner_user_name='tempest-TestEncryptedCinderVolumes-230639986-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-22T04:22:39Z,user_data=None,user_id='26147ad59e2d4763b8edc27d80567b09',uuid=29f4ebae-3730-44d8-99f0-08b20268863c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36300144-f97d-4cb4-a77a-621c764db174", "address": "fa:16:3e:02:58:43", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36300144-f9", "ovs_interfaceid": "36300144-f97d-4cb4-a77a-621c764db174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.690 253465 DEBUG nova.network.os_vif_util [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converting VIF {"id": "36300144-f97d-4cb4-a77a-621c764db174", "address": "fa:16:3e:02:58:43", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36300144-f9", "ovs_interfaceid": "36300144-f97d-4cb4-a77a-621c764db174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.690 253465 DEBUG nova.network.os_vif_util [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:58:43,bridge_name='br-int',has_traffic_filtering=True,id=36300144-f97d-4cb4-a77a-621c764db174,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36300144-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.691 253465 DEBUG os_vif [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:58:43,bridge_name='br-int',has_traffic_filtering=True,id=36300144-f97d-4cb4-a77a-621c764db174,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36300144-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
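[editor's note] The legacy VIF dict above is converted to an os_vif VIFOpenVSwitch object and handed to os_vif's public plug() entry point (the very function logged at os_vif/__init__.py:76). A rough sketch of that call with field values copied from this log; the port_profile and most InstanceInfo fields are omitted for brevity, so treat it as illustrative rather than a complete plug request:

```python
# Sketch of the os_vif plug step logged above; error handling omitted.
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads the 'ovs' plugin among others

net = network.Network(id="bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07",
                      bridge="br-int")
my_vif = vif.VIFOpenVSwitch(
    id="36300144-f97d-4cb4-a77a-621c764db174",
    address="fa:16:3e:02:58:43",
    bridge_name="br-int",
    vif_name="tap36300144-f9",
    network=net,
)
inst = instance_info.InstanceInfo(
    uuid="29f4ebae-3730-44d8-99f0-08b20268863c",
    name="instance-0000001e",
)
os_vif.plug(my_vif, inst)  # drives the ovsdbapp transactions seen below
```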
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.692 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.692 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.693 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.697 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.698 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36300144-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.699 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36300144-f9, col_values=(('external_ids', {'iface-id': '36300144-f97d-4cb4-a77a-621c764db174', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:58:43', 'vm-uuid': '29f4ebae-3730-44d8-99f0-08b20268863c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.701 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:47 compute-0 NetworkManager[48916]: <info>  [1763785367.7024] manager: (tap36300144-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.707 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.711 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.712 253465 INFO os_vif [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:58:43,bridge_name='br-int',has_traffic_filtering=True,id=36300144-f97d-4cb4-a77a-621c764db174,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36300144-f9')
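[editor's note] The AddBridgeCommand/AddPortCommand/DbSetCommand trio above is one ovsdbapp transaction against the local Open_vSwitch database; "Transaction caused no change" just means br-int already existed. A sketch of the equivalent transaction through ovsdbapp's native OVS API (the socket path and timeout are assumptions for a typical compute node):

```python
# Sketch of the ovsdbapp transaction logged above: ensure br-int exists,
# add the tap port, and set the Neutron external_ids on its Interface row.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = "unix:/var/run/openvswitch/db.sock"  # assumed local socket
idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
    txn.add(api.add_port("br-int", "tap36300144-f9", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tap36300144-f9",
        ("external_ids", {
            "iface-id": "36300144-f97d-4cb4-a77a-621c764db174",
            "iface-status": "active",
            "attached-mac": "fa:16:3e:02:58:43",
            "vm-uuid": "29f4ebae-3730-44d8-99f0-08b20268863c",
        })))
```

Setting iface-id in external_ids is what lets ovn-controller match the OVS interface to the logical port, which is exactly the "Claiming lport" sequence that follows below.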
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.760 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.761 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.761 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] No VIF found with MAC fa:16:3e:02:58:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.761 253465 INFO nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Using config drive
Nov 22 04:22:47 compute-0 nova_compute[253461]: 2025-11-22 04:22:47.785 253465 DEBUG nova.storage.rbd_utils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] rbd image 29f4ebae-3730-44d8-99f0-08b20268863c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.348 253465 INFO nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Creating config drive at /var/lib/nova/instances/29f4ebae-3730-44d8-99f0-08b20268863c/disk.config
Nov 22 04:22:48 compute-0 ceph-mon[75011]: pgmap v2259: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.1 KiB/s wr, 22 op/s
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.355 253465 DEBUG oslo_concurrency.processutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29f4ebae-3730-44d8-99f0-08b20268863c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpszugtpuj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.393 253465 DEBUG nova.network.neutron [req-06cc9e18-5079-4dcd-b1bc-8a38b8a0c0bb req-6bfef664-9001-4441-b1c6-db0c3947963b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Updated VIF entry in instance network info cache for port 36300144-f97d-4cb4-a77a-621c764db174. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.394 253465 DEBUG nova.network.neutron [req-06cc9e18-5079-4dcd-b1bc-8a38b8a0c0bb req-6bfef664-9001-4441-b1c6-db0c3947963b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Updating instance_info_cache with network_info: [{"id": "36300144-f97d-4cb4-a77a-621c764db174", "address": "fa:16:3e:02:58:43", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36300144-f9", "ovs_interfaceid": "36300144-f97d-4cb4-a77a-621c764db174", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.408 253465 DEBUG oslo_concurrency.lockutils [req-06cc9e18-5079-4dcd-b1bc-8a38b8a0c0bb req-6bfef664-9001-4441-b1c6-db0c3947963b f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-29f4ebae-3730-44d8-99f0-08b20268863c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.488 253465 DEBUG oslo_concurrency.processutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29f4ebae-3730-44d8-99f0-08b20268863c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpszugtpuj" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.521 253465 DEBUG nova.storage.rbd_utils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] rbd image 29f4ebae-3730-44d8-99f0-08b20268863c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.525 253465 DEBUG oslo_concurrency.processutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29f4ebae-3730-44d8-99f0-08b20268863c/disk.config 29f4ebae-3730-44d8-99f0-08b20268863c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.688 253465 DEBUG oslo_concurrency.processutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29f4ebae-3730-44d8-99f0-08b20268863c/disk.config 29f4ebae-3730-44d8-99f0-08b20268863c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.689 253465 INFO nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Deleting local config drive /var/lib/nova/instances/29f4ebae-3730-44d8-99f0-08b20268863c/disk.config because it was imported into RBD.
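[editor's note] The config-drive step above is two subprocess calls driven from Python: mkisofs packs the metadata staging directory into an ISO9660 image labelled config-2, rbd imports it into the Ceph vms pool, and the local file is then deleted. A condensed sketch with oslo.concurrency's processutils, arguments copied from the log (the /tmp path was Nova's temporary staging directory for this boot):

```python
# Sketch of the config-drive flow above: build the ISO, import it into
# the Ceph 'vms' pool, then drop the local copy.
import os
from oslo_concurrency import processutils

base = "/var/lib/nova/instances/29f4ebae-3730-44d8-99f0-08b20268863c"
iso = os.path.join(base, "disk.config")

processutils.execute(
    "/usr/bin/mkisofs", "-o", iso,
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpszugtpuj")

processutils.execute(
    "rbd", "import", "--pool", "vms", iso,
    "29f4ebae-3730-44d8-99f0-08b20268863c_disk.config",
    "--image-format=2", "--id", "openstack",
    "--conf", "/etc/ceph/ceph.conf")

os.unlink(iso)  # "Deleting local config drive ... imported into RBD"
```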
Nov 22 04:22:48 compute-0 NetworkManager[48916]: <info>  [1763785368.7622] manager: (tap36300144-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/152)
Nov 22 04:22:48 compute-0 kernel: tap36300144-f9: entered promiscuous mode
Nov 22 04:22:48 compute-0 ovn_controller[152691]: 2025-11-22T04:22:48Z|00307|binding|INFO|Claiming lport 36300144-f97d-4cb4-a77a-621c764db174 for this chassis.
Nov 22 04:22:48 compute-0 ovn_controller[152691]: 2025-11-22T04:22:48Z|00308|binding|INFO|36300144-f97d-4cb4-a77a-621c764db174: Claiming fa:16:3e:02:58:43 10.100.0.5
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.765 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.781 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:58:43 10.100.0.5'], port_security=['fa:16:3e:02:58:43 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '29f4ebae-3730-44d8-99f0-08b20268863c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9d01ebd7e4f4251b66172a246b8a08f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '395e72d5-eb27-430c-9253-638d58d94891', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5d096b3-2344-4434-a488-92084cb46974, chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=36300144-f97d-4cb4-a77a-621c764db174) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.783 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 36300144-f97d-4cb4-a77a-621c764db174 in datapath bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 bound to our chassis
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.785 162689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.800 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[6109ec79-37e4-4a4a-98ff-e5f7c203183f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:48 compute-0 ovn_controller[152691]: 2025-11-22T04:22:48Z|00309|binding|INFO|Setting lport 36300144-f97d-4cb4-a77a-621c764db174 ovn-installed in OVS
Nov 22 04:22:48 compute-0 ovn_controller[152691]: 2025-11-22T04:22:48Z|00310|binding|INFO|Setting lport 36300144-f97d-4cb4-a77a-621c764db174 up in Southbound
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.801 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbd550fd2-d1 in ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.803 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.806 261050 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbd550fd2-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.807 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[eaf7d917-693a-4ce5-8298-236169a95709]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.809 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1d0d4d0a-3220-4a6e-9996-7608a7b8a17f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
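[editor's note] "Provisioning metadata" above means building a veth pair whose -d1 end sits inside the ovnmeta- network namespace; the privsep replies are the netlink round-trips for that work. A rough pyroute2 sketch of the step, where pyroute2 stands in for neutron's privileged ip_lib helpers and the interface/namespace names come from this log:

```python
# Sketch: create the ovnmeta- namespace and a veth pair whose -d1 end
# lives inside it, mirroring the agent's provision_datapath step.
from pyroute2 import IPRoute, netns

NS = "ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07"
netns.create(NS)

ipr = IPRoute()
ipr.link("add", ifname="tapbd550fd2-d0", kind="veth",
         peer={"ifname": "tapbd550fd2-d1", "net_ns_fd": NS})
idx = ipr.link_lookup(ifname="tapbd550fd2-d0")[0]
ipr.link("set", index=idx, state="up")
ipr.close()
```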
Nov 22 04:22:48 compute-0 systemd-machined[215728]: New machine qemu-30-instance-0000001e.
Nov 22 04:22:48 compute-0 systemd-udevd[308463]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.828 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[4dac01fa-ee75-4788-b3b1-7a2a6ad11d35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:48 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-0000001e.
Nov 22 04:22:48 compute-0 NetworkManager[48916]: <info>  [1763785368.8473] device (tap36300144-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 22 04:22:48 compute-0 NetworkManager[48916]: <info>  [1763785368.8502] device (tap36300144-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.857 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[1fab366a-3b84-45f8-8fc8-f8f3c2aed162]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.885 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[62d4b1d7-c3bd-4684-ae59-1656c0147c3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:48 compute-0 systemd-udevd[308466]: Network interface NamePolicy= disabled on kernel command line.
Nov 22 04:22:48 compute-0 NetworkManager[48916]: <info>  [1763785368.8925] manager: (tapbd550fd2-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/153)
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.891 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[eb2e3423-9c16-4230-91f6-7ca0bf38518d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.919 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[281cd6ca-0f85-4f98-94d4-00452b3a25a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.922 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[38805e82-ccba-40dd-a3c9-dbb1a2e06ffb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:48 compute-0 NetworkManager[48916]: <info>  [1763785368.9430] device (tapbd550fd2-d0): carrier: link connected
Nov 22 04:22:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.950 261069 DEBUG oslo.privsep.daemon [-] privsep: reply[4964ebc6-5674-4b0f-b062-3b8170387730]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.968 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b0b733-be22-4938-b32f-8c068fc7cc51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd550fd2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:cb:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562154, 'reachable_time': 32565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308494, 'error': None, 'target': 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:48 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:48.985 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[955b4bad-2d32-4ccb-a752-0d6c14e883f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea1:cb6a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 562154, 'tstamp': 562154}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308495, 'error': None, 'target': 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.990 253465 DEBUG nova.compute.manager [req-7259df28-2cee-415d-9451-40f9bb99e6cb req-13d47691-055d-4a77-886f-992ee3875525 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Received event network-vif-plugged-36300144-f97d-4cb4-a77a-621c764db174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.990 253465 DEBUG oslo_concurrency.lockutils [req-7259df28-2cee-415d-9451-40f9bb99e6cb req-13d47691-055d-4a77-886f-992ee3875525 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.990 253465 DEBUG oslo_concurrency.lockutils [req-7259df28-2cee-415d-9451-40f9bb99e6cb req-13d47691-055d-4a77-886f-992ee3875525 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.990 253465 DEBUG oslo_concurrency.lockutils [req-7259df28-2cee-415d-9451-40f9bb99e6cb req-13d47691-055d-4a77-886f-992ee3875525 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:22:48 compute-0 nova_compute[253461]: 2025-11-22 04:22:48.990 253465 DEBUG nova.compute.manager [req-7259df28-2cee-415d-9451-40f9bb99e6cb req-13d47691-055d-4a77-886f-992ee3875525 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Processing event network-vif-plugged-36300144-f97d-4cb4-a77a-621c764db174 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:49.002 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb00aea-639b-4ab3-9542-c6e3316bff23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbd550fd2-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a1:cb:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562154, 'reachable_time': 32565, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308496, 'error': None, 'target': 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:49.032 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8a58aaa1-9cb2-47f7-a12b-d884ebdfa48b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:49.094 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[39a9fa1c-0832-42f8-8133-86b88a16cd56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:49.095 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd550fd2-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:49.096 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:49.096 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbd550fd2-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:22:49 compute-0 nova_compute[253461]: 2025-11-22 04:22:49.098 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:49 compute-0 kernel: tapbd550fd2-d0: entered promiscuous mode
Nov 22 04:22:49 compute-0 NetworkManager[48916]: <info>  [1763785369.1016] manager: (tapbd550fd2-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:49.101 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbd550fd2-d0, col_values=(('external_ids', {'iface-id': '1cfe38fd-445a-4e2d-9728-1f7ee0085422'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:22:49 compute-0 nova_compute[253461]: 2025-11-22 04:22:49.102 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:49 compute-0 ovn_controller[152691]: 2025-11-22T04:22:49Z|00311|binding|INFO|Releasing lport 1cfe38fd-445a-4e2d-9728-1f7ee0085422 from this chassis (sb_readonly=0)
Nov 22 04:22:49 compute-0 nova_compute[253461]: 2025-11-22 04:22:49.118 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:49.118 162689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:49.120 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[e6deb6d7-b03b-48f2-ad06-97620544a978]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:49.121 162689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: global
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     log         /dev/log local0 debug
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     log-tag     haproxy-metadata-proxy-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     user        root
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     group       root
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     maxconn     1024
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     pidfile     /var/lib/neutron/external/pids/bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07.pid.haproxy
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     daemon
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: defaults
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     log global
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     mode http
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     option httplog
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     option dontlognull
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     option http-server-close
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     option forwardfor
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     retries                 3
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     timeout http-request    30s
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     timeout connect         30s
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     timeout client          32s
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     timeout server          32s
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     timeout http-keep-alive 30s
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: listen listener
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     bind 169.254.169.254:80
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     server metadata /var/lib/neutron/metadata_proxy
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:     http-request add-header X-OVN-Network-ID bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 22 04:22:49 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:49.122 162689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'env', 'PROCESS_TAG=haproxy-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
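[editor's note] The rootwrap command above boils down to running haproxy inside the metadata namespace against the config the agent just rendered (the dump ending at driver.py:107). A minimal subprocess sketch of that launch, with rootwrap and the PROCESS_TAG environment wrapper elided:

```python
# Sketch of the haproxy launch above, minus rootwrap: exec haproxy inside
# the ovnmeta- namespace against the rendered proxy config.
import subprocess

net = "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07"
subprocess.run(
    ["ip", "netns", "exec", f"ovnmeta-{net}",
     "haproxy", "-f",
     f"/var/lib/neutron/ovn-metadata-proxy/{net}.conf"],
    check=True)
```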
Nov 22 04:22:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:22:49 compute-0 nova_compute[253461]: 2025-11-22 04:22:49.335 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:49 compute-0 ceph-mon[75011]: pgmap v2260: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 0 B/s wr, 0 op/s
Nov 22 04:22:49 compute-0 podman[308564]: 2025-11-22 04:22:49.489357709 +0000 UTC m=+0.044493735 container create fe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:22:49 compute-0 podman[308564]: 2025-11-22 04:22:49.464857663 +0000 UTC m=+0.019993659 image pull 1579eb8af8e4bc6d332a87a6e64650b1ebece1e7fc815782917ed57a649216c9 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 22 04:22:49 compute-0 systemd[1]: Started libpod-conmon-fe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b.scope.
Nov 22 04:22:49 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:22:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99356188f33f2fba71afe77fcd013cb0a42f7d5559e3bb99fea33a2bfb4b8f23/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 22 04:22:49 compute-0 podman[308564]: 2025-11-22 04:22:49.629209687 +0000 UTC m=+0.184345723 container init fe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 04:22:49 compute-0 podman[308564]: 2025-11-22 04:22:49.636867024 +0000 UTC m=+0.192003030 container start fe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 22 04:22:49 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[308579]: [NOTICE]   (308583) : New worker (308585) forked
Nov 22 04:22:49 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[308579]: [NOTICE]   (308583) : Loading success.
Nov 22 04:22:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 511 B/s wr, 12 op/s
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.076 253465 DEBUG nova.compute.manager [req-97c8c7c5-f15e-42a4-a587-66c727f2d7b2 req-4c51d067-d09a-4bd4-a649-c13c49865493 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Received event network-vif-plugged-36300144-f97d-4cb4-a77a-621c764db174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.077 253465 DEBUG oslo_concurrency.lockutils [req-97c8c7c5-f15e-42a4-a587-66c727f2d7b2 req-4c51d067-d09a-4bd4-a649-c13c49865493 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.078 253465 DEBUG oslo_concurrency.lockutils [req-97c8c7c5-f15e-42a4-a587-66c727f2d7b2 req-4c51d067-d09a-4bd4-a649-c13c49865493 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.078 253465 DEBUG oslo_concurrency.lockutils [req-97c8c7c5-f15e-42a4-a587-66c727f2d7b2 req-4c51d067-d09a-4bd4-a649-c13c49865493 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.078 253465 DEBUG nova.compute.manager [req-97c8c7c5-f15e-42a4-a587-66c727f2d7b2 req-4c51d067-d09a-4bd4-a649-c13c49865493 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] No waiting events found dispatching network-vif-plugged-36300144-f97d-4cb4-a77a-621c764db174 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.079 253465 WARNING nova.compute.manager [req-97c8c7c5-f15e-42a4-a587-66c727f2d7b2 req-4c51d067-d09a-4bd4-a649-c13c49865493 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Received unexpected event network-vif-plugged-36300144-f97d-4cb4-a77a-621c764db174 for instance with vm_state building and task_state spawning.
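[editor's note] The WARNING above is benign: the first network-vif-plugged event at 04:22:48 consumed the registered waiter, so this second delivery, triggered when OVN finished binding the port, finds nothing to pop. A toy sketch of the prepare/pop pattern that produces exactly this message (simplified; Nova's real code lives in nova.compute.manager.InstanceEvents):

```python
# Toy sketch of the prepare/pop event pattern seen in this log: a second
# delivery of the same event finds no registered waiter and is logged as
# unexpected, matching the WARNING above.
import threading

_waiters = {}  # (instance_uuid, event_name) -> threading.Event

def prepare(instance_uuid, event_name):
    ev = threading.Event()
    _waiters[(instance_uuid, event_name)] = ev
    return ev

def deliver(instance_uuid, event_name):
    ev = _waiters.pop((instance_uuid, event_name), None)
    if ev is None:
        print(f"WARNING: unexpected event {event_name} for {instance_uuid}")
    else:
        ev.set()

waiter = prepare("29f4ebae", "network-vif-plugged")
deliver("29f4ebae", "network-vif-plugged")   # wakes the waiter
deliver("29f4ebae", "network-vif-plugged")   # second delivery -> WARNING
waiter.wait(timeout=1)
```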
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.757 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785371.7574608, 29f4ebae-3730-44d8-99f0-08b20268863c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.758 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] VM Started (Lifecycle Event)
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.760 253465 DEBUG nova.compute.manager [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.764 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.769 253465 INFO nova.virt.libvirt.driver [-] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Instance spawned successfully.
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.770 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.786 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.793 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.798 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.798 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.799 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.799 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.800 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.800 253465 DEBUG nova.virt.libvirt.driver [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.828 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.828 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785371.7576003, 29f4ebae-3730-44d8-99f0-08b20268863c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.828 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] VM Paused (Lifecycle Event)
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.854 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.858 253465 DEBUG nova.virt.driver [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] Emitting event <LifecycleEvent: 1763785371.7637177, 29f4ebae-3730-44d8-99f0-08b20268863c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.859 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] VM Resumed (Lifecycle Event)
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.864 253465 INFO nova.compute.manager [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Took 11.24 seconds to spawn the instance on the hypervisor.
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.865 253465 DEBUG nova.compute.manager [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.878 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.881 253465 DEBUG nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.905 253465 INFO nova.compute.manager [None req-e3b867c3-b4c4-415d-adab-c523858b9077 - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.943 253465 INFO nova.compute.manager [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Took 13.36 seconds to build instance.
Nov 22 04:22:51 compute-0 nova_compute[253461]: 2025-11-22 04:22:51.963 253465 DEBUG oslo_concurrency.lockutils [None req-a8afd26e-645e-4d7d-8282-71e9f43be918 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:22:52 compute-0 ceph-mon[75011]: pgmap v2261: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 511 B/s wr, 12 op/s
Nov 22 04:22:52 compute-0 nova_compute[253461]: 2025-11-22 04:22:52.701 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 13 KiB/s wr, 12 op/s
Nov 22 04:22:54 compute-0 ceph-mon[75011]: pgmap v2262: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 9.4 KiB/s rd, 13 KiB/s wr, 12 op/s
Nov 22 04:22:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:22:54 compute-0 nova_compute[253461]: 2025-11-22 04:22:54.336 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 207 KiB/s rd, 13 KiB/s wr, 20 op/s
Nov 22 04:22:55 compute-0 podman[308600]: 2025-11-22 04:22:55.41133904 +0000 UTC m=+0.077212150 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 22 04:22:55 compute-0 podman[308601]: 2025-11-22 04:22:55.468555369 +0000 UTC m=+0.137836360 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_controller, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 22 04:22:56 compute-0 ceph-mon[75011]: pgmap v2263: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 207 KiB/s rd, 13 KiB/s wr, 20 op/s
Nov 22 04:22:56 compute-0 nova_compute[253461]: 2025-11-22 04:22:56.360 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:56.360 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:a0:3a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '9e:5d:40:6b:64:71'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:22:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:56.363 162689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 22 04:22:56 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:22:56.365 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7d76f7df-fc3b-449d-b505-65b8b0ef9c3a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:22:56 compute-0 nova_compute[253461]: 2025-11-22 04:22:56.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:22:56 compute-0 nova_compute[253461]: 2025-11-22 04:22:56.450 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:56 compute-0 nova_compute[253461]: 2025-11-22 04:22:56.451 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:56 compute-0 nova_compute[253461]: 2025-11-22 04:22:56.451 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:22:56 compute-0 nova_compute[253461]: 2025-11-22 04:22:56.451 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:22:56 compute-0 nova_compute[253461]: 2025-11-22 04:22:56.452 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:22:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:22:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2172462459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:22:56 compute-0 nova_compute[253461]: 2025-11-22 04:22:56.944 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:22:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.009 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.009 253465 DEBUG nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Nov 22 04:22:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2172462459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.192 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.193 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4154MB free_disk=59.98813247680664GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.193 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.193 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.276 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Instance 29f4ebae-3730-44d8-99f0-08b20268863c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.276 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.277 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.309 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.704 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:22:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2664561246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.764 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.771 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.789 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.818 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.819 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.893 253465 DEBUG nova.compute.manager [req-326c6e83-fc91-47d8-9173-5c3d05cd5d75 req-0bd2b081-1a80-4c9c-8bb9-381d9ecc386f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Received event network-changed-36300144-f97d-4cb4-a77a-621c764db174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.894 253465 DEBUG nova.compute.manager [req-326c6e83-fc91-47d8-9173-5c3d05cd5d75 req-0bd2b081-1a80-4c9c-8bb9-381d9ecc386f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Refreshing instance network info cache due to event network-changed-36300144-f97d-4cb4-a77a-621c764db174. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.894 253465 DEBUG oslo_concurrency.lockutils [req-326c6e83-fc91-47d8-9173-5c3d05cd5d75 req-0bd2b081-1a80-4c9c-8bb9-381d9ecc386f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "refresh_cache-29f4ebae-3730-44d8-99f0-08b20268863c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.895 253465 DEBUG oslo_concurrency.lockutils [req-326c6e83-fc91-47d8-9173-5c3d05cd5d75 req-0bd2b081-1a80-4c9c-8bb9-381d9ecc386f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquired lock "refresh_cache-29f4ebae-3730-44d8-99f0-08b20268863c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:22:57 compute-0 nova_compute[253461]: 2025-11-22 04:22:57.895 253465 DEBUG nova.network.neutron [req-326c6e83-fc91-47d8-9173-5c3d05cd5d75 req-0bd2b081-1a80-4c9c-8bb9-381d9ecc386f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Refreshing network info cache for port 36300144-f97d-4cb4-a77a-621c764db174 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 22 04:22:58 compute-0 ceph-mon[75011]: pgmap v2264: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 22 04:22:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2664561246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:22:58 compute-0 nova_compute[253461]: 2025-11-22 04:22:58.819 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:22:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 22 04:22:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:22:59 compute-0 nova_compute[253461]: 2025-11-22 04:22:59.338 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:22:59 compute-0 nova_compute[253461]: 2025-11-22 04:22:59.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:22:59 compute-0 nova_compute[253461]: 2025-11-22 04:22:59.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:22:59 compute-0 nova_compute[253461]: 2025-11-22 04:22:59.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:22:59 compute-0 nova_compute[253461]: 2025-11-22 04:22:59.686 253465 DEBUG nova.network.neutron [req-326c6e83-fc91-47d8-9173-5c3d05cd5d75 req-0bd2b081-1a80-4c9c-8bb9-381d9ecc386f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Updated VIF entry in instance network info cache for port 36300144-f97d-4cb4-a77a-621c764db174. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 22 04:22:59 compute-0 nova_compute[253461]: 2025-11-22 04:22:59.687 253465 DEBUG nova.network.neutron [req-326c6e83-fc91-47d8-9173-5c3d05cd5d75 req-0bd2b081-1a80-4c9c-8bb9-381d9ecc386f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Updating instance_info_cache with network_info: [{"id": "36300144-f97d-4cb4-a77a-621c764db174", "address": "fa:16:3e:02:58:43", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36300144-f9", "ovs_interfaceid": "36300144-f97d-4cb4-a77a-621c764db174", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:22:59 compute-0 nova_compute[253461]: 2025-11-22 04:22:59.714 253465 DEBUG oslo_concurrency.lockutils [req-326c6e83-fc91-47d8-9173-5c3d05cd5d75 req-0bd2b081-1a80-4c9c-8bb9-381d9ecc386f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Releasing lock "refresh_cache-29f4ebae-3730-44d8-99f0-08b20268863c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:23:00 compute-0 ceph-mon[75011]: pgmap v2265: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 22 04:23:00 compute-0 nova_compute[253461]: 2025-11-22 04:23:00.426 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:23:00 compute-0 nova_compute[253461]: 2025-11-22 04:23:00.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:23:00 compute-0 nova_compute[253461]: 2025-11-22 04:23:00.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:23:00 compute-0 nova_compute[253461]: 2025-11-22 04:23:00.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:23:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:23:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1369847327' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:23:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:23:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1369847327' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:23:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 22 04:23:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1369847327' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:23:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/1369847327' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:23:01 compute-0 nova_compute[253461]: 2025-11-22 04:23:01.375 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "refresh_cache-29f4ebae-3730-44d8-99f0-08b20268863c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 22 04:23:01 compute-0 nova_compute[253461]: 2025-11-22 04:23:01.376 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquired lock "refresh_cache-29f4ebae-3730-44d8-99f0-08b20268863c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 22 04:23:01 compute-0 nova_compute[253461]: 2025-11-22 04:23:01.376 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 22 04:23:01 compute-0 nova_compute[253461]: 2025-11-22 04:23:01.377 253465 DEBUG nova.objects.instance [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lazy-loading 'info_cache' on Instance uuid 29f4ebae-3730-44d8-99f0-08b20268863c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:23:02 compute-0 ceph-mon[75011]: pgmap v2266: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Nov 22 04:23:02 compute-0 nova_compute[253461]: 2025-11-22 04:23:02.706 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 65 op/s
Nov 22 04:23:04 compute-0 ceph-mon[75011]: pgmap v2267: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 65 op/s
Nov 22 04:23:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:23:04 compute-0 nova_compute[253461]: 2025-11-22 04:23:04.341 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:04 compute-0 ovn_controller[152691]: 2025-11-22T04:23:04Z|00074|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.5
Nov 22 04:23:04 compute-0 ovn_controller[152691]: 2025-11-22T04:23:04Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:02:58:43 10.100.0.5
Nov 22 04:23:04 compute-0 nova_compute[253461]: 2025-11-22 04:23:04.675 253465 DEBUG nova.network.neutron [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Updating instance_info_cache with network_info: [{"id": "36300144-f97d-4cb4-a77a-621c764db174", "address": "fa:16:3e:02:58:43", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36300144-f9", "ovs_interfaceid": "36300144-f97d-4cb4-a77a-621c764db174", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:23:04 compute-0 nova_compute[253461]: 2025-11-22 04:23:04.691 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Releasing lock "refresh_cache-29f4ebae-3730-44d8-99f0-08b20268863c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 22 04:23:04 compute-0 nova_compute[253461]: 2025-11-22 04:23:04.691 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 22 04:23:04 compute-0 nova_compute[253461]: 2025-11-22 04:23:04.692 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:23:04 compute-0 nova_compute[253461]: 2025-11-22 04:23:04.693 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:23:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 511 B/s wr, 71 op/s
Nov 22 04:23:06 compute-0 ceph-mon[75011]: pgmap v2268: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 511 B/s wr, 71 op/s
Nov 22 04:23:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:23:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:23:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:23:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:23:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:23:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:23:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 283 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 96 op/s
Nov 22 04:23:07 compute-0 nova_compute[253461]: 2025-11-22 04:23:07.709 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:08 compute-0 ceph-mon[75011]: pgmap v2269: 305 pgs: 305 active+clean; 283 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 96 op/s
Nov 22 04:23:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 283 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.0 MiB/s wr, 40 op/s
Nov 22 04:23:09 compute-0 ovn_controller[152691]: 2025-11-22T04:23:09Z|00076|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.4 does not match offer 10.100.0.5
Nov 22 04:23:09 compute-0 ovn_controller[152691]: 2025-11-22T04:23:09Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:02:58:43 10.100.0.5
Nov 22 04:23:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:23:09 compute-0 nova_compute[253461]: 2025-11-22 04:23:09.375 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:09 compute-0 nova_compute[253461]: 2025-11-22 04:23:09.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:23:09 compute-0 nova_compute[253461]: 2025-11-22 04:23:09.450 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:23:09 compute-0 ovn_controller[152691]: 2025-11-22T04:23:09Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:58:43 10.100.0.5
Nov 22 04:23:09 compute-0 ovn_controller[152691]: 2025-11-22T04:23:09Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:58:43 10.100.0.5
Nov 22 04:23:10 compute-0 ceph-mon[75011]: pgmap v2270: 305 pgs: 305 active+clean; 283 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.0 MiB/s wr, 40 op/s
Nov 22 04:23:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 287 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Nov 22 04:23:11 compute-0 ceph-mon[75011]: pgmap v2271: 305 pgs: 305 active+clean; 287 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Nov 22 04:23:12 compute-0 podman[308686]: 2025-11-22 04:23:12.453482575 +0000 UTC m=+0.114370151 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 04:23:12 compute-0 nova_compute[253461]: 2025-11-22 04:23:12.711 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 287 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Nov 22 04:23:14 compute-0 ceph-mon[75011]: pgmap v2272: 305 pgs: 305 active+clean; 287 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Nov 22 04:23:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:23:14 compute-0 nova_compute[253461]: 2025-11-22 04:23:14.379 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 287 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Nov 22 04:23:16 compute-0 ceph-mon[75011]: pgmap v2273: 305 pgs: 305 active+clean; 287 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 50 op/s
Nov 22 04:23:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 287 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.3 MiB/s wr, 43 op/s
Nov 22 04:23:17 compute-0 nova_compute[253461]: 2025-11-22 04:23:17.714 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:18 compute-0 ceph-mon[75011]: pgmap v2274: 305 pgs: 305 active+clean; 287 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.3 MiB/s wr, 43 op/s
Nov 22 04:23:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 287 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 490 KiB/s rd, 357 KiB/s wr, 9 op/s
Nov 22 04:23:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:23:19 compute-0 nova_compute[253461]: 2025-11-22 04:23:19.381 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:20 compute-0 ceph-mon[75011]: pgmap v2275: 305 pgs: 305 active+clean; 287 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 490 KiB/s rd, 357 KiB/s wr, 9 op/s
Nov 22 04:23:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 832 KiB/s rd, 793 KiB/s wr, 14 op/s
Nov 22 04:23:22 compute-0 ceph-mon[75011]: pgmap v2276: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 832 KiB/s rd, 793 KiB/s wr, 14 op/s
Nov 22 04:23:22 compute-0 nova_compute[253461]: 2025-11-22 04:23:22.716 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 771 KiB/s rd, 440 KiB/s wr, 4 op/s
Nov 22 04:23:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:23.042 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:23:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:23.042 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:23:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:23.043 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:23:24 compute-0 ceph-mon[75011]: pgmap v2277: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 771 KiB/s rd, 440 KiB/s wr, 4 op/s
Nov 22 04:23:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:23:24 compute-0 nova_compute[253461]: 2025-11-22 04:23:24.383 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 4 op/s
Nov 22 04:23:26 compute-0 ceph-mon[75011]: pgmap v2278: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 4 op/s
Nov 22 04:23:26 compute-0 podman[308712]: 2025-11-22 04:23:26.381669812 +0000 UTC m=+0.059587591 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 22 04:23:26 compute-0 podman[308713]: 2025-11-22 04:23:26.41800461 +0000 UTC m=+0.095552571 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251118)
Nov 22 04:23:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 4 op/s
Nov 22 04:23:27 compute-0 ceph-mon[75011]: pgmap v2279: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 440 KiB/s wr, 4 op/s
Nov 22 04:23:27 compute-0 ovn_controller[152691]: 2025-11-22T04:23:27Z|00312|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 22 04:23:27 compute-0 nova_compute[253461]: 2025-11-22 04:23:27.717 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:28 compute-0 sshd-session[308710]: Invalid user support from 27.79.46.85 port 57280
Nov 22 04:23:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 437 KiB/s wr, 4 op/s
Nov 22 04:23:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:23:29 compute-0 nova_compute[253461]: 2025-11-22 04:23:29.418 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:29 compute-0 nova_compute[253461]: 2025-11-22 04:23:29.986 253465 DEBUG oslo_concurrency.lockutils [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "29f4ebae-3730-44d8-99f0-08b20268863c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:23:29 compute-0 nova_compute[253461]: 2025-11-22 04:23:29.987 253465 DEBUG oslo_concurrency.lockutils [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:23:29 compute-0 nova_compute[253461]: 2025-11-22 04:23:29.987 253465 DEBUG oslo_concurrency.lockutils [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:23:29 compute-0 nova_compute[253461]: 2025-11-22 04:23:29.987 253465 DEBUG oslo_concurrency.lockutils [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:23:29 compute-0 nova_compute[253461]: 2025-11-22 04:23:29.987 253465 DEBUG oslo_concurrency.lockutils [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:23:29 compute-0 nova_compute[253461]: 2025-11-22 04:23:29.989 253465 INFO nova.compute.manager [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Terminating instance
Nov 22 04:23:29 compute-0 nova_compute[253461]: 2025-11-22 04:23:29.990 253465 DEBUG nova.compute.manager [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 22 04:23:30 compute-0 ceph-mon[75011]: pgmap v2280: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 430 KiB/s rd, 437 KiB/s wr, 4 op/s
Nov 22 04:23:30 compute-0 kernel: tap36300144-f9 (unregistering): left promiscuous mode
Nov 22 04:23:30 compute-0 NetworkManager[48916]: <info>  [1763785410.0602] device (tap36300144-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 22 04:23:30 compute-0 ovn_controller[152691]: 2025-11-22T04:23:30Z|00313|binding|INFO|Releasing lport 36300144-f97d-4cb4-a77a-621c764db174 from this chassis (sb_readonly=0)
Nov 22 04:23:30 compute-0 ovn_controller[152691]: 2025-11-22T04:23:30Z|00314|binding|INFO|Setting lport 36300144-f97d-4cb4-a77a-621c764db174 down in Southbound
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.071 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:30 compute-0 ovn_controller[152691]: 2025-11-22T04:23:30Z|00315|binding|INFO|Removing iface tap36300144-f9 ovn-installed in OVS
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.074 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.081 162689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:58:43 10.100.0.5'], port_security=['fa:16:3e:02:58:43 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '29f4ebae-3730-44d8-99f0-08b20268863c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c9d01ebd7e4f4251b66172a246b8a08f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '395e72d5-eb27-430c-9253-638d58d94891', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.198'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5d096b3-2344-4434-a488-92084cb46974, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>], logical_port=36300144-f97d-4cb4-a77a-621c764db174) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f48cd3a9f40>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.082 162689 INFO neutron.agent.ovn.metadata.agent [-] Port 36300144-f97d-4cb4-a77a-621c764db174 in datapath bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 unbound from our chassis
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.084 162689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.085 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[c4cae523-254b-4d53-8834-f0e23152619c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.087 162689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 namespace which is not needed anymore
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.095 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:30 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Nov 22 04:23:30 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Consumed 17.103s CPU time.
Nov 22 04:23:30 compute-0 systemd-machined[215728]: Machine qemu-30-instance-0000001e terminated.
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.222 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.232 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:30 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[308579]: [NOTICE]   (308583) : haproxy version is 2.8.14-c23fe91
Nov 22 04:23:30 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[308579]: [NOTICE]   (308583) : path to executable is /usr/sbin/haproxy
Nov 22 04:23:30 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[308579]: [WARNING]  (308583) : Exiting Master process...
Nov 22 04:23:30 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[308579]: [WARNING]  (308583) : Exiting Master process...
Nov 22 04:23:30 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[308579]: [ALERT]    (308583) : Current worker (308585) exited with code 143 (Terminated)
Nov 22 04:23:30 compute-0 neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07[308579]: [WARNING]  (308583) : All workers exited. Exiting... (0)
Nov 22 04:23:30 compute-0 systemd[1]: libpod-fe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b.scope: Deactivated successfully.
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.243 253465 INFO nova.virt.libvirt.driver [-] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Instance destroyed successfully.
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.244 253465 DEBUG nova.objects.instance [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lazy-loading 'resources' on Instance uuid 29f4ebae-3730-44d8-99f0-08b20268863c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 22 04:23:30 compute-0 podman[308776]: 2025-11-22 04:23:30.249731686 +0000 UTC m=+0.058132584 container died fe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.258 253465 DEBUG nova.virt.libvirt.vif [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-22T04:22:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1287385981',display_name='tempest-TestEncryptedCinderVolumes-server-1287385981',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1287385981',id=30,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBP9UKFMy9ycYw2Hm5en4L27DNwg0er6Lb0HRsD7AiMiSCvtvdx7izIV74D1MmE18lnPG59cKz/vp+1MZkJaUaik+lgJpk8hBjE03Y+JB1nMXTfCi52N8aZdJUG/KDhiYrQ==',key_name='tempest-TestEncryptedCinderVolumes-5346949',keypairs=<?>,launch_index=0,launched_at=2025-11-22T04:22:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c9d01ebd7e4f4251b66172a246b8a08f',ramdisk_id='',reservation_id='r-sr01ynko',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-230639986',owner_user_name='tempest-TestEncryptedCinderVolumes-230639986-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-22T04:22:51Z,user_data=None,user_id='26147ad59e2d4763b8edc27d80567b09',uuid=29f4ebae-3730-44d8-99f0-08b20268863c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36300144-f97d-4cb4-a77a-621c764db174", "address": "fa:16:3e:02:58:43", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36300144-f9", "ovs_interfaceid": "36300144-f97d-4cb4-a77a-621c764db174", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.258 253465 DEBUG nova.network.os_vif_util [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converting VIF {"id": "36300144-f97d-4cb4-a77a-621c764db174", "address": "fa:16:3e:02:58:43", "network": {"id": "bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-501495820-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c9d01ebd7e4f4251b66172a246b8a08f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36300144-f9", "ovs_interfaceid": "36300144-f97d-4cb4-a77a-621c764db174", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.259 253465 DEBUG nova.network.os_vif_util [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:58:43,bridge_name='br-int',has_traffic_filtering=True,id=36300144-f97d-4cb4-a77a-621c764db174,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36300144-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.259 253465 DEBUG os_vif [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:58:43,bridge_name='br-int',has_traffic_filtering=True,id=36300144-f97d-4cb4-a77a-621c764db174,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36300144-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.262 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.262 253465 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36300144-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.264 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.266 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.268 253465 INFO os_vif [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:58:43,bridge_name='br-int',has_traffic_filtering=True,id=36300144-f97d-4cb4-a77a-621c764db174,network=Network(bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36300144-f9')
Nov 22 04:23:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b-userdata-shm.mount: Deactivated successfully.
Nov 22 04:23:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-99356188f33f2fba71afe77fcd013cb0a42f7d5559e3bb99fea33a2bfb4b8f23-merged.mount: Deactivated successfully.
Nov 22 04:23:30 compute-0 podman[308776]: 2025-11-22 04:23:30.312098343 +0000 UTC m=+0.120499232 container cleanup fe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:23:30 compute-0 systemd[1]: libpod-conmon-fe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b.scope: Deactivated successfully.
Nov 22 04:23:30 compute-0 podman[308833]: 2025-11-22 04:23:30.396815728 +0000 UTC m=+0.051469131 container remove fe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251118)
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.402 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8920b6ca-c50d-4e3b-bb60-06cb377032f3]: (4, ('Sat Nov 22 04:23:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 (fe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b)\nfe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b\nSat Nov 22 04:23:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 (fe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b)\nfe3f3e330656ac9fc32b58c3e747e8648f69a8a9684d381d2de01baaaf9f284b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.403 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[fa4176c6-3601-45d2-aa92-579a333a3522]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.404 162689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbd550fd2-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.407 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:30 compute-0 kernel: tapbd550fd2-d0: left promiscuous mode
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.419 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.423 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2d77fee9-98a5-4c78-89cb-cd61e4bfae2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.438 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[d4fb470b-c73e-4d8e-bd66-36c557241a31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.439 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc6de9d-755d-4018-816d-e96f2b2a56b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.454 261050 DEBUG oslo.privsep.daemon [-] privsep: reply[2467604d-c779-4baa-aed9-f5bc10cbe47d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 562147, 'reachable_time': 42860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308848, 'error': None, 'target': 'ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.456 162806 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bd550fd2-d0e4-4f32-84d1-b7eca9fc7e07 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 22 04:23:30 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:23:30.456 162806 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f60fd8-e014-4264-a168-1e6453d0a5c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 22 04:23:30 compute-0 systemd[1]: run-netns-ovnmeta\x2dbd550fd2\x2dd0e4\x2d4f32\x2d84d1\x2db7eca9fc7e07.mount: Deactivated successfully.
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.475 253465 INFO nova.virt.libvirt.driver [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Deleting instance files /var/lib/nova/instances/29f4ebae-3730-44d8-99f0-08b20268863c_del
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.476 253465 INFO nova.virt.libvirt.driver [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Deletion of /var/lib/nova/instances/29f4ebae-3730-44d8-99f0-08b20268863c_del complete
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.557 253465 INFO nova.compute.manager [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Took 0.57 seconds to destroy the instance on the hypervisor.
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.558 253465 DEBUG oslo.service.loopingcall [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.558 253465 DEBUG nova.compute.manager [-] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.559 253465 DEBUG nova.network.neutron [-] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.723 253465 DEBUG nova.compute.manager [req-7aa9e687-6b59-417f-a633-eba3e1822ee6 req-6d366d17-bbd0-46e3-bc6b-694936339956 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Received event network-vif-unplugged-36300144-f97d-4cb4-a77a-621c764db174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.723 253465 DEBUG oslo_concurrency.lockutils [req-7aa9e687-6b59-417f-a633-eba3e1822ee6 req-6d366d17-bbd0-46e3-bc6b-694936339956 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.724 253465 DEBUG oslo_concurrency.lockutils [req-7aa9e687-6b59-417f-a633-eba3e1822ee6 req-6d366d17-bbd0-46e3-bc6b-694936339956 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.724 253465 DEBUG oslo_concurrency.lockutils [req-7aa9e687-6b59-417f-a633-eba3e1822ee6 req-6d366d17-bbd0-46e3-bc6b-694936339956 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.725 253465 DEBUG nova.compute.manager [req-7aa9e687-6b59-417f-a633-eba3e1822ee6 req-6d366d17-bbd0-46e3-bc6b-694936339956 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] No waiting events found dispatching network-vif-unplugged-36300144-f97d-4cb4-a77a-621c764db174 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:23:30 compute-0 nova_compute[253461]: 2025-11-22 04:23:30.725 253465 DEBUG nova.compute.manager [req-7aa9e687-6b59-417f-a633-eba3e1822ee6 req-6d366d17-bbd0-46e3-bc6b-694936339956 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Received event network-vif-unplugged-36300144-f97d-4cb4-a77a-621c764db174 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Nov 22 04:23:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 436 KiB/s rd, 437 KiB/s wr, 10 op/s
Nov 22 04:23:31 compute-0 nova_compute[253461]: 2025-11-22 04:23:31.700 253465 DEBUG nova.network.neutron [-] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 22 04:23:31 compute-0 nova_compute[253461]: 2025-11-22 04:23:31.718 253465 INFO nova.compute.manager [-] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Took 1.16 seconds to deallocate network for instance.
Nov 22 04:23:31 compute-0 nova_compute[253461]: 2025-11-22 04:23:31.835 253465 DEBUG nova.compute.manager [req-74e88334-8882-4fc3-902a-820060bb9815 req-d291043b-f35a-4a75-80cf-2476c114ea8f f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Received event network-vif-deleted-36300144-f97d-4cb4-a77a-621c764db174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:23:31 compute-0 sshd-session[308707]: Connection closed by authenticating user root 27.79.46.85 port 57268 [preauth]
Nov 22 04:23:31 compute-0 nova_compute[253461]: 2025-11-22 04:23:31.978 253465 INFO nova.compute.manager [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Took 0.26 seconds to detach 1 volumes for instance.
Nov 22 04:23:32 compute-0 ceph-mon[75011]: pgmap v2281: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 436 KiB/s rd, 437 KiB/s wr, 10 op/s
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.040 253465 DEBUG oslo_concurrency.lockutils [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.041 253465 DEBUG oslo_concurrency.lockutils [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.094 253465 DEBUG oslo_concurrency.processutils [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:23:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:23:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3951532176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.514 253465 DEBUG oslo_concurrency.processutils [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.527 253465 DEBUG nova.compute.provider_tree [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.558 253465 DEBUG nova.scheduler.client.report [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.592 253465 DEBUG oslo_concurrency.lockutils [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.551s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.639 253465 INFO nova.scheduler.client.report [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Deleted allocations for instance 29f4ebae-3730-44d8-99f0-08b20268863c
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.743 253465 DEBUG oslo_concurrency.lockutils [None req-cb2f2216-792f-41b7-9aee-3125977923db 26147ad59e2d4763b8edc27d80567b09 c9d01ebd7e4f4251b66172a246b8a08f - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.891 253465 DEBUG nova.compute.manager [req-a3eb2887-0f6f-44a7-bf3e-affb7d425f3e req-3a90e1ab-407f-467c-b949-9f709e939142 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Received event network-vif-plugged-36300144-f97d-4cb4-a77a-621c764db174 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.891 253465 DEBUG oslo_concurrency.lockutils [req-a3eb2887-0f6f-44a7-bf3e-affb7d425f3e req-3a90e1ab-407f-467c-b949-9f709e939142 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Acquiring lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.892 253465 DEBUG oslo_concurrency.lockutils [req-a3eb2887-0f6f-44a7-bf3e-affb7d425f3e req-3a90e1ab-407f-467c-b949-9f709e939142 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.892 253465 DEBUG oslo_concurrency.lockutils [req-a3eb2887-0f6f-44a7-bf3e-affb7d425f3e req-3a90e1ab-407f-467c-b949-9f709e939142 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] Lock "29f4ebae-3730-44d8-99f0-08b20268863c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.892 253465 DEBUG nova.compute.manager [req-a3eb2887-0f6f-44a7-bf3e-affb7d425f3e req-3a90e1ab-407f-467c-b949-9f709e939142 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] No waiting events found dispatching network-vif-plugged-36300144-f97d-4cb4-a77a-621c764db174 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 22 04:23:32 compute-0 nova_compute[253461]: 2025-11-22 04:23:32.893 253465 WARNING nova.compute.manager [req-a3eb2887-0f6f-44a7-bf3e-affb7d425f3e req-3a90e1ab-407f-467c-b949-9f709e939142 f0ced811f3a342bca2a3a0d1d2fc1f37 ecc7d72255774014bdfa2a9e64af1ec3 - - default default] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Received unexpected event network-vif-plugged-36300144-f97d-4cb4-a77a-621c764db174 for instance with vm_state deleted and task_state None.
Nov 22 04:23:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 0 B/s wr, 7 op/s
Nov 22 04:23:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3951532176' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:23:34 compute-0 ceph-mon[75011]: pgmap v2282: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 0 B/s wr, 7 op/s
Nov 22 04:23:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:23:34 compute-0 nova_compute[253461]: 2025-11-22 04:23:34.421 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:23:34 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3803011990' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:23:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:23:34 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3803011990' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:23:34 compute-0 sshd-session[308710]: Connection closed by invalid user support 27.79.46.85 port 57280 [preauth]
Nov 22 04:23:34 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 217 KiB/s rd, 255 B/s wr, 13 op/s
Nov 22 04:23:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3803011990' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:23:35 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3803011990' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:23:35 compute-0 nova_compute[253461]: 2025-11-22 04:23:35.310 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:36 compute-0 ceph-mon[75011]: pgmap v2283: 305 pgs: 305 active+clean; 295 MiB data, 675 MiB used, 59 GiB / 60 GiB avail; 217 KiB/s rd, 255 B/s wr, 13 op/s
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:23:36
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'volumes', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.meta']
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:23:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:23:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2662945973' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:23:36 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:23:36 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2662945973' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:23:36 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 279 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 232 KiB/s rd, 1023 B/s wr, 34 op/s
Nov 22 04:23:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2662945973' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:23:37 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/2662945973' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:23:38 compute-0 ceph-mon[75011]: pgmap v2284: 305 pgs: 305 active+clean; 279 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 232 KiB/s rd, 1023 B/s wr, 34 op/s
Nov 22 04:23:38 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 279 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 232 KiB/s rd, 1023 B/s wr, 34 op/s
Nov 22 04:23:39 compute-0 sudo[308872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:23:39 compute-0 sudo[308872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:39 compute-0 sudo[308872]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:23:39 compute-0 sudo[308897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:23:39 compute-0 sudo[308897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:39 compute-0 sudo[308897]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:39 compute-0 ceph-mon[75011]: pgmap v2285: 305 pgs: 305 active+clean; 279 MiB data, 659 MiB used, 59 GiB / 60 GiB avail; 232 KiB/s rd, 1023 B/s wr, 34 op/s
Nov 22 04:23:39 compute-0 sudo[308922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:23:39 compute-0 sudo[308922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:39 compute-0 sudo[308922]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:39 compute-0 nova_compute[253461]: 2025-11-22 04:23:39.422 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:39 compute-0 sudo[308947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:23:39 compute-0 sudo[308947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:39 compute-0 sudo[308947]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:23:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:23:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:23:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:23:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:23:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:23:40 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 7c97af41-f329-4fb2-adf4-532ecc194b4c does not exist
Nov 22 04:23:40 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 13c1cb57-859d-4dd5-98d5-ee4d31dbd0f0 does not exist
Nov 22 04:23:40 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev a4535da6-4f47-41da-933d-1124eaea9ed3 does not exist
Nov 22 04:23:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:23:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:23:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:23:40 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:23:40 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:23:40 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:23:40 compute-0 sudo[309003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:23:40 compute-0 sudo[309003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:40 compute-0 sudo[309003]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:40 compute-0 sudo[309028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:23:40 compute-0 sudo[309028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:40 compute-0 sudo[309028]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:40 compute-0 nova_compute[253461]: 2025-11-22 04:23:40.312 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:40 compute-0 sudo[309053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:23:40 compute-0 sudo[309053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:40 compute-0 sudo[309053]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:23:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:23:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:23:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:23:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:23:40 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:23:40 compute-0 sudo[309078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:23:40 compute-0 sudo[309078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:40 compute-0 podman[309143]: 2025-11-22 04:23:40.778108138 +0000 UTC m=+0.049312559 container create 009e04bcde6b2b30a34a915a27f3f5b03a39b0240af0224b37030a167da03cf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kilby, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Nov 22 04:23:40 compute-0 systemd[1]: Started libpod-conmon-009e04bcde6b2b30a34a915a27f3f5b03a39b0240af0224b37030a167da03cf5.scope.
Nov 22 04:23:40 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:23:40 compute-0 podman[309143]: 2025-11-22 04:23:40.845851675 +0000 UTC m=+0.117056157 container init 009e04bcde6b2b30a34a915a27f3f5b03a39b0240af0224b37030a167da03cf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kilby, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Nov 22 04:23:40 compute-0 podman[309143]: 2025-11-22 04:23:40.75345931 +0000 UTC m=+0.024663731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:23:40 compute-0 podman[309143]: 2025-11-22 04:23:40.852239061 +0000 UTC m=+0.123443462 container start 009e04bcde6b2b30a34a915a27f3f5b03a39b0240af0224b37030a167da03cf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kilby, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Nov 22 04:23:40 compute-0 podman[309143]: 2025-11-22 04:23:40.856084795 +0000 UTC m=+0.127289277 container attach 009e04bcde6b2b30a34a915a27f3f5b03a39b0240af0224b37030a167da03cf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 22 04:23:40 compute-0 dazzling_kilby[309159]: 167 167
Nov 22 04:23:40 compute-0 systemd[1]: libpod-009e04bcde6b2b30a34a915a27f3f5b03a39b0240af0224b37030a167da03cf5.scope: Deactivated successfully.
Nov 22 04:23:40 compute-0 conmon[309159]: conmon 009e04bcde6b2b30a34a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-009e04bcde6b2b30a34a915a27f3f5b03a39b0240af0224b37030a167da03cf5.scope/container/memory.events
Nov 22 04:23:40 compute-0 podman[309143]: 2025-11-22 04:23:40.861352544 +0000 UTC m=+0.132556945 container died 009e04bcde6b2b30a34a915a27f3f5b03a39b0240af0224b37030a167da03cf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kilby, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:23:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb991b684d8dbf5d27ad74c153a1797654cd90e9afeb7bd8d3a6c324145fa336-merged.mount: Deactivated successfully.
Nov 22 04:23:40 compute-0 podman[309143]: 2025-11-22 04:23:40.897403577 +0000 UTC m=+0.168607978 container remove 009e04bcde6b2b30a34a915a27f3f5b03a39b0240af0224b37030a167da03cf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_kilby, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Nov 22 04:23:40 compute-0 systemd[1]: libpod-conmon-009e04bcde6b2b30a34a915a27f3f5b03a39b0240af0224b37030a167da03cf5.scope: Deactivated successfully.
Nov 22 04:23:40 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 1.5 KiB/s wr, 45 op/s
Nov 22 04:23:41 compute-0 podman[309183]: 2025-11-22 04:23:41.072589025 +0000 UTC m=+0.059239878 container create ed29e5783ba683c8f6637a0c4d7d149e4e502d760d7e50ae8553356a1f82684b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Nov 22 04:23:41 compute-0 systemd[1]: Started libpod-conmon-ed29e5783ba683c8f6637a0c4d7d149e4e502d760d7e50ae8553356a1f82684b.scope.
Nov 22 04:23:41 compute-0 podman[309183]: 2025-11-22 04:23:41.042658419 +0000 UTC m=+0.029309262 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:23:41 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5948b46fc8f6a47a0ec62663d7c0ba5d8ee088eb4d4f29d0d6020c3b5e66c95e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5948b46fc8f6a47a0ec62663d7c0ba5d8ee088eb4d4f29d0d6020c3b5e66c95e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5948b46fc8f6a47a0ec62663d7c0ba5d8ee088eb4d4f29d0d6020c3b5e66c95e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5948b46fc8f6a47a0ec62663d7c0ba5d8ee088eb4d4f29d0d6020c3b5e66c95e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:23:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5948b46fc8f6a47a0ec62663d7c0ba5d8ee088eb4d4f29d0d6020c3b5e66c95e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:23:41 compute-0 podman[309183]: 2025-11-22 04:23:41.191299184 +0000 UTC m=+0.177950067 container init ed29e5783ba683c8f6637a0c4d7d149e4e502d760d7e50ae8553356a1f82684b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 04:23:41 compute-0 podman[309183]: 2025-11-22 04:23:41.204626804 +0000 UTC m=+0.191277637 container start ed29e5783ba683c8f6637a0c4d7d149e4e502d760d7e50ae8553356a1f82684b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Nov 22 04:23:41 compute-0 podman[309183]: 2025-11-22 04:23:41.215213796 +0000 UTC m=+0.201864699 container attach ed29e5783ba683c8f6637a0c4d7d149e4e502d760d7e50ae8553356a1f82684b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:23:41 compute-0 ceph-mon[75011]: pgmap v2286: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 240 KiB/s rd, 1.5 KiB/s wr, 45 op/s
Nov 22 04:23:42 compute-0 gracious_chaplygin[309200]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:23:42 compute-0 gracious_chaplygin[309200]: --> relative data size: 1.0
Nov 22 04:23:42 compute-0 gracious_chaplygin[309200]: --> All data devices are unavailable
Nov 22 04:23:42 compute-0 systemd[1]: libpod-ed29e5783ba683c8f6637a0c4d7d149e4e502d760d7e50ae8553356a1f82684b.scope: Deactivated successfully.
Nov 22 04:23:42 compute-0 systemd[1]: libpod-ed29e5783ba683c8f6637a0c4d7d149e4e502d760d7e50ae8553356a1f82684b.scope: Consumed 1.074s CPU time.
Nov 22 04:23:42 compute-0 podman[309183]: 2025-11-22 04:23:42.328992511 +0000 UTC m=+1.315643374 container died ed29e5783ba683c8f6637a0c4d7d149e4e502d760d7e50ae8553356a1f82684b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Nov 22 04:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5948b46fc8f6a47a0ec62663d7c0ba5d8ee088eb4d4f29d0d6020c3b5e66c95e-merged.mount: Deactivated successfully.
Nov 22 04:23:42 compute-0 podman[309183]: 2025-11-22 04:23:42.487973418 +0000 UTC m=+1.474624282 container remove ed29e5783ba683c8f6637a0c4d7d149e4e502d760d7e50ae8553356a1f82684b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_chaplygin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:23:42 compute-0 systemd[1]: libpod-conmon-ed29e5783ba683c8f6637a0c4d7d149e4e502d760d7e50ae8553356a1f82684b.scope: Deactivated successfully.
Nov 22 04:23:42 compute-0 sudo[309078]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:42 compute-0 podman[309242]: 2025-11-22 04:23:42.612208471 +0000 UTC m=+0.069101446 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251118, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 22 04:23:42 compute-0 sudo[309248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:23:42 compute-0 sudo[309248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:42 compute-0 sudo[309248]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:42 compute-0 sudo[309286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:23:42 compute-0 sudo[309286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:42 compute-0 sudo[309286]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:42 compute-0 sudo[309311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:23:42 compute-0 sudo[309311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:42 compute-0 sudo[309311]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:42 compute-0 sudo[309336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:23:42 compute-0 sudo[309336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:42 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.5 KiB/s wr, 39 op/s
Nov 22 04:23:43 compute-0 podman[309402]: 2025-11-22 04:23:43.382203504 +0000 UTC m=+0.094248929 container create 81d8f5560645d515b93fa42f457dbaf13d73aad50890a176f4f01e666dc85d61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Nov 22 04:23:43 compute-0 podman[309402]: 2025-11-22 04:23:43.320783133 +0000 UTC m=+0.032828628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:23:43 compute-0 systemd[1]: Started libpod-conmon-81d8f5560645d515b93fa42f457dbaf13d73aad50890a176f4f01e666dc85d61.scope.
Nov 22 04:23:43 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:23:43 compute-0 podman[309402]: 2025-11-22 04:23:43.531913924 +0000 UTC m=+0.243959369 container init 81d8f5560645d515b93fa42f457dbaf13d73aad50890a176f4f01e666dc85d61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Nov 22 04:23:43 compute-0 podman[309402]: 2025-11-22 04:23:43.539334181 +0000 UTC m=+0.251379596 container start 81d8f5560645d515b93fa42f457dbaf13d73aad50890a176f4f01e666dc85d61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Nov 22 04:23:43 compute-0 wonderful_cray[309418]: 167 167
Nov 22 04:23:43 compute-0 systemd[1]: libpod-81d8f5560645d515b93fa42f457dbaf13d73aad50890a176f4f01e666dc85d61.scope: Deactivated successfully.
Nov 22 04:23:43 compute-0 podman[309402]: 2025-11-22 04:23:43.548957386 +0000 UTC m=+0.261002872 container attach 81d8f5560645d515b93fa42f457dbaf13d73aad50890a176f4f01e666dc85d61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Nov 22 04:23:43 compute-0 podman[309402]: 2025-11-22 04:23:43.549614531 +0000 UTC m=+0.261659946 container died 81d8f5560645d515b93fa42f457dbaf13d73aad50890a176f4f01e666dc85d61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Nov 22 04:23:43 compute-0 nova_compute[253461]: 2025-11-22 04:23:43.627 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-48d5c23d7eb8351118d2c60902f10e839fed9bae548679188f3aa695079bcef2-merged.mount: Deactivated successfully.
Nov 22 04:23:43 compute-0 podman[309402]: 2025-11-22 04:23:43.694913203 +0000 UTC m=+0.406958608 container remove 81d8f5560645d515b93fa42f457dbaf13d73aad50890a176f4f01e666dc85d61 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_cray, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef)
Nov 22 04:23:43 compute-0 systemd[1]: libpod-conmon-81d8f5560645d515b93fa42f457dbaf13d73aad50890a176f4f01e666dc85d61.scope: Deactivated successfully.
Nov 22 04:23:43 compute-0 nova_compute[253461]: 2025-11-22 04:23:43.767 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:43 compute-0 podman[309444]: 2025-11-22 04:23:43.926524763 +0000 UTC m=+0.084469020 container create b1f559d2081faad3ebcfcb06ef57dfd96dee043de3d72b5209617bd8ebafeb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williams, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:23:43 compute-0 podman[309444]: 2025-11-22 04:23:43.886293248 +0000 UTC m=+0.044237485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:23:44 compute-0 systemd[1]: Started libpod-conmon-b1f559d2081faad3ebcfcb06ef57dfd96dee043de3d72b5209617bd8ebafeb4f.scope.
Nov 22 04:23:44 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe099e0cf9957383180bb06c5ae1788dc758203cac1d07afde6b52cf1b83d1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:23:44 compute-0 ceph-mon[75011]: pgmap v2287: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 234 KiB/s rd, 1.5 KiB/s wr, 39 op/s
Nov 22 04:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe099e0cf9957383180bb06c5ae1788dc758203cac1d07afde6b52cf1b83d1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe099e0cf9957383180bb06c5ae1788dc758203cac1d07afde6b52cf1b83d1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:23:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fe099e0cf9957383180bb06c5ae1788dc758203cac1d07afde6b52cf1b83d1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:23:44 compute-0 podman[309444]: 2025-11-22 04:23:44.105576104 +0000 UTC m=+0.263520382 container init b1f559d2081faad3ebcfcb06ef57dfd96dee043de3d72b5209617bd8ebafeb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williams, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:23:44 compute-0 podman[309444]: 2025-11-22 04:23:44.118946158 +0000 UTC m=+0.276890395 container start b1f559d2081faad3ebcfcb06ef57dfd96dee043de3d72b5209617bd8ebafeb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:23:44 compute-0 podman[309444]: 2025-11-22 04:23:44.1281827 +0000 UTC m=+0.286126967 container attach b1f559d2081faad3ebcfcb06ef57dfd96dee043de3d72b5209617bd8ebafeb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williams, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Nov 22 04:23:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.331073) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785424331131, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 1509, "num_deletes": 250, "total_data_size": 2319222, "memory_usage": 2354576, "flush_reason": "Manual Compaction"}
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785424407371, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 2286197, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43949, "largest_seqno": 45457, "table_properties": {"data_size": 2279206, "index_size": 4062, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 13708, "raw_average_key_size": 18, "raw_value_size": 2265186, "raw_average_value_size": 3065, "num_data_blocks": 182, "num_entries": 739, "num_filter_entries": 739, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763785270, "oldest_key_time": 1763785270, "file_creation_time": 1763785424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 76401 microseconds, and 7310 cpu microseconds.
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:23:44 compute-0 nova_compute[253461]: 2025-11-22 04:23:44.465 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.407472) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 2286197 bytes OK
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.407502) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.470185) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.470252) EVENT_LOG_v1 {"time_micros": 1763785424470239, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.470280) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 2312597, prev total WAL file size 2312597, number of live WAL files 2.
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.471628) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(2232KB)], [92(11MB)]
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785424471695, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 14275534, "oldest_snapshot_seqno": -1}
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 7776 keys, 13562179 bytes, temperature: kUnknown
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785424688972, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 13562179, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13502347, "index_size": 39304, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19461, "raw_key_size": 197632, "raw_average_key_size": 25, "raw_value_size": 13355111, "raw_average_value_size": 1717, "num_data_blocks": 1546, "num_entries": 7776, "num_filter_entries": 7776, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763785424, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.689293) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 13562179 bytes
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.698054) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 65.7 rd, 62.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 11.4 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(12.2) write-amplify(5.9) OK, records in: 8292, records dropped: 516 output_compression: NoCompression
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.698108) EVENT_LOG_v1 {"time_micros": 1763785424698086, "job": 54, "event": "compaction_finished", "compaction_time_micros": 217370, "compaction_time_cpu_micros": 37443, "output_level": 6, "num_output_files": 1, "total_output_size": 13562179, "num_input_records": 8292, "num_output_records": 7776, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785424699273, "job": 54, "event": "table_file_deletion", "file_number": 94}
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785424704032, "job": 54, "event": "table_file_deletion", "file_number": 92}
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.471499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.704158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.704164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.704166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.704168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:23:44 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:44.704170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:23:44 compute-0 optimistic_williams[309461]: {
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:     "0": [
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:         {
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "devices": [
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "/dev/loop3"
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             ],
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_name": "ceph_lv0",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_size": "21470642176",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "name": "ceph_lv0",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "tags": {
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.cluster_name": "ceph",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.crush_device_class": "",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.encrypted": "0",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.osd_id": "0",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.type": "block",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.vdo": "0"
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             },
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "type": "block",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "vg_name": "ceph_vg0"
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:         }
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:     ],
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:     "1": [
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:         {
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "devices": [
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "/dev/loop4"
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             ],
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_name": "ceph_lv1",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_size": "21470642176",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "name": "ceph_lv1",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "tags": {
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.cluster_name": "ceph",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.crush_device_class": "",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.encrypted": "0",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.osd_id": "1",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.type": "block",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.vdo": "0"
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             },
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "type": "block",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "vg_name": "ceph_vg1"
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:         }
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:     ],
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:     "2": [
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:         {
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "devices": [
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "/dev/loop5"
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             ],
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_name": "ceph_lv2",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_size": "21470642176",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "name": "ceph_lv2",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "tags": {
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.cluster_name": "ceph",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.crush_device_class": "",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.encrypted": "0",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.osd_id": "2",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.type": "block",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:                 "ceph.vdo": "0"
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             },
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "type": "block",
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:             "vg_name": "ceph_vg2"
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:         }
Nov 22 04:23:44 compute-0 optimistic_williams[309461]:     ]
Nov 22 04:23:44 compute-0 optimistic_williams[309461]: }
Nov 22 04:23:44 compute-0 systemd[1]: libpod-b1f559d2081faad3ebcfcb06ef57dfd96dee043de3d72b5209617bd8ebafeb4f.scope: Deactivated successfully.
Nov 22 04:23:44 compute-0 podman[309444]: 2025-11-22 04:23:44.970664315 +0000 UTC m=+1.128608612 container died b1f559d2081faad3ebcfcb06ef57dfd96dee043de3d72b5209617bd8ebafeb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:23:44 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 199 KiB/s rd, 1.5 KiB/s wr, 37 op/s
Nov 22 04:23:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9fe099e0cf9957383180bb06c5ae1788dc758203cac1d07afde6b52cf1b83d1e-merged.mount: Deactivated successfully.
Nov 22 04:23:45 compute-0 nova_compute[253461]: 2025-11-22 04:23:45.241 253465 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1763785410.2389598, 29f4ebae-3730-44d8-99f0-08b20268863c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 22 04:23:45 compute-0 nova_compute[253461]: 2025-11-22 04:23:45.242 253465 INFO nova.compute.manager [-] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] VM Stopped (Lifecycle Event)
Nov 22 04:23:45 compute-0 nova_compute[253461]: 2025-11-22 04:23:45.313 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:45 compute-0 nova_compute[253461]: 2025-11-22 04:23:45.316 253465 DEBUG nova.compute.manager [None req-234d2c15-3253-4de0-9e5e-a989285ba156 - - - - - -] [instance: 29f4ebae-3730-44d8-99f0-08b20268863c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 22 04:23:45 compute-0 podman[309444]: 2025-11-22 04:23:45.330750772 +0000 UTC m=+1.488695019 container remove b1f559d2081faad3ebcfcb06ef57dfd96dee043de3d72b5209617bd8ebafeb4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Nov 22 04:23:45 compute-0 systemd[1]: libpod-conmon-b1f559d2081faad3ebcfcb06ef57dfd96dee043de3d72b5209617bd8ebafeb4f.scope: Deactivated successfully.
Nov 22 04:23:45 compute-0 ceph-mon[75011]: pgmap v2288: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 199 KiB/s rd, 1.5 KiB/s wr, 37 op/s
Nov 22 04:23:45 compute-0 sudo[309336]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:45 compute-0 sudo[309482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:23:45 compute-0 sudo[309482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:45 compute-0 sudo[309482]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:45 compute-0 sudo[309507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:23:45 compute-0 sudo[309507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:45 compute-0 sudo[309507]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:45 compute-0 sudo[309532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:23:45 compute-0 sudo[309532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:45 compute-0 sudo[309532]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:45 compute-0 sudo[309557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:23:45 compute-0 sudo[309557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:46 compute-0 podman[309622]: 2025-11-22 04:23:46.082393904 +0000 UTC m=+0.042544611 container create c11de89abb61c3c85f6681060915571c08ffb64c1c8ce4cfc5527dac0dce96f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:23:46 compute-0 systemd[1]: Started libpod-conmon-c11de89abb61c3c85f6681060915571c08ffb64c1c8ce4cfc5527dac0dce96f2.scope.
Nov 22 04:23:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:23:46 compute-0 podman[309622]: 2025-11-22 04:23:46.148358544 +0000 UTC m=+0.108509331 container init c11de89abb61c3c85f6681060915571c08ffb64c1c8ce4cfc5527dac0dce96f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Nov 22 04:23:46 compute-0 podman[309622]: 2025-11-22 04:23:46.157151176 +0000 UTC m=+0.117301873 container start c11de89abb61c3c85f6681060915571c08ffb64c1c8ce4cfc5527dac0dce96f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Nov 22 04:23:46 compute-0 podman[309622]: 2025-11-22 04:23:46.064705789 +0000 UTC m=+0.024856506 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:23:46 compute-0 podman[309622]: 2025-11-22 04:23:46.162115543 +0000 UTC m=+0.122266300 container attach c11de89abb61c3c85f6681060915571c08ffb64c1c8ce4cfc5527dac0dce96f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:23:46 compute-0 eager_noether[309638]: 167 167
Nov 22 04:23:46 compute-0 systemd[1]: libpod-c11de89abb61c3c85f6681060915571c08ffb64c1c8ce4cfc5527dac0dce96f2.scope: Deactivated successfully.
Nov 22 04:23:46 compute-0 podman[309622]: 2025-11-22 04:23:46.165915642 +0000 UTC m=+0.126066339 container died c11de89abb61c3c85f6681060915571c08ffb64c1c8ce4cfc5527dac0dce96f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Nov 22 04:23:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-10c17bade8f1a5c6c8ee51b49b938760dc24cea610038c3c95a652db2cb22579-merged.mount: Deactivated successfully.
Nov 22 04:23:46 compute-0 podman[309622]: 2025-11-22 04:23:46.203715621 +0000 UTC m=+0.163866328 container remove c11de89abb61c3c85f6681060915571c08ffb64c1c8ce4cfc5527dac0dce96f2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_noether, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Nov 22 04:23:46 compute-0 systemd[1]: libpod-conmon-c11de89abb61c3c85f6681060915571c08ffb64c1c8ce4cfc5527dac0dce96f2.scope: Deactivated successfully.
Nov 22 04:23:46 compute-0 podman[309661]: 2025-11-22 04:23:46.3747184 +0000 UTC m=+0.059321246 container create 378a266e675cf1e56b6fb73c053c30e55b3e60f9a8ab01caaa2f5a2d723593e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:23:46 compute-0 systemd[1]: Started libpod-conmon-378a266e675cf1e56b6fb73c053c30e55b3e60f9a8ab01caaa2f5a2d723593e9.scope.
Nov 22 04:23:46 compute-0 podman[309661]: 2025-11-22 04:23:46.344173757 +0000 UTC m=+0.028776673 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:23:46 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96c9a7db010d906ad5e9513d0133419912c59a294aa964725ca6b531bf11169/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96c9a7db010d906ad5e9513d0133419912c59a294aa964725ca6b531bf11169/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96c9a7db010d906ad5e9513d0133419912c59a294aa964725ca6b531bf11169/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:23:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b96c9a7db010d906ad5e9513d0133419912c59a294aa964725ca6b531bf11169/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:23:46 compute-0 podman[309661]: 2025-11-22 04:23:46.487165596 +0000 UTC m=+0.171768433 container init 378a266e675cf1e56b6fb73c053c30e55b3e60f9a8ab01caaa2f5a2d723593e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Nov 22 04:23:46 compute-0 podman[309661]: 2025-11-22 04:23:46.496256452 +0000 UTC m=+0.180859278 container start 378a266e675cf1e56b6fb73c053c30e55b3e60f9a8ab01caaa2f5a2d723593e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:23:46 compute-0 podman[309661]: 2025-11-22 04:23:46.500638068 +0000 UTC m=+0.185240894 container attach 378a266e675cf1e56b6fb73c053c30e55b3e60f9a8ab01caaa2f5a2d723593e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Nov 22 04:23:46 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 04:23:47 compute-0 adoring_herschel[309677]: {
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "osd_id": 1,
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "type": "bluestore"
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:     },
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "osd_id": 0,
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "type": "bluestore"
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:     },
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "osd_id": 2,
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:         "type": "bluestore"
Nov 22 04:23:47 compute-0 adoring_herschel[309677]:     }
Nov 22 04:23:47 compute-0 adoring_herschel[309677]: }
Nov 22 04:23:47 compute-0 systemd[1]: libpod-378a266e675cf1e56b6fb73c053c30e55b3e60f9a8ab01caaa2f5a2d723593e9.scope: Deactivated successfully.
Nov 22 04:23:47 compute-0 systemd[1]: libpod-378a266e675cf1e56b6fb73c053c30e55b3e60f9a8ab01caaa2f5a2d723593e9.scope: Consumed 1.089s CPU time.
Nov 22 04:23:47 compute-0 podman[309661]: 2025-11-22 04:23:47.58250986 +0000 UTC m=+1.267112736 container died 378a266e675cf1e56b6fb73c053c30e55b3e60f9a8ab01caaa2f5a2d723593e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Nov 22 04:23:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-b96c9a7db010d906ad5e9513d0133419912c59a294aa964725ca6b531bf11169-merged.mount: Deactivated successfully.
Nov 22 04:23:47 compute-0 podman[309661]: 2025-11-22 04:23:47.651971899 +0000 UTC m=+1.336574756 container remove 378a266e675cf1e56b6fb73c053c30e55b3e60f9a8ab01caaa2f5a2d723593e9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Nov 22 04:23:47 compute-0 systemd[1]: libpod-conmon-378a266e675cf1e56b6fb73c053c30e55b3e60f9a8ab01caaa2f5a2d723593e9.scope: Deactivated successfully.
Nov 22 04:23:47 compute-0 sudo[309557]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:23:47 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:23:47 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:23:47 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:23:47 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 5e4c72b6-94e3-4009-adcf-d3df1248eaf1 does not exist
Nov 22 04:23:47 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 79e7918c-d774-4383-88a2-22b4af4c2384 does not exist
Nov 22 04:23:47 compute-0 sudo[309723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:23:47 compute-0 sudo[309723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:47 compute-0 sudo[309723]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:47 compute-0 sudo[309748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:23:47 compute-0 sudo[309748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:23:47 compute-0 sudo[309748]: pam_unix(sudo:session): session closed for user root
Nov 22 04:23:48 compute-0 ceph-mon[75011]: pgmap v2289: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.2 KiB/s wr, 31 op/s
Nov 22 04:23:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:23:48 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:23:48 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 511 B/s wr, 10 op/s
Nov 22 04:23:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.302164) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785429302208, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 305, "num_deletes": 256, "total_data_size": 115143, "memory_usage": 122760, "flush_reason": "Manual Compaction"}
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785429305420, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 114891, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45458, "largest_seqno": 45762, "table_properties": {"data_size": 112816, "index_size": 239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5032, "raw_average_key_size": 17, "raw_value_size": 108749, "raw_average_value_size": 384, "num_data_blocks": 10, "num_entries": 283, "num_filter_entries": 283, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763785425, "oldest_key_time": 1763785425, "file_creation_time": 1763785429, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 3372 microseconds, and 1595 cpu microseconds.
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.305537) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 114891 bytes OK
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.305560) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.306896) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.306921) EVENT_LOG_v1 {"time_micros": 1763785429306913, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.306941) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 112920, prev total WAL file size 112920, number of live WAL files 2.
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.307481) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353037' seq:72057594037927935, type:22 .. '6C6F676D0031373539' seq:0, type:0; will stop at (end)
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(112KB)], [95(12MB)]
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785429307519, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 13677070, "oldest_snapshot_seqno": -1}
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 7537 keys, 13536965 bytes, temperature: kUnknown
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785429419367, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 13536965, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13478142, "index_size": 38871, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18885, "raw_key_size": 193595, "raw_average_key_size": 25, "raw_value_size": 13334452, "raw_average_value_size": 1769, "num_data_blocks": 1525, "num_entries": 7537, "num_filter_entries": 7537, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763785429, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.419789) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 13536965 bytes
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.421602) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 122.1 rd, 120.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 12.9 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(236.9) write-amplify(117.8) OK, records in: 8059, records dropped: 522 output_compression: NoCompression
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.421643) EVENT_LOG_v1 {"time_micros": 1763785429421626, "job": 56, "event": "compaction_finished", "compaction_time_micros": 111992, "compaction_time_cpu_micros": 32327, "output_level": 6, "num_output_files": 1, "total_output_size": 13536965, "num_input_records": 8059, "num_output_records": 7537, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785429421876, "job": 56, "event": "table_file_deletion", "file_number": 97}
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785429426915, "job": 56, "event": "table_file_deletion", "file_number": 95}
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.307370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.426993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.427001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.427003) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.427005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:23:49 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:23:49.427007) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:23:49 compute-0 ceph-mon[75011]: pgmap v2290: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 511 B/s wr, 10 op/s
Nov 22 04:23:49 compute-0 nova_compute[253461]: 2025-11-22 04:23:49.468 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:50 compute-0 nova_compute[253461]: 2025-11-22 04:23:50.316 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:23:50 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 9911 writes, 45K keys, 9911 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 9911 writes, 9911 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1362 writes, 6409 keys, 1362 commit groups, 1.0 writes per commit group, ingest: 8.71 MB, 0.01 MB/s
                                           Interval WAL: 1362 writes, 1362 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     46.6      1.17              0.21        28    0.042       0      0       0.0       0.0
                                             L6      1/0   12.91 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.5    115.2     98.6      2.50              0.82        27    0.092    164K    15K       0.0       0.0
                                            Sum      1/0   12.91 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.5     78.5     82.0      3.66              1.02        55    0.067    164K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.7     89.9     91.2      0.84              0.22        12    0.070     47K   3048       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    115.2     98.6      2.50              0.82        27    0.092    164K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     46.6      1.16              0.21        27    0.043       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     21.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.053, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.29 GB write, 0.08 MB/s write, 0.28 GB read, 0.08 MB/s read, 3.7 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5574942991f0#2 capacity: 304.00 MB usage: 29.73 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000253 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2036,28.38 MB,9.33716%) FilterBlock(56,463.86 KB,0.149009%) IndexBlock(56,913.53 KB,0.293461%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Nov 22 04:23:50 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 511 B/s wr, 10 op/s
Nov 22 04:23:52 compute-0 ceph-mon[75011]: pgmap v2291: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 511 B/s wr, 10 op/s
Nov 22 04:23:52 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:23:54 compute-0 ceph-mon[75011]: pgmap v2292: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:23:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:23:54 compute-0 nova_compute[253461]: 2025-11-22 04:23:54.470 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:54 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:23:55 compute-0 nova_compute[253461]: 2025-11-22 04:23:55.318 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:23:56 compute-0 ceph-mon[75011]: pgmap v2293: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:23:56 compute-0 nova_compute[253461]: 2025-11-22 04:23:56.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:23:56 compute-0 nova_compute[253461]: 2025-11-22 04:23:56.458 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:23:56 compute-0 nova_compute[253461]: 2025-11-22 04:23:56.458 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:23:56 compute-0 nova_compute[253461]: 2025-11-22 04:23:56.459 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:23:56 compute-0 nova_compute[253461]: 2025-11-22 04:23:56.459 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:23:56 compute-0 nova_compute[253461]: 2025-11-22 04:23:56.460 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:23:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:23:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2103680821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:23:56 compute-0 nova_compute[253461]: 2025-11-22 04:23:56.929 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:23:56 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:23:57 compute-0 podman[309796]: 2025-11-22 04:23:57.082743193 +0000 UTC m=+0.108989393 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:23:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2103680821' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:23:57 compute-0 podman[309797]: 2025-11-22 04:23:57.114526165 +0000 UTC m=+0.133257968 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 22 04:23:57 compute-0 nova_compute[253461]: 2025-11-22 04:23:57.158 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:23:57 compute-0 nova_compute[253461]: 2025-11-22 04:23:57.159 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4253MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:23:57 compute-0 nova_compute[253461]: 2025-11-22 04:23:57.159 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:23:57 compute-0 nova_compute[253461]: 2025-11-22 04:23:57.160 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:23:57 compute-0 nova_compute[253461]: 2025-11-22 04:23:57.243 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:23:57 compute-0 nova_compute[253461]: 2025-11-22 04:23:57.244 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:23:57 compute-0 nova_compute[253461]: 2025-11-22 04:23:57.263 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:23:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:23:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4142046131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:23:57 compute-0 nova_compute[253461]: 2025-11-22 04:23:57.715 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:23:57 compute-0 nova_compute[253461]: 2025-11-22 04:23:57.722 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:23:57 compute-0 nova_compute[253461]: 2025-11-22 04:23:57.737 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 22 04:23:57 compute-0 nova_compute[253461]: 2025-11-22 04:23:57.756 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:23:57 compute-0 nova_compute[253461]: 2025-11-22 04:23:57.757 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:23:58 compute-0 ceph-mon[75011]: pgmap v2294: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:23:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4142046131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:23:58 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:23:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:23:59 compute-0 nova_compute[253461]: 2025-11-22 04:23:59.472 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:00 compute-0 ceph-mon[75011]: pgmap v2295: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:00 compute-0 nova_compute[253461]: 2025-11-22 04:24:00.319 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:24:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346804142' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:24:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:24:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346804142' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:24:00 compute-0 nova_compute[253461]: 2025-11-22 04:24:00.757 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:24:00 compute-0 nova_compute[253461]: 2025-11-22 04:24:00.758 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:24:00 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/346804142' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:24:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/346804142' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:24:01 compute-0 nova_compute[253461]: 2025-11-22 04:24:01.426 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:24:01 compute-0 nova_compute[253461]: 2025-11-22 04:24:01.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:24:01 compute-0 nova_compute[253461]: 2025-11-22 04:24:01.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:24:01 compute-0 nova_compute[253461]: 2025-11-22 04:24:01.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:24:01 compute-0 nova_compute[253461]: 2025-11-22 04:24:01.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
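The burst of "Running periodic task ComputeManager._*" lines comes from oslo.service's periodic-task machinery: decorated methods on the compute manager are swept on a timer, and _reclaim_queued_deletes returns immediately here because reclaim_instance_interval is <= 0, so soft-deleted instances are never reclaimed on this host. A minimal standalone sketch of the decorator pattern, not nova's actual manager code; the spacing value is illustrative and the option is registered locally only to keep the sketch self-contained:

    # Minimal sketch of the oslo.service periodic-task pattern behind the
    # "Running periodic task ..." lines above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            pass  # runs at most once every 60 seconds

        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return  # matches "CONF.reclaim_instance_interval <= 0, skipping..."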
Nov 22 04:24:02 compute-0 ceph-mon[75011]: pgmap v2296: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:02 compute-0 nova_compute[253461]: 2025-11-22 04:24:02.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:24:02 compute-0 nova_compute[253461]: 2025-11-22 04:24:02.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:24:02 compute-0 nova_compute[253461]: 2025-11-22 04:24:02.431 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:24:02 compute-0 nova_compute[253461]: 2025-11-22 04:24:02.556 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:24:02 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:03 compute-0 ceph-mon[75011]: pgmap v2297: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:04 compute-0 nova_compute[253461]: 2025-11-22 04:24:04.475 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:04 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:05 compute-0 nova_compute[253461]: 2025-11-22 04:24:05.364 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:06 compute-0 ceph-mon[75011]: pgmap v2298: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:24:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:24:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:24:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:24:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:24:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:24:06 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:08 compute-0 ceph-mon[75011]: pgmap v2299: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:08 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:09 compute-0 sshd-session[309865]: Connection closed by authenticating user root 27.79.46.85 port 58670 [preauth]
Nov 22 04:24:09 compute-0 nova_compute[253461]: 2025-11-22 04:24:09.476 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:10 compute-0 ceph-mon[75011]: pgmap v2300: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:10 compute-0 nova_compute[253461]: 2025-11-22 04:24:10.365 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:10 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:11 compute-0 nova_compute[253461]: 2025-11-22 04:24:11.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:24:12 compute-0 ceph-mon[75011]: pgmap v2301: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:12 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:13 compute-0 podman[309867]: 2025-11-22 04:24:13.412386604 +0000 UTC m=+0.092289157 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76)
Nov 22 04:24:14 compute-0 ceph-mon[75011]: pgmap v2302: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.309787) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785454309851, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 453, "num_deletes": 250, "total_data_size": 379849, "memory_usage": 388080, "flush_reason": "Manual Compaction"}
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785454316212, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 283879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45763, "largest_seqno": 46215, "table_properties": {"data_size": 281472, "index_size": 507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6567, "raw_average_key_size": 20, "raw_value_size": 276598, "raw_average_value_size": 853, "num_data_blocks": 23, "num_entries": 324, "num_filter_entries": 324, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763785429, "oldest_key_time": 1763785429, "file_creation_time": 1763785454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 6453 microseconds, and 1534 cpu microseconds.
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.316251) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 283879 bytes OK
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.316268) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.318804) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.318845) EVENT_LOG_v1 {"time_micros": 1763785454318841, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.318861) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 377124, prev total WAL file size 377124, number of live WAL files 2.
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.319302) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373536' seq:0, type:0; will stop at (end)
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(277KB)], [98(12MB)]
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785454319323, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 13820844, "oldest_snapshot_seqno": -1}
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 7361 keys, 10624693 bytes, temperature: kUnknown
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785454381891, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 10624693, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10571627, "index_size": 33562, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 190117, "raw_average_key_size": 25, "raw_value_size": 10435642, "raw_average_value_size": 1417, "num_data_blocks": 1304, "num_entries": 7361, "num_filter_entries": 7361, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763785454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.382238) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 10624693 bytes
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.383834) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 220.5 rd, 169.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 12.9 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(86.1) write-amplify(37.4) OK, records in: 7861, records dropped: 500 output_compression: NoCompression
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.383867) EVENT_LOG_v1 {"time_micros": 1763785454383853, "job": 58, "event": "compaction_finished", "compaction_time_micros": 62679, "compaction_time_cpu_micros": 25197, "output_level": 6, "num_output_files": 1, "total_output_size": 10624693, "num_input_records": 7861, "num_output_records": 7361, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785454384139, "job": 58, "event": "table_file_deletion", "file_number": 100}
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785454388857, "job": 58, "event": "table_file_deletion", "file_number": 98}
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.319228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.388959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.388965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.388969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.388972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:24:14 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:14.388974) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
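Jobs 57 and 58 above are the ceph-mon compacting its RocksDB store: a 283,879-byte memtable flush lands in L0, gets merged with the existing 12 MB L6 file into a new 10,624,693-byte L6 table, and both inputs are deleted. The amplification figures RocksDB reports can be reproduced from the byte counts in those same event lines:

    # Reproducing the write/read-write amplification figures reported by
    # RocksDB job 58 above, from the byte counts in the same log lines.
    l0_in    = 283_879      # table #100, the freshly flushed L0 file
    total_in = 13_820_844   # "input_data_size": L0 #100 plus L6 #98
    out      = 10_624_693   # table #101 written to L6

    write_amp      = out / l0_in               # -> 37.4
    read_write_amp = (total_in + out) / l0_in  # -> 86.1
    print(round(write_amp, 1), round(read_write_amp, 1))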
Nov 22 04:24:14 compute-0 nova_compute[253461]: 2025-11-22 04:24:14.481 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:14 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:15 compute-0 ceph-mon[75011]: pgmap v2303: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:15 compute-0 nova_compute[253461]: 2025-11-22 04:24:15.368 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:16 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:18 compute-0 ceph-mon[75011]: pgmap v2304: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:18 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:19 compute-0 ovn_controller[152691]: 2025-11-22T04:24:19Z|00316|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Nov 22 04:24:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:19 compute-0 nova_compute[253461]: 2025-11-22 04:24:19.522 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:20 compute-0 ceph-mon[75011]: pgmap v2305: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:20 compute-0 nova_compute[253461]: 2025-11-22 04:24:20.369 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:20 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:21 compute-0 ceph-mon[75011]: pgmap v2306: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.431 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.432 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.433 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.434 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.435 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.435 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.587 253465 DEBUG nova.virt.libvirt.imagecache [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.588 253465 WARNING nova.virt.libvirt.imagecache [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Unknown base file: /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.589 253465 INFO nova.virt.libvirt.imagecache [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Removable base files: /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.589 253465 INFO nova.virt.libvirt.imagecache [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.590 253465 DEBUG nova.virt.libvirt.imagecache [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.590 253465 DEBUG nova.virt.libvirt.imagecache [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 22 04:24:22 compute-0 nova_compute[253461]: 2025-11-22 04:24:22.591 253465 DEBUG nova.virt.libvirt.imagecache [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
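This imagecache pass found one cached base image that no running instance references ("Unknown base file"), marked it removable, but kept it because it is younger than the configured minimum age. A sketch of that age test, assuming nova's image_cache option remove_unused_original_minimum_age_seconds (86400s by default) and an mtime-based age check:

    # Sketch of the age check behind "Base, swap or ephemeral file too
    # young to remove"; option name and 86400s default are assumed from
    # nova's image_cache settings, age taken from the file's mtime.
    import os
    import time

    base = "/var/lib/nova/instances/_base/25c5d46187abbddf047b2aea949ae06d253afe2d"
    min_age = 86400  # remove_unused_original_minimum_age_seconds

    age = time.time() - os.path.getmtime(base)
    if age < min_age:
        print(f"too young to remove ({age:.0f}s < {min_age}s)")
    else:
        os.remove(base)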
Nov 22 04:24:22 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:24:23.043 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:24:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:24:23.044 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:24:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:24:23.044 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:24:24 compute-0 ceph-mon[75011]: pgmap v2307: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:24 compute-0 nova_compute[253461]: 2025-11-22 04:24:24.524 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:24 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:25 compute-0 ceph-mon[75011]: pgmap v2308: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:25 compute-0 nova_compute[253461]: 2025-11-22 04:24:25.418 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:26 compute-0 sshd-session[309888]: Connection reset by 147.185.132.141 port 62732 [preauth]
Nov 22 04:24:26 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:27 compute-0 podman[309891]: 2025-11-22 04:24:27.408375633 +0000 UTC m=+0.081908402 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 22 04:24:27 compute-0 podman[309892]: 2025-11-22 04:24:27.442921936 +0000 UTC m=+0.116677268 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251118, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 22 04:24:28 compute-0 ceph-mon[75011]: pgmap v2309: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:28 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:29 compute-0 nova_compute[253461]: 2025-11-22 04:24:29.526 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:30 compute-0 ceph-mon[75011]: pgmap v2310: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:30 compute-0 nova_compute[253461]: 2025-11-22 04:24:30.422 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:30 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:32 compute-0 ceph-mon[75011]: pgmap v2311: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:32 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:34 compute-0 ceph-mon[75011]: pgmap v2312: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:34 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:34 compute-0 nova_compute[253461]: 2025-11-22 04:24:34.527 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:35 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:35 compute-0 ceph-mon[75011]: pgmap v2313: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:35 compute-0 nova_compute[253461]: 2025-11-22 04:24:35.424 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Optimize plan auto_2025-11-22_04:24:36
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [balancer INFO root] do_upmap
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', 'cephfs.cephfs.data', '.mgr', 'default.rgw.log', 'images', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'vms']
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [balancer INFO root] prepared 0/10 changes
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: vms, start_after=
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: volumes, start_after=
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: backups, start_after=
Nov 22 04:24:36 compute-0 ceph-mgr[75294]: [rbd_support INFO root] load_schedules: images, start_after=
Nov 22 04:24:37 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:37 compute-0 ceph-mgr[75294]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3005905960
Nov 22 04:24:38 compute-0 ceph-mon[75011]: pgmap v2314: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:39 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:39 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:39 compute-0 nova_compute[253461]: 2025-11-22 04:24:39.530 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:40 compute-0 ceph-mon[75011]: pgmap v2315: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:40 compute-0 nova_compute[253461]: 2025-11-22 04:24:40.425 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:41 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:41 compute-0 ceph-mon[75011]: pgmap v2316: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:43 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:44 compute-0 ceph-mon[75011]: pgmap v2317: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:44 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:44 compute-0 podman[309936]: 2025-11-22 04:24:44.448204197 +0000 UTC m=+0.115795415 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251118, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 22 04:24:44 compute-0 nova_compute[253461]: 2025-11-22 04:24:44.532 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:45 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:45 compute-0 nova_compute[253461]: 2025-11-22 04:24:45.428 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:46 compute-0 ceph-mon[75011]: pgmap v2318: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] _maybe_adjust
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:24:46 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
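Each pg_autoscaler line derives a PG target from the pool's share of raw capacity. The logged targets are consistent with pg_target being usage_ratio * bias * (mon_target_pg_per_osd * OSD count), assuming the default mon_target_pg_per_osd of 100 and the three OSDs this node carries (see the three-LV ceph-volume lvm batch later in this log); the tiny results are then quantized, which is why every pool stays at its current pg_num:

    # Checking the pg_autoscaler targets above against
    # pg_target ~= usage_ratio * bias * (mon_target_pg_per_osd * num_osds),
    # assuming the default target of 100 PGs/OSD and 3 OSDs.
    pg_budget = 100 * 3
    pools = {
        "volumes":            (0.002894458247867422, 1.0),  # -> 0.8683...
        "images":             (0.0006661762551279547, 1.0), # -> 0.1998...
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0), # -> 0.00061...
    }
    for name, (usage, bias) in pools.items():
        print(name, usage * bias * pg_budget)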
Nov 22 04:24:47 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:47 compute-0 sudo[309956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:24:48 compute-0 sudo[309956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:48 compute-0 sudo[309956]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:48 compute-0 sudo[309981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:24:48 compute-0 sudo[309981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:48 compute-0 sudo[309981]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:48 compute-0 ceph-mon[75011]: pgmap v2319: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:48 compute-0 sudo[310006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:24:48 compute-0 sudo[310006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:48 compute-0 sudo[310006]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:48 compute-0 sudo[310031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --timeout 895 gather-facts
Nov 22 04:24:48 compute-0 sudo[310031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:48 compute-0 sudo[310031]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:24:48 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:24:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Nov 22 04:24:48 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:24:48 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Nov 22 04:24:49 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:24:49 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev c0c01bb1-104b-4ed3-bf20-10c3a7223c65 does not exist
Nov 22 04:24:49 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 103ecfa1-d4b9-4341-828b-4788a02fb90f does not exist
Nov 22 04:24:49 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 5b4d6b14-4df3-4379-bef7-6af9c97beef4 does not exist
Nov 22 04:24:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Nov 22 04:24:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:24:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Nov 22 04:24:49 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:24:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:24:49 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:24:49 compute-0 sudo[310088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:24:49 compute-0 sudo[310088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:49 compute-0 sudo[310088]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:24:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Nov 22 04:24:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:24:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Nov 22 04:24:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Nov 22 04:24:49 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:24:49 compute-0 sudo[310113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:24:49 compute-0 sudo[310113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:49 compute-0 sudo[310113]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:49 compute-0 sudo[310138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:24:49 compute-0 sudo[310138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:49 compute-0 sudo[310138]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:49 compute-0 sudo[310163]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --yes --no-systemd
Nov 22 04:24:49 compute-0 sudo[310163]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:49 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:49 compute-0 nova_compute[253461]: 2025-11-22 04:24:49.533 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:49 compute-0 podman[310230]: 2025-11-22 04:24:49.75039359 +0000 UTC m=+0.040365145 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:24:49 compute-0 podman[310230]: 2025-11-22 04:24:49.875733309 +0000 UTC m=+0.165704804 container create bd86cf6b31df582010e84ab995ac902e945d3b4e140683b786c8cbe516c04389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sammet, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:24:50 compute-0 systemd[1]: Started libpod-conmon-bd86cf6b31df582010e84ab995ac902e945d3b4e140683b786c8cbe516c04389.scope.
Nov 22 04:24:50 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:24:50 compute-0 ceph-mon[75011]: pgmap v2320: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:50 compute-0 podman[310230]: 2025-11-22 04:24:50.259863131 +0000 UTC m=+0.549834676 container init bd86cf6b31df582010e84ab995ac902e945d3b4e140683b786c8cbe516c04389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sammet, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:24:50 compute-0 podman[310230]: 2025-11-22 04:24:50.272041069 +0000 UTC m=+0.562012565 container start bd86cf6b31df582010e84ab995ac902e945d3b4e140683b786c8cbe516c04389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sammet, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:24:50 compute-0 silly_sammet[310247]: 167 167
Nov 22 04:24:50 compute-0 systemd[1]: libpod-bd86cf6b31df582010e84ab995ac902e945d3b4e140683b786c8cbe516c04389.scope: Deactivated successfully.
Nov 22 04:24:50 compute-0 podman[310230]: 2025-11-22 04:24:50.314287549 +0000 UTC m=+0.604259035 container attach bd86cf6b31df582010e84ab995ac902e945d3b4e140683b786c8cbe516c04389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sammet, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Nov 22 04:24:50 compute-0 podman[310230]: 2025-11-22 04:24:50.314965841 +0000 UTC m=+0.604937327 container died bd86cf6b31df582010e84ab995ac902e945d3b4e140683b786c8cbe516c04389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sammet, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Nov 22 04:24:50 compute-0 nova_compute[253461]: 2025-11-22 04:24:50.429 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9ceab29f701544c5dc8f27ce8400e6e47eba210e7d960ca55fca5bb6fc145c5-merged.mount: Deactivated successfully.
Nov 22 04:24:50 compute-0 podman[310230]: 2025-11-22 04:24:50.669234483 +0000 UTC m=+0.959205979 container remove bd86cf6b31df582010e84ab995ac902e945d3b4e140683b786c8cbe516c04389 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_sammet, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:24:50 compute-0 systemd[1]: libpod-conmon-bd86cf6b31df582010e84ab995ac902e945d3b4e140683b786c8cbe516c04389.scope: Deactivated successfully.
Nov 22 04:24:51 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:51 compute-0 podman[310273]: 2025-11-22 04:24:50.912903668 +0000 UTC m=+0.040373276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:24:51 compute-0 podman[310273]: 2025-11-22 04:24:51.178300331 +0000 UTC m=+0.305769960 container create 8b7989e83d77d53d68bb8ea8148e95ad3a96f64f79b6bf468cb5d20174e7b1b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_meninsky, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Nov 22 04:24:51 compute-0 ceph-mon[75011]: pgmap v2321: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:51 compute-0 systemd[1]: Started libpod-conmon-8b7989e83d77d53d68bb8ea8148e95ad3a96f64f79b6bf468cb5d20174e7b1b4.scope.
Nov 22 04:24:51 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e457785982ace62cfefc548976f04d56941112d2c61d47712103b1f4ded2f0bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.418217) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785491418289, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 541, "num_deletes": 251, "total_data_size": 547274, "memory_usage": 556856, "flush_reason": "Manual Compaction"}
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Nov 22 04:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e457785982ace62cfefc548976f04d56941112d2c61d47712103b1f4ded2f0bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e457785982ace62cfefc548976f04d56941112d2c61d47712103b1f4ded2f0bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e457785982ace62cfefc548976f04d56941112d2c61d47712103b1f4ded2f0bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e457785982ace62cfefc548976f04d56941112d2c61d47712103b1f4ded2f0bd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785491429846, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 542215, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 46216, "largest_seqno": 46756, "table_properties": {"data_size": 539205, "index_size": 982, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6962, "raw_average_key_size": 19, "raw_value_size": 533240, "raw_average_value_size": 1456, "num_data_blocks": 45, "num_entries": 366, "num_filter_entries": 366, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763785455, "oldest_key_time": 1763785455, "file_creation_time": 1763785491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 11672 microseconds, and 4013 cpu microseconds.
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.429899) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 542215 bytes OK
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.429921) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.446093) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.446119) EVENT_LOG_v1 {"time_micros": 1763785491446111, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.446141) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 544220, prev total WAL file size 544220, number of live WAL files 2.
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.446951) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(529KB)], [101(10MB)]
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785491446995, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 11166908, "oldest_snapshot_seqno": -1}
Nov 22 04:24:51 compute-0 podman[310273]: 2025-11-22 04:24:51.454562944 +0000 UTC m=+0.582032573 container init 8b7989e83d77d53d68bb8ea8148e95ad3a96f64f79b6bf468cb5d20174e7b1b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:24:51 compute-0 podman[310273]: 2025-11-22 04:24:51.465867633 +0000 UTC m=+0.593337262 container start 8b7989e83d77d53d68bb8ea8148e95ad3a96f64f79b6bf468cb5d20174e7b1b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_meninsky, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:24:51 compute-0 podman[310273]: 2025-11-22 04:24:51.474377131 +0000 UTC m=+0.601846750 container attach 8b7989e83d77d53d68bb8ea8148e95ad3a96f64f79b6bf468cb5d20174e7b1b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_meninsky, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 7216 keys, 9440056 bytes, temperature: kUnknown
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785491541940, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9440056, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9389054, "index_size": 31882, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 187784, "raw_average_key_size": 26, "raw_value_size": 9256641, "raw_average_value_size": 1282, "num_data_blocks": 1224, "num_entries": 7216, "num_filter_entries": 7216, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1763781827, "oldest_key_time": 0, "file_creation_time": 1763785491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "159d9642-0336-4475-a7ed-37f1dd054c28", "db_session_id": "RO0MN4JG72VR0TZSJMKP", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.542176) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9440056 bytes
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.543881) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.5 rd, 99.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.1 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(38.0) write-amplify(17.4) OK, records in: 7727, records dropped: 511 output_compression: NoCompression
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.543905) EVENT_LOG_v1 {"time_micros": 1763785491543894, "job": 60, "event": "compaction_finished", "compaction_time_micros": 95007, "compaction_time_cpu_micros": 37206, "output_level": 6, "num_output_files": 1, "total_output_size": 9440056, "num_input_records": 7727, "num_output_records": 7216, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785491544113, "job": 60, "event": "table_file_deletion", "file_number": 103}
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: EVENT_LOG_v1 {"time_micros": 1763785491545915, "job": 60, "event": "table_file_deletion", "file_number": 101}
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.446762) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.545955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.545959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.545961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.545962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:24:51 compute-0 ceph-mon[75011]: rocksdb: (Original Log Time 2025/11/22-04:24:51.545963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Nov 22 04:24:52 compute-0 fervent_meninsky[310290]: --> passed data devices: 0 physical, 3 LVM
Nov 22 04:24:52 compute-0 fervent_meninsky[310290]: --> relative data size: 1.0
Nov 22 04:24:52 compute-0 fervent_meninsky[310290]: --> All data devices are unavailable
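[editor's note] The three container output lines above are ceph-volume's batch report: it was passed 0 physical and 3 LVM data devices and declares them all unavailable, which is what batch reports when the LVs already carry Ceph ownership tags from an earlier prepare (the lvm list output further down shows exactly those tags, so the orchestrator then falls back to inventorying what exists). A sketch of the same ownership check done directly against LVM, assuming the stock lvs tool; the ceph.osd_id tag name matches the tags printed later in this log.

    #!/usr/bin/env python3
    # Sketch: test whether an LV is already owned by a Ceph OSD by reading
    # its LVM tags with the stock "lvs" tool -- the same evidence that makes
    # "lvm batch" treat a device as unavailable. Tag names mirror the
    # ceph.osd_id=... tags visible in the lvm list output below.
    import subprocess
    from typing import Optional

    def ceph_osd_on_lv(lv_path: str) -> Optional[str]:
        # "lvs --noheadings -o lv_tags" prints a comma-separated tag list.
        out = subprocess.run(
            ["lvs", "--noheadings", "-o", "lv_tags", lv_path],
            capture_output=True, text=True, check=True).stdout.strip()
        for tag in out.split(","):
            if tag.startswith("ceph.osd_id="):
                return tag.split("=", 1)[1]
        return None

    for lv in ("/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1",
               "/dev/ceph_vg2/ceph_lv2"):
        osd = ceph_osd_on_lv(lv)
        print(f"{lv}: {'already osd.' + osd if osd is not None else 'free'}")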
Nov 22 04:24:52 compute-0 systemd[1]: libpod-8b7989e83d77d53d68bb8ea8148e95ad3a96f64f79b6bf468cb5d20174e7b1b4.scope: Deactivated successfully.
Nov 22 04:24:52 compute-0 podman[310273]: 2025-11-22 04:24:52.499934614 +0000 UTC m=+1.627404212 container died 8b7989e83d77d53d68bb8ea8148e95ad3a96f64f79b6bf468cb5d20174e7b1b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_meninsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 04:24:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e457785982ace62cfefc548976f04d56941112d2c61d47712103b1f4ded2f0bd-merged.mount: Deactivated successfully.
Nov 22 04:24:52 compute-0 podman[310273]: 2025-11-22 04:24:52.598813127 +0000 UTC m=+1.726282716 container remove 8b7989e83d77d53d68bb8ea8148e95ad3a96f64f79b6bf468cb5d20174e7b1b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Nov 22 04:24:52 compute-0 systemd[1]: libpod-conmon-8b7989e83d77d53d68bb8ea8148e95ad3a96f64f79b6bf468cb5d20174e7b1b4.scope: Deactivated successfully.
Nov 22 04:24:52 compute-0 sudo[310163]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:52 compute-0 sudo[310332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:24:52 compute-0 sudo[310332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:52 compute-0 sudo[310332]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:52 compute-0 sudo[310357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:24:52 compute-0 sudo[310357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:52 compute-0 sudo[310357]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:52 compute-0 sudo[310382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:24:52 compute-0 sudo[310382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:52 compute-0 sudo[310382]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:52 compute-0 sudo[310407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- lvm list --format json
Nov 22 04:24:52 compute-0 sudo[310407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:53 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:53 compute-0 podman[310472]: 2025-11-22 04:24:53.278757476 +0000 UTC m=+0.047512643 container create 1cbbc296de939005085578b6681f543bf293c5a9fff67fc4a10b82d76bb711b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:24:53 compute-0 systemd[1]: Started libpod-conmon-1cbbc296de939005085578b6681f543bf293c5a9fff67fc4a10b82d76bb711b9.scope.
Nov 22 04:24:53 compute-0 podman[310472]: 2025-11-22 04:24:53.252696425 +0000 UTC m=+0.021451692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:24:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:24:53 compute-0 podman[310472]: 2025-11-22 04:24:53.368658028 +0000 UTC m=+0.137413286 container init 1cbbc296de939005085578b6681f543bf293c5a9fff67fc4a10b82d76bb711b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_babbage, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:24:53 compute-0 podman[310472]: 2025-11-22 04:24:53.382399474 +0000 UTC m=+0.151154662 container start 1cbbc296de939005085578b6681f543bf293c5a9fff67fc4a10b82d76bb711b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_babbage, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Nov 22 04:24:53 compute-0 podman[310472]: 2025-11-22 04:24:53.387742146 +0000 UTC m=+0.156497324 container attach 1cbbc296de939005085578b6681f543bf293c5a9fff67fc4a10b82d76bb711b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Nov 22 04:24:53 compute-0 nifty_babbage[310488]: 167 167
Nov 22 04:24:53 compute-0 systemd[1]: libpod-1cbbc296de939005085578b6681f543bf293c5a9fff67fc4a10b82d76bb711b9.scope: Deactivated successfully.
Nov 22 04:24:53 compute-0 podman[310472]: 2025-11-22 04:24:53.391152269 +0000 UTC m=+0.159907516 container died 1cbbc296de939005085578b6681f543bf293c5a9fff67fc4a10b82d76bb711b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_babbage, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Nov 22 04:24:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-c34db069c4d127670714b0878f965165cbf919d1c4f13eb650bc80184353220a-merged.mount: Deactivated successfully.
Nov 22 04:24:53 compute-0 nova_compute[253461]: 2025-11-22 04:24:53.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:24:53 compute-0 nova_compute[253461]: 2025-11-22 04:24:53.431 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 22 04:24:53 compute-0 podman[310472]: 2025-11-22 04:24:53.442113066 +0000 UTC m=+0.210868254 container remove 1cbbc296de939005085578b6681f543bf293c5a9fff67fc4a10b82d76bb711b9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Nov 22 04:24:53 compute-0 systemd[1]: libpod-conmon-1cbbc296de939005085578b6681f543bf293c5a9fff67fc4a10b82d76bb711b9.scope: Deactivated successfully.
Nov 22 04:24:53 compute-0 podman[310512]: 2025-11-22 04:24:53.634731404 +0000 UTC m=+0.052938167 container create 13ec5a9bf2e252120efe17d26c6ec6cf0d53e72945b597d68413ad7889751b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Nov 22 04:24:53 compute-0 systemd[1]: Started libpod-conmon-13ec5a9bf2e252120efe17d26c6ec6cf0d53e72945b597d68413ad7889751b4e.scope.
Nov 22 04:24:53 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:24:53 compute-0 podman[310512]: 2025-11-22 04:24:53.611727439 +0000 UTC m=+0.029934202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389f5e76b085e8dc803e52bfb619fbbf2bd172b0a76508f3b72860502392a74a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389f5e76b085e8dc803e52bfb619fbbf2bd172b0a76508f3b72860502392a74a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389f5e76b085e8dc803e52bfb619fbbf2bd172b0a76508f3b72860502392a74a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/389f5e76b085e8dc803e52bfb619fbbf2bd172b0a76508f3b72860502392a74a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:53 compute-0 podman[310512]: 2025-11-22 04:24:53.725573853 +0000 UTC m=+0.143780676 container init 13ec5a9bf2e252120efe17d26c6ec6cf0d53e72945b597d68413ad7889751b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Nov 22 04:24:53 compute-0 podman[310512]: 2025-11-22 04:24:53.736910311 +0000 UTC m=+0.155117034 container start 13ec5a9bf2e252120efe17d26c6ec6cf0d53e72945b597d68413ad7889751b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:24:53 compute-0 podman[310512]: 2025-11-22 04:24:53.742187018 +0000 UTC m=+0.160393832 container attach 13ec5a9bf2e252120efe17d26c6ec6cf0d53e72945b597d68413ad7889751b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:24:53 compute-0 sshd-session[310533]: Accepted publickey for zuul from 192.168.122.10 port 47382 ssh2: ECDSA SHA256:eXb6Vi/NF7wqMsZCngK2gRYteL64XYn96h7MPdQdSCA
Nov 22 04:24:53 compute-0 systemd-logind[799]: New session 52 of user zuul.
Nov 22 04:24:53 compute-0 systemd[1]: Started Session 52 of User zuul.
Nov 22 04:24:53 compute-0 sshd-session[310533]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Nov 22 04:24:54 compute-0 sudo[310537]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Nov 22 04:24:54 compute-0 sudo[310537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Nov 22 04:24:54 compute-0 ceph-mon[75011]: pgmap v2322: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:54 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]: {
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:     "0": [
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:         {
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "devices": [
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "/dev/loop3"
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             ],
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_name": "ceph_lv0",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_size": "21470642176",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=8bea6992-7a26-4e04-a61e-1d348ad79289,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "name": "ceph_lv0",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "path": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "tags": {
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.block_uuid": "uXgzXf-ozT8-iEz6-Ox7R-tkrp-NJwT-q6K631",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.cluster_name": "ceph",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.crush_device_class": "",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.encrypted": "0",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.osd_fsid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.osd_id": "0",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.type": "block",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.vdo": "0"
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             },
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "type": "block",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "vg_name": "ceph_vg0"
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:         }
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:     ],
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:     "1": [
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:         {
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "devices": [
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "/dev/loop4"
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             ],
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_name": "ceph_lv1",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_size": "21470642176",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=104ff426-5a1d-4d63-8f77-501ee5d58b1f,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "name": "ceph_lv1",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "path": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "tags": {
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.block_uuid": "LPc7rb-7LE4-CD3y-Tu8D-ffKX-9lan-16fmJp",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.cluster_name": "ceph",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.crush_device_class": "",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.encrypted": "0",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.osd_fsid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.osd_id": "1",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.type": "block",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.vdo": "0"
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             },
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "type": "block",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "vg_name": "ceph_vg1"
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:         }
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:     ],
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:     "2": [
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:         {
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "devices": [
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "/dev/loop5"
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             ],
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_name": "ceph_lv2",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_size": "21470642176",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=7adcc38b-6484-5de6-b879-33a0309153df,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=da204276-98db-4558-b1d5-f5821c78e391,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "lv_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "name": "ceph_lv2",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "path": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "tags": {
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.block_uuid": "XRudIS-S1Ut-QuVI-BG0F-2zkL-kOKy-q3KUoE",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.cephx_lockbox_secret": "",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.cluster_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.cluster_name": "ceph",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.crush_device_class": "",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.encrypted": "0",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.osd_fsid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.osd_id": "2",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.osdspec_affinity": "default_drive_group",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.type": "block",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:                 "ceph.vdo": "0"
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             },
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "type": "block",
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:             "vg_name": "ceph_vg2"
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:         }
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]:     ]
Nov 22 04:24:54 compute-0 hopeful_vaughan[310528]: }
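[editor's note] The JSON block above is the full payload of the "lvm list --format json" call recorded at the sudo line before it: a map from OSD id to the logical volume(s) backing it, with the LVM tags carrying cluster fsid, OSD fsid and encryption state. A small parser for captured output of this shape, assuming the JSON is piped in on stdin; the key names (lv_path, devices, tags) are exactly those printed above.

    #!/usr/bin/env python3
    # Sketch: condense "ceph-volume ... lvm list --format json" output (the
    # JSON block above) into one line per OSD. Reads the JSON from stdin.
    import json
    import sys

    listing = json.load(sys.stdin)  # {"0": [{...}], "1": [{...}], ...}
    for osd_id, volumes in sorted(listing.items(), key=lambda kv: int(kv[0])):
        for vol in volumes:  # one entry per ceph.type; here only "block"
            tags = vol["tags"]
            print(f"osd.{osd_id}: {vol['lv_path']} on {','.join(vol['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, "
                  f"encrypted={tags['ceph.encrypted']})")

Against the three entries above this would print one line each for osd.0 on /dev/loop3, osd.1 on /dev/loop4 and osd.2 on /dev/loop5.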
Nov 22 04:24:54 compute-0 nova_compute[253461]: 2025-11-22 04:24:54.534 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:54 compute-0 podman[310512]: 2025-11-22 04:24:54.564402191 +0000 UTC m=+0.982609004 container died 13ec5a9bf2e252120efe17d26c6ec6cf0d53e72945b597d68413ad7889751b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Nov 22 04:24:55 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:55 compute-0 systemd[1]: libpod-13ec5a9bf2e252120efe17d26c6ec6cf0d53e72945b597d68413ad7889751b4e.scope: Deactivated successfully.
Nov 22 04:24:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-389f5e76b085e8dc803e52bfb619fbbf2bd172b0a76508f3b72860502392a74a-merged.mount: Deactivated successfully.
Nov 22 04:24:55 compute-0 podman[310512]: 2025-11-22 04:24:55.218214565 +0000 UTC m=+1.636421299 container remove 13ec5a9bf2e252120efe17d26c6ec6cf0d53e72945b597d68413ad7889751b4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Nov 22 04:24:55 compute-0 systemd[1]: libpod-conmon-13ec5a9bf2e252120efe17d26c6ec6cf0d53e72945b597d68413ad7889751b4e.scope: Deactivated successfully.
Nov 22 04:24:55 compute-0 sudo[310407]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:55 compute-0 sudo[310605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:24:55 compute-0 sudo[310605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:55 compute-0 sudo[310605]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:55 compute-0 sudo[310649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Nov 22 04:24:55 compute-0 sudo[310649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:55 compute-0 sudo[310649]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:55 compute-0 nova_compute[253461]: 2025-11-22 04:24:55.435 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:24:55 compute-0 sudo[310681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:24:55 compute-0 sudo[310681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:55 compute-0 sudo[310681]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:55 compute-0 sudo[310706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/7adcc38b-6484-5de6-b879-33a0309153df/cephadm.31206ab20142c8051b6384b731ef7ef7af2407447fac35b7291e90720452ed8d --image quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0 --timeout 895 ceph-volume --fsid 7adcc38b-6484-5de6-b879-33a0309153df -- raw list --format json
Nov 22 04:24:55 compute-0 sudo[310706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:55 compute-0 podman[310794]: 2025-11-22 04:24:55.922373053 +0000 UTC m=+0.052582029 container create cf8fa71503883a9ee662ad267ae8957001269d459eb9f64544441d6408bd70ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lovelace, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:24:55 compute-0 systemd[1]: Started libpod-conmon-cf8fa71503883a9ee662ad267ae8957001269d459eb9f64544441d6408bd70ae.scope.
Nov 22 04:24:55 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:24:55 compute-0 podman[310794]: 2025-11-22 04:24:55.989801511 +0000 UTC m=+0.120010517 container init cf8fa71503883a9ee662ad267ae8957001269d459eb9f64544441d6408bd70ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 22 04:24:55 compute-0 podman[310794]: 2025-11-22 04:24:55.901117225 +0000 UTC m=+0.031326231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:24:56 compute-0 podman[310794]: 2025-11-22 04:24:56.001606572 +0000 UTC m=+0.131815548 container start cf8fa71503883a9ee662ad267ae8957001269d459eb9f64544441d6408bd70ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Nov 22 04:24:56 compute-0 musing_lovelace[310810]: 167 167
Nov 22 04:24:56 compute-0 systemd[1]: libpod-cf8fa71503883a9ee662ad267ae8957001269d459eb9f64544441d6408bd70ae.scope: Deactivated successfully.
Nov 22 04:24:56 compute-0 podman[310794]: 2025-11-22 04:24:56.006257612 +0000 UTC m=+0.136466588 container attach cf8fa71503883a9ee662ad267ae8957001269d459eb9f64544441d6408bd70ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Nov 22 04:24:56 compute-0 podman[310794]: 2025-11-22 04:24:56.006896938 +0000 UTC m=+0.137105924 container died cf8fa71503883a9ee662ad267ae8957001269d459eb9f64544441d6408bd70ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lovelace, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Nov 22 04:24:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5401599a4eef23416f142bdff5c5a7de7ce5d2a242ef1990eef44f9e03c5bba0-merged.mount: Deactivated successfully.
Nov 22 04:24:56 compute-0 podman[310794]: 2025-11-22 04:24:56.055157383 +0000 UTC m=+0.185366359 container remove cf8fa71503883a9ee662ad267ae8957001269d459eb9f64544441d6408bd70ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_lovelace, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Nov 22 04:24:56 compute-0 systemd[1]: libpod-conmon-cf8fa71503883a9ee662ad267ae8957001269d459eb9f64544441d6408bd70ae.scope: Deactivated successfully.
Nov 22 04:24:56 compute-0 ceph-mon[75011]: pgmap v2323: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:56 compute-0 podman[310838]: 2025-11-22 04:24:56.20143365 +0000 UTC m=+0.037894980 container create a437ad88d9629ba3623ecf09dabf9f95a29c4d6d490255bd3c992bdd172e0f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_easley, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Nov 22 04:24:56 compute-0 systemd[1]: Started libpod-conmon-a437ad88d9629ba3623ecf09dabf9f95a29c4d6d490255bd3c992bdd172e0f72.scope.
Nov 22 04:24:56 compute-0 systemd[1]: Started libcrun container.
Nov 22 04:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd07675d20601917304db75f7541b478e2a7e64263ad5cb686feeb446b9ad78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd07675d20601917304db75f7541b478e2a7e64263ad5cb686feeb446b9ad78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd07675d20601917304db75f7541b478e2a7e64263ad5cb686feeb446b9ad78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abd07675d20601917304db75f7541b478e2a7e64263ad5cb686feeb446b9ad78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Nov 22 04:24:56 compute-0 podman[310838]: 2025-11-22 04:24:56.184890026 +0000 UTC m=+0.021351375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Nov 22 04:24:56 compute-0 podman[310838]: 2025-11-22 04:24:56.284527872 +0000 UTC m=+0.120989202 container init a437ad88d9629ba3623ecf09dabf9f95a29c4d6d490255bd3c992bdd172e0f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_easley, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:24:56 compute-0 podman[310838]: 2025-11-22 04:24:56.290997945 +0000 UTC m=+0.127459264 container start a437ad88d9629ba3623ecf09dabf9f95a29c4d6d490255bd3c992bdd172e0f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:24:56 compute-0 podman[310838]: 2025-11-22 04:24:56.293625355 +0000 UTC m=+0.130086684 container attach a437ad88d9629ba3623ecf09dabf9f95a29c4d6d490255bd3c992bdd172e0f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_easley, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Nov 22 04:24:56 compute-0 nova_compute[253461]: 2025-11-22 04:24:56.444 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:24:56 compute-0 nova_compute[253461]: 2025-11-22 04:24:56.475 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:24:56 compute-0 nova_compute[253461]: 2025-11-22 04:24:56.475 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:24:56 compute-0 nova_compute[253461]: 2025-11-22 04:24:56.475 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:24:56 compute-0 nova_compute[253461]: 2025-11-22 04:24:56.476 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 22 04:24:56 compute-0 nova_compute[253461]: 2025-11-22 04:24:56.476 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:24:56 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:24:56 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/251036428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:24:56 compute-0 nova_compute[253461]: 2025-11-22 04:24:56.911 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
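[annotation] The resource tracker above shells out to `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf` via oslo_concurrency.processutils and parses the JSON to learn cluster capacity. A minimal standalone sketch of the same call, using plain subprocess rather than nova's actual code path; the `total_avail_bytes` field is present in reef's `ceph df` JSON but should be verified on other releases:

    import json
    import subprocess

    def ceph_free_gib(conf="/etc/ceph/ceph.conf", client_id="openstack") -> float:
        # Runs the same command nova logs above; needs a reachable cluster
        # and a keyring for the given client id.
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        stats = json.loads(out)["stats"]  # cluster-wide totals section
        return stats["total_avail_bytes"] / 1024 ** 3

[annotation] Against the pgmap lines in this log ("59 GiB / 60 GiB avail"), such a call would return roughly 59.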
Nov 22 04:24:57 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:57 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/251036428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:24:57 compute-0 nova_compute[253461]: 2025-11-22 04:24:57.098 253465 WARNING nova.virt.libvirt.driver [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 22 04:24:57 compute-0 nova_compute[253461]: 2025-11-22 04:24:57.100 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4192MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 22 04:24:57 compute-0 nova_compute[253461]: 2025-11-22 04:24:57.100 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:24:57 compute-0 nova_compute[253461]: 2025-11-22 04:24:57.102 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:24:57 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19261 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:24:57 compute-0 dreamy_easley[310854]: {
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:     "104ff426-5a1d-4d63-8f77-501ee5d58b1f": {
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "osd_id": 1,
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "osd_uuid": "104ff426-5a1d-4d63-8f77-501ee5d58b1f",
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "type": "bluestore"
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:     },
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:     "8bea6992-7a26-4e04-a61e-1d348ad79289": {
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "osd_id": 0,
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "type": "bluestore"
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:     },
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:     "da204276-98db-4558-b1d5-f5821c78e391": {
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "osd_id": 2,
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "osd_uuid": "da204276-98db-4558-b1d5-f5821c78e391",
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:         "type": "bluestore"
Nov 22 04:24:57 compute-0 dreamy_easley[310854]:     }
Nov 22 04:24:57 compute-0 dreamy_easley[310854]: }
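[annotation] The JSON emitted by the dreamy_easley container above is the output of `ceph-volume ... raw list --format json`: a map keyed by OSD UUID, each entry carrying `ceph_fsid`, `device`, `osd_id`, `osd_uuid`, and `type`. A minimal sketch of consuming that shape (the helper is illustrative, not cephadm's own parser):

    import json

    # Payload in the shape logged above, trimmed to one OSD for brevity.
    raw_list_output = """
    {
        "8bea6992-7a26-4e04-a61e-1d348ad79289": {
            "ceph_fsid": "7adcc38b-6484-5de6-b879-33a0309153df",
            "device": "/dev/mapper/ceph_vg0-ceph_lv0",
            "osd_id": 0,
            "osd_uuid": "8bea6992-7a26-4e04-a61e-1d348ad79289",
            "type": "bluestore"
        }
    }
    """

    def osds_by_id(raw_json: str) -> dict[int, str]:
        # Map osd_id -> backing device, e.g. {0: "/dev/mapper/ceph_vg0-ceph_lv0"}.
        devices = json.loads(raw_json)
        return {entry["osd_id"]: entry["device"] for entry in devices.values()}

    print(osds_by_id(raw_list_output))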
Nov 22 04:24:57 compute-0 systemd[1]: libpod-a437ad88d9629ba3623ecf09dabf9f95a29c4d6d490255bd3c992bdd172e0f72.scope: Deactivated successfully.
Nov 22 04:24:57 compute-0 podman[310838]: 2025-11-22 04:24:57.268604142 +0000 UTC m=+1.105065471 container died a437ad88d9629ba3623ecf09dabf9f95a29c4d6d490255bd3c992bdd172e0f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_easley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 22 04:24:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-abd07675d20601917304db75f7541b478e2a7e64263ad5cb686feeb446b9ad78-merged.mount: Deactivated successfully.
Nov 22 04:24:57 compute-0 podman[310838]: 2025-11-22 04:24:57.340467213 +0000 UTC m=+1.176928553 container remove a437ad88d9629ba3623ecf09dabf9f95a29c4d6d490255bd3c992bdd172e0f72 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 22 04:24:57 compute-0 systemd[1]: libpod-conmon-a437ad88d9629ba3623ecf09dabf9f95a29c4d6d490255bd3c992bdd172e0f72.scope: Deactivated successfully.
Nov 22 04:24:57 compute-0 nova_compute[253461]: 2025-11-22 04:24:57.357 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 22 04:24:57 compute-0 nova_compute[253461]: 2025-11-22 04:24:57.357 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 22 04:24:57 compute-0 sudo[310706]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Nov 22 04:24:57 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:24:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Nov 22 04:24:57 compute-0 ceph-mon[75011]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:24:57 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 0dc7e1bf-eec6-49bf-987e-eac7d2c0f4db does not exist
Nov 22 04:24:57 compute-0 ceph-mgr[75294]: [progress WARNING root] complete: ev 2e484dbe-f6a1-4afb-ba43-f0ab1b9a8cdf does not exist
Nov 22 04:24:57 compute-0 nova_compute[253461]: 2025-11-22 04:24:57.433 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 22 04:24:57 compute-0 sudo[311005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Nov 22 04:24:57 compute-0 sudo[311005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:57 compute-0 sudo[311005]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:57 compute-0 podman[311039]: 2025-11-22 04:24:57.580484 +0000 UTC m=+0.060525401 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251118, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 22 04:24:57 compute-0 sudo[311053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Nov 22 04:24:57 compute-0 sudo[311053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Nov 22 04:24:57 compute-0 sudo[311053]: pam_unix(sudo:session): session closed for user root
Nov 22 04:24:57 compute-0 podman[311040]: 2025-11-22 04:24:57.624499344 +0000 UTC m=+0.092471621 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251118)
Nov 22 04:24:57 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19263 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:24:57 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Nov 22 04:24:57 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1778225704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:24:57 compute-0 nova_compute[253461]: 2025-11-22 04:24:57.924 253465 DEBUG oslo_concurrency.processutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 22 04:24:57 compute-0 nova_compute[253461]: 2025-11-22 04:24:57.930 253465 DEBUG nova.compute.provider_tree [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed in ProviderTree for provider: 62e18608-eaef-4f09-8e92-06d41e51d580 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 22 04:24:57 compute-0 nova_compute[253461]: 2025-11-22 04:24:57.956 253465 DEBUG nova.scheduler.client.report [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Inventory has not changed for provider 62e18608-eaef-4f09-8e92-06d41e51d580 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
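[annotation] The inventory record above determines what placement will treat as schedulable: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked through with the logged numbers, that is (8-0)*4.0 = 32 VCPU, (7679-512)*1.0 = 7167 MB of RAM, and (59-1)*0.9 ≈ 52.2 GB of disk. A tiny sketch of that arithmetic:

    def schedulable(inv: dict) -> dict:
        # Placement-style capacity: (total - reserved) * allocation_ratio.
        return {rc: (v["total"] - v["reserved"]) * v["allocation_ratio"]
                for rc, v in inv.items()}

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    print(schedulable(inventory))
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2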
Nov 22 04:24:57 compute-0 nova_compute[253461]: 2025-11-22 04:24:57.959 253465 DEBUG nova.compute.resource_tracker [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 22 04:24:57 compute-0 nova_compute[253461]: 2025-11-22 04:24:57.959 253465 DEBUG oslo_concurrency.lockutils [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:24:58 compute-0 ceph-mon[75011]: pgmap v2324: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:58 compute-0 ceph-mon[75011]: from='client.19261 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:24:58 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:24:58 compute-0 ceph-mon[75011]: from='mgr.14132 192.168.122.100:0/121933341' entity='mgr.compute-0.wbwfxq' 
Nov 22 04:24:58 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1778225704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Nov 22 04:24:58 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 04:24:58 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/27393915' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 04:24:59 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:24:59 compute-0 ceph-mon[75011]: from='client.19263 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:24:59 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/27393915' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 04:24:59 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:24:59 compute-0 nova_compute[253461]: 2025-11-22 04:24:59.537 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:00 compute-0 ceph-mon[75011]: pgmap v2325: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:00 compute-0 nova_compute[253461]: 2025-11-22 04:25:00.436 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Nov 22 04:25:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828714944' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:25:00 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Nov 22 04:25:00 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3828714944' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:25:00 compute-0 nova_compute[253461]: 2025-11-22 04:25:00.944 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:25:01 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3828714944' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Nov 22 04:25:01 compute-0 ceph-mon[75011]: from='client.? 192.168.122.10:0/3828714944' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Nov 22 04:25:01 compute-0 ceph-mon[75011]: pgmap v2326: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:01 compute-0 nova_compute[253461]: 2025-11-22 04:25:01.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:25:02 compute-0 nova_compute[253461]: 2025-11-22 04:25:02.424 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:25:02 compute-0 nova_compute[253461]: 2025-11-22 04:25:02.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:25:02 compute-0 nova_compute[253461]: 2025-11-22 04:25:02.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:25:02 compute-0 nova_compute[253461]: 2025-11-22 04:25:02.429 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 22 04:25:03 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:03 compute-0 nova_compute[253461]: 2025-11-22 04:25:03.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:25:03 compute-0 nova_compute[253461]: 2025-11-22 04:25:03.430 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:25:04 compute-0 ceph-mon[75011]: pgmap v2327: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:04 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:04 compute-0 nova_compute[253461]: 2025-11-22 04:25:04.539 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:04 compute-0 nova_compute[253461]: 2025-11-22 04:25:04.838 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:25:04 compute-0 nova_compute[253461]: 2025-11-22 04:25:04.839 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 22 04:25:04 compute-0 nova_compute[253461]: 2025-11-22 04:25:04.840 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 22 04:25:04 compute-0 nova_compute[253461]: 2025-11-22 04:25:04.937 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 22 04:25:05 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:05 compute-0 ceph-mon[75011]: pgmap v2328: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:05 compute-0 nova_compute[253461]: 2025-11-22 04:25:05.437 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:05 compute-0 ovs-vsctl[311262]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 22 04:25:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:25:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:25:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:25:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:25:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] scanning for idle connections..
Nov 22 04:25:06 compute-0 ceph-mgr[75294]: [volumes INFO mgr_util] cleaning up connections: []
Nov 22 04:25:06 compute-0 nova_compute[253461]: 2025-11-22 04:25:06.524 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:25:06 compute-0 virtqemud[253186]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 22 04:25:06 compute-0 virtqemud[253186]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 22 04:25:06 compute-0 virtqemud[253186]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 22 04:25:07 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:07 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata asok_command: cache status {prefix=cache status} (starting...)
Nov 22 04:25:07 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata asok_command: client ls {prefix=client ls} (starting...)
Nov 22 04:25:07 compute-0 lvm[311595]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Nov 22 04:25:07 compute-0 lvm[311595]: VG ceph_vg1 finished
Nov 22 04:25:07 compute-0 lvm[311634]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Nov 22 04:25:07 compute-0 lvm[311634]: VG ceph_vg2 finished
Nov 22 04:25:07 compute-0 lvm[311637]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Nov 22 04:25:07 compute-0 lvm[311637]: VG ceph_vg0 finished
Nov 22 04:25:07 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19273 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:08 compute-0 ceph-mon[75011]: pgmap v2329: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:08 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata asok_command: damage ls {prefix=damage ls} (starting...)
Nov 22 04:25:08 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19275 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:08 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata asok_command: dump loads {prefix=dump loads} (starting...)
Nov 22 04:25:08 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Nov 22 04:25:08 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Nov 22 04:25:08 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Nov 22 04:25:08 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Nov 22 04:25:08 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Nov 22 04:25:08 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/356016141' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 22 04:25:08 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19281 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:08 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T04:25:08.936+0000 7fd3dfb26640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 04:25:08 compute-0 ceph-mgr[75294]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
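[annotation] The (95) Operation not supported replies here, and for `insights` below, are the mgr rejecting commands whose backing module is not loaded; the error text itself names the remediation (`ceph mgr module enable prometheus`). A sketch of issuing that command programmatically, matching the message's suggestion:

    import subprocess

    def enable_mgr_module(name: str) -> None:
        # Same remediation the mgr error message suggests,
        # e.g. name="prometheus" or name="insights".
        subprocess.run(["ceph", "mgr", "module", "enable", name], check=True)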
Nov 22 04:25:08 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Nov 22 04:25:09 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:09 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata asok_command: get subtrees {prefix=get subtrees} (starting...)
Nov 22 04:25:09 compute-0 ceph-mon[75011]: from='client.19273 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:09 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/356016141' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Nov 22 04:25:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Nov 22 04:25:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/160707368' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:25:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Nov 22 04:25:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2475992161' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 22 04:25:09 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata asok_command: ops {prefix=ops} (starting...)
Nov 22 04:25:09 compute-0 nova_compute[253461]: 2025-11-22 04:25:09.540 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Nov 22 04:25:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1796740422' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 22 04:25:09 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 04:25:09 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/358513246' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 04:25:09 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata asok_command: session ls {prefix=session ls} (starting...)
Nov 22 04:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Nov 22 04:25:10 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1405901270' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 22 04:25:10 compute-0 ceph-mds[101332]: mds.cephfs.compute-0.fzlata asok_command: status {prefix=status} (starting...)
Nov 22 04:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 04:25:10 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/650270501' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 04:25:10 compute-0 ceph-mon[75011]: from='client.19275 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:10 compute-0 ceph-mon[75011]: from='client.19281 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:10 compute-0 ceph-mon[75011]: pgmap v2330: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/160707368' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Nov 22 04:25:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2475992161' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Nov 22 04:25:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1796740422' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Nov 22 04:25:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/358513246' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 04:25:10 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1405901270' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Nov 22 04:25:10 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19295 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:10 compute-0 nova_compute[253461]: 2025-11-22 04:25:10.438 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 04:25:10 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1848904633' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 04:25:10 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19299 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:10 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 04:25:10 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2458459539' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 04:25:11 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:11 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/650270501' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 04:25:11 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1848904633' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 04:25:11 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2458459539' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 04:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Nov 22 04:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698810050' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 22 04:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Nov 22 04:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/64354932' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 04:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Nov 22 04:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/888109727' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 22 04:25:11 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 04:25:11 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/124364205' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 04:25:11 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19311 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:11 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T04:25:11.984+0000 7fd3dfb26640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 04:25:11 compute-0 ceph-mgr[75294]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Nov 22 04:25:12 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:12 compute-0 ceph-mon[75011]: from='client.19295 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:12 compute-0 ceph-mon[75011]: from='client.19299 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:12 compute-0 ceph-mon[75011]: pgmap v2331: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/698810050' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Nov 22 04:25:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/64354932' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Nov 22 04:25:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/888109727' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Nov 22 04:25:12 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/124364205' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 04:25:12 compute-0 nova_compute[253461]: 2025-11-22 04:25:12.428 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:25:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Nov 22 04:25:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2816536575' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 22 04:25:12 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19317 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:12 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19321 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:12 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Nov 22 04:25:12 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1650990463' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 22 04:25:13 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:13 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19323 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:13 compute-0 ceph-mon[75011]: from='client.19311 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:13 compute-0 ceph-mon[75011]: from='client.19313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:13 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2816536575' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Nov 22 04:25:13 compute-0 ceph-mon[75011]: from='client.19317 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:13 compute-0 ceph-mon[75011]: from='client.19321 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:13 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1650990463' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Nov 22 04:25:13 compute-0 ceph-mon[75011]: pgmap v2332: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Nov 22 04:25:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3672681399' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 269 ms_handle_reset con 0x55936ef75000 session 0x55936fe383c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164d000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123871232 unmapped: 36143104 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1955897 data_alloc: 234881024 data_used: 12070912
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 269 heartbeat osd_stat(store_statfs(0x4f9236000/0x0/0x4ffc00000, data 0x1eacbf1/0x2027000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x499f9c6), peers [0,1] op hist [])
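[annotation] The heartbeat's store_statfs fields are hex byte counts; reading the first and third values of the leading triple as available/total (an assumption about the field order of Ceph's store_statfs_t printer) is consistent with the pgmap totals elsewhere in this log. A quick decode:

    GIB = 1024 ** 3

    # Values from the heartbeat line above.
    available, total = 0x4f9236000, 0x4ffc00000
    print(f"{available / GIB:.2f} GiB free of {total / GIB:.2f} GiB")
    # -> ~19.89 GiB free of ~20 GiB; times three OSDs this matches the
    #    "59 GiB / 60 GiB avail" pgmap lines.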
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 269 handle_osd_map epochs [269,270], i have 269, src has [1,270]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:14.449718+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 270 ms_handle_reset con 0x55937164d000 session 0x55936d3694a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559372b45800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 270 ms_handle_reset con 0x559372b45800 session 0x559370ec41e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123920384 unmapped: 36093952 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 270 ms_handle_reset con 0x55936d2b6c00 session 0x55936d1ccb40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 270 handle_osd_map epochs [271,271], i have 271, src has [1,271]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:15.449864+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 271 ms_handle_reset con 0x55936c9d4c00 session 0x55936bfc65a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122855424 unmapped: 37158912 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 271 ms_handle_reset con 0x55936d2b6c00 session 0x55936f3521e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ef75000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 271 ms_handle_reset con 0x55936ef75000 session 0x55936ef70f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:16.450071+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 271 heartbeat osd_stat(store_statfs(0x4f8e22000/0x0/0x4ffc00000, data 0x1eb0319/0x202b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 37150720 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:17.450192+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 37150720 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:18.450370+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 37134336 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1960301 data_alloc: 234881024 data_used: 12075008
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:19.450560+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 37134336 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:20.450755+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 271 heartbeat osd_stat(store_statfs(0x4f8e1f000/0x0/0x4ffc00000, data 0x1eb3319/0x202e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 271 handle_osd_map epochs [272,272], i have 272, src has [1,272]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.359914780s of 12.012170792s, submitted: 165
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 37126144 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:21.450896+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 272 heartbeat osd_stat(store_statfs(0x4f8e1b000/0x0/0x4ffc00000, data 0x1eb4ece/0x2031000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ef75c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 273 ms_handle_reset con 0x55936ef75c00 session 0x55936dc1a1e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 37117952 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:22.451087+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122896384 unmapped: 37117952 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164d000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 273 handle_osd_map epochs [273,274], i have 273, src has [1,274]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559372b45800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:23.451269+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 274 ms_handle_reset con 0x559372b45800 session 0x559370bc94a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 274 ms_handle_reset con 0x55937164d000 session 0x55936d670d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 274 heartbeat osd_stat(store_statfs(0x4f8e10000/0x0/0x4ffc00000, data 0x1ebb580/0x203b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122929152 unmapped: 37085184 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1972984 data_alloc: 234881024 data_used: 12079104
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:24.451414+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 274 ms_handle_reset con 0x55936c9d4c00 session 0x55936f3534a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 274 ms_handle_reset con 0x55936d2b6c00 session 0x55936d671c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 37068800 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:25.451592+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 37060608 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:26.451779+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122953728 unmapped: 37060608 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:27.451958+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122970112 unmapped: 37044224 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ef75000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 275 handle_osd_map epochs [277,277], i have 275, src has [1,277]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 275 handle_osd_map epochs [276,277], i have 275, src has [1,277]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:28.452106+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 277 ms_handle_reset con 0x55936ef75000 session 0x55936ef7fa40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122994688 unmapped: 37019648 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1982333 data_alloc: 234881024 data_used: 12087296
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 277 heartbeat osd_stat(store_statfs(0x4f8e08000/0x0/0x4ffc00000, data 0x1ec082f/0x2045000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:29.452247+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122994688 unmapped: 37019648 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ef75c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 277 ms_handle_reset con 0x55936ef75c00 session 0x559370ec4960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:30.452468+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.930101395s of 10.071195602s, submitted: 60
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 278 ms_handle_reset con 0x55936c9d4c00 session 0x55936d2285a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123019264 unmapped: 36995072 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:31.452600+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 278 heartbeat osd_stat(store_statfs(0x4f8e06000/0x0/0x4ffc00000, data 0x1ec23d6/0x2047000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123019264 unmapped: 36995072 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:32.452747+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 278 ms_handle_reset con 0x55936d2b6c00 session 0x55936d368b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123019264 unmapped: 36995072 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ef75000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:33.452891+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 278 ms_handle_reset con 0x55936ef75000 session 0x55936d868780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123019264 unmapped: 36995072 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1982409 data_alloc: 234881024 data_used: 12091392
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:34.453051+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164d000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 279 ms_handle_reset con 0x55937164d000 session 0x55936d718b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123043840 unmapped: 36970496 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:35.453184+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d7e3000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 279 ms_handle_reset con 0x55936d7e3000 session 0x559370ec5e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123060224 unmapped: 36954112 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:36.453332+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 35807232 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 280 heartbeat osd_stat(store_statfs(0x4f8dfb000/0x0/0x4ffc00000, data 0x1ecaa36/0x2052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:37.453478+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 281 ms_handle_reset con 0x55936c9d4c00 session 0x559370864000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 35782656 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 281 ms_handle_reset con 0x55936ff14000 session 0x55936d7192c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 281 ms_handle_reset con 0x55936ff14c00 session 0x55936d229680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:38.453647+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ef75000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 282 ms_handle_reset con 0x55936ef75000 session 0x559370bc8d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 282 heartbeat osd_stat(store_statfs(0x4f8df7000/0x0/0x4ffc00000, data 0x1ecc5cf/0x2055000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 282 ms_handle_reset con 0x55936d2b6c00 session 0x559370bc9860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 118972416 unmapped: 41041920 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1740712 data_alloc: 218103808 data_used: 1003520
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:39.453823+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 282 ms_handle_reset con 0x55936c9d4c00 session 0x559370864960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 118972416 unmapped: 41041920 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:40.454033+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.206699371s of 10.457751274s, submitted: 74
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 283 ms_handle_reset con 0x55936d2b6c00 session 0x559370864b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 118956032 unmapped: 41058304 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:41.454184+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 283 heartbeat osd_stat(store_statfs(0x4fa50e000/0x0/0x4ffc00000, data 0x7b3d53/0x93f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 118956032 unmapped: 41058304 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ef75000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 283 ms_handle_reset con 0x55936ef75000 session 0x559370865860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:42.454308+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 118964224 unmapped: 41050112 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164d000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 284 ms_handle_reset con 0x55936ff14c00 session 0x55936d79ed20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:43.454559+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e422000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 284 ms_handle_reset con 0x55936e422000 session 0x55936d441e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 284 ms_handle_reset con 0x55936c9d4c00 session 0x55936d4405a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 285 ms_handle_reset con 0x55936d2b6c00 session 0x55936d6712c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 285 ms_handle_reset con 0x55937164d000 session 0x55936fe39e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 41033728 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1750975 data_alloc: 218103808 data_used: 1028096
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 285 ms_handle_reset con 0x55936ff14000 session 0x5593708654a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:44.454755+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ef75000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 285 ms_handle_reset con 0x55936ef75000 session 0x55936f0e10e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 118980608 unmapped: 41033728 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:45.454913+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 286 ms_handle_reset con 0x55936d2b6c00 session 0x55936dbeeb40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 39985152 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:46.455060+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 286 ms_handle_reset con 0x55936ff14000 session 0x55936d719a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 286 ms_handle_reset con 0x55936c9d4c00 session 0x55936f0e0f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164d000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 120029184 unmapped: 39985152 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 286 heartbeat osd_stat(store_statfs(0x4fa505000/0x0/0x4ffc00000, data 0x7b93ce/0x948000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 286 handle_osd_map epochs [287,287], i have 287, src has [1,287]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:47.455176+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 287 ms_handle_reset con 0x55937164d000 session 0x55936f0e1680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119472128 unmapped: 40542208 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 288 ms_handle_reset con 0x55936ff14c00 session 0x55936d873860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:48.455314+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 288 ms_handle_reset con 0x55936c9d4c00 session 0x55936d868960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119488512 unmapped: 40525824 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1757881 data_alloc: 218103808 data_used: 1036288
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 288 ms_handle_reset con 0x55936d2b6c00 session 0x55936f353e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:49.455509+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119488512 unmapped: 40525824 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:50.455784+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 288 heartbeat osd_stat(store_statfs(0x4fa502000/0x0/0x4ffc00000, data 0x7bcae4/0x94c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119488512 unmapped: 40525824 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:51.455975+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 288 heartbeat osd_stat(store_statfs(0x4fa502000/0x0/0x4ffc00000, data 0x7bcae4/0x94c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119488512 unmapped: 40525824 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:52.456158+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119488512 unmapped: 40525824 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:53.456318+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119488512 unmapped: 40525824 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1757009 data_alloc: 218103808 data_used: 1032192
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:54.456484+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 288 ms_handle_reset con 0x55936ff14000 session 0x55936c5885a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119488512 unmapped: 40525824 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:55.456630+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119488512 unmapped: 40525824 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:56.456789+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164d000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.093536377s of 15.751332283s, submitted: 98
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 289 ms_handle_reset con 0x55937164d000 session 0x55936ef7fc20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936f99c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 290 ms_handle_reset con 0x55936f99c000 session 0x55936d867e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119513088 unmapped: 40501248 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:57.456949+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 290 heartbeat osd_stat(store_statfs(0x4fa4f9000/0x0/0x4ffc00000, data 0x7c010c/0x953000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 291 ms_handle_reset con 0x55936ff14800 session 0x55936d369a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119521280 unmapped: 40493056 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 291 ms_handle_reset con 0x55936c9d4c00 session 0x55936d671680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:58.457094+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119554048 unmapped: 40460288 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1772671 data_alloc: 218103808 data_used: 1044480
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:59.457283+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 291 heartbeat osd_stat(store_statfs(0x4fa4f5000/0x0/0x4ffc00000, data 0x7c21df/0x958000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 291 ms_handle_reset con 0x55936d2b6c00 session 0x55936dbeef00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 40443904 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164d000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 292 ms_handle_reset con 0x55937164d000 session 0x55936d229680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:00.457497+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119595008 unmapped: 40419328 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 293 ms_handle_reset con 0x55936ff14000 session 0x55936d8690e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:01.457625+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 293 heartbeat osd_stat(store_statfs(0x4fa4ed000/0x0/0x4ffc00000, data 0x7c594b/0x960000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4daf9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119603200 unmapped: 40411136 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:02.458006+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 293 ms_handle_reset con 0x55936d2b6c00 session 0x559370bc94a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119619584 unmapped: 40394752 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:03.458183+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 294 ms_handle_reset con 0x55936c9d4c00 session 0x55936d3690e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119619584 unmapped: 40394752 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1783419 data_alloc: 218103808 data_used: 1056768
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:04.458396+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 294 ms_handle_reset con 0x55936ff14800 session 0x55936dc1a1e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119635968 unmapped: 40378368 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164d000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936f99c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 295 ms_handle_reset con 0x55937164d000 session 0x55936ef70f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:05.458556+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119660544 unmapped: 40353792 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 296 ms_handle_reset con 0x55936f99c400 session 0x55936ef71860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 296 ms_handle_reset con 0x55936ff14000 session 0x55936ef71c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:06.458717+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936f99c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.949176788s of 10.118850708s, submitted: 82
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 296 handle_osd_map epochs [297,297], i have 297, src has [1,297]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 40337408 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 297 ms_handle_reset con 0x55936c9d4c00 session 0x55936d2292c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:07.458858+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 297 ms_handle_reset con 0x55936f99c400 session 0x55936f0f7c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 297 ms_handle_reset con 0x55936d2b6c00 session 0x55936c614000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 297 heartbeat osd_stat(store_statfs(0x4fb503000/0x0/0x4ffc00000, data 0x7cc963/0x96a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 40337408 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 297 ms_handle_reset con 0x55936ff14800 session 0x55936f0f6b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:08.459049+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119676928 unmapped: 40337408 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1796206 data_alloc: 218103808 data_used: 1077248
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:09.459188+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119685120 unmapped: 40329216 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 297 ms_handle_reset con 0x55936d2b6c00 session 0x55936ef7ed20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:10.459383+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936f99c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 298 ms_handle_reset con 0x55936f99c400 session 0x55936d1cc960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 298 heartbeat osd_stat(store_statfs(0x4fb501000/0x0/0x4ffc00000, data 0x7cc9e5/0x96d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119726080 unmapped: 40288256 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:11.459499+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164d000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 299 ms_handle_reset con 0x55936ff14000 session 0x55936f0e0960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 299 ms_handle_reset con 0x55937164d000 session 0x559370864b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936f99c800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 299 ms_handle_reset con 0x55936f99c800 session 0x559370865860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 299 ms_handle_reset con 0x55936c9d4c00 session 0x55936cad2960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 40271872 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:12.459621+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 40271872 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:13.459762+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 299 ms_handle_reset con 0x55936d2b6c00 session 0x55936cad2780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119742464 unmapped: 40271872 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1810449 data_alloc: 218103808 data_used: 1089536
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936f99c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 300 ms_handle_reset con 0x55936f99c400 session 0x55936e6ac780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:14.459865+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119767040 unmapped: 40247296 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 300 ms_handle_reset con 0x55936ff14000 session 0x55936e6ad0e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:15.459968+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164d000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 300 ms_handle_reset con 0x55937164d000 session 0x5593708650e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 301 ms_handle_reset con 0x55936d2b6c00 session 0x55936ef7e000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 120496128 unmapped: 39518208 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 301 ms_handle_reset con 0x55936c9d4c00 session 0x55936f352d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936f99c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 301 ms_handle_reset con 0x55936f99c400 session 0x55936c0b2780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 301 ms_handle_reset con 0x55936ff14000 session 0x55936d6714a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e891000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:16.460124+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 301 ms_handle_reset con 0x55936e891000 session 0x55936efe0960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 301 ms_handle_reset con 0x55936c9d4c00 session 0x55936efe0000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 301 heartbeat osd_stat(store_statfs(0x4fb05b000/0x0/0x4ffc00000, data 0xc6ad92/0xe12000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 301 ms_handle_reset con 0x55936d2b6c00 session 0x559370ec41e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119980032 unmapped: 40034304 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:17.460272+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119988224 unmapped: 40026112 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:18.460468+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936f99c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.459031105s of 11.839896202s, submitted: 107
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 301 ms_handle_reset con 0x55936ff14000 session 0x55936d4414a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 119996416 unmapped: 40017920 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1854715 data_alloc: 218103808 data_used: 1097728
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:19.460611+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e890000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e890400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 302 ms_handle_reset con 0x55936e890000 session 0x55936f0f7e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e890800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 302 ms_handle_reset con 0x55936e890800 session 0x55936d718000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122683392 unmapped: 37330944 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 303 ms_handle_reset con 0x55936e890400 session 0x559370864f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 303 ms_handle_reset con 0x55936c9d4c00 session 0x55936f352000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:20.460788+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 303 ms_handle_reset con 0x55936f99c400 session 0x559370ec4960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 303 ms_handle_reset con 0x55936d2b6c00 session 0x55936bfc6960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 120045568 unmapped: 39968768 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:21.460868+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e890000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 304 heartbeat osd_stat(store_statfs(0x4fa6ae000/0x0/0x4ffc00000, data 0x160ff99/0x17bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 120053760 unmapped: 39960576 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 305 ms_handle_reset con 0x55936e890000 session 0x55936c6143c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:22.460979+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 305 ms_handle_reset con 0x55936c9d4c00 session 0x55936c614780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e890400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 305 ms_handle_reset con 0x55936e890400 session 0x55936d1cc960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936f99c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 305 ms_handle_reset con 0x55936f99c400 session 0x55936ef7ed20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 120389632 unmapped: 39624704 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:23.461128+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 306 ms_handle_reset con 0x55936e969400 session 0x55936d670d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 306 ms_handle_reset con 0x55936e969800 session 0x559370ec4b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 120946688 unmapped: 39067648 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1992826 data_alloc: 218103808 data_used: 1122304
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:24.461281+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 307 ms_handle_reset con 0x55936e969c00 session 0x55936ef70f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 307 ms_handle_reset con 0x55936d2b6c00 session 0x55936fe38960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 307 heartbeat osd_stat(store_statfs(0x4fa176000/0x0/0x4ffc00000, data 0x1b462ee/0x1cf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 120889344 unmapped: 39124992 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:25.461453+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 307 ms_handle_reset con 0x55936c9d4c00 session 0x55936dbe61e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 307 heartbeat osd_stat(store_statfs(0x4fa175000/0x0/0x4ffc00000, data 0x1b46350/0x1cf8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e890400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936f99c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 307 ms_handle_reset con 0x55936f99c400 session 0x55936fe38f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 307 ms_handle_reset con 0x55936e969400 session 0x55936efe0b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 307 ms_handle_reset con 0x55936d2b6c00 session 0x55936f3530e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 120913920 unmapped: 39100416 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:26.461579+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 308 ms_handle_reset con 0x55936c9d4c00 session 0x55936f352b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 308 ms_handle_reset con 0x55936e969c00 session 0x55936fe38f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936f99c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 308 ms_handle_reset con 0x55936f99c400 session 0x55936fe38960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 308 ms_handle_reset con 0x55936e890400 session 0x55936dbee1e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 308 ms_handle_reset con 0x55936e969c00 session 0x55936fe38780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 308 ms_handle_reset con 0x55936d2b6c00 session 0x55936bfc6960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936f99c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d7ff400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123068416 unmapped: 36945920 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 308 ms_handle_reset con 0x55936f99c400 session 0x55936e6ac780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:27.461678+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 37126144 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:28.461811+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 309 ms_handle_reset con 0x55936e995400 session 0x559370bc8000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 309 ms_handle_reset con 0x55936d7ff400 session 0x55936c44cd20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 309 ms_handle_reset con 0x55936e969c00 session 0x559371159860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.242660522s of 10.039448738s, submitted: 190
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 309 ms_handle_reset con 0x55936d2b6c00 session 0x55936ef7e1e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 309 ms_handle_reset con 0x55936c9d4c00 session 0x55936c614780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 37109760 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2160793 data_alloc: 218103808 data_used: 4816896
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:29.461932+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 37109760 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:30.462111+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 310 ms_handle_reset con 0x55936e995400 session 0x559370bc85a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 310 heartbeat osd_stat(store_statfs(0x4f9029000/0x0/0x4ffc00000, data 0x2c8e6c7/0x2e44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 37109760 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:31.462224+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936f99c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 311 heartbeat osd_stat(store_statfs(0x4f9026000/0x0/0x4ffc00000, data 0x2c90152/0x2e46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x3d8f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122904576 unmapped: 37109760 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:32.462308+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 312 ms_handle_reset con 0x55936f99c400 session 0x559370864d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 312 ms_handle_reset con 0x55936c9d4c00 session 0x55936c0b2780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 37093376 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:33.462443+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 122920960 unmapped: 37093376 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2166587 data_alloc: 218103808 data_used: 4812800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:34.462555+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 313 ms_handle_reset con 0x55936d2b6c00 session 0x55936d718780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 313 ms_handle_reset con 0x55936e969c00 session 0x55936efe0b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 125599744 unmapped: 34414592 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:35.462673+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123691008 unmapped: 36323328 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:36.462824+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 315 heartbeat osd_stat(store_statfs(0x4f77f5000/0x0/0x4ffc00000, data 0x33236e9/0x34d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123691008 unmapped: 36323328 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:37.462957+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123691008 unmapped: 36323328 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:38.463122+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 315 heartbeat osd_stat(store_statfs(0x4f77e5000/0x0/0x4ffc00000, data 0x33336e9/0x34e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 315 heartbeat osd_stat(store_statfs(0x4f77e5000/0x0/0x4ffc00000, data 0x33336e9/0x34e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2240191 data_alloc: 218103808 data_used: 5038080
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123691008 unmapped: 36323328 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:39.463297+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123691008 unmapped: 36323328 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:40.463446+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.686919212s of 12.273730278s, submitted: 153
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 316 ms_handle_reset con 0x55936e995400 session 0x55936dbe61e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123691008 unmapped: 36323328 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:41.463549+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 316 ms_handle_reset con 0x55936e995800 session 0x55936dbeef00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 316 ms_handle_reset con 0x55936c9d4c00 session 0x55936d868780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 316 ms_handle_reset con 0x55936d2b6c00 session 0x55936ef7eb40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123707392 unmapped: 36306944 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:42.463684+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 316 ms_handle_reset con 0x55936e969c00 session 0x55936d228000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 123707392 unmapped: 36306944 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:43.463819+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e998c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 316 ms_handle_reset con 0x55936e998c00 session 0x55936ef71680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371605400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2309982 data_alloc: 234881024 data_used: 14262272
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f77e0000/0x0/0x4ffc00000, data 0x3335206/0x34ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 128729088 unmapped: 31285248 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:44.463984+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff15800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fe3a400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 317 ms_handle_reset con 0x55936ff15800 session 0x55936dbef0e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 24838144 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:45.464223+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 318 ms_handle_reset con 0x55936fe3a400 session 0x55936d718000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 318 heartbeat osd_stat(store_statfs(0x4f77dc000/0x0/0x4ffc00000, data 0x3336de5/0x34f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 318 ms_handle_reset con 0x55936c9d4c00 session 0x55936c6143c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 318 ms_handle_reset con 0x559371605400 session 0x55936c5872c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135241728 unmapped: 24772608 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:46.464378+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135258112 unmapped: 24756224 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:47.464543+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 319 ms_handle_reset con 0x55936d2b6c00 session 0x55936dbee3c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e998c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 319 ms_handle_reset con 0x55936e998c00 session 0x5593711590e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135299072 unmapped: 24715264 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:48.464750+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 319 ms_handle_reset con 0x55936e969c00 session 0x55936ef7f2c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2376263 data_alloc: 234881024 data_used: 21962752
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:49.464907+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135323648 unmapped: 24690688 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 320 ms_handle_reset con 0x55936c9d4c00 session 0x55936f0e1860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 320 heartbeat osd_stat(store_statfs(0x4f77d3000/0x0/0x4ffc00000, data 0x333d0be/0x34fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 320 ms_handle_reset con 0x55936d2b6c00 session 0x559370ec54a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e998c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 320 heartbeat osd_stat(store_statfs(0x4f77d3000/0x0/0x4ffc00000, data 0x333d0be/0x34fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:50.465062+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135397376 unmapped: 24616960 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 321 ms_handle_reset con 0x55936e998c00 session 0x55936d441e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:51.465195+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135405568 unmapped: 24608768 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:52.465378+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135438336 unmapped: 24576000 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fe3a400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 321 ms_handle_reset con 0x55936fe3a400 session 0x55936d873860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:53.465540+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135438336 unmapped: 24576000 heap: 160014336 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.237196922s of 12.474047661s, submitted: 86
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 322 ms_handle_reset con 0x55936c9d4c00 session 0x5593708643c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 322 heartbeat osd_stat(store_statfs(0x4f77d0000/0x0/0x4ffc00000, data 0x333ec11/0x34fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 322 ms_handle_reset con 0x55936e969c00 session 0x55936d369c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 322 ms_handle_reset con 0x55936d2b6c00 session 0x55936ef70960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e998c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 322 ms_handle_reset con 0x55936e998c00 session 0x559370865680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371605400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2552968 data_alloc: 251658240 data_used: 29917184
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:54.465703+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148439040 unmapped: 15007744 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 322 ms_handle_reset con 0x559371605400 session 0x55936f353e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:55.465840+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 157917184 unmapped: 5529600 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 324 ms_handle_reset con 0x55936c9d4c00 session 0x55936f352d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 324 heartbeat osd_stat(store_statfs(0x4f5c2c000/0x0/0x4ffc00000, data 0x57393df/0x509c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 324 handle_osd_map epochs [324,325], i have 324, src has [1,325]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:56.465975+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152526848 unmapped: 10919936 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 325 ms_handle_reset con 0x55936d2b6c00 session 0x55936d3694a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 325 ms_handle_reset con 0x55936e969c00 session 0x55936f352000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e998c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 325 ms_handle_reset con 0x55936e998c00 session 0x55936f352960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:57.466080+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 153034752 unmapped: 10412032 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 325 heartbeat osd_stat(store_statfs(0x4f5bf0000/0x0/0x4ffc00000, data 0x586eac9/0x50c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 325 heartbeat osd_stat(store_statfs(0x4f5bf0000/0x0/0x4ffc00000, data 0x586eac9/0x50c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371605400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 325 ms_handle_reset con 0x559371605400 session 0x55936ef25c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:58.466236+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 10330112 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2746724 data_alloc: 251658240 data_used: 31801344
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:59.466388+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 10330112 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:00.466628+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 153116672 unmapped: 10330112 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 325 handle_osd_map epochs [325,326], i have 325, src has [1,326]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:01.466751+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 153124864 unmapped: 10321920 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f5c07000/0x0/0x4ffc00000, data 0x587053c/0x50c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:02.466921+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 153165824 unmapped: 10280960 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 ms_handle_reset con 0x55936c9d4c00 session 0x55936efe0960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 ms_handle_reset con 0x55936d2b6c00 session 0x55936d228960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 ms_handle_reset con 0x55936e969c00 session 0x55936d869c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:03.467078+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 153165824 unmapped: 10280960 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e998c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 ms_handle_reset con 0x55936e998c00 session 0x55936f3521e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371606000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 ms_handle_reset con 0x55936e995400 session 0x55936d867e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.385339737s of 10.364933014s, submitted: 222
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 ms_handle_reset con 0x55936e999400 session 0x55936d868960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 ms_handle_reset con 0x559371607400 session 0x55936efe0d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 ms_handle_reset con 0x559371606000 session 0x55936c588780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 ms_handle_reset con 0x55936c9d4c00 session 0x559370bc9c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 ms_handle_reset con 0x55936d2b6c00 session 0x55936c44c780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 ms_handle_reset con 0x55936c9d4c00 session 0x55936c587c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:04.467233+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2741738 data_alloc: 251658240 data_used: 31817728
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f5c08000/0x0/0x4ffc00000, data 0x587053c/0x50c6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 153190400 unmapped: 10256384 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 ms_handle_reset con 0x55936d2b6c00 session 0x55936c0b21e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:05.467368+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 153190400 unmapped: 10256384 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 ms_handle_reset con 0x55936e999400 session 0x559370ec45a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:06.467598+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 153223168 unmapped: 10223616 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f5c07000/0x0/0x4ffc00000, data 0x587059e/0x50c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371606000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 327 ms_handle_reset con 0x559371606000 session 0x55936ef7e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:07.467755+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 154304512 unmapped: 9142272 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 327 ms_handle_reset con 0x55936e995400 session 0x55936f0e0f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:08.467859+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f5c03000/0x0/0x4ffc00000, data 0x587217d/0x50cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 154370048 unmapped: 9076736 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 327 ms_handle_reset con 0x55936c9d4c00 session 0x55936d369a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d2b6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f5c03000/0x0/0x4ffc00000, data 0x587217d/0x50cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 327 ms_handle_reset con 0x55936e999400 session 0x55936efe03c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:09.468008+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2748618 data_alloc: 251658240 data_used: 31825920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f5c03000/0x0/0x4ffc00000, data 0x587211b/0x50ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 327 handle_osd_map epochs [327,328], i have 327, src has [1,328]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 handle_osd_map epochs [328,328], i have 328, src has [1,328]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 154402816 unmapped: 9043968 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 ms_handle_reset con 0x55936e995400 session 0x55936fe390e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 ms_handle_reset con 0x559371607400 session 0x55936ef710e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 ms_handle_reset con 0x55936d2b6c00 session 0x55936f0f6000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 ms_handle_reset con 0x55936e999400 session 0x55936c0b2960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 ms_handle_reset con 0x559371607400 session 0x55936f0f6b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 ms_handle_reset con 0x55936e969c00 session 0x559370864000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 ms_handle_reset con 0x55936e995400 session 0x559370bc94a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 ms_handle_reset con 0x55936c9d4c00 session 0x55936ef7f2c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:10.468174+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142688256 unmapped: 20758528 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 ms_handle_reset con 0x55936e995400 session 0x559370ec4000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f7db9000/0x0/0x4ffc00000, data 0x2333dcc/0x24fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:11.468294+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141991936 unmapped: 21454848 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:12.468478+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145104896 unmapped: 18341888 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:13.468615+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145104896 unmapped: 18341888 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:14.468763+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2278209 data_alloc: 234881024 data_used: 19480576
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145104896 unmapped: 18341888 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 heartbeat osd_stat(store_statfs(0x4f87d4000/0x0/0x4ffc00000, data 0x2333d6a/0x24fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 328 handle_osd_map epochs [329,329], i have 329, src has [1,329]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.142401695s of 10.823896408s, submitted: 146
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 329 ms_handle_reset con 0x55936e999400 session 0x55936ef25680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:15.468897+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 18333696 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:16.469026+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 18333696 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 329 handle_osd_map epochs [330,331], i have 329, src has [1,331]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 331 heartbeat osd_stat(store_statfs(0x4f87d0000/0x0/0x4ffc00000, data 0x233593f/0x24fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:17.469146+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 18333696 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:18.469269+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145113088 unmapped: 18333696 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:19.469402+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2292665 data_alloc: 234881024 data_used: 19492864
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145121280 unmapped: 18325504 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 332 ms_handle_reset con 0x559371607400 session 0x55936f0e12c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:20.469568+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145121280 unmapped: 18325504 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:21.469673+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145129472 unmapped: 18317312 heap: 163446784 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 333 heartbeat osd_stat(store_statfs(0x4f87c6000/0x0/0x4ffc00000, data 0x233abec/0x2506000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371606000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 333 ms_handle_reset con 0x559371606000 session 0x55936c0b34a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:22.469829+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 19718144 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e998c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 333 ms_handle_reset con 0x55936e998c00 session 0x559370865a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:23.469963+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 19570688 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:24.470109+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2338481 data_alloc: 234881024 data_used: 21823488
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 333 heartbeat osd_stat(store_statfs(0x4f85af000/0x0/0x4ffc00000, data 0x25517dd/0x271f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 19570688 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:25.470214+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.554904938s of 10.807321548s, submitted: 50
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 334 ms_handle_reset con 0x55936e995400 session 0x55936d8730e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145760256 unmapped: 19587072 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 334 ms_handle_reset con 0x55936e999400 session 0x55936d368000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 334 handle_osd_map epochs [334,335], i have 334, src has [1,335]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371606000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:26.470343+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 19570688 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:27.470636+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 19570688 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 336 ms_handle_reset con 0x559371606000 session 0x55936c6143c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:28.470764+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145907712 unmapped: 19439616 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 336 ms_handle_reset con 0x559371607400 session 0x559370ec4f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 336 heartbeat osd_stat(store_statfs(0x4f85a4000/0x0/0x4ffc00000, data 0x25569da/0x2728000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:29.470862+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 336 ms_handle_reset con 0x559371607800 session 0x559370bc9680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2347858 data_alloc: 234881024 data_used: 22245376
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146997248 unmapped: 18350080 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:30.471028+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146997248 unmapped: 18350080 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 336 ms_handle_reset con 0x55936e995400 session 0x55936ef24000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:31.471159+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 336 heartbeat osd_stat(store_statfs(0x4f85a7000/0x0/0x4ffc00000, data 0x25569ca/0x2727000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147005440 unmapped: 18341888 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:32.471305+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147005440 unmapped: 18341888 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:33.471499+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 337 ms_handle_reset con 0x55936e999400 session 0x5593708641e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147005440 unmapped: 18341888 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 337 heartbeat osd_stat(store_statfs(0x4f85a6000/0x0/0x4ffc00000, data 0x2556a2c/0x2728000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371606000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 337 ms_handle_reset con 0x559371606000 session 0x55936f0e0f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 337 handle_osd_map epochs [337,338], i have 337, src has [1,338]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 338 ms_handle_reset con 0x559371607400 session 0x559370bc8f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:34.471633+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2358513 data_alloc: 234881024 data_used: 22257664
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147005440 unmapped: 18341888 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:35.471782+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.920164108s of 10.039581299s, submitted: 101
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147013632 unmapped: 18333696 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 338 ms_handle_reset con 0x559371607c00 session 0x55936d8732c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 338 heartbeat osd_stat(store_statfs(0x4f859d000/0x0/0x4ffc00000, data 0x255a1a6/0x272e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:36.471929+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147038208 unmapped: 18309120 heap: 165347328 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 339 heartbeat osd_stat(store_statfs(0x4f859d000/0x0/0x4ffc00000, data 0x255bbc3/0x2730000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 339 ms_handle_reset con 0x55936e995400 session 0x55936dc1a780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:37.472042+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150323200 unmapped: 17309696 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 340 ms_handle_reset con 0x55936e999400 session 0x55936dc1b860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:38.472197+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148078592 unmapped: 19554304 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:39.472358+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371606000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 340 heartbeat osd_stat(store_statfs(0x4f836c000/0x0/0x4ffc00000, data 0x27897ce/0x2961000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2384232 data_alloc: 234881024 data_used: 22274048
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 340 ms_handle_reset con 0x559371606000 session 0x55936e6a4d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148078592 unmapped: 19554304 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 340 handle_osd_map epochs [340,341], i have 340, src has [1,341]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 341 ms_handle_reset con 0x559371607400 session 0x55936e6a5e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:40.472591+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148094976 unmapped: 19537920 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 341 ms_handle_reset con 0x55937164c000 session 0x55936e6a4f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 341 ms_handle_reset con 0x55936e995400 session 0x55936d1c4d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 341 heartbeat osd_stat(store_statfs(0x4f836c000/0x0/0x4ffc00000, data 0x278b33d/0x2962000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:41.472698+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148111360 unmapped: 19521536 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:42.472844+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 341 ms_handle_reset con 0x55936e999400 session 0x55936c588d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148111360 unmapped: 19521536 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:43.473009+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148111360 unmapped: 19521536 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371606000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:44.473157+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2390601 data_alloc: 234881024 data_used: 22278144
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148119552 unmapped: 19513344 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 342 ms_handle_reset con 0x559371606000 session 0x55936fe39680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:45.473316+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148135936 unmapped: 19496960 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 342 heartbeat osd_stat(store_statfs(0x4f8326000/0x0/0x4ffc00000, data 0x27ccfab/0x29a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 342 handle_osd_map epochs [342,343], i have 342, src has [1,343]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.873856544s of 10.470364571s, submitted: 63
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:46.473490+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 343 ms_handle_reset con 0x55937164c400 session 0x55936fe39a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148144128 unmapped: 19488768 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 343 ms_handle_reset con 0x559371607400 session 0x55936d440960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:47.473641+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148144128 unmapped: 19488768 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:48.473740+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148144128 unmapped: 19488768 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:49.473894+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2402704 data_alloc: 234881024 data_used: 22306816
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148168704 unmapped: 19464192 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 344 ms_handle_reset con 0x55937164dc00 session 0x55936ef712c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:50.474097+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 344 heartbeat osd_stat(store_statfs(0x4f8320000/0x0/0x4ffc00000, data 0x27d05ff/0x29ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148168704 unmapped: 19464192 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:51.474262+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148168704 unmapped: 19464192 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:52.474409+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 344 ms_handle_reset con 0x55936e995400 session 0x55936e6ade00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148168704 unmapped: 19464192 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:53.474546+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148217856 unmapped: 19415040 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:54.474676+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 344 ms_handle_reset con 0x559371607400 session 0x55936d867680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2409112 data_alloc: 234881024 data_used: 23871488
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148226048 unmapped: 19406848 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 344 heartbeat osd_stat(store_statfs(0x4f8322000/0x0/0x4ffc00000, data 0x27d059d/0x29ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:55.474781+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 344 handle_osd_map epochs [344,345], i have 344, src has [1,345]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148234240 unmapped: 19398656 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.352576256s of 10.003747940s, submitted: 43
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 346 ms_handle_reset con 0x55936e999400 session 0x55936d440f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 346 ms_handle_reset con 0x55936c9d4c00 session 0x559370bc9c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 346 ms_handle_reset con 0x55936e969c00 session 0x55936d868960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:56.474906+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 346 ms_handle_reset con 0x55936e995400 session 0x55936e6a4f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148242432 unmapped: 19390464 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 347 ms_handle_reset con 0x55936e999400 session 0x55936e6a5e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:57.475040+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 347 heartbeat osd_stat(store_statfs(0x4f8758000/0x0/0x4ffc00000, data 0x239476a/0x2574000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x4f2f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149307392 unmapped: 18325504 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 347 ms_handle_reset con 0x559371607400 session 0x55936dc1a780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 347 ms_handle_reset con 0x55937164dc00 session 0x55936d8732c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 347 ms_handle_reset con 0x55936e969c00 session 0x55936f0e0f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:58.475517+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147103744 unmapped: 20529152 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 347 ms_handle_reset con 0x559371607400 session 0x55936ef7fe00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371606000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 347 ms_handle_reset con 0x559371606000 session 0x55936ef7f0e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:59.475653+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 347 ms_handle_reset con 0x55937164c400 session 0x55936d3690e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2387902 data_alloc: 234881024 data_used: 23117824
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147103744 unmapped: 20529152 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370d8f400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:00.475832+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147103744 unmapped: 20529152 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 348 ms_handle_reset con 0x559370d8f400 session 0x559370ec4f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:01.476010+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147365888 unmapped: 20267008 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fe3b400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 349 ms_handle_reset con 0x55936fe3b400 session 0x55936dbee5a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370d8f800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 349 ms_handle_reset con 0x55936d21ec00 session 0x55936d718f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:02.476169+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 349 ms_handle_reset con 0x55936e969c00 session 0x55936f3521e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164cc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142778368 unmapped: 24854528 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 350 ms_handle_reset con 0x559370d8f800 session 0x55936d441860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 350 ms_handle_reset con 0x55937164cc00 session 0x55936fe390e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 350 ms_handle_reset con 0x55936e995400 session 0x55936d368000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 350 ms_handle_reset con 0x55936e999400 session 0x55936c0b34a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:03.476377+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 350 heartbeat osd_stat(store_statfs(0x4f92f4000/0x0/0x4ffc00000, data 0x13e4b30/0x15c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
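
The heartbeat line embeds a store_statfs summary. Reading the fields as Ceph's store_statfs_t printer emits them (available / internally reserved / total, then data stored / allocated; an assumption worth double-checking against the exact Ceph release), the hex values decode to a roughly 20 GiB OSD carrying about 20 MiB of object data:

import re

line = ("osd_stat(store_statfs(0x4f92f4000/0x0/0x4ffc00000, data 0x13e4b30/0x15c8000, "
        "compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])")

m = re.search(r"store_statfs\(0x([0-9a-f]+)/0x([0-9a-f]+)/0x([0-9a-f]+), "
              r"data 0x([0-9a-f]+)/0x([0-9a-f]+)", line)
available, reserved, total, stored, allocated = (int(x, 16) for x in m.groups())

print(f"total     {total / 2**30:6.2f} GiB")       # 20.00 GiB device
print(f"available {available / 2**30:6.2f} GiB")   # 19.89 GiB free
print(f"data      {stored / 2**20:6.1f} MiB stored, "
      f"{allocated / 2**20:6.1f} MiB allocated")   # 19.9 / 21.8 MiB
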
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164cc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142778368 unmapped: 24854528 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 350 ms_handle_reset con 0x55937164cc00 session 0x55936f0e12c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 350 ms_handle_reset con 0x55936d21ec00 session 0x55936d1c5680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:04.476521+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 350 ms_handle_reset con 0x55936e969c00 session 0x55936f352f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2201464 data_alloc: 234881024 data_used: 13426688
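
_resize_shards shows how the ~2.65 GiB cache budget from the tuner is carved into the BlueStore pools: the per-pool allocations are chunk-aligned and sum to just under cache_size, while the used columns show the caches are nearly empty on this mostly idle OSD. Summing the entry above (bytes copied from the line; the labels are illustrative):

cache_size = 2845415832          # from the _resize_shards line above
pools = {                        # pool: (allocated, used), in bytes
    "kv":       (1207959552, 2144),
    "kv_onode": (234881024, 464),
    "meta":     (1140850688, 2201464),
    "data":     (234881024, 13426688),
}

for name, (alloc, used) in pools.items():
    print(f"{name:8s} {alloc / 2**20:7.1f} MiB alloc "
          f"({100 * alloc / cache_size:4.1f}%), {used / 2**20:8.2f} MiB used")

total_alloc = sum(a for a, _ in pools.values())
print(f"allocated {total_alloc / 2**20:.1f} of {cache_size / 2**20:.1f} MiB budget")
# allocated 2688.0 of 2713.6 MiB budget
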
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142794752 unmapped: 24838144 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 350 ms_handle_reset con 0x55936ff14000 session 0x55936ef71c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 350 ms_handle_reset con 0x55936e968c00 session 0x55936f0e1c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e980800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:05.476657+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 350 ms_handle_reset con 0x55936e980800 session 0x55936d872d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 134406144 unmapped: 33226752 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:06.476792+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.737846375s of 10.482941628s, submitted: 184
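
The periodic _kv_sync_thread line is effectively a utilization report for BlueStore's RocksDB commit thread over the last interval: it sat idle for 9.74 s of 10.48 s while flushing 184 submitted transactions, i.e. only about 7% busy. The arithmetic, from the line above (the per-transaction figure is a crude average, since commits are batched):

idle, total, submitted = 9.737846375, 10.482941628, 184   # from the log line above

busy = total - idle
print(f"busy {busy:.3f} s of {total:.3f} s ({100 * busy / total:.1f}%), "
      f"{submitted} txns, ~{1000 * busy / submitted:.2f} ms busy per txn")
# busy 0.745 s of 10.483 s (7.1%), 184 txns, ~4.05 ms busy per txn
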
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 134422528 unmapped: 33210368 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8bc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 351 ms_handle_reset con 0x55936fc8bc00 session 0x559370864780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:07.476953+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 134438912 unmapped: 33193984 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:08.477104+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 134438912 unmapped: 33193984 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:09.477234+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 351 heartbeat osd_stat(store_statfs(0x4f9eb2000/0x0/0x4ffc00000, data 0x8294e3/0xa0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2054810 data_alloc: 218103808 data_used: 1310720
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 134438912 unmapped: 33193984 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8a000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 351 ms_handle_reset con 0x55936fc8a000 session 0x55936ef7e1e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:10.477416+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 134479872 unmapped: 33153024 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 351 handle_osd_map epochs [351,352], i have 351, src has [1,352]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 352 ms_handle_reset con 0x55936e968c00 session 0x55936bfc7e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e980800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 352 ms_handle_reset con 0x55936e980800 session 0x55936cad2960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:11.477606+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 134479872 unmapped: 33153024 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:12.477743+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 134479872 unmapped: 33153024 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:13.477928+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 134479872 unmapped: 33153024 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:14.478064+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
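
The recurring commit_cache_size pair logs the high-priority pool ratio being re-applied on each resize cycle, twice with two different values, presumably for two distinct RocksDB block caches. The printed decimals are exact small fractions (2/7 and 1/18), which is easier to confirm than to eyeball:

from fractions import Fraction

for ratio in (0.285714, 0.0555556):     # values from the rocksdb lines above
    print(ratio, "≈", Fraction(ratio).limit_denominator(1000))
# 0.285714 ≈ 2/7
# 0.0555556 ≈ 1/18
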
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2057216 data_alloc: 218103808 data_used: 1318912
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 134479872 unmapped: 33153024 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 352 heartbeat osd_stat(store_statfs(0x4f9eb1000/0x0/0x4ffc00000, data 0x82af3c/0xa0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:15.478188+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8bc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 352 ms_handle_reset con 0x55936fc8bc00 session 0x55936ef24f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 134488064 unmapped: 33144832 heap: 167632896 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 352 ms_handle_reset con 0x55936ff14000 session 0x55936ef25860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8a800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 352 ms_handle_reset con 0x55936fc8a800 session 0x55936ef25e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:16.478326+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 352 heartbeat osd_stat(store_statfs(0x4f93c3000/0x0/0x4ffc00000, data 0x1318f9e/0x14fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 133996544 unmapped: 45703168 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:17.478481+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.751493454s of 11.214600563s, submitted: 86
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 134004736 unmapped: 45694976 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 352 handle_osd_map epochs [352,353], i have 352, src has [1,353]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 353 ms_handle_reset con 0x55936e968c00 session 0x55936c44cd20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:18.478624+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 44638208 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 353 heartbeat osd_stat(store_statfs(0x4f93be000/0x0/0x4ffc00000, data 0x131ab7d/0x14ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:19.478804+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2143847 data_alloc: 218103808 data_used: 1327104
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 44638208 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:20.478999+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 44638208 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:21.479143+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 44638208 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:22.479287+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e980800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 353 ms_handle_reset con 0x55936e980800 session 0x55936c44c5a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 44638208 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:23.479399+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 44638208 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 353 heartbeat osd_stat(store_statfs(0x4f93bd000/0x0/0x4ffc00000, data 0x131ab8d/0x1500000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:24.479566+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8a800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2145675 data_alloc: 218103808 data_used: 1327104
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 44638208 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8bc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:25.479751+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 353 ms_handle_reset con 0x55936fc8bc00 session 0x559370865860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135061504 unmapped: 44638208 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 353 ms_handle_reset con 0x55936d32dc00 session 0x559370bc8780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 353 ms_handle_reset con 0x55936ff14000 session 0x55936d8723c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 354 ms_handle_reset con 0x55936d32dc00 session 0x55936e6ad0e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:26.479932+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 44621824 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 354 heartbeat osd_stat(store_statfs(0x4f93b9000/0x0/0x4ffc00000, data 0x131c71a/0x1504000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 354 ms_handle_reset con 0x55936e968c00 session 0x55936f0f7680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 354 ms_handle_reset con 0x55936fc8a800 session 0x55936f0e1680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:27.480055+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 44621824 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:28.480203+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 354 heartbeat osd_stat(store_statfs(0x4f93b9000/0x0/0x4ffc00000, data 0x131c71a/0x1504000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 354 handle_osd_map epochs [355,355], i have 355, src has [1,355]
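
This pair shows the map-delivery path including its benign duplicate case: the first message advances the OSD from epoch 354 to 355, while the second arrives carrying the same [355,355] range after the OSD already has 355, so there is nothing left to apply. A toy classifier of these lines (the real OSD logic is considerably more involved):

import re

lines = [
    "osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]",
    "osd.2 354 handle_osd_map epochs [355,355], i have 355, src has [1,355]",
]

pat = re.compile(r"handle_osd_map epochs \[(\d+),(\d+)\], i have (\d+)")
for line in lines:
    first, last, have = map(int, pat.search(line).groups())
    if have >= last:
        print(f"have {have}: duplicate delivery of [{first},{last}], nothing to apply")
    else:
        print(f"have {have}: apply maps {max(first, have + 1)}..{last}")
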
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.534790039s of 10.989254951s, submitted: 9
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135077888 unmapped: 44621824 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e980800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 355 ms_handle_reset con 0x55936e980800 session 0x55936f0e0780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8bc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 355 ms_handle_reset con 0x55936fc8bc00 session 0x55936f0e1a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 355 ms_handle_reset con 0x55936d32dc00 session 0x55936c0b2f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:29.480377+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 355 ms_handle_reset con 0x55936e968c00 session 0x55936c0b32c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e980800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8a800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 355 ms_handle_reset con 0x55936fc8a800 session 0x559370ec5c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 355 ms_handle_reset con 0x55936e980800 session 0x55936d1c52c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2162675 data_alloc: 218103808 data_used: 1343488
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 355 heartbeat osd_stat(store_statfs(0x4f93b4000/0x0/0x4ffc00000, data 0x131e319/0x150a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 44670976 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 355 ms_handle_reset con 0x55936d32c400 session 0x55936d1c5a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:30.480579+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 355 ms_handle_reset con 0x55936d32dc00 session 0x55936fe38b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135028736 unmapped: 44670976 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 355 ms_handle_reset con 0x55936e968c00 session 0x55936f0e0d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 355 handle_osd_map epochs [355,356], i have 355, src has [1,356]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:31.480697+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 356 ms_handle_reset con 0x55936d32c400 session 0x55936d1c41e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135348224 unmapped: 44351488 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e980800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8a800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:32.480815+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135348224 unmapped: 44351488 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:33.480927+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135561216 unmapped: 44138496 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 356 heartbeat osd_stat(store_statfs(0x4f9389000/0x0/0x4ffc00000, data 0x1343f7f/0x1534000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 356 ms_handle_reset con 0x55936e968800 session 0x55936d368b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:34.481071+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2244975 data_alloc: 234881024 data_used: 11698176
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 137502720 unmapped: 42196992 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 356 ms_handle_reset con 0x55936e980800 session 0x55936fc1e3c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 356 ms_handle_reset con 0x55936fc8a800 session 0x55936ef25c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 356 ms_handle_reset con 0x55936d32c400 session 0x55936c587c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:35.481213+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 356 ms_handle_reset con 0x55936d32dc00 session 0x55936c0b25a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 356 ms_handle_reset con 0x55936e968800 session 0x55936d1c5680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 137543680 unmapped: 42156032 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:36.481479+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e969000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 357 ms_handle_reset con 0x55936e969000 session 0x55936d2283c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 357 ms_handle_reset con 0x55936e968c00 session 0x55936ef7e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 137543680 unmapped: 42156032 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 357 ms_handle_reset con 0x55936d32dc00 session 0x55936f3521e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 357 ms_handle_reset con 0x55936d32c400 session 0x55936d719860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:37.481640+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 137543680 unmapped: 42156032 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8a800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 357 ms_handle_reset con 0x55936fc8a800 session 0x55936fc1e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 357 ms_handle_reset con 0x55936e968800 session 0x55936c0b34a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 357 ms_handle_reset con 0x55936d32c400 session 0x55936ef7ef00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:38.481815+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 357 heartbeat osd_stat(store_statfs(0x4f93b3000/0x0/0x4ffc00000, data 0x1321859/0x150b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 137568256 unmapped: 42131456 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.031754494s of 10.359786034s, submitted: 100
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 358 ms_handle_reset con 0x55936d32dc00 session 0x559371158f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 358 heartbeat osd_stat(store_statfs(0x4f93b3000/0x0/0x4ffc00000, data 0x1321859/0x150b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:39.481973+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8a800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2094913 data_alloc: 218103808 data_used: 1368064
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 358 ms_handle_reset con 0x55936fc8a800 session 0x55936ef7f680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 47972352 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 359 ms_handle_reset con 0x55936e968c00 session 0x55936ef7e780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 359 ms_handle_reset con 0x55936d21ec00 session 0x55936fc1eb40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:40.482194+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 47972352 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:41.482509+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 47972352 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:42.482726+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 131727360 unmapped: 47972352 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 359 ms_handle_reset con 0x55936d21ec00 session 0x559370864780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:43.483063+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 359 ms_handle_reset con 0x55936d32c400 session 0x55936fe39860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 131743744 unmapped: 47955968 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 359 heartbeat osd_stat(store_statfs(0x4f9e9d000/0x0/0x4ffc00000, data 0x836f89/0xa21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:44.483183+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2095495 data_alloc: 218103808 data_used: 1372160
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 131743744 unmapped: 47955968 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 359 ms_handle_reset con 0x55936d32dc00 session 0x55936bfc61e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:45.483354+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 131743744 unmapped: 47955968 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f9e9d000/0x0/0x4ffc00000, data 0x836f89/0xa21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:46.483505+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 131743744 unmapped: 47955968 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f9e99000/0x0/0x4ffc00000, data 0x8389ec/0xa24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:47.483632+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 131743744 unmapped: 47955968 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 360 ms_handle_reset con 0x55936e968c00 session 0x55936bfc7c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:48.483761+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f9cbc000/0x0/0x4ffc00000, data 0xa169ec/0xc02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132194304 unmapped: 47505408 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:49.483964+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2120381 data_alloc: 218103808 data_used: 1380352
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132194304 unmapped: 47505408 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:50.484374+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132194304 unmapped: 47505408 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:51.484576+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132194304 unmapped: 47505408 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:52.484708+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132194304 unmapped: 47505408 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f9cbc000/0x0/0x4ffc00000, data 0xa169ec/0xc02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:53.484894+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132194304 unmapped: 47505408 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:54.485082+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2120381 data_alloc: 218103808 data_used: 1380352
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132194304 unmapped: 47505408 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:55.485248+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132194304 unmapped: 47505408 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8a800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.632558823s of 16.962339401s, submitted: 61
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 360 ms_handle_reset con 0x55936fc8a800 session 0x55936dbee5a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:56.485491+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f9cbc000/0x0/0x4ffc00000, data 0xa169ec/0xc02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 360 ms_handle_reset con 0x55936d21ec00 session 0x55936e6ada40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 361 ms_handle_reset con 0x55936d32c400 session 0x55936f3532c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 361 ms_handle_reset con 0x55936e968c00 session 0x559370ec41e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e994c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 361 ms_handle_reset con 0x55936e999400 session 0x55936d368d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 361 ms_handle_reset con 0x55936e994c00 session 0x55936d866f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132071424 unmapped: 47628288 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 361 ms_handle_reset con 0x55936d21ec00 session 0x55936f3530e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 361 ms_handle_reset con 0x55936d32c400 session 0x55936cad30e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 361 ms_handle_reset con 0x55936e968c00 session 0x55936bfc7e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 361 ms_handle_reset con 0x55936e999400 session 0x55936f352f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 361 ms_handle_reset con 0x55937164c400 session 0x55936c614f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:57.485641+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55936e995400 session 0x55936d228b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55936d32dc00 session 0x55936ef24f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 47579136 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:58.485791+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55936d21ec00 session 0x55936cad2000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 47579136 heap: 179699712 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55936e968c00 session 0x559370ec4f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55936d32c400 session 0x55936cad2f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 heartbeat osd_stat(store_statfs(0x4f7932000/0x0/0x4ffc00000, data 0x2d9a158/0x2f8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,6,1])
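
Unlike every other heartbeat in this stretch, this one reports a non-empty op histogram alongside a visible drop in available space, i.e. the OSD actually serviced writes during the interval. The bucket semantics aren't spelled out in the log itself, so a safe first step is just to extract the values and flag the nonzero entries:

import re

line = ("osd.2 362 heartbeat osd_stat(store_statfs(0x4f7932000/0x0/0x4ffc00000, "
        "data 0x2d9a158/0x2f8a000, compress 0x0/0x0/0x0, omap 0x63a, "
        "meta 0x533f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,6,1])")

hist = [int(x) for x
        in re.search(r"op hist \[([\d,]*)\]", line).group(1).split(",") if x]
print("nonzero buckets:", [(i, n) for i, n in enumerate(hist) if n])
# nonzero buckets: [(8, 6), (9, 1)]
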
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:59.485987+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55936e968c00 session 0x55936ef7e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2437744 data_alloc: 218103808 data_used: 1388544
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55936e995400 session 0x55936d2283c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132235264 unmapped: 72671232 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55936e999400 session 0x55936d1c5680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:00.486159+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164c400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164cc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55937164cc00 session 0x55936f352d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132259840 unmapped: 72646656 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55937164c400 session 0x55936c587c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:01.486402+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140648448 unmapped: 64258048 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:02.486570+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55936e999400 session 0x55936dc1a1e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164cc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 heartbeat osd_stat(store_statfs(0x4f2934000/0x0/0x4ffc00000, data 0x7d9a158/0x7f8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 72630272 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55937164cc00 session 0x55936d441e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:03.486669+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fe3b400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370d8f800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 139649024 unmapped: 65257472 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:04.486767+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3677029 data_alloc: 218103808 data_used: 2023424
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 143908864 unmapped: 60997632 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:05.486941+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 131538944 unmapped: 73367552 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.247469425s of 10.229736328s, submitted: 83
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:06.487115+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 136806400 unmapped: 68100096 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55936d21ec00 session 0x55936d440960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 ms_handle_reset con 0x55936d32dc00 session 0x55936d441860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:07.487243+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132694016 unmapped: 72212480 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370d91000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 handle_osd_map epochs [362,363], i have 362, src has [1,363]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 362 handle_osd_map epochs [363,363], i have 363, src has [1,363]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:08.487363+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 363 heartbeat osd_stat(store_statfs(0x4e7905000/0x0/0x4ffc00000, data 0x12dc5d39/0x12fb8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 363 ms_handle_reset con 0x559370d91000 session 0x55936d228f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132726784 unmapped: 72179712 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:09.487482+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4249427 data_alloc: 234881024 data_used: 9912320
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132734976 unmapped: 72171520 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:10.487637+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132734976 unmapped: 72171520 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:11.487753+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 363 ms_handle_reset con 0x55936d21ec00 session 0x559370ec5860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132734976 unmapped: 72171520 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:12.487948+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164cc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 364 ms_handle_reset con 0x55937164cc00 session 0x559370ec4780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132751360 unmapped: 72155136 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:13.488171+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 365 heartbeat osd_stat(store_statfs(0x4e7901000/0x0/0x4ffc00000, data 0x12dc7934/0x12fbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 365 ms_handle_reset con 0x55936e999400 session 0x55936dc012c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 365 ms_handle_reset con 0x55936d32dc00 session 0x559370ec50e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 132775936 unmapped: 72130560 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:14.488292+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4276690 data_alloc: 234881024 data_used: 9957376
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 134356992 unmapped: 70549504 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370d90800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 365 ms_handle_reset con 0x559370d90800 session 0x55936f352d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:15.488473+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 365 handle_osd_map epochs [365,366], i have 365, src has [1,366]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 366 ms_handle_reset con 0x55936d32dc00 session 0x559371159c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 366 handle_osd_map epochs [367,367], i have 367, src has [1,367]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 135438336 unmapped: 69468160 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 367 ms_handle_reset con 0x55936d21ec00 session 0x55936c0b25a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:16.488598+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.786089897s of 10.267209053s, submitted: 114
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 367 heartbeat osd_stat(store_statfs(0x4e6093000/0x0/0x4ffc00000, data 0x1348fc1b/0x13688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 138051584 unmapped: 66854912 heap: 204906496 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:17.488749+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164cc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 367 ms_handle_reset con 0x55937164cc00 session 0x55936dbee1e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 367 ms_handle_reset con 0x55936e999400 session 0x5593708643c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcc800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 184524800 unmapped: 37175296 heap: 221700096 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:18.488880+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151298048 unmapped: 91406336 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:19.489011+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 367 heartbeat osd_stat(store_statfs(0x4e0912000/0x0/0x4ffc00000, data 0x18c13c1b/0x18e0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,2])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5025984 data_alloc: 234881024 data_used: 10317824
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 143147008 unmapped: 99557376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:20.489248+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 139075584 unmapped: 103628800 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:21.489495+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147800064 unmapped: 94904320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:22.489691+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 367 heartbeat osd_stat(store_statfs(0x4db4f0000/0x0/0x4ffc00000, data 0x1e035c1b/0x1e22e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,3])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 139976704 unmapped: 102727680 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:23.489856+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152838144 unmapped: 89866240 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:24.490031+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6214856 data_alloc: 234881024 data_used: 10321920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140607488 unmapped: 102096896 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:25.490202+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bca000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 93347840 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:26.490372+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.478274345s of 10.037839890s, submitted: 110
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 369 ms_handle_reset con 0x559370bcc800 session 0x559370864960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 369 ms_handle_reset con 0x559370bca000 session 0x559370864780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 369 ms_handle_reset con 0x559370bcd800 session 0x559370864b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 101654528 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:27.490588+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 369 heartbeat osd_stat(store_statfs(0x4d28e8000/0x0/0x4ffc00000, data 0x26c39333/0x26e35000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 101654528 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:28.490810+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 370 ms_handle_reset con 0x55936d21ec00 session 0x55936bfc61e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141082624 unmapped: 101621760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:29.490955+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 371 ms_handle_reset con 0x55936d32dc00 session 0x559370bc9c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6541362 data_alloc: 234881024 data_used: 10346496
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141139968 unmapped: 101564416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:30.491269+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141139968 unmapped: 101564416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:31.491521+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141139968 unmapped: 101564416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:32.491661+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 371 ms_handle_reset con 0x55936e999400 session 0x55936dbef0e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 371 ms_handle_reset con 0x55936d21ec00 session 0x55936f3532c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141139968 unmapped: 101564416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 371 heartbeat osd_stat(store_statfs(0x4d28de000/0x0/0x4ffc00000, data 0x26df6ad5/0x26e40000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:33.491777+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141139968 unmapped: 101564416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:34.491903+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 371 ms_handle_reset con 0x55936d32dc00 session 0x55936dc01680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6542852 data_alloc: 234881024 data_used: 10350592
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bca000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141139968 unmapped: 101564416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 371 ms_handle_reset con 0x559370bca000 session 0x55936d369c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:35.492022+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141156352 unmapped: 101548032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 372 heartbeat osd_stat(store_statfs(0x4d28d8000/0x0/0x4ffc00000, data 0x26df9ba9/0x26e46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:36.492138+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.077540398s of 10.615500450s, submitted: 75
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141164544 unmapped: 101539840 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:37.492285+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 372 ms_handle_reset con 0x559370bcd800 session 0x55936c6150e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164cc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152748032 unmapped: 89956352 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 372 ms_handle_reset con 0x55937164cc00 session 0x55936fe385a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:38.492460+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141746176 unmapped: 100958208 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:39.492623+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 372 ms_handle_reset con 0x55936d21ec00 session 0x55936c94d4a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 372 ms_handle_reset con 0x55936d32dc00 session 0x5593711590e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bca000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 372 ms_handle_reset con 0x559370bca000 session 0x55936dbe7a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6643395 data_alloc: 234881024 data_used: 10358784
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141754368 unmapped: 100950016 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:40.492760+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d7e3000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 373 ms_handle_reset con 0x55936d7e3000 session 0x55936fc1f0e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fe14c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 373 ms_handle_reset con 0x55936fe14c00 session 0x5593711585a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 373 heartbeat osd_stat(store_statfs(0x4d1cec000/0x0/0x4ffc00000, data 0x279e2189/0x27a31000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,0,4])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142639104 unmapped: 100065280 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:41.492890+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 373 ms_handle_reset con 0x55936d32dc00 session 0x55936c615e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 373 ms_handle_reset con 0x55936d21ec00 session 0x559371158b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140500992 unmapped: 102203392 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:42.493008+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 374 ms_handle_reset con 0x559370bcd800 session 0x55936d2281e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d7e3000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bca000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140509184 unmapped: 102195200 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:43.493749+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 102170624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:44.493856+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6680113 data_alloc: 234881024 data_used: 10428416
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140533760 unmapped: 102170624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:45.494168+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fe15000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 374 heartbeat osd_stat(store_statfs(0x4d1a5a000/0x0/0x4ffc00000, data 0x27c72d06/0x27cc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 102162432 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:46.494273+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 374 heartbeat osd_stat(store_statfs(0x4d1a5a000/0x0/0x4ffc00000, data 0x27c72d06/0x27cc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.201757431s of 10.054559708s, submitted: 62
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 102162432 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 374 heartbeat osd_stat(store_statfs(0x4d1a5a000/0x0/0x4ffc00000, data 0x27c72d06/0x27cc3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:47.494406+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 102162432 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:48.494545+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 374 heartbeat osd_stat(store_statfs(0x4d1a4a000/0x0/0x4ffc00000, data 0x27c80d06/0x27cd3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140541952 unmapped: 102162432 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:49.494696+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fe14800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6685700 data_alloc: 234881024 data_used: 10428416
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 102154240 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:50.494916+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 374 heartbeat osd_stat(store_statfs(0x4d1a43000/0x0/0x4ffc00000, data 0x27c84d06/0x27cd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 102154240 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:51.495094+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 102154240 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:52.495266+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140566528 unmapped: 102137856 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:53.495462+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140574720 unmapped: 102129664 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:54.495589+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 375 heartbeat osd_stat(store_statfs(0x4d1a43000/0x0/0x4ffc00000, data 0x27c86883/0x27cda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6689481 data_alloc: 234881024 data_used: 10575872
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140632064 unmapped: 102072320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:55.495740+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140697600 unmapped: 102006784 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:56.495857+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 375 heartbeat osd_stat(store_statfs(0x4d1a43000/0x0/0x4ffc00000, data 0x27c86883/0x27cda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1,2,2])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141033472 unmapped: 101670912 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:57.496160+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.585919380s of 10.370415688s, submitted: 25
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 375 ms_handle_reset con 0x55936fe14800 session 0x55936c44d0e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 375 heartbeat osd_stat(store_statfs(0x4d19fd000/0x0/0x4ffc00000, data 0x27ccd883/0x27d21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140738560 unmapped: 101965824 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:58.496307+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 375 heartbeat osd_stat(store_statfs(0x4d19fd000/0x0/0x4ffc00000, data 0x27ccd883/0x27d21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140738560 unmapped: 101965824 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:59.496489+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 375 heartbeat osd_stat(store_statfs(0x4d19fd000/0x0/0x4ffc00000, data 0x27ccd883/0x27d21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6701237 data_alloc: 234881024 data_used: 12316672
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140779520 unmapped: 101924864 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:00.496743+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140959744 unmapped: 101744640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:01.496932+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 101736448 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:02.497053+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 101736448 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:03.497213+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 375 heartbeat osd_stat(store_statfs(0x4d19f3000/0x0/0x4ffc00000, data 0x27cdf883/0x27d2b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 101736448 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:04.497353+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6708957 data_alloc: 234881024 data_used: 12595200
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 101736448 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:05.497508+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e434000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 375 ms_handle_reset con 0x55936e434000 session 0x55936f0f7c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141459456 unmapped: 101244928 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 375 ms_handle_reset con 0x55936d21ec00 session 0x55936f0f61e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fe14800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 375 ms_handle_reset con 0x55936fe14800 session 0x55936c0b2000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:06.497641+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 375 ms_handle_reset con 0x559370bcd800 session 0x55936bfc7e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141467648 unmapped: 101236736 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:07.497808+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.932460546s of 10.064344406s, submitted: 23
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 376 ms_handle_reset con 0x55936c9d4800 session 0x55936f0f63c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff15c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 376 ms_handle_reset con 0x55936ff15c00 session 0x55936d368960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141590528 unmapped: 101113856 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 376 ms_handle_reset con 0x55936c9d4800 session 0x55936d368f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:08.497929+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 376 ms_handle_reset con 0x55936d32dc00 session 0x55936bfc6780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 376 ms_handle_reset con 0x55936d21ec00 session 0x559371158960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fe14800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 376 ms_handle_reset con 0x55936fe14800 session 0x55936e6a4000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141213696 unmapped: 101490688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:09.498095+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 376 heartbeat osd_stat(store_statfs(0x4d14fa000/0x0/0x4ffc00000, data 0x281d7400/0x28224000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6760686 data_alloc: 234881024 data_used: 12615680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141213696 unmapped: 101490688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:10.498288+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141221888 unmapped: 101482496 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:11.498460+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142270464 unmapped: 100433920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:12.498609+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142278656 unmapped: 100425728 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:13.498793+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142278656 unmapped: 100425728 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:14.498962+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 376 heartbeat osd_stat(store_statfs(0x4d14ec000/0x0/0x4ffc00000, data 0x281eb400/0x28230000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6763420 data_alloc: 234881024 data_used: 12615680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142278656 unmapped: 100425728 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:15.499214+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142278656 unmapped: 100425728 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:16.499362+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142278656 unmapped: 100425728 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:17.499498+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142286848 unmapped: 100417536 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:18.499634+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff15c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.349915504s of 11.485507011s, submitted: 50
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142286848 unmapped: 100417536 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:19.499791+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 376 ms_handle_reset con 0x55936d7e3000 session 0x55936f3530e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 376 ms_handle_reset con 0x559370bcd800 session 0x55936d79fc20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 377 ms_handle_reset con 0x559370bca000 session 0x55936d866d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 377 ms_handle_reset con 0x55936ff15c00 session 0x55936fc1e5a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fe14800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6766902 data_alloc: 234881024 data_used: 12627968
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142327808 unmapped: 100376576 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:20.500023+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 377 heartbeat osd_stat(store_statfs(0x4d14ea000/0x0/0x4ffc00000, data 0x281ecf7d/0x28233000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 377 ms_handle_reset con 0x55936d32dc00 session 0x55936dbe7a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 377 ms_handle_reset con 0x55936fe15000 session 0x55936c0b34a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142336000 unmapped: 100368384 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:21.500202+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145645568 unmapped: 97058816 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:22.500367+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 377 ms_handle_reset con 0x55936d32dc00 session 0x55936e6ad680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff15c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bca000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 377 ms_handle_reset con 0x55936ff15c00 session 0x55936f0f74a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 377 ms_handle_reset con 0x559370bcd800 session 0x559370ec52c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 377 ms_handle_reset con 0x559370bca000 session 0x55936f0f7680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145653760 unmapped: 97050624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:23.500531+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 377 heartbeat osd_stat(store_statfs(0x4d15a2000/0x0/0x4ffc00000, data 0x28136f0b/0x2817b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145653760 unmapped: 97050624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff15400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 377 ms_handle_reset con 0x55936ff15400 session 0x55936ef25680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:24.500657+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6786466 data_alloc: 234881024 data_used: 17252352
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145653760 unmapped: 97050624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:25.500779+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 97034240 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:26.500924+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 378 ms_handle_reset con 0x55936e968c00 session 0x55936ef25c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 378 ms_handle_reset con 0x55936e995400 session 0x55936f352b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff15c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 378 ms_handle_reset con 0x55936ff15c00 session 0x559370864f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145670144 unmapped: 97034240 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:27.501105+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145702912 unmapped: 97001472 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:28.501250+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 378 heartbeat osd_stat(store_statfs(0x4d15a1000/0x0/0x4ffc00000, data 0x27f83adc/0x2817c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145702912 unmapped: 97001472 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:29.501526+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 378 ms_handle_reset con 0x55936d32dc00 session 0x55936fc1e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bca000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.156374931s of 10.789660454s, submitted: 34
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 378 ms_handle_reset con 0x559370bca000 session 0x55936c5872c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6689548 data_alloc: 234881024 data_used: 14336000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 144252928 unmapped: 98451456 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:30.501725+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 378 ms_handle_reset con 0x55936d32dc00 session 0x55936ef7ed20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 378 handle_osd_map epochs [378,379], i have 378, src has [1,379]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 144261120 unmapped: 98443264 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:31.501862+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 144269312 unmapped: 98435072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:32.502002+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 379 heartbeat osd_stat(store_statfs(0x4d1ea5000/0x0/0x4ffc00000, data 0x2767e53f/0x27878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 144957440 unmapped: 97746944 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:33.502157+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145022976 unmapped: 97681408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:34.502268+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 380 ms_handle_reset con 0x55936e968c00 session 0x559370865680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6787076 data_alloc: 234881024 data_used: 14446592
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145465344 unmapped: 97239040 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:35.502479+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146046976 unmapped: 96657408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:36.502610+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 380 heartbeat osd_stat(store_statfs(0x4d1b3d000/0x0/0x4ffc00000, data 0x281c40f2/0x27be0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 380 ms_handle_reset con 0x55936e995400 session 0x55936d8690e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146046976 unmapped: 96657408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:37.502793+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146046976 unmapped: 96657408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:38.502987+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff15c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 380 heartbeat osd_stat(store_statfs(0x4d1b3d000/0x0/0x4ffc00000, data 0x281c40f2/0x27be0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 380 ms_handle_reset con 0x559370bcd800 session 0x55936fc1e780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 380 ms_handle_reset con 0x55936ff15c00 session 0x55936ef252c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145498112 unmapped: 97206272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:39.503172+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.265731812s of 10.004180908s, submitted: 85
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6794768 data_alloc: 234881024 data_used: 14479360
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145637376 unmapped: 97067008 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:40.503329+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 380 ms_handle_reset con 0x55936d32dc00 session 0x55936fc1f860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x55936e968c00 session 0x55936ef252c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145653760 unmapped: 97050624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:41.503564+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:42.503766+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145661952 unmapped: 97042432 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x559370bcd800 session 0x55936fc1e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:43.503967+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145694720 unmapped: 97009664 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 heartbeat osd_stat(store_statfs(0x4d1b16000/0x0/0x4ffc00000, data 0x281e7bc7/0x27c07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:44.504163+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145694720 unmapped: 97009664 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d7c6c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x55936e995400 session 0x559370865680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x55936d7c6c00 session 0x55936ef25c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6799907 data_alloc: 234881024 data_used: 14532608
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:45.504365+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145760256 unmapped: 96944128 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x55936d32dc00 session 0x559370ec52c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x55936e968c00 session 0x55936bfc6780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x55936e995400 session 0x55936f0f61e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:46.504591+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147243008 unmapped: 95461376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x559370bcd800 session 0x55936f0f7c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bca000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x559370bca000 session 0x55936ef25a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:47.504822+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147316736 unmapped: 95387648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x55936d32dc00 session 0x55936f352f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:48.504993+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147349504 unmapped: 95354880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x55936e968c00 session 0x559370ec4780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:49.505166+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147349504 unmapped: 95354880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 heartbeat osd_stat(store_statfs(0x4d17b7000/0x0/0x4ffc00000, data 0x28548bb7/0x27f67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x55936e995400 session 0x55936ef7e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.517029762s of 10.164605141s, submitted: 84
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x559370bcd800 session 0x55936f352d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6827828 data_alloc: 234881024 data_used: 14528512
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:50.505467+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147349504 unmapped: 95354880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff15800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 ms_handle_reset con 0x55936ff15800 session 0x55936dbef0e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:51.505653+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147349504 unmapped: 95354880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 381 handle_osd_map epochs [382,382], i have 382, src has [1,382]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 382 ms_handle_reset con 0x55936d32dc00 session 0x55936c6150e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 382 heartbeat osd_stat(store_statfs(0x4d17b7000/0x0/0x4ffc00000, data 0x28548bb7/0x27f67000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 382 ms_handle_reset con 0x55936e968c00 session 0x55936c44d0e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:52.505790+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147365888 unmapped: 95338496 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 383 ms_handle_reset con 0x55936e995400 session 0x55936efe01e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:53.505965+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147374080 unmapped: 95330304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 383 ms_handle_reset con 0x559370bcd800 session 0x55936d1cc3c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ff14000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:54.506137+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147398656 unmapped: 95305728 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936ff14000 session 0x55936d228000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936c9d4800 session 0x55936c615e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936d21ec00 session 0x55936ef24780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936fe14800 session 0x55936c614b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6833206 data_alloc: 234881024 data_used: 14503936
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:55.506255+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 95281152 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936d32dc00 session 0x55936ef24f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e995400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936e995400 session 0x559370bc8780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936e968c00 session 0x55936d229a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936d21ec00 session 0x55936f0e0d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936c9d4800 session 0x559371158b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:56.506477+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146014208 unmapped: 96690176 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936d32dc00 session 0x55936d719a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fe14800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936fe14800 session 0x559370ec45a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:57.506600+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146046976 unmapped: 96657408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936fe3b400 session 0x55936d441e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x559370d8f800 session 0x55936fc1e3c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 heartbeat osd_stat(store_statfs(0x4d1c59000/0x0/0x4ffc00000, data 0x28091f3a/0x27ac5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936d21ec00 session 0x55936fc1ef00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:58.506762+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 102670336 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:59.506929+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 140034048 unmapped: 102670336 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 ms_handle_reset con 0x55936d32dc00 session 0x559370864b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.987246513s of 10.002008438s, submitted: 146
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 385 ms_handle_reset con 0x559370bcd800 session 0x559370bc9860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:00.507076+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6716425 data_alloc: 218103808 data_used: 1773568
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142589952 unmapped: 100114432 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 386 ms_handle_reset con 0x55936e968c00 session 0x5593711585a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:01.507228+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142721024 unmapped: 99983360 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 387 heartbeat osd_stat(store_statfs(0x4d2078000/0x0/0x4ffc00000, data 0x27efd0af/0x276a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:02.507383+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 99975168 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 388 ms_handle_reset con 0x55936d21ec00 session 0x55936d441e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:03.507557+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141844480 unmapped: 100859904 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:04.507743+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141844480 unmapped: 100859904 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 388 heartbeat osd_stat(store_statfs(0x4d2077000/0x0/0x4ffc00000, data 0x27efec80/0x276a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:05.507907+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6746099 data_alloc: 218103808 data_used: 5455872
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141918208 unmapped: 100786176 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 388 handle_osd_map epochs [389,389], i have 389, src has [1,389]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:06.508088+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 389 ms_handle_reset con 0x55936d32dc00 session 0x559370ec4d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141950976 unmapped: 100753408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 389 heartbeat osd_stat(store_statfs(0x4d2077000/0x0/0x4ffc00000, data 0x27efec80/0x276a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:07.509179+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141950976 unmapped: 100753408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:08.509407+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141950976 unmapped: 100753408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:09.509629+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fe3b400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141991936 unmapped: 100712448 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 390 ms_handle_reset con 0x55936fe3b400 session 0x55936d1c4000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:10.509842+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6653274 data_alloc: 218103808 data_used: 5468160
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 141991936 unmapped: 100712448 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.149274826s of 10.741384506s, submitted: 192
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370d8f800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 390 ms_handle_reset con 0x559370d8f800 session 0x55936d1c5a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 heartbeat osd_stat(store_statfs(0x4d25bf000/0x0/0x4ffc00000, data 0x273153ea/0x26d4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:11.509974+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147103744 unmapped: 95600640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:12.510629+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147185664 unmapped: 95518720 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:13.510781+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147365888 unmapped: 95338496 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:14.510884+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151830528 unmapped: 90873856 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 heartbeat osd_stat(store_statfs(0x4cdd84000/0x0/0x4ffc00000, data 0x2bb4fe4d/0x2b58a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,3])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:15.511020+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7476726 data_alloc: 218103808 data_used: 5578752
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 156418048 unmapped: 86286336 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:16.511168+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 157859840 unmapped: 84844544 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:17.511298+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 158195712 unmapped: 84508672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:18.511442+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 158498816 unmapped: 84205568 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:19.511572+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150446080 unmapped: 92258304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:20.511750+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8574706 data_alloc: 218103808 data_used: 5586944
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 ms_handle_reset con 0x55936c9d4800 session 0x55936ef7f860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 154861568 unmapped: 87842816 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 ms_handle_reset con 0x559370bcd800 session 0x55936c44c780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.797548771s of 10.228250504s, submitted: 191
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 heartbeat osd_stat(store_statfs(0x4c194c000/0x0/0x4ffc00000, data 0x37f86e4d/0x379c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:21.511928+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150839296 unmapped: 91865088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 ms_handle_reset con 0x55936d32dc00 session 0x55936d368d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 ms_handle_reset con 0x55936e968c00 session 0x55936dc1b4a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fe3b400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 ms_handle_reset con 0x55936fe3b400 session 0x559370ec54a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 ms_handle_reset con 0x55936d21ec00 session 0x55936d1c5680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 ms_handle_reset con 0x55936c9d4800 session 0x55936d1cda40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:22.512111+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152002560 unmapped: 90701824 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:23.512242+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 90693632 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 heartbeat osd_stat(store_statfs(0x4bf94d000/0x0/0x4ffc00000, data 0x39f87327/0x399c1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,2])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 ms_handle_reset con 0x55936d32dc00 session 0x55936dbee1e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 ms_handle_reset con 0x55936e968c00 session 0x55936d1cd4a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:24.512393+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcd800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 ms_handle_reset con 0x559370bcd800 session 0x55936e6a4000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150552576 unmapped: 92151808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 ms_handle_reset con 0x55936c9d4800 session 0x55936f0e03c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:25.512572+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6807845 data_alloc: 218103808 data_used: 5582848
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150552576 unmapped: 92151808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 392 ms_handle_reset con 0x55936d21ec00 session 0x559370864f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:26.512739+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 392 heartbeat osd_stat(store_statfs(0x4d1d4f000/0x0/0x4ffc00000, data 0x27b86ddb/0x275bf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150560768 unmapped: 92143616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:27.512927+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150560768 unmapped: 92143616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:28.513042+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150511616 unmapped: 92192768 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:29.513148+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 394 ms_handle_reset con 0x55936d32dc00 session 0x559371159860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150536192 unmapped: 92168192 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:30.513312+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6651281 data_alloc: 218103808 data_used: 5505024
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150536192 unmapped: 92168192 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e968c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 394 handle_osd_map epochs [394,395], i have 394, src has [1,395]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.479928970s of 10.054928780s, submitted: 142
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:31.513494+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150552576 unmapped: 92151808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 395 ms_handle_reset con 0x55936c423800 session 0x55936efe0d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 396 ms_handle_reset con 0x55936e968c00 session 0x55936f0f6d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 396 heartbeat osd_stat(store_statfs(0x4d2c25000/0x0/0x4ffc00000, data 0x264cdd35/0x266e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:32.513657+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150691840 unmapped: 92012544 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 396 ms_handle_reset con 0x55936c423800 session 0x559370ec45a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:33.513786+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 396 heartbeat osd_stat(store_statfs(0x4db021000/0x0/0x4ffc00000, data 0x1cccf926/0x1ceeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150732800 unmapped: 91971584 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 396 heartbeat osd_stat(store_statfs(0x4e6023000/0x0/0x4ffc00000, data 0x130cf926/0x132eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:34.514188+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 396 ms_handle_reset con 0x55936d21ec00 session 0x55936d7190e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150732800 unmapped: 91971584 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:35.514368+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4049020 data_alloc: 218103808 data_used: 5599232
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147529728 unmapped: 95174656 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 398 ms_handle_reset con 0x55936d32dc00 session 0x55936d719860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:36.514570+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 97525760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:37.514707+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f7c1d000/0x0/0x4ffc00000, data 0x14d2f44/0x16f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 97525760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:38.514921+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 97525760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:39.515111+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f7c1d000/0x0/0x4ffc00000, data 0x14d2f44/0x16f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 97525760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:40.515324+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2677922 data_alloc: 218103808 data_used: 5611520
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 97525760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 398 heartbeat osd_stat(store_statfs(0x4f7c1d000/0x0/0x4ffc00000, data 0x14d2f44/0x16f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.185118675s of 10.281331062s, submitted: 207
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:41.515519+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 97525760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:42.515665+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 97525760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:43.515850+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f7c1a000/0x0/0x4ffc00000, data 0x14d49c7/0x16f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 97525760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:44.516012+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145301504 unmapped: 97402880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:45.516171+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2693224 data_alloc: 218103808 data_used: 6742016
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145604608 unmapped: 97099776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:46.516378+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145596416 unmapped: 97107968 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:47.516510+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145596416 unmapped: 97107968 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:48.516672+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 399 heartbeat osd_stat(store_statfs(0x4f7c19000/0x0/0x4ffc00000, data 0x14d49c7/0x16f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145596416 unmapped: 97107968 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:49.516835+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ebec800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 399 ms_handle_reset con 0x55936ebec800 session 0x55936ef7f680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145596416 unmapped: 97107968 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:50.517044+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2696446 data_alloc: 218103808 data_used: 7036928
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145842176 unmapped: 96862208 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:51.517231+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.038855553s of 10.137535095s, submitted: 22
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 400 ms_handle_reset con 0x55936c597c00 session 0x55936ef7eb40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145850368 unmapped: 96854016 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:52.517384+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 400 ms_handle_reset con 0x55936c597c00 session 0x55936dc01e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 400 heartbeat osd_stat(store_statfs(0x4f7c16000/0x0/0x4ffc00000, data 0x14d65a6/0x16f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145850368 unmapped: 96854016 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 400 heartbeat osd_stat(store_statfs(0x4f7c16000/0x0/0x4ffc00000, data 0x14d65a6/0x16f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:53.517576+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145858560 unmapped: 96845824 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 401 ms_handle_reset con 0x55936c423800 session 0x55936f0f6b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 401 heartbeat osd_stat(store_statfs(0x4f7c16000/0x0/0x4ffc00000, data 0x14d65a6/0x16f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:54.517761+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145858560 unmapped: 96845824 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 401 ms_handle_reset con 0x55936d32dc00 session 0x55936ef25c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 401 ms_handle_reset con 0x55936d21ec00 session 0x55936f0e01e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:55.517932+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2705201 data_alloc: 218103808 data_used: 7049216
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145858560 unmapped: 96845824 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:56.518094+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ebec800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145899520 unmapped: 96804864 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:57.518279+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f7c13000/0x0/0x4ffc00000, data 0x14d8185/0x16fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 402 ms_handle_reset con 0x55936ebec800 session 0x55936d867680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145121280 unmapped: 97583104 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:58.518411+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145121280 unmapped: 97583104 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 402 ms_handle_reset con 0x55936c423800 session 0x55936bfc7680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 402 ms_handle_reset con 0x55936c597c00 session 0x55936dc1a780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:59.518587+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 97517568 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 402 heartbeat osd_stat(store_statfs(0x4f7c06000/0x0/0x4ffc00000, data 0x14ded64/0x1704000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 402 ms_handle_reset con 0x55936d21ec00 session 0x55936f353c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:00.518809+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2711717 data_alloc: 218103808 data_used: 7053312
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 97517568 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:01.518997+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 402 ms_handle_reset con 0x55936d32dc00 session 0x55936c0b2f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21e000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 97517568 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 402 ms_handle_reset con 0x55936d21e000 session 0x5593708654a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:02.519150+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145186816 unmapped: 97517568 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.439951897s of 11.537820816s, submitted: 23
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 402 ms_handle_reset con 0x55936c597c00 session 0x55936fc1e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:03.519353+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145195008 unmapped: 97509376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 403 ms_handle_reset con 0x55936d21ec00 session 0x55936f0e01e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:04.519511+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 403 ms_handle_reset con 0x55936c423800 session 0x55936d441860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 97501184 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 403 heartbeat osd_stat(store_statfs(0x4f7c06000/0x0/0x4ffc00000, data 0x14e08d3/0x1706000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:05.519684+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 403 ms_handle_reset con 0x55936d32dc00 session 0x55936fc1e000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2713435 data_alloc: 218103808 data_used: 7057408
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 97501184 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 404 ms_handle_reset con 0x55936c597400 session 0x55936d2290e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:06.519813+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 97501184 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:07.519976+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145203200 unmapped: 97501184 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 404 ms_handle_reset con 0x55936c597400 session 0x55936dbee5a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 404 ms_handle_reset con 0x55936c423800 session 0x55936d79e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:08.520132+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145211392 unmapped: 97492992 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 404 ms_handle_reset con 0x55936c597c00 session 0x55936ef25860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:09.520281+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 97476608 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 404 heartbeat osd_stat(store_statfs(0x4f7c05000/0x0/0x4ffc00000, data 0x14e2442/0x1708000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:10.520531+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d21ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2715614 data_alloc: 218103808 data_used: 7057408
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 97468416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 404 handle_osd_map epochs [405,405], i have 405, src has [1,405]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 405 ms_handle_reset con 0x55936d21ec00 session 0x55936d228b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:11.520682+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145350656 unmapped: 97353728 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 406 ms_handle_reset con 0x55936c597800 session 0x55936ef7f680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 406 ms_handle_reset con 0x55936d32dc00 session 0x55936dbe7a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:12.520797+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 97320960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:13.520909+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.889362335s of 10.610998154s, submitted: 78
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 406 ms_handle_reset con 0x55936c597400 session 0x559370ec45a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145440768 unmapped: 97263616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 407 ms_handle_reset con 0x55936c423800 session 0x5593708645a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:14.521143+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145473536 unmapped: 97230848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e422800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 407 ms_handle_reset con 0x55936e422800 session 0x559371159860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 407 ms_handle_reset con 0x55936c597c00 session 0x559370bc8780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:15.521511+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732952 data_alloc: 218103808 data_used: 7766016
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 407 heartbeat osd_stat(store_statfs(0x4f7bfc000/0x0/0x4ffc00000, data 0x14e7701/0x1712000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145014784 unmapped: 97689600 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 18K writes, 80K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 18K writes, 6334 syncs, 2.98 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 9397 writes, 42K keys, 9397 commit groups, 1.0 writes per commit group, ingest: 24.85 MB, 0.04 MB/s
                                           Interval WAL: 9397 writes, 3817 syncs, 2.46 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 407 ms_handle_reset con 0x55936c597c00 session 0x55936f353e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:16.521660+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 407 ms_handle_reset con 0x55936c423800 session 0x55936d868780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145063936 unmapped: 97640448 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:17.521809+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145063936 unmapped: 97640448 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:18.841036+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 407 ms_handle_reset con 0x55936c597400 session 0x55936fc1f0e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145072128 unmapped: 97632256 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 407 ms_handle_reset con 0x55936d32dc00 session 0x55936dbeef00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e422800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 407 ms_handle_reset con 0x55936e422800 session 0x55936c0b2000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e422800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 407 ms_handle_reset con 0x55936c423800 session 0x55936f0e03c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:19.841194+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145178624 unmapped: 97525760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:20.841382+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2742362 data_alloc: 218103808 data_used: 7774208
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 408 ms_handle_reset con 0x55936c597400 session 0x55936d1cda40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145211392 unmapped: 97492992 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 408 ms_handle_reset con 0x55936e422800 session 0x55936c587e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 408 heartbeat osd_stat(store_statfs(0x4f7bf8000/0x0/0x4ffc00000, data 0x159d701/0x1716000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:21.841546+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 145260544 unmapped: 97443840 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 408 ms_handle_reset con 0x55936c597c00 session 0x55936f0f74a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bd0c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 408 ms_handle_reset con 0x559370bd0c00 session 0x55936d441e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:22.841694+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146415616 unmapped: 96288768 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 409 ms_handle_reset con 0x55936c423800 session 0x559370864b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 409 ms_handle_reset con 0x55936d32dc00 session 0x55936d869a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:23.841885+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146440192 unmapped: 96264192 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 409 ms_handle_reset con 0x55936c597400 session 0x55936fe39860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 409 ms_handle_reset con 0x55936c597c00 session 0x55936ef7e5a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e422800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d7c5c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.574744225s of 11.091567039s, submitted: 119
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:24.842028+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d7c2800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146440192 unmapped: 96264192 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 409 ms_handle_reset con 0x55936d7c2800 session 0x55936fe394a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:25.842151+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2747052 data_alloc: 218103808 data_used: 7802880
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7bf3000/0x0/0x4ffc00000, data 0x15a0ea3/0x171b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146440192 unmapped: 96264192 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7bf3000/0x0/0x4ffc00000, data 0x15a0ea3/0x171b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:26.842300+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146448384 unmapped: 96256000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:27.842606+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146448384 unmapped: 96256000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:28.842881+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146448384 unmapped: 96256000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:29.843065+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146448384 unmapped: 96256000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:30.843255+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2747212 data_alloc: 218103808 data_used: 7806976
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146448384 unmapped: 96256000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:31.843411+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146448384 unmapped: 96256000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 409 heartbeat osd_stat(store_statfs(0x4f7bf3000/0x0/0x4ffc00000, data 0x15a0ea3/0x171b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 409 handle_osd_map epochs [410,411], i have 409, src has [1,411]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 409 handle_osd_map epochs [410,411], i have 411, src has [1,411]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 411 ms_handle_reset con 0x55936c597400 session 0x55936ef24000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:32.843564+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146489344 unmapped: 96215040 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 411 ms_handle_reset con 0x55936c597c00 session 0x55936d869860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:33.843685+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146481152 unmapped: 96223232 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:34.843894+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.954500198s of 10.070856094s, submitted: 13
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146489344 unmapped: 96215040 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:35.844038+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 412 ms_handle_reset con 0x55936d32dc00 session 0x55936efe1c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2759148 data_alloc: 218103808 data_used: 7811072
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146497536 unmapped: 96206848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:36.844168+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 146497536 unmapped: 96206848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8ac00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 412 heartbeat osd_stat(store_statfs(0x4f7be9000/0x0/0x4ffc00000, data 0x15a6102/0x1725000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:37.844330+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147595264 unmapped: 95109120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:38.844499+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 413 ms_handle_reset con 0x55936fc8ac00 session 0x559370ec5a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 95035392 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 413 heartbeat osd_stat(store_statfs(0x4f7be3000/0x0/0x4ffc00000, data 0x15a7d0d/0x172a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:39.844712+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 95035392 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370d90c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:40.844955+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2777538 data_alloc: 234881024 data_used: 10043392
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 413 heartbeat osd_stat(store_statfs(0x4f7be3000/0x0/0x4ffc00000, data 0x15a7d1d/0x172b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 95027200 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 413 ms_handle_reset con 0x559370d90c00 session 0x55936f0e0b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:41.845125+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 95027200 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 413 ms_handle_reset con 0x55936c597400 session 0x55936ef714a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:42.845538+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 95027200 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:43.845707+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 93200384 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 413 heartbeat osd_stat(store_statfs(0x4f7b10000/0x0/0x4ffc00000, data 0x167ad1d/0x17fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 413 ms_handle_reset con 0x55936c597c00 session 0x55936f0e14a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:44.846013+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 93200384 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.486153603s of 10.240762711s, submitted: 37
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:45.846289+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2788509 data_alloc: 234881024 data_used: 10186752
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 93200384 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:46.846499+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149504000 unmapped: 93200384 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:47.846696+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148692992 unmapped: 94011392 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f7b0e000/0x0/0x4ffc00000, data 0x167c70e/0x17ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:48.846962+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148692992 unmapped: 94011392 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:49.847120+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148692992 unmapped: 94011392 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f7b0e000/0x0/0x4ffc00000, data 0x167c70e/0x17ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:50.847372+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2789019 data_alloc: 234881024 data_used: 10194944
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148692992 unmapped: 94011392 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:51.847527+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148692992 unmapped: 94011392 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f7b0e000/0x0/0x4ffc00000, data 0x167c70e/0x17ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:52.847701+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149250048 unmapped: 93454336 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f7b0e000/0x0/0x4ffc00000, data 0x167c70e/0x17ff000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:53.847863+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149250048 unmapped: 93454336 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:54.848048+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149307392 unmapped: 93396992 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:55.848193+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2797165 data_alloc: 234881024 data_used: 10199040
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149307392 unmapped: 93396992 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.116167068s of 11.193492889s, submitted: 13
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 414 ms_handle_reset con 0x55936d32dc00 session 0x55936bfc65a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:56.848366+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148865024 unmapped: 93839360 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:57.848592+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8ac00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148905984 unmapped: 93798400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 414 ms_handle_reset con 0x55936fc8ac00 session 0x55936ef7fc20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:58.848801+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 414 heartbeat osd_stat(store_statfs(0x4f7a9d000/0x0/0x4ffc00000, data 0x16ec780/0x1871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148905984 unmapped: 93798400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d220400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:59.848929+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 415 ms_handle_reset con 0x55936d220400 session 0x559370ec5c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148914176 unmapped: 93790208 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f7a9d000/0x0/0x4ffc00000, data 0x16ec780/0x1871000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:00.849344+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2804644 data_alloc: 234881024 data_used: 10207232
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148955136 unmapped: 93749248 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 415 ms_handle_reset con 0x55936c597400 session 0x55936d718f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 415 heartbeat osd_stat(store_statfs(0x4f7a99000/0x0/0x4ffc00000, data 0x16ee2fd/0x1874000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:01.849534+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148955136 unmapped: 93749248 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:02.849714+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148996096 unmapped: 93708288 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 416 ms_handle_reset con 0x55936c597c00 session 0x55936f0f6780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:03.849889+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 416 heartbeat osd_stat(store_statfs(0x4f7a96000/0x0/0x4ffc00000, data 0x16efece/0x1877000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148996096 unmapped: 93708288 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d220400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 416 ms_handle_reset con 0x55936d220400 session 0x55936d719a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:04.850104+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 416 ms_handle_reset con 0x55936e422800 session 0x55936d229680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149012480 unmapped: 93691904 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 416 ms_handle_reset con 0x55936d7c5c00 session 0x55936f0e10e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:05.850294+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2809342 data_alloc: 234881024 data_used: 10575872
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149053440 unmapped: 93650944 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.253777981s of 10.252377510s, submitted: 45
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:06.850531+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 416 heartbeat osd_stat(store_statfs(0x4f7a98000/0x0/0x4ffc00000, data 0x16efebe/0x1876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149053440 unmapped: 93650944 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 416 heartbeat osd_stat(store_statfs(0x4f7a98000/0x0/0x4ffc00000, data 0x16efebe/0x1876000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:07.850775+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149053440 unmapped: 93650944 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 416 ms_handle_reset con 0x55936d32dc00 session 0x55936ef25c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:08.851153+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149053440 unmapped: 93650944 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 416 ms_handle_reset con 0x55936c597c00 session 0x55936d368b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d220400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 416 ms_handle_reset con 0x55936c597400 session 0x55936d869a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:09.851476+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149045248 unmapped: 93659136 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:10.851730+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2808068 data_alloc: 234881024 data_used: 10575872
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149045248 unmapped: 93659136 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:11.851955+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149053440 unmapped: 93650944 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f7a94000/0x0/0x4ffc00000, data 0x16f1921/0x1879000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:12.852102+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149069824 unmapped: 93634560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:13.852232+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149094400 unmapped: 93609984 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 417 ms_handle_reset con 0x55936d220400 session 0x5593711590e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:14.852372+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149094400 unmapped: 93609984 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:15.852606+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2809970 data_alloc: 234881024 data_used: 10579968
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149102592 unmapped: 93601792 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.507339239s of 10.011095047s, submitted: 83
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:16.852733+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f7a96000/0x0/0x4ffc00000, data 0x16f18bf/0x1878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149118976 unmapped: 93585408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:17.852891+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e422800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 417 ms_handle_reset con 0x55936e422800 session 0x55936c587e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149143552 unmapped: 93560832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f7a96000/0x0/0x4ffc00000, data 0x16f18bf/0x1878000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:18.853021+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149143552 unmapped: 93560832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:19.853217+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149151744 unmapped: 93552640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:20.853391+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f7bd9000/0x0/0x4ffc00000, data 0x15ae8bf/0x1735000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,1,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2794484 data_alloc: 234881024 data_used: 10223616
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149143552 unmapped: 93560832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f7bd9000/0x0/0x4ffc00000, data 0x15ae8bf/0x1735000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:21.853565+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 417 ms_handle_reset con 0x55936c597400 session 0x55936c0b2000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149143552 unmapped: 93560832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:22.853784+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149159936 unmapped: 93544448 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 417 ms_handle_reset con 0x55936c597c00 session 0x55936dbee5a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f7bd9000/0x0/0x4ffc00000, data 0x15ae8bf/0x1735000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:23.853914+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149159936 unmapped: 93544448 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:24.854050+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149159936 unmapped: 93544448 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d220400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 417 heartbeat osd_stat(store_statfs(0x4f7bd9000/0x0/0x4ffc00000, data 0x15ae8bf/0x1735000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 417 handle_osd_map epochs [418,418], i have 418, src has [1,418]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:25.854189+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 418 ms_handle_reset con 0x55936d220400 session 0x55936f0e01e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2793298 data_alloc: 234881024 data_used: 10227712
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150265856 unmapped: 92438528 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f7bd9000/0x0/0x4ffc00000, data 0x14fa490/0x1734000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.949880123s of 10.104204178s, submitted: 64
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:26.854334+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150265856 unmapped: 92438528 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8ac00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 419 ms_handle_reset con 0x55936d32dc00 session 0x55936c0b2f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 419 ms_handle_reset con 0x559371607000 session 0x55936c44c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:27.854528+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 419 ms_handle_reset con 0x55936c9d4800 session 0x55936fc1f860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150290432 unmapped: 92413952 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:28.854708+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150315008 unmapped: 92389376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:29.854880+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 420 ms_handle_reset con 0x55936fc8ac00 session 0x55936d228f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150315008 unmapped: 92389376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:30.855101+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2800346 data_alloc: 234881024 data_used: 10235904
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150315008 unmapped: 92389376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:31.855303+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150323200 unmapped: 92381184 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f7bd3000/0x0/0x4ffc00000, data 0x14fdc24/0x173b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:32.855497+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150323200 unmapped: 92381184 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:33.855692+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150323200 unmapped: 92381184 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f7bd0000/0x0/0x4ffc00000, data 0x14ff687/0x173e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:34.855888+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150323200 unmapped: 92381184 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:35.856058+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2802824 data_alloc: 234881024 data_used: 10244096
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150323200 unmapped: 92381184 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:36.856234+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 3.420173645s of 10.311150551s, submitted: 44
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 421 ms_handle_reset con 0x55936c597400 session 0x55936e6ac780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150323200 unmapped: 92381184 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 421 ms_handle_reset con 0x55936c597c00 session 0x55936d79fc20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:37.856414+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150347776 unmapped: 92356608 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 421 handle_osd_map epochs [422,422], i have 422, src has [1,422]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:38.856619+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d220400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 422 ms_handle_reset con 0x55936d220400 session 0x55936f3534a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150355968 unmapped: 92348416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f7bcb000/0x0/0x4ffc00000, data 0x1501266/0x1742000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:39.856812+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150355968 unmapped: 92348416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:40.856942+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 423 ms_handle_reset con 0x55936d32dc00 session 0x55936d440960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2813570 data_alloc: 234881024 data_used: 10264576
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 423 ms_handle_reset con 0x55936c597400 session 0x55936efe1860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150355968 unmapped: 92348416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:41.857089+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 423 ms_handle_reset con 0x55936c597c00 session 0x55936d866960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147415040 unmapped: 95289344 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f7bc7000/0x0/0x4ffc00000, data 0x15032d8/0x1746000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:42.857212+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 424 ms_handle_reset con 0x55936c9d4800 session 0x55936f0f72c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936fc8ac00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 424 ms_handle_reset con 0x55936fc8ac00 session 0x55936d229680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147431424 unmapped: 95272960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:43.857352+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 424 heartbeat osd_stat(store_statfs(0x4f8825000/0x0/0x4ffc00000, data 0x8a68f0/0xae8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147431424 unmapped: 95272960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:44.857504+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 424 ms_handle_reset con 0x55936c597400 session 0x55936d719a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147431424 unmapped: 95272960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:45.857667+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2665006 data_alloc: 218103808 data_used: 1806336
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 425 ms_handle_reset con 0x55936c597c00 session 0x55936d718f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147439616 unmapped: 95264768 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:46.857835+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.213904858s of 10.001238823s, submitted: 126
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 426 ms_handle_reset con 0x55936c9d4800 session 0x55936ef7ed20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147513344 unmapped: 95191040 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f8821000/0x0/0x4ffc00000, data 0x8a84eb/0xaec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:47.857958+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147513344 unmapped: 95191040 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:48.858142+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 426 handle_osd_map epochs [427,427], i have 427, src has [1,427]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147521536 unmapped: 95182848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 427 ms_handle_reset con 0x55936d32dc00 session 0x55936ef24000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 427 ms_handle_reset con 0x559371607000 session 0x55936fc1f860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f881c000/0x0/0x4ffc00000, data 0x8abda9/0xaf1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:49.858295+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 427 ms_handle_reset con 0x55936c597400 session 0x55936dc1a3c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147529728 unmapped: 95174656 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:50.858472+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2672770 data_alloc: 218103808 data_used: 1814528
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147529728 unmapped: 95174656 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:51.858631+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f881a000/0x0/0x4ffc00000, data 0x8abe1b/0xaf3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147546112 unmapped: 95158272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 428 ms_handle_reset con 0x55936c9d4800 session 0x55936ef250e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:52.858814+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e692400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 429 ms_handle_reset con 0x55936d32dc00 session 0x55936c0b23c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147562496 unmapped: 95141888 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:53.858989+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 429 ms_handle_reset con 0x55936e692400 session 0x55936d2292c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147562496 unmapped: 95141888 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936eb1ec00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 430 ms_handle_reset con 0x55936eb1ec00 session 0x559370bc8780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 430 ms_handle_reset con 0x55936c597c00 session 0x55936dc1b4a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:54.859133+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f880f000/0x0/0x4ffc00000, data 0x8b1577/0xafe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147595264 unmapped: 95109120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 430 ms_handle_reset con 0x55936c597400 session 0x55936e6a5e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:55.859277+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2685299 data_alloc: 218103808 data_used: 1826816
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 430 ms_handle_reset con 0x55936c9d4800 session 0x55936c0b34a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147595264 unmapped: 95109120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:56.859503+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.332761765s of 10.049203873s, submitted: 77
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147603456 unmapped: 95100928 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 431 handle_osd_map epochs [431,432], i have 431, src has [1,432]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:57.859624+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 432 ms_handle_reset con 0x55936d32dc00 session 0x55936f0e0780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f880b000/0x0/0x4ffc00000, data 0x8b3012/0xb01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e692400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147611648 unmapped: 95092736 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559372b45400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 432 ms_handle_reset con 0x559372b45400 session 0x55936d8661e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:58.859744+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147611648 unmapped: 95092736 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 432 ms_handle_reset con 0x55936e692400 session 0x5593711592c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:59.859875+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 433 ms_handle_reset con 0x55936c9d4800 session 0x55936ef7f860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147611648 unmapped: 95092736 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:00.860038+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 433 ms_handle_reset con 0x55936d32dc00 session 0x55936c94d4a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 433 ms_handle_reset con 0x55936c597c00 session 0x55936c587c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 434 ms_handle_reset con 0x55936c423c00 session 0x55936d868f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 434 ms_handle_reset con 0x55936c597400 session 0x55936efe1a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2701592 data_alloc: 218103808 data_used: 1830912
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147619840 unmapped: 95084544 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:01.860184+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147619840 unmapped: 95084544 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:02.860345+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 434 heartbeat osd_stat(store_statfs(0x4f8803000/0x0/0x4ffc00000, data 0x8b87f6/0xb0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148668416 unmapped: 94035968 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:03.860524+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 435 ms_handle_reset con 0x55936c9d4800 session 0x55936e6ada40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147619840 unmapped: 95084544 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:04.860688+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147619840 unmapped: 95084544 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:05.860847+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e692400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 436 ms_handle_reset con 0x55936d32dc00 session 0x55936d440f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2708442 data_alloc: 218103808 data_used: 1835008
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147619840 unmapped: 95084544 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 436 ms_handle_reset con 0x55936c423c00 session 0x55936f0f7e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559373926c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 436 ms_handle_reset con 0x559373926c00 session 0x55936bfc7680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 436 handle_osd_map epochs [437,438], i have 436, src has [1,438]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:06.861010+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 436 handle_osd_map epochs [437,437], i have 438, src has [1,437]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.827843189s of 10.038818359s, submitted: 87
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f83ed000/0x0/0x4ffc00000, data 0x8bbaa7/0xb0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147628032 unmapped: 95076352 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 438 ms_handle_reset con 0x55936e692400 session 0x55936f353e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 438 ms_handle_reset con 0x55936c597c00 session 0x55936f0e0960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:07.861190+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147628032 unmapped: 95076352 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f83e7000/0x0/0x4ffc00000, data 0x8bf13d/0xb16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:08.861346+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559373926c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147628032 unmapped: 95076352 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:09.861506+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 438 handle_osd_map epochs [439,439], i have 439, src has [1,439]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 439 heartbeat osd_stat(store_statfs(0x4f83e8000/0x0/0x4ffc00000, data 0x8bf13d/0xb16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147644416 unmapped: 95059968 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 439 ms_handle_reset con 0x559373926c00 session 0x55936d1cc3c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:10.861696+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 439 ms_handle_reset con 0x55936c597400 session 0x55936d8665a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718734 data_alloc: 218103808 data_used: 1863680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147652608 unmapped: 95051776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:11.861890+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32dc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 95043584 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 440 ms_handle_reset con 0x55936d32dc00 session 0x55936dc1a780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:12.862093+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 440 ms_handle_reset con 0x55936c423c00 session 0x559370ec54a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147652608 unmapped: 95051776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 441 ms_handle_reset con 0x55936c9d4800 session 0x55936d368b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:13.862371+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147652608 unmapped: 95051776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 441 ms_handle_reset con 0x55936c597400 session 0x55936f0e0f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f83dd000/0x0/0x4ffc00000, data 0x8c44f6/0xb20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:14.862546+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147652608 unmapped: 95051776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:15.862787+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e692400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559373926c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 441 ms_handle_reset con 0x559373926c00 session 0x559370865a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371605800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 441 ms_handle_reset con 0x55936e692400 session 0x55936c0b2000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2726129 data_alloc: 218103808 data_used: 1875968
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 441 ms_handle_reset con 0x55936c423c00 session 0x559370ec4780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147652608 unmapped: 95051776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:16.863240+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 442 handle_osd_map epochs [442,443], i have 442, src has [1,443]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 443 ms_handle_reset con 0x55936c597400 session 0x55936fe383c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 443 ms_handle_reset con 0x55936c9d4800 session 0x55936fc1e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.722308636s of 10.164051056s, submitted: 71
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 443 ms_handle_reset con 0x559371605800 session 0x55936f0e12c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 95043584 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 443 heartbeat osd_stat(store_statfs(0x4f83d6000/0x0/0x4ffc00000, data 0x8c7cb4/0xb26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:17.863408+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147660800 unmapped: 95043584 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 443 ms_handle_reset con 0x55936c597c00 session 0x55936f0e01e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:18.863577+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 444 ms_handle_reset con 0x55936c423c00 session 0x55936d79eb40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 444 ms_handle_reset con 0x55936c597400 session 0x5593708645a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147668992 unmapped: 95035392 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:19.863792+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 95027200 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 444 ms_handle_reset con 0x55936c9d4800 session 0x55936ef7e000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:20.864028+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371605800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2734658 data_alloc: 218103808 data_used: 1888256
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 95027200 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:21.864292+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559373926c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 444 ms_handle_reset con 0x559373926c00 session 0x55936d1c4d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147677184 unmapped: 95027200 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:22.864593+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 444 heartbeat osd_stat(store_statfs(0x4f83d7000/0x0/0x4ffc00000, data 0x8c97dd/0xb27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 444 handle_osd_map epochs [445,445], i have 445, src has [1,445]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 444 handle_osd_map epochs [445,445], i have 445, src has [1,445]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 95002624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d874400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 445 ms_handle_reset con 0x55936d874400 session 0x55936dc1a1e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:23.864769+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147718144 unmapped: 94986240 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:24.865055+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147742720 unmapped: 94961664 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 446 ms_handle_reset con 0x55936c423c00 session 0x55936d868d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:25.865225+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 446 ms_handle_reset con 0x55936c597400 session 0x55936dc1be00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 446 ms_handle_reset con 0x559371605800 session 0x55936d1ccb40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2744012 data_alloc: 218103808 data_used: 1900544
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 94937088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:26.865350+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 446 heartbeat osd_stat(store_statfs(0x4f83ce000/0x0/0x4ffc00000, data 0x8cd624/0xb2f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 94937088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.632634163s of 10.442392349s, submitted: 42
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:27.865511+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 94937088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:28.865643+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d874400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 94937088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:29.865813+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 93888512 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 448 ms_handle_reset con 0x55936d874400 session 0x559370ec5e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:30.865982+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2752062 data_alloc: 218103808 data_used: 1900544
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 93888512 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:31.866198+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 93888512 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:32.866373+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f83c6000/0x0/0x4ffc00000, data 0x8d0d1e/0xb36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 448 handle_osd_map epochs [449,449], i have 449, src has [1,449]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 93888512 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:33.866518+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 93888512 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:34.866725+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 93888512 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:35.866877+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2757162 data_alloc: 218103808 data_used: 1900544
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 93888512 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559373926c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d220c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 450 ms_handle_reset con 0x55936d220c00 session 0x55936c44d860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:36.867041+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 450 handle_osd_map epochs [450,451], i have 450, src has [1,451]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 93888512 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f83bd000/0x0/0x4ffc00000, data 0x8d5d99/0xb3f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.643574715s of 10.084507942s, submitted: 47
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:37.867387+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 93888512 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:38.867591+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 93888512 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:39.867781+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148815872 unmapped: 93888512 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 452 ms_handle_reset con 0x55936c9d4800 session 0x55936bfc7c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d220c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f83bb000/0x0/0x4ffc00000, data 0x8d7421/0xb41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:40.868002+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 452 ms_handle_reset con 0x55936d220c00 session 0x55936e6ada40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f83bb000/0x0/0x4ffc00000, data 0x8d7421/0xb41000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 452 ms_handle_reset con 0x559373926c00 session 0x5593711594a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2764551 data_alloc: 218103808 data_used: 1908736
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148824064 unmapped: 93880320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 452 ms_handle_reset con 0x55936c423c00 session 0x55936efe1a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:41.868149+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148824064 unmapped: 93880320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:42.868309+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148824064 unmapped: 93880320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:43.868530+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148824064 unmapped: 93880320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:44.868748+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148824064 unmapped: 93880320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d874400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 453 ms_handle_reset con 0x55936d874400 session 0x55936f0e1e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:45.868961+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2766475 data_alloc: 218103808 data_used: 1904640
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148824064 unmapped: 93880320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 453 handle_osd_map epochs [453,454], i have 453, src has [1,454]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:46.869190+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d220c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 454 heartbeat osd_stat(store_statfs(0x4f83b6000/0x0/0x4ffc00000, data 0x8db01a/0xb47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 454 ms_handle_reset con 0x55936c9d4800 session 0x55936fc1f2c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 454 ms_handle_reset con 0x55936c597400 session 0x55936c587c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148832256 unmapped: 93872128 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 454 handle_osd_map epochs [454,455], i have 454, src has [1,455]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 455 ms_handle_reset con 0x55936d220c00 session 0x5593711592c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:47.869446+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 455 ms_handle_reset con 0x55936c423c00 session 0x55936d8690e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148832256 unmapped: 93872128 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:48.869749+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148832256 unmapped: 93872128 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:49.870281+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.283062935s of 12.184521675s, submitted: 32
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 456 handle_osd_map epochs [456,457], i have 456, src has [1,457]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559373926c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 457 ms_handle_reset con 0x559373926c00 session 0x55936f0e0780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:50.870606+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2782483 data_alloc: 218103808 data_used: 1908736
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 457 handle_osd_map epochs [457,458], i have 457, src has [1,458]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:51.870819+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:52.871158+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 458 heartbeat osd_stat(store_statfs(0x4f83a4000/0x0/0x4ffc00000, data 0x8e3a85/0xb58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,4])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 458 handle_osd_map epochs [459,459], i have 459, src has [1,459]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 458 handle_osd_map epochs [459,459], i have 459, src has [1,459]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 459 ms_handle_reset con 0x55936c597400 session 0x55936bfc65a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 459 heartbeat osd_stat(store_statfs(0x4f83a4000/0x0/0x4ffc00000, data 0x8e3a85/0xb58000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,3])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148905984 unmapped: 93798400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:53.871324+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148905984 unmapped: 93798400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:54.871625+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 459 handle_osd_map epochs [459,460], i have 459, src has [1,460]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:55.871789+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 460 ms_handle_reset con 0x55936c423c00 session 0x55936c0b34a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 460 heartbeat osd_stat(store_statfs(0x4f83a1000/0x0/0x4ffc00000, data 0x8e5680/0xb5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2792869 data_alloc: 218103808 data_used: 1916928
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:56.872019+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:57.872263+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:58.872507+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 460 heartbeat osd_stat(store_statfs(0x4f83a1000/0x0/0x4ffc00000, data 0x8e5680/0xb5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 460 handle_osd_map epochs [460,461], i have 460, src has [1,461]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 461 ms_handle_reset con 0x55936c9d4800 session 0x559370bc8780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:59.872703+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:00.872894+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d220c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2795273 data_alloc: 218103808 data_used: 1925120
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.220159531s of 11.602508545s, submitted: 79
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:01.873097+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:02.873263+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 462 ms_handle_reset con 0x55936d220c00 session 0x55936c0b23c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:03.873402+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371605800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:04.873578+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f839d000/0x0/0x4ffc00000, data 0x8e8db2/0xb60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:05.873780+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2796885 data_alloc: 218103808 data_used: 1929216
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148897792 unmapped: 93806592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:06.873929+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f839e000/0x0/0x4ffc00000, data 0x8e8db2/0xb60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 462 handle_osd_map epochs [463,463], i have 463, src has [1,463]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148905984 unmapped: 93798400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:07.874068+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148905984 unmapped: 93798400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:08.874195+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148922368 unmapped: 93782016 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 464 heartbeat osd_stat(store_statfs(0x4f839a000/0x0/0x4ffc00000, data 0x8ea993/0xb63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:09.874330+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148922368 unmapped: 93782016 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:10.874547+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 464 ms_handle_reset con 0x559371605800 session 0x55936dc1a3c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 464 ms_handle_reset con 0x55936c423c00 session 0x55936f0f72c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2801491 data_alloc: 218103808 data_used: 1933312
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148922368 unmapped: 93782016 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:11.874752+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148922368 unmapped: 93782016 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 464 heartbeat osd_stat(store_statfs(0x4f839a000/0x0/0x4ffc00000, data 0x8ebecf/0xb64000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 464 handle_osd_map epochs [464,465], i have 464, src has [1,465]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.936418533s of 11.192409515s, submitted: 75
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 464 handle_osd_map epochs [465,465], i have 465, src has [1,465]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:12.874930+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148922368 unmapped: 93782016 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:13.875090+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148922368 unmapped: 93782016 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:14.875271+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 465 handle_osd_map epochs [465,466], i have 465, src has [1,466]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148930560 unmapped: 93773824 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:15.875476+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2808597 data_alloc: 218103808 data_used: 1941504
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148930560 unmapped: 93773824 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:16.875615+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 466 ms_handle_reset con 0x55936c597400 session 0x55936d866960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 466 ms_handle_reset con 0x55936c9d4800 session 0x55936f3534a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148930560 unmapped: 93773824 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:17.875838+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 466 heartbeat osd_stat(store_statfs(0x4f8396000/0x0/0x4ffc00000, data 0x8eefdc/0xb68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148963328 unmapped: 93741056 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:18.876011+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d220c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148971520 unmapped: 93732864 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:19.876164+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148971520 unmapped: 93732864 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 468 handle_osd_map epochs [468,469], i have 468, src has [1,469]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:20.876366+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2816567 data_alloc: 218103808 data_used: 1953792
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 148996096 unmapped: 93708288 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:21.876507+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 469 handle_osd_map epochs [469,470], i have 469, src has [1,470]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149020672 unmapped: 93683712 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 470 heartbeat osd_stat(store_statfs(0x4f838c000/0x0/0x4ffc00000, data 0x8f41d3/0xb70000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 470 handle_osd_map epochs [470,471], i have 470, src has [1,471]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.571812153s of 10.016852379s, submitted: 71
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:22.876618+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149053440 unmapped: 93650944 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:23.876790+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 472 ms_handle_reset con 0x55936d220c00 session 0x55936dc1ba40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 93618176 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:24.876915+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 472 heartbeat osd_stat(store_statfs(0x4f8382000/0x0/0x4ffc00000, data 0x8f9420/0xb79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 93618176 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:25.877062+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2826481 data_alloc: 218103808 data_used: 1961984
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 93618176 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:26.877212+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149086208 unmapped: 93618176 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:27.877514+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149127168 unmapped: 93577216 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 473 handle_osd_map epochs [473,474], i have 473, src has [1,474]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:28.877679+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149168128 unmapped: 93536256 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:29.877818+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149168128 unmapped: 93536256 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 474 heartbeat osd_stat(store_statfs(0x4f837d000/0x0/0x4ffc00000, data 0x8fcaa0/0xb7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:30.877984+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2831917 data_alloc: 218103808 data_used: 1966080
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149168128 unmapped: 93536256 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:31.878111+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 474 heartbeat osd_stat(store_statfs(0x4f837d000/0x0/0x4ffc00000, data 0x8fcaa0/0xb7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149168128 unmapped: 93536256 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 474 handle_osd_map epochs [474,475], i have 474, src has [1,475]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.935470581s of 10.014788628s, submitted: 50
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:32.878263+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149184512 unmapped: 93519872 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:33.878413+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 475 heartbeat osd_stat(store_statfs(0x4f837b000/0x0/0x4ffc00000, data 0x8fe53f/0xb82000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149184512 unmapped: 93519872 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:34.878582+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149184512 unmapped: 93519872 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:35.878700+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2832857 data_alloc: 218103808 data_used: 1970176
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149184512 unmapped: 93519872 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:36.878852+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 475 handle_osd_map epochs [475,476], i have 475, src has [1,476]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 476 ms_handle_reset con 0x55936bee9c00 session 0x55936d868f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149209088 unmapped: 93495296 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:37.879028+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 93478912 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:38.879244+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 477 ms_handle_reset con 0x55936bee9c00 session 0x55936fe394a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 477 heartbeat osd_stat(store_statfs(0x4f8375000/0x0/0x4ffc00000, data 0x901b3d/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 93478912 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:39.879401+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 93478912 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:40.879616+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2839733 data_alloc: 218103808 data_used: 1974272
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 93478912 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:41.879770+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149225472 unmapped: 93478912 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:42.879914+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 477 heartbeat osd_stat(store_statfs(0x4f8376000/0x0/0x4ffc00000, data 0x901adb/0xb86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.542004585s of 10.693345070s, submitted: 38
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 477 ms_handle_reset con 0x55936c423c00 session 0x55936fe392c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149233664 unmapped: 93470720 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:43.880098+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149233664 unmapped: 93470720 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 477 ms_handle_reset con 0x55936c597400 session 0x55936f3521e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:44.880236+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 477 ms_handle_reset con 0x55936c9d4800 session 0x55936f0f63c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149233664 unmapped: 93470720 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:45.880394+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 477 heartbeat osd_stat(store_statfs(0x4f8377000/0x0/0x4ffc00000, data 0x901adb/0xb86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d220c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2839241 data_alloc: 218103808 data_used: 1974272
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149241856 unmapped: 93462528 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:46.880596+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 477 ms_handle_reset con 0x55936d220c00 session 0x55936dc01c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 477 heartbeat osd_stat(store_statfs(0x4f8376000/0x0/0x4ffc00000, data 0x901aeb/0xb87000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 477 handle_osd_map epochs [477,478], i have 477, src has [1,478]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149258240 unmapped: 93446144 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:47.880731+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 478 ms_handle_reset con 0x55936c423c00 session 0x55936d1ccb40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149266432 unmapped: 93437952 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:48.880961+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149266432 unmapped: 93437952 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:49.881121+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 478 ms_handle_reset con 0x55936c597400 session 0x55936d868d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 ms_handle_reset con 0x55936bee9c00 session 0x55936d79fc20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 heartbeat osd_stat(store_statfs(0x4f836f000/0x0/0x4ffc00000, data 0x9050ee/0xb8e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149274624 unmapped: 93429760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:50.881386+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2848783 data_alloc: 218103808 data_used: 1982464
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149274624 unmapped: 93429760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:51.881527+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149274624 unmapped: 93429760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:52.881702+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 ms_handle_reset con 0x55936c9d4800 session 0x55936d2281e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcbc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e8c0800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.176959991s of 10.161531448s, submitted: 23
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149307392 unmapped: 93396992 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:53.881866+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e693400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 ms_handle_reset con 0x55936e8c0800 session 0x55936dc1a1e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 ms_handle_reset con 0x559370bcbc00 session 0x55936efe01e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 ms_handle_reset con 0x55936e693400 session 0x55936ef254a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149291008 unmapped: 93413376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:54.882009+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 ms_handle_reset con 0x55936bee9c00 session 0x55936e6a4d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 heartbeat osd_stat(store_statfs(0x4f836e000/0x0/0x4ffc00000, data 0x905140/0xb90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149299200 unmapped: 93405184 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:55.882166+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 ms_handle_reset con 0x55936c423c00 session 0x559370ec5a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2852993 data_alloc: 218103808 data_used: 1990656
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149299200 unmapped: 93405184 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:56.882284+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c597400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 ms_handle_reset con 0x55936c597400 session 0x55936ef7e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 ms_handle_reset con 0x55936bee9c00 session 0x55936e6ad680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149315584 unmapped: 93388800 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:57.882417+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149315584 unmapped: 93388800 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:58.882590+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 heartbeat osd_stat(store_statfs(0x4f8370000/0x0/0x4ffc00000, data 0x9050cb/0xb8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 479 handle_osd_map epochs [480,480], i have 480, src has [1,480]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149323776 unmapped: 93380608 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:59.882762+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 93372416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:00.882946+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f836d000/0x0/0x4ffc00000, data 0x906c9c/0xb90000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2851431 data_alloc: 218103808 data_used: 1990656
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 93372416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:01.883110+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 480 ms_handle_reset con 0x55936c423c00 session 0x55936d869860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 93372416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:02.883251+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 93372416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:03.883410+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 93372416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:04.883631+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 93372416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:05.883811+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2851431 data_alloc: 218103808 data_used: 1990656
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 93372416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:06.883922+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 480 heartbeat osd_stat(store_statfs(0x4f836f000/0x0/0x4ffc00000, data 0x906c8c/0xb8f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 93372416 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:07.884066+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.446483612s of 14.829945564s, submitted: 54
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 480 handle_osd_map epochs [481,481], i have 481, src has [1,481]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 93347840 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:08.884223+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 93347840 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:09.884354+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 93347840 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:10.884567+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 heartbeat osd_stat(store_statfs(0x4f836b000/0x0/0x4ffc00000, data 0x9086ef/0xb92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2855605 data_alloc: 218103808 data_used: 1998848
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 93347840 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:11.884728+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149356544 unmapped: 93347840 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:12.884909+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 heartbeat osd_stat(store_statfs(0x4f836b000/0x0/0x4ffc00000, data 0x9086ef/0xb92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 93339648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:13.885091+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 93339648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:14.885266+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e693400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 ms_handle_reset con 0x55936e693400 session 0x55936d7192c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcbc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 ms_handle_reset con 0x559370bcbc00 session 0x55936fc1eb40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149364736 unmapped: 93339648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c9d4800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:15.885404+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 ms_handle_reset con 0x55936c9d4800 session 0x55936ef25c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 ms_handle_reset con 0x55936bee9c00 session 0x55936d368000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2859809 data_alloc: 218103808 data_used: 1998848
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149372928 unmapped: 93331456 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:16.885620+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149372928 unmapped: 93331456 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:17.885766+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.476389885s of 10.013968468s, submitted: 20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 ms_handle_reset con 0x55936c423c00 session 0x55936ef24780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 heartbeat osd_stat(store_statfs(0x4f836a000/0x0/0x4ffc00000, data 0x908761/0xb94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149372928 unmapped: 93331456 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:18.885926+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e693400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 ms_handle_reset con 0x55936e693400 session 0x55936dc00000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcbc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 ms_handle_reset con 0x559370bcbc00 session 0x55936efe03c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 93306880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:19.886082+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 93306880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:20.886279+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2856562 data_alloc: 218103808 data_used: 1998848
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 93306880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:21.886472+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 93306880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:22.886586+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 heartbeat osd_stat(store_statfs(0x4f836c000/0x0/0x4ffc00000, data 0x9086ef/0xb92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 93306880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:23.886684+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149397504 unmapped: 93306880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:24.886821+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 93298688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:25.886907+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2856562 data_alloc: 218103808 data_used: 1998848
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 93298688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:26.887034+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 93298688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:27.887168+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 heartbeat osd_stat(store_statfs(0x4f836c000/0x0/0x4ffc00000, data 0x9086ef/0xb92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e8c0800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 ms_handle_reset con 0x55936e8c0800 session 0x55936d869860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 heartbeat osd_stat(store_statfs(0x4f836c000/0x0/0x4ffc00000, data 0x9086ef/0xb92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 93298688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:28.887319+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149405696 unmapped: 93298688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:29.887494+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 481 handle_osd_map epochs [481,482], i have 481, src has [1,482]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.541471481s of 11.644091606s, submitted: 20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f836c000/0x0/0x4ffc00000, data 0x9086ef/0xb92000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 93290496 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:30.887682+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 482 ms_handle_reset con 0x55936bee9c00 session 0x55936e6ad680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2862518 data_alloc: 218103808 data_used: 2007040
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:31.887844+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149413888 unmapped: 93290496 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 482 ms_handle_reset con 0x55936c423c00 session 0x55936ef7e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e693400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 482 ms_handle_reset con 0x55936e693400 session 0x559370ec5a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcbc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 483 ms_handle_reset con 0x559370bcbc00 session 0x55936e6a4d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:32.888015+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149422080 unmapped: 93282304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e884400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 483 ms_handle_reset con 0x55936e884400 session 0x55936efe01e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:33.888151+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149446656 unmapped: 93257728 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 483 ms_handle_reset con 0x55936bee9c00 session 0x55936dc1a1e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 483 ms_handle_reset con 0x55936c423c00 session 0x55936d868d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e693400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 483 ms_handle_reset con 0x55936e693400 session 0x55936d1ccb40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:34.888306+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 93241344 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcbc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 483 ms_handle_reset con 0x559370bcbc00 session 0x55936dc01c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:35.888488+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 93241344 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f8363000/0x0/0x4ffc00000, data 0x90bf0f/0xb9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2872024 data_alloc: 218103808 data_used: 2015232
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e980000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:36.888602+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149463040 unmapped: 93241344 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e887000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 483 ms_handle_reset con 0x55936e980000 session 0x55936f3521e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 483 ms_handle_reset con 0x55936e887000 session 0x559370ec4780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 483 ms_handle_reset con 0x55936bee9c00 session 0x55936d868f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 484 ms_handle_reset con 0x55936c423c00 session 0x55936c0b2000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:37.888760+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149479424 unmapped: 93224960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e693400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 484 ms_handle_reset con 0x55936e693400 session 0x55936d8690e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559370bcbc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 484 ms_handle_reset con 0x559370bcbc00 session 0x55936f353a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 484 ms_handle_reset con 0x55936e435c00 session 0x55936dc1a780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:38.888911+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 93208576 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 484 ms_handle_reset con 0x55936bee9c00 session 0x55936ef252c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:39.889038+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 93208576 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:40.889232+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 93208576 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 484 ms_handle_reset con 0x55936c423c00 session 0x55936f0f7c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 484 handle_osd_map epochs [484,485], i have 484, src has [1,485]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.764933586s of 11.128066063s, submitted: 85
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f8362000/0x0/0x4ffc00000, data 0x90d9ba/0xb9b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2875767 data_alloc: 218103808 data_used: 2023424
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:41.889383+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 93208576 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:42.889590+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 93208576 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:43.889748+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 93208576 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:44.889857+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 93208576 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:45.889958+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 93208576 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f835f000/0x0/0x4ffc00000, data 0x90f58b/0xb9e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2875767 data_alloc: 218103808 data_used: 2023424
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:46.890089+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 93208576 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:47.890285+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149495808 unmapped: 93208576 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:48.890502+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 93192192 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e693400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x55936e693400 session 0x559370865680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:49.890637+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149512192 unmapped: 93192192 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e887000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x55936e887000 session 0x55936ef7e5a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x55936bee9c00 session 0x55936f353e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 heartbeat osd_stat(store_statfs(0x4f835b000/0x0/0x4ffc00000, data 0x910ffe/0xba2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:50.890876+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 93175808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x55936c423c00 session 0x5593708652c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.820325851s of 10.039444923s, submitted: 27
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x55936e435c00 session 0x559370ec41e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e693400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x55936e693400 session 0x5593711583c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2888429 data_alloc: 218103808 data_used: 2023424
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:51.891031+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149528576 unmapped: 93175808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559372b44400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x559372b44400 session 0x55936d229e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:52.891222+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 93167616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 heartbeat osd_stat(store_statfs(0x4f8358000/0x0/0x4ffc00000, data 0x911080/0xba5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x55936bee9c00 session 0x55936ef7f680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:53.891375+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 93167616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x55936c423c00 session 0x55936f352d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:54.891530+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 149536768 unmapped: 93167616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x55936e435c00 session 0x55936fc1fe00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e693400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x55936e693400 session 0x55936d229a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559372b44400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:55.891677+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150593536 unmapped: 92110848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x559372b44400 session 0x55936d229e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:56.891891+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2882259 data_alloc: 218103808 data_used: 2023424
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150593536 unmapped: 92110848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 heartbeat osd_stat(store_statfs(0x4f835c000/0x0/0x4ffc00000, data 0x910ffe/0xba2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:57.892032+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150593536 unmapped: 92110848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x55936bee9c00 session 0x55936d868f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x55936c423c00 session 0x559370ec4780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:58.892155+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150593536 unmapped: 92110848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:59.892322+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150593536 unmapped: 92110848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:00.892509+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150593536 unmapped: 92110848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:01.892772+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2881539 data_alloc: 218103808 data_used: 2023424
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 heartbeat osd_stat(store_statfs(0x4f835d000/0x0/0x4ffc00000, data 0x910fee/0xba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150593536 unmapped: 92110848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:02.892952+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150593536 unmapped: 92110848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:03.893095+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150593536 unmapped: 92110848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:04.893265+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150593536 unmapped: 92110848 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:05.893489+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150601728 unmapped: 92102656 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:06.893656+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.515571594s of 15.623821259s, submitted: 28
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2881539 data_alloc: 218103808 data_used: 2023424
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 ms_handle_reset con 0x55936e435c00 session 0x55936dc01c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150601728 unmapped: 92102656 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 heartbeat osd_stat(store_statfs(0x4f835d000/0x0/0x4ffc00000, data 0x910fee/0xba1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:07.893842+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150601728 unmapped: 92102656 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 486 handle_osd_map epochs [486,487], i have 486, src has [1,487]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e693400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 487 ms_handle_reset con 0x55936e999c00 session 0x55936fe383c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:08.893974+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150634496 unmapped: 92069888 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 487 ms_handle_reset con 0x55936e693400 session 0x55936d1ccb40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:09.894110+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150634496 unmapped: 92069888 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:10.894271+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150642688 unmapped: 92061696 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f8355000/0x0/0x4ffc00000, data 0x9146e8/0xba7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 489 ms_handle_reset con 0x55936bee9c00 session 0x55936d868d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:11.894411+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2892333 data_alloc: 218103808 data_used: 2031616
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150659072 unmapped: 92045312 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:12.894610+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150659072 unmapped: 92045312 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:13.894747+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150675456 unmapped: 92028928 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f8350000/0x0/0x4ffc00000, data 0x917ea6/0xbad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:14.894954+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150675456 unmapped: 92028928 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:15.895101+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150675456 unmapped: 92028928 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets getting new tickets!
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:16.895322+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _finish_auth 0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:16.896082+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2894635 data_alloc: 218103808 data_used: 2031616
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150675456 unmapped: 92028928 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:17.895494+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150675456 unmapped: 92028928 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f8350000/0x0/0x4ffc00000, data 0x917ea6/0xbad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.166627884s of 11.466617584s, submitted: 25
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:18.895610+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 491 ms_handle_reset con 0x55936c423c00 session 0x55936e6a4d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150691840 unmapped: 92012544 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:19.895765+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f834b000/0x0/0x4ffc00000, data 0x9199b3/0xbb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150691840 unmapped: 92012544 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:20.895961+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150691840 unmapped: 92012544 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 491 ms_handle_reset con 0x55936e999c00 session 0x55936bfc7c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f834b000/0x0/0x4ffc00000, data 0x9199b3/0xbb2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:21.896134+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2903624 data_alloc: 218103808 data_used: 2035712
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150691840 unmapped: 92012544 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 491 handle_osd_map epochs [491,492], i have 491, src has [1,492]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:22.896273+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ef75c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 492 ms_handle_reset con 0x55936be67400 session 0x55936f353c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 150708224 unmapped: 91996160 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 492 handle_osd_map epochs [492,493], i have 492, src has [1,493]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 492 handle_osd_map epochs [493,493], i have 493, src has [1,493]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:23.896516+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151789568 unmapped: 90914816 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 493 ms_handle_reset con 0x55936ef75c00 session 0x55936fc1e3c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:24.896686+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ef75c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151797760 unmapped: 90906624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 493 ms_handle_reset con 0x55936ef75c00 session 0x55936efe12c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 493 ms_handle_reset con 0x55936e435c00 session 0x55936ef7e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:25.896847+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151797760 unmapped: 90906624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:26.896984+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2913363 data_alloc: 218103808 data_used: 2052096
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151797760 unmapped: 90906624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 493 ms_handle_reset con 0x55936be67400 session 0x55936d869860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f8342000/0x0/0x4ffc00000, data 0x91d5b1/0xbba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:27.897132+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151822336 unmapped: 90882048 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:28.897285+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151822336 unmapped: 90882048 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.500621796s of 10.995313644s, submitted: 39
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 493 ms_handle_reset con 0x55936bee9c00 session 0x55936efe03c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:29.897473+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151838720 unmapped: 90865664 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 493 handle_osd_map epochs [494,494], i have 493, src has [1,494]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 494 ms_handle_reset con 0x55936d32c000 session 0x559370865e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:30.897670+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151863296 unmapped: 90841088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 494 handle_osd_map epochs [494,495], i have 494, src has [1,495]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 495 ms_handle_reset con 0x55936e999c00 session 0x55936ef7ef00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 495 ms_handle_reset con 0x55936d32c000 session 0x55936e6a5a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 495 ms_handle_reset con 0x55936c423c00 session 0x55936ef24780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:31.897823+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2926104 data_alloc: 218103808 data_used: 2068480
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151871488 unmapped: 90832896 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:32.897992+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 495 heartbeat osd_stat(store_statfs(0x4f8339000/0x0/0x4ffc00000, data 0x921274/0xbc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151879680 unmapped: 90824704 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 495 handle_osd_map epochs [496,496], i have 495, src has [1,496]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 496 ms_handle_reset con 0x55936be67400 session 0x55936fe38d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936bee9c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 496 ms_handle_reset con 0x55936bee9c00 session 0x55936dc012c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:33.898176+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151896064 unmapped: 90808320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f8336000/0x0/0x4ffc00000, data 0x922de3/0xbc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:34.898293+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 496 heartbeat osd_stat(store_statfs(0x4f8336000/0x0/0x4ffc00000, data 0x922de3/0xbc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151896064 unmapped: 90808320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 496 ms_handle_reset con 0x55936be67400 session 0x55936e6ad680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:35.898481+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151896064 unmapped: 90808320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:36.898639+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2930866 data_alloc: 218103808 data_used: 2068480
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151896064 unmapped: 90808320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c423c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 496 ms_handle_reset con 0x55936d32c000 session 0x55936c6154a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 496 handle_osd_map epochs [497,497], i have 496, src has [1,497]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 497 ms_handle_reset con 0x55936e435c00 session 0x55936f0e1860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:37.898806+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 497 heartbeat osd_stat(store_statfs(0x4f8332000/0x0/0x4ffc00000, data 0x92499c/0xbcb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 497 handle_osd_map epochs [497,498], i have 497, src has [1,498]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151904256 unmapped: 90800128 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 498 ms_handle_reset con 0x55936e999c00 session 0x55936ef25c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 498 ms_handle_reset con 0x55936c423c00 session 0x55936dbeed20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:38.898965+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151904256 unmapped: 90800128 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 498 handle_osd_map epochs [498,499], i have 498, src has [1,499]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.303022385s of 10.517838478s, submitted: 46
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:39.899168+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 499 ms_handle_reset con 0x55936be67400 session 0x55936cad3a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151928832 unmapped: 90775552 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:40.899400+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 499 heartbeat osd_stat(store_statfs(0x4f832a000/0x0/0x4ffc00000, data 0x928174/0xbd1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 499 ms_handle_reset con 0x55936d32c000 session 0x55936d1cc960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151928832 unmapped: 90775552 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:41.899537+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943264 data_alloc: 218103808 data_used: 2076672
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151928832 unmapped: 90775552 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:42.899615+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151928832 unmapped: 90775552 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:43.899787+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151928832 unmapped: 90775552 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 499 handle_osd_map epochs [500,500], i have 499, src has [1,500]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:44.899984+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151945216 unmapped: 90759168 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 500 heartbeat osd_stat(store_statfs(0x4f832b000/0x0/0x4ffc00000, data 0x929cef/0xbd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 500 ms_handle_reset con 0x55936e435c00 session 0x559370bc90e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:45.900130+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151961600 unmapped: 90742784 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:46.900264+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2942618 data_alloc: 218103808 data_used: 2088960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151961600 unmapped: 90742784 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:47.900492+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151961600 unmapped: 90742784 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:48.900636+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151961600 unmapped: 90742784 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 500 heartbeat osd_stat(store_statfs(0x4f832b000/0x0/0x4ffc00000, data 0x929cef/0xbd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:49.900821+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151961600 unmapped: 90742784 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:50.901041+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151961600 unmapped: 90742784 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:51.901225+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2942938 data_alloc: 218103808 data_used: 2097152
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151961600 unmapped: 90742784 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:52.901396+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151961600 unmapped: 90742784 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 500 heartbeat osd_stat(store_statfs(0x4f832b000/0x0/0x4ffc00000, data 0x929cef/0xbd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:53.901561+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151969792 unmapped: 90734592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:54.901706+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151969792 unmapped: 90734592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:55.901868+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151969792 unmapped: 90734592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:56.902080+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2942938 data_alloc: 218103808 data_used: 2097152
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151969792 unmapped: 90734592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:57.902267+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151969792 unmapped: 90734592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:58.902505+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151969792 unmapped: 90734592 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 500 heartbeat osd_stat(store_statfs(0x4f832b000/0x0/0x4ffc00000, data 0x929cef/0xbd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:59.902671+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151977984 unmapped: 90726400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:00.902868+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151977984 unmapped: 90726400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:01.903032+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2942938 data_alloc: 218103808 data_used: 2097152
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151977984 unmapped: 90726400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:02.903194+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 500 heartbeat osd_stat(store_statfs(0x4f832b000/0x0/0x4ffc00000, data 0x929cef/0xbd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151977984 unmapped: 90726400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:03.903364+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151977984 unmapped: 90726400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:04.903577+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151977984 unmapped: 90726400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:05.903764+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151977984 unmapped: 90726400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 26.194675446s of 26.642568588s, submitted: 52
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 500 heartbeat osd_stat(store_statfs(0x4f832b000/0x0/0x4ffc00000, data 0x929cef/0xbd2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:06.903916+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2942064 data_alloc: 218103808 data_used: 2093056
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151977984 unmapped: 90726400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:07.904205+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151977984 unmapped: 90726400 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:08.904478+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 500 handle_osd_map epochs [501,501], i have 500, src has [1,501]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151994368 unmapped: 90710016 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:09.904706+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 151994368 unmapped: 90710016 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 501 handle_osd_map epochs [502,502], i have 501, src has [1,502]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 502 handle_osd_map epochs [502,502], i have 502, src has [1,502]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:10.904948+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 90685440 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 502 heartbeat osd_stat(store_statfs(0x4f8325000/0x0/0x4ffc00000, data 0x92d2dd/0xbd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:11.905134+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2948012 data_alloc: 218103808 data_used: 2093056
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 90685440 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:12.905354+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 90685440 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:13.905571+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 90685440 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:14.905794+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 502 heartbeat osd_stat(store_statfs(0x4f8326000/0x0/0x4ffc00000, data 0x92d2dd/0xbd7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 502 handle_osd_map epochs [503,503], i have 502, src has [1,503]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 502 handle_osd_map epochs [503,503], i have 503, src has [1,503]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 502 handle_osd_map epochs [503,503], i have 503, src has [1,503]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152043520 unmapped: 90660864 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f8322000/0x0/0x4ffc00000, data 0x92ed5c/0xbda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:15.905942+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152051712 unmapped: 90652672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:16.906061+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2960046 data_alloc: 218103808 data_used: 2088960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152051712 unmapped: 90652672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:17.906253+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152059904 unmapped: 90644480 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:18.906502+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 503 ms_handle_reset con 0x55936e999c00 session 0x55936ef7f680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ef75c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.274590492s of 12.814240456s, submitted: 36
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152068096 unmapped: 90636288 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 503 ms_handle_reset con 0x55936ef75c00 session 0x55936d368b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:19.906637+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152068096 unmapped: 90636288 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:20.906822+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 503 heartbeat osd_stat(store_statfs(0x4f8326000/0x0/0x4ffc00000, data 0x92e857/0xbd8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152068096 unmapped: 90636288 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:21.906972+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2958671 data_alloc: 218103808 data_used: 2088960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152068096 unmapped: 90636288 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 503 handle_osd_map epochs [503,504], i have 503, src has [1,504]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:22.907184+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152076288 unmapped: 90628096 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:23.907353+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 504 handle_osd_map epochs [505,505], i have 504, src has [1,505]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152076288 unmapped: 90628096 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:24.907480+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152084480 unmapped: 90619904 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:25.907666+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152084480 unmapped: 90619904 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:26.907781+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2964939 data_alloc: 218103808 data_used: 2097152
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f8320000/0x0/0x4ffc00000, data 0x931e8b/0xbde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152092672 unmapped: 90611712 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:27.907900+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152092672 unmapped: 90611712 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:28.908061+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 505 heartbeat osd_stat(store_statfs(0x4f8320000/0x0/0x4ffc00000, data 0x931e8b/0xbde000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152100864 unmapped: 90603520 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.214451313s of 10.134897232s, submitted: 41
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:29.908228+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152100864 unmapped: 90603520 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:30.908406+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152100864 unmapped: 90603520 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:31.908580+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2963792 data_alloc: 218103808 data_used: 2093056
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152100864 unmapped: 90603520 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 505 handle_osd_map epochs [506,506], i have 505, src has [1,506]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:32.908728+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152109056 unmapped: 90595328 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 506 heartbeat osd_stat(store_statfs(0x4f831d000/0x0/0x4ffc00000, data 0x9338de/0xbe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:33.908872+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 506 ms_handle_reset con 0x55936be67400 session 0x55936dc1a3c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 506 ms_handle_reset con 0x55936d32c000 session 0x55936f0e1860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152117248 unmapped: 90587136 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 506 heartbeat osd_stat(store_statfs(0x4f831d000/0x0/0x4ffc00000, data 0x9338de/0xbe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:34.909525+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152125440 unmapped: 90578944 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:35.909608+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152125440 unmapped: 90578944 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 506 handle_osd_map epochs [506,507], i have 506, src has [1,507]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:36.909750+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2970022 data_alloc: 218103808 data_used: 2109440
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 90562560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 507 heartbeat osd_stat(store_statfs(0x4f831b000/0x0/0x4ffc00000, data 0x934fbb/0xbe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:37.909916+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 507 ms_handle_reset con 0x55936e435c00 session 0x55936c6154a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 507 heartbeat osd_stat(store_statfs(0x4f831b000/0x0/0x4ffc00000, data 0x934fbb/0xbe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 90562560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:38.910158+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 90562560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.359460354s of 10.019811630s, submitted: 43
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 507 heartbeat osd_stat(store_statfs(0x4f831c000/0x0/0x4ffc00000, data 0x934fbb/0xbe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:39.910290+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 90562560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:40.910689+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 507 ms_handle_reset con 0x55936e999c00 session 0x55936dc012c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 90562560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:41.910853+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967318 data_alloc: 218103808 data_used: 2101248
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 90562560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:42.910973+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 90562560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:43.911137+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 507 heartbeat osd_stat(store_statfs(0x4f831e000/0x0/0x4ffc00000, data 0x934f49/0xbe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 90562560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:44.911262+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 90562560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 507 heartbeat osd_stat(store_statfs(0x4f831e000/0x0/0x4ffc00000, data 0x934f49/0xbe0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:45.911493+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 90562560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:46.911650+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967318 data_alloc: 218103808 data_used: 2101248
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152141824 unmapped: 90562560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 507 handle_osd_map epochs [507,508], i have 507, src has [1,508]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:47.911785+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152158208 unmapped: 90546176 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:48.911965+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152158208 unmapped: 90546176 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:49.912146+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152158208 unmapped: 90546176 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 508 handle_osd_map epochs [508,509], i have 508, src has [1,509]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.272193909s of 11.065503120s, submitted: 14
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:50.912320+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 509 heartbeat osd_stat(store_statfs(0x4f831a000/0x0/0x4ffc00000, data 0x9369ac/0xbe3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152190976 unmapped: 90513408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:51.912546+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2975138 data_alloc: 218103808 data_used: 2109440
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ef75c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152190976 unmapped: 90513408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 509 ms_handle_reset con 0x55936ef75c00 session 0x55936ef7ef00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:52.912709+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 509 heartbeat osd_stat(store_statfs(0x4f8316000/0x0/0x4ffc00000, data 0x938529/0xbe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152190976 unmapped: 90513408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:53.912897+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 509 heartbeat osd_stat(store_statfs(0x4f8316000/0x0/0x4ffc00000, data 0x938529/0xbe6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 509 handle_osd_map epochs [510,510], i have 509, src has [1,510]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 509 handle_osd_map epochs [510,510], i have 510, src has [1,510]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152199168 unmapped: 90505216 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 510 ms_handle_reset con 0x55936be67400 session 0x559370865e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:54.913067+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152199168 unmapped: 90505216 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:55.913204+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152199168 unmapped: 90505216 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:56.913372+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2977440 data_alloc: 218103808 data_used: 2109440
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152199168 unmapped: 90505216 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:57.913624+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 510 heartbeat osd_stat(store_statfs(0x4f8314000/0x0/0x4ffc00000, data 0x93a0fa/0xbe9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 510 handle_osd_map epochs [511,511], i have 510, src has [1,511]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 510 handle_osd_map epochs [511,511], i have 511, src has [1,511]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152215552 unmapped: 90488832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 511 ms_handle_reset con 0x55936d32c000 session 0x55936efe03c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:58.913820+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152223744 unmapped: 90480640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:59.913986+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152223744 unmapped: 90480640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 511 heartbeat osd_stat(store_statfs(0x4f8310000/0x0/0x4ffc00000, data 0x93bc93/0xbec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 511 handle_osd_map epochs [512,512], i have 511, src has [1,512]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 511 handle_osd_map epochs [512,512], i have 512, src has [1,512]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.147971153s of 10.222497940s, submitted: 13
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:00.914161+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 512 ms_handle_reset con 0x55936e435c00 session 0x55936d869860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152240128 unmapped: 90464256 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:01.914362+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2986690 data_alloc: 218103808 data_used: 2109440
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152240128 unmapped: 90464256 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 512 handle_osd_map epochs [513,513], i have 512, src has [1,513]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:02.914527+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152256512 unmapped: 90447872 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:03.914731+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152256512 unmapped: 90447872 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 513 handle_osd_map epochs [513,514], i have 513, src has [1,514]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 513 handle_osd_map epochs [514,514], i have 514, src has [1,514]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 514 ms_handle_reset con 0x55936e999c00 session 0x55936efe12c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:04.915405+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x559371607800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 514 ms_handle_reset con 0x559371607800 session 0x55936fe383c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 514 heartbeat osd_stat(store_statfs(0x4f8306000/0x0/0x4ffc00000, data 0x940efe/0xbf7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152281088 unmapped: 90423296 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 514 handle_osd_map epochs [515,515], i have 514, src has [1,515]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 515 ms_handle_reset con 0x55936be67400 session 0x55936d229a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:05.915683+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152289280 unmapped: 90415104 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 515 ms_handle_reset con 0x55936d32c000 session 0x55936f0e14a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 515 ms_handle_reset con 0x55936e435c00 session 0x5593711581e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:06.915845+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2996033 data_alloc: 218103808 data_used: 2113536
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152322048 unmapped: 90382336 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 515 handle_osd_map epochs [516,516], i have 515, src has [1,516]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:07.916052+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152322048 unmapped: 90382336 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:08.916638+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152322048 unmapped: 90382336 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 516 heartbeat osd_stat(store_statfs(0x4f8302000/0x0/0x4ffc00000, data 0x9444ec/0xbfb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:09.917536+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152322048 unmapped: 90382336 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 516 handle_osd_map epochs [517,517], i have 516, src has [1,517]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.912360191s of 10.040486336s, submitted: 85
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:10.917978+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 517 ms_handle_reset con 0x55936e999c00 session 0x55936d867680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152330240 unmapped: 90374144 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:11.918543+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e996c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 517 ms_handle_reset con 0x55936e996c00 session 0x55936fe390e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3002231 data_alloc: 218103808 data_used: 2117632
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152330240 unmapped: 90374144 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:12.918773+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152330240 unmapped: 90374144 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:13.919144+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 517 heartbeat osd_stat(store_statfs(0x4f82fe000/0x0/0x4ffc00000, data 0x946085/0xbfe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 517 handle_osd_map epochs [518,518], i have 517, src has [1,518]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 517 handle_osd_map epochs [518,518], i have 518, src has [1,518]
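These back-to-back handle_osd_map lines show the catch-up pattern: the OSD at epoch 517 receives the map range [518,518], applies it, and the second line already reports "i have 518". The request-range arithmetic is simple; an illustrative sketch, not Ceph's actual code:

```python
def epochs_to_request(have: int, first: int, last: int) -> range:
    """Map epochs this daemon still needs from a peer advertising [first, last]."""
    start = max(have + 1, first)
    return range(start, last + 1)

print(list(epochs_to_request(517, 518, 518)))  # [518]
print(list(epochs_to_request(516, 1, 517)))    # [517] -- the earlier line above
```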
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 518 ms_handle_reset con 0x55936be67400 session 0x55936c44cd20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152346624 unmapped: 90357760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:14.919765+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152346624 unmapped: 90357760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:15.920189+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152346624 unmapped: 90357760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:16.920376+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3003389 data_alloc: 218103808 data_used: 2117632
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152346624 unmapped: 90357760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 518 heartbeat osd_stat(store_statfs(0x4f82fc000/0x0/0x4ffc00000, data 0x947c8e/0xc01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:17.920548+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 518 handle_osd_map epochs [518,519], i have 518, src has [1,519]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152363008 unmapped: 90341376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:18.920742+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 519 ms_handle_reset con 0x55936d32c000 session 0x55936fe38960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152363008 unmapped: 90341376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:19.920886+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 519 ms_handle_reset con 0x55936e435c00 session 0x55936dc00d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152363008 unmapped: 90341376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 519 heartbeat osd_stat(store_statfs(0x4f82f9000/0x0/0x4ffc00000, data 0x9496f1/0xc04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:20.921172+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 519 heartbeat osd_stat(store_statfs(0x4f82f9000/0x0/0x4ffc00000, data 0x9496f1/0xc04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.619618416s of 10.403499603s, submitted: 33
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152363008 unmapped: 90341376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:21.921399+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 519 handle_osd_map epochs [520,520], i have 519, src has [1,520]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 520 ms_handle_reset con 0x55936e999c00 session 0x55936c94c5a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3012970 data_alloc: 218103808 data_used: 2125824
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e981400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 520 ms_handle_reset con 0x55936e981400 session 0x55936d1c41e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152387584 unmapped: 90316800 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 520 heartbeat osd_stat(store_statfs(0x4f82f5000/0x0/0x4ffc00000, data 0x94b26e/0xc07000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:22.921634+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 520 handle_osd_map epochs [520,521], i have 520, src has [1,521]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 521 ms_handle_reset con 0x55936be67400 session 0x55936fe38d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152420352 unmapped: 90284032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 521 ms_handle_reset con 0x55936d32c000 session 0x55936c94d680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:23.921770+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 521 ms_handle_reset con 0x55936e435c00 session 0x55936bfc6960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 90267648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:24.921917+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 90267648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:25.922088+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 521 heartbeat osd_stat(store_statfs(0x4f82f3000/0x0/0x4ffc00000, data 0x94ce3f/0xc0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 90267648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:26.922354+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013696 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 90267648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:27.922541+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 90267648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:28.923005+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152436736 unmapped: 90267648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:29.923186+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 521 heartbeat osd_stat(store_statfs(0x4f82f3000/0x0/0x4ffc00000, data 0x94ce3f/0xc0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 521 handle_osd_map epochs [522,522], i have 521, src has [1,522]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 521 handle_osd_map epochs [522,522], i have 522, src has [1,522]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 90243072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:30.923575+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 90243072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:31.923880+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016670 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 90243072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:32.924174+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 90243072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:33.924623+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 90243072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:34.924814+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 90243072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:35.924986+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152461312 unmapped: 90243072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:36.925157+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016670 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152469504 unmapped: 90234880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:37.925373+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152469504 unmapped: 90234880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:38.925492+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152469504 unmapped: 90234880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:39.925676+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152469504 unmapped: 90234880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:40.925847+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152469504 unmapped: 90234880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:41.926041+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016670 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152469504 unmapped: 90234880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:42.926245+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152469504 unmapped: 90234880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:43.926685+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152469504 unmapped: 90234880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:44.926975+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152469504 unmapped: 90234880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:45.927185+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:46.927625+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016670 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:47.927777+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:48.927909+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:49.928111+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:50.928348+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:51.928545+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016670 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:52.928694+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:53.928865+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:54.929011+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:55.929150+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:56.929336+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016670 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:57.929511+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:58.929651+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:59.929789+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:00.929959+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:01.930102+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016670 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152477696 unmapped: 90226688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:02.930286+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152485888 unmapped: 90218496 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:03.930530+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152485888 unmapped: 90218496 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:04.930712+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152485888 unmapped: 90218496 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:05.930929+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152485888 unmapped: 90218496 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:06.931126+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016670 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152485888 unmapped: 90218496 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:07.931377+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152485888 unmapped: 90218496 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:08.931566+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 90210304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:09.931729+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 90210304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:10.931982+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 90210304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:11.932224+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016670 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 90210304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:12.932403+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 90210304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:13.932643+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 90210304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:14.932888+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 90210304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:15.933069+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 90210304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:16.933883+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016670 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 90210304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:17.934075+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 90210304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:18.934267+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 90210304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:19.934573+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152494080 unmapped: 90210304 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:20.934892+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152502272 unmapped: 90202112 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:21.935129+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016670 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152502272 unmapped: 90202112 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:22.935345+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152502272 unmapped: 90202112 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:23.935536+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152502272 unmapped: 90202112 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:24.935804+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152502272 unmapped: 90202112 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:25.936004+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152502272 unmapped: 90202112 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:26.936194+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016670 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152502272 unmapped: 90202112 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:27.936403+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152502272 unmapped: 90202112 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:28.936698+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152502272 unmapped: 90202112 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:29.936946+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152502272 unmapped: 90202112 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:30.937217+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152502272 unmapped: 90202112 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:31.937386+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3016670 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152502272 unmapped: 90202112 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:32.937537+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f0000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:33.937670+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:34.937916+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:35.938194+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 75.045028687s of 75.303054810s, submitted: 70
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e999c00 session 0x55936ef7e000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e693800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:36.938378+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e693800 session 0x55936e6a4f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015790 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:37.938550+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:38.938745+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:39.938934+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:40.939235+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:41.939415+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015790 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:42.939593+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:43.940686+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:44.940846+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936be67400 session 0x55936ef7f2c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:45.941028+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:46.941213+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3017287 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:47.941413+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:48.941654+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:49.941916+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:50.942189+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:51.942390+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3017287 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:52.942541+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152510464 unmapped: 90193920 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:53.942692+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152518656 unmapped: 90185728 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:54.942848+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152518656 unmapped: 90185728 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:55.942987+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152518656 unmapped: 90185728 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:56.943128+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152518656 unmapped: 90185728 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.081863403s of 20.664186478s, submitted: 10
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936d32c000 session 0x55936d719a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3017510 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:57.943303+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152543232 unmapped: 90161152 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:58.943495+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152551424 unmapped: 90152960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:59.943642+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152551424 unmapped: 90152960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:00.943829+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152551424 unmapped: 90152960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e435c00 session 0x55936d1c5680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:01.943991+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152551424 unmapped: 90152960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3018415 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:02.944203+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152551424 unmapped: 90152960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:03.944412+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 165142528 unmapped: 77561856 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:04.944625+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 165142528 unmapped: 77561856 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:05.944843+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 165142528 unmapped: 77561856 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f62f1000/0x0/0x4ffc00000, data 0x294e8a2/0x2c0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:06.945006+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152559616 unmapped: 90144768 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.424390793s of 10.363306999s, submitted: 27
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3271391 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:07.945174+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 156762112 unmapped: 85942272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5af1000/0x0/0x4ffc00000, data 0x314e8a2/0x340d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e999c00 session 0x55936c44dc20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:08.945338+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ebecc00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152584192 unmapped: 90120192 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936ebecc00 session 0x55936ef24b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:09.945490+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152584192 unmapped: 90120192 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:10.945735+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5af1000/0x0/0x4ffc00000, data 0x314e8a2/0x340d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152584192 unmapped: 90120192 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:11.945973+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152584192 unmapped: 90120192 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3299216 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:12.946208+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152584192 unmapped: 90120192 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:13.946360+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5af1000/0x0/0x4ffc00000, data 0x314e8a2/0x340d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152592384 unmapped: 90112000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:14.946564+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152592384 unmapped: 90112000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:15.946758+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152592384 unmapped: 90112000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:16.946934+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152592384 unmapped: 90112000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3299216 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:17.947147+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152592384 unmapped: 90112000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:18.947273+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152592384 unmapped: 90112000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5af1000/0x0/0x4ffc00000, data 0x314e8a2/0x340d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:19.947488+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152592384 unmapped: 90112000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:20.947669+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5af1000/0x0/0x4ffc00000, data 0x314e8a2/0x340d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152592384 unmapped: 90112000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:21.947817+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152600576 unmapped: 90103808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5af1000/0x0/0x4ffc00000, data 0x314e8a2/0x340d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3299216 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:22.947991+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152600576 unmapped: 90103808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5af1000/0x0/0x4ffc00000, data 0x314e8a2/0x340d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:23.948152+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152600576 unmapped: 90103808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:24.948311+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152600576 unmapped: 90103808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:25.948469+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152600576 unmapped: 90103808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:26.948624+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152600576 unmapped: 90103808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5af1000/0x0/0x4ffc00000, data 0x314e8a2/0x340d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3299216 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:27.948743+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152600576 unmapped: 90103808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:28.948855+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152600576 unmapped: 90103808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:29.949005+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152608768 unmapped: 90095616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5af1000/0x0/0x4ffc00000, data 0x314e8a2/0x340d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:30.949198+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152608768 unmapped: 90095616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:31.949340+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152608768 unmapped: 90095616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:32.949475+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3299216 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152608768 unmapped: 90095616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:33.949632+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152608768 unmapped: 90095616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:34.949796+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152608768 unmapped: 90095616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5af1000/0x0/0x4ffc00000, data 0x314e8a2/0x340d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:35.949995+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152608768 unmapped: 90095616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:36.950146+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152608768 unmapped: 90095616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:37.950299+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3299216 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152608768 unmapped: 90095616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:38.950519+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5af1000/0x0/0x4ffc00000, data 0x314e8a2/0x340d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152608768 unmapped: 90095616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:39.950715+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152608768 unmapped: 90095616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:40.950943+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152608768 unmapped: 90095616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:41.951143+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152608768 unmapped: 90095616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5af1000/0x0/0x4ffc00000, data 0x314e8a2/0x340d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:42.951322+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3299216 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152616960 unmapped: 90087424 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 34.906822205s of 35.744586945s, submitted: 9
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936be67400 session 0x55936d869a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:43.951530+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152920064 unmapped: 89784320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:44.951724+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152928256 unmapped: 89776128 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:45.951936+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152936448 unmapped: 89767936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5acd000/0x0/0x4ffc00000, data 0x31728a2/0x3431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:46.952532+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 152936448 unmapped: 89767936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:47.952706+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3334264 data_alloc: 218103808 data_used: 6660096
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 156942336 unmapped: 85762048 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5acd000/0x0/0x4ffc00000, data 0x31728a2/0x3431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:48.952860+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 157941760 unmapped: 84762624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:49.953021+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 157941760 unmapped: 84762624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:50.953205+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5acd000/0x0/0x4ffc00000, data 0x31728a2/0x3431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 157941760 unmapped: 84762624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:51.953308+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 157941760 unmapped: 84762624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:52.953513+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3377464 data_alloc: 234881024 data_used: 12767232
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 157941760 unmapped: 84762624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:53.953649+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 157941760 unmapped: 84762624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5acd000/0x0/0x4ffc00000, data 0x31728a2/0x3431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:54.953763+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 157941760 unmapped: 84762624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:55.953876+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 157941760 unmapped: 84762624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:56.954026+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 157941760 unmapped: 84762624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:57.954146+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3377464 data_alloc: 234881024 data_used: 12767232
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 157941760 unmapped: 84762624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.692958832s of 15.068251610s, submitted: 2
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:58.954270+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 168935424 unmapped: 73768960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f582d000/0x0/0x4ffc00000, data 0x31728a2/0x3431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,55])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:59.954398+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169885696 unmapped: 72818688 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:00.954576+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169967616 unmapped: 72736768 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5a9d000/0x0/0x4ffc00000, data 0x31728a2/0x3431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:01.954704+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170065920 unmapped: 72638464 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:02.954869+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3398100 data_alloc: 234881024 data_used: 14266368
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170172416 unmapped: 72531968 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f5a6d000/0x0/0x4ffc00000, data 0x31728a2/0x3431000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:03.954995+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 168198144 unmapped: 74506240 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:04.955111+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 74440704 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:05.955273+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 74440704 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:06.955414+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 168263680 unmapped: 74440704 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f599a000/0x0/0x4ffc00000, data 0x32758a2/0x3534000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:07.955550+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f599a000/0x0/0x4ffc00000, data 0x32758a2/0x3534000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410708 data_alloc: 234881024 data_used: 14401536
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 168288256 unmapped: 74416128 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:08.955674+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.210902691s of 10.877555847s, submitted: 113
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 168288256 unmapped: 74416128 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:09.955819+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 168288256 unmapped: 74416128 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:10.955996+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169205760 unmapped: 73498624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d0e000/0x0/0x4ffc00000, data 0x3f018a2/0x41c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:11.956123+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936d32c000 session 0x55936ef7f860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e435c00 session 0x55936ef254a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170844160 unmapped: 71860224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e999c00 session 0x55936d2281e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:12.956306+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3496472 data_alloc: 234881024 data_used: 14274560
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170844160 unmapped: 71860224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:13.956474+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170844160 unmapped: 71860224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:14.956603+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:15.956737+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.2 total, 600.0 interval
                                           Cumulative writes: 22K writes, 90K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 22K writes, 7681 syncs, 2.92 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3529 writes, 10K keys, 3529 commit groups, 1.0 writes per commit group, ingest: 9.54 MB, 0.02 MB/s
                                           Interval WAL: 3529 writes, 1347 syncs, 2.62 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d26000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:16.956912+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d26000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d26000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:17.957019+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3496472 data_alloc: 234881024 data_used: 14274560
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:18.957190+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:19.957678+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:20.957869+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d26000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:21.958480+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:22.958830+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3496472 data_alloc: 234881024 data_used: 14274560
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:23.959048+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:24.959350+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:25.959532+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d26000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:26.959732+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:27.960002+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3496472 data_alloc: 234881024 data_used: 14274560
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:28.960349+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:29.960687+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d26000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:30.960901+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:31.961208+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:32.962109+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d26000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3496472 data_alloc: 234881024 data_used: 14274560
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170893312 unmapped: 71811072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:33.962282+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170901504 unmapped: 71802880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:34.962453+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170901504 unmapped: 71802880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:35.962695+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170901504 unmapped: 71802880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:36.962922+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ebee000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936ebee000 session 0x5593708654a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170901504 unmapped: 71802880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:37.963240+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3496472 data_alloc: 234881024 data_used: 14274560
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d26000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936be67400 session 0x55936d79ed20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170901504 unmapped: 71802880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:38.963474+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170901504 unmapped: 71802880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936d32c000 session 0x55936bfc7e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e435c00 session 0x55936c0b21e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:39.963612+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ebee400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.469814301s of 30.930002213s, submitted: 69
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170000384 unmapped: 72704000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:40.963770+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170000384 unmapped: 72704000 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:41.964003+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 72695808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:42.964167+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487372 data_alloc: 234881024 data_used: 14274560
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 72695808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:43.964376+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 72695808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:44.964553+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 72695808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:45.964791+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 72695808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:46.964970+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 72695808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:47.965224+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489452 data_alloc: 234881024 data_used: 14458880
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 72695808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:48.965649+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 72695808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:49.965918+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 72695808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:50.967211+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 72695808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:51.967839+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 72695808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:52.968247+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489452 data_alloc: 234881024 data_used: 14458880
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170008576 unmapped: 72695808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.671759605s of 13.676218033s, submitted: 1
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:53.968498+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170074112 unmapped: 72630272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:54.968817+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170074112 unmapped: 72630272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:55.968956+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170074112 unmapped: 72630272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:56.969472+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170074112 unmapped: 72630272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:57.969657+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504024 data_alloc: 234881024 data_used: 14913536
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170074112 unmapped: 72630272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:58.969931+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170074112 unmapped: 72630272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:59.970261+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170188800 unmapped: 72515584 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:00.970539+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170188800 unmapped: 72515584 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:01.970731+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170188800 unmapped: 72515584 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:02.970904+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3502600 data_alloc: 234881024 data_used: 14909440
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170188800 unmapped: 72515584 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:03.971178+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170188800 unmapped: 72515584 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:04.971480+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170188800 unmapped: 72515584 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:05.971676+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.119247437s of 12.181498528s, submitted: 20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 72351744 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:06.971965+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 72351744 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:07.972273+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3501896 data_alloc: 234881024 data_used: 14909440
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 72351744 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:08.972473+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170426368 unmapped: 72278016 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:09.972777+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 72261632 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:10.973040+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 72261632 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d4e000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:11.973311+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 72261632 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:12.973527+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3501272 data_alloc: 234881024 data_used: 14876672
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 72261632 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:13.973657+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 72261632 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:14.973826+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 72261632 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:15.974015+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.616064072s of 10.034454346s, submitted: 33
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170442752 unmapped: 72261632 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:16.974186+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170450944 unmapped: 72253440 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:18.150057+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3500920 data_alloc: 234881024 data_used: 14876672
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170450944 unmapped: 72253440 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:19.150228+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170467328 unmapped: 72237056 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:20.150364+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 72220672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:21.150581+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 72220672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:22.150751+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 72220672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:23.150892+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3501880 data_alloc: 234881024 data_used: 14901248
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e999c00 session 0x559370bc92c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936ebee400 session 0x55936f353c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 72220672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:24.151688+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 72220672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:25.151912+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936be67400 session 0x55936d228d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 72220672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:26.152118+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 72220672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:27.152285+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 72220672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:28.152500+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3501656 data_alloc: 234881024 data_used: 14901248
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 72220672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:29.152692+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170483712 unmapped: 72220672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:30.152899+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.548484802s of 14.507252693s, submitted: 75
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936d32c000 session 0x55936ef7f680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170508288 unmapped: 72196096 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e435c00 session 0x559371158b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4d56000/0x0/0x4ffc00000, data 0x3ee98a2/0x41a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:31.153099+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:32.153336+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:33.153509+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3036960 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:34.153654+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:35.153806+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:36.153976+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:37.154140+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:38.154271+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3036960 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:39.154787+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:40.154933+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:41.155129+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:42.155307+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.395086288s of 12.514822960s, submitted: 33
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e999c00 session 0x55936efe0f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:43.155512+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e8c0400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e8c0400 session 0x55936c6154a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3036960 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:44.155737+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:45.155906+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:46.156070+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:47.156272+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:48.156442+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3036960 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:49.156826+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164151296 unmapped: 78553088 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:50.157027+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936be67400 session 0x55936d866960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 78282752 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:51.157275+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82b1000/0x0/0x4ffc00000, data 0x98e8a2/0xc4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 78282752 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:52.157449+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 78282752 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:53.157589+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3040446 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 78282752 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:54.157834+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 78282752 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:55.157994+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 78282752 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:56.158135+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 78282752 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:57.158309+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82b1000/0x0/0x4ffc00000, data 0x98e8a2/0xc4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 78282752 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:58.158490+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3040446 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 78282752 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:59.158633+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 78282752 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:00.158787+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 78282752 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:01.159065+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82b1000/0x0/0x4ffc00000, data 0x98e8a2/0xc4d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164421632 unmapped: 78282752 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.807395935s of 18.917888641s, submitted: 6
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936d32c000 session 0x55936ef7e960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:02.159230+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f82f1000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,2])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173088768 unmapped: 69615616 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:03.159464+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e435c00 session 0x55936f0e12c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250968 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:04.159660+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:05.159829+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:06.160145+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:07.160325+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f64aa000/0x0/0x4ffc00000, data 0x27958a2/0x2a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:08.160508+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f64aa000/0x0/0x4ffc00000, data 0x27958a2/0x2a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250968 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:09.160725+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:10.160864+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f64aa000/0x0/0x4ffc00000, data 0x27958a2/0x2a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:11.161046+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f64aa000/0x0/0x4ffc00000, data 0x27958a2/0x2a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:12.161198+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:13.161365+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250968 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:14.161550+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:15.161740+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f64aa000/0x0/0x4ffc00000, data 0x27958a2/0x2a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:16.161911+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:17.162043+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:18.762064+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250968 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e8c0400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.080049515s of 17.645441055s, submitted: 10
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e8c0400 session 0x55936bfc74a0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:19.762206+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f64aa000/0x0/0x4ffc00000, data 0x27958a2/0x2a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e999c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c596000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f64aa000/0x0/0x4ffc00000, data 0x27958a2/0x2a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:20.762641+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:21.762833+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:22.762949+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:23.763129+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3275157 data_alloc: 218103808 data_used: 5435392
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 164700160 unmapped: 78004224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:24.763260+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 165650432 unmapped: 77053952 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f64aa000/0x0/0x4ffc00000, data 0x27958a2/0x2a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:25.763372+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f64aa000/0x0/0x4ffc00000, data 0x27958a2/0x2a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 165650432 unmapped: 77053952 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:26.763559+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f64aa000/0x0/0x4ffc00000, data 0x27958a2/0x2a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 165650432 unmapped: 77053952 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:27.764333+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 165650432 unmapped: 77053952 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:28.764469+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3303477 data_alloc: 218103808 data_used: 9400320
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 165658624 unmapped: 77045760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f64aa000/0x0/0x4ffc00000, data 0x27958a2/0x2a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:29.764650+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 165658624 unmapped: 77045760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:30.764787+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 165658624 unmapped: 77045760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:31.765017+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 165658624 unmapped: 77045760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:32.765169+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 165658624 unmapped: 77045760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f64aa000/0x0/0x4ffc00000, data 0x27958a2/0x2a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:33.765358+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3303477 data_alloc: 218103808 data_used: 9400320
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 165658624 unmapped: 77045760 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.724617004s of 14.757602692s, submitted: 6
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:34.765504+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 174964736 unmapped: 67739648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:35.765659+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 175128576 unmapped: 67575808 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f516a000/0x0/0x4ffc00000, data 0x27958a2/0x2a54000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:36.765869+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 175448064 unmapped: 67256320 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:37.765984+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 175759360 unmapped: 66945024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4767000/0x0/0x4ffc00000, data 0x2d288a2/0x2fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:38.766214+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3373061 data_alloc: 218103808 data_used: 10575872
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 175759360 unmapped: 66945024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4767000/0x0/0x4ffc00000, data 0x2d288a2/0x2fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:39.766352+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 175759360 unmapped: 66945024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:40.766581+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 175759360 unmapped: 66945024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4767000/0x0/0x4ffc00000, data 0x2d288a2/0x2fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:41.766775+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 175759360 unmapped: 66945024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:42.766913+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 175759360 unmapped: 66945024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:43.767096+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3373061 data_alloc: 218103808 data_used: 10575872
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 175759360 unmapped: 66945024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.655580521s of 10.155440331s, submitted: 103
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:44.767276+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173899776 unmapped: 68804608 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e999c00 session 0x55936d228000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936c596000 session 0x55936d873c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:45.767378+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936be67400 session 0x55936c615a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173916160 unmapped: 68788224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:46.767557+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173916160 unmapped: 68788224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4967000/0x0/0x4ffc00000, data 0x2d288a2/0x2fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:47.767740+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173916160 unmapped: 68788224 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4967000/0x0/0x4ffc00000, data 0x2d288a2/0x2fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:48.767902+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3357777 data_alloc: 218103808 data_used: 10575872
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:49.768019+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:50.768139+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:51.768308+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:52.768528+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4967000/0x0/0x4ffc00000, data 0x2d288a2/0x2fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:53.768720+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3357777 data_alloc: 218103808 data_used: 10575872
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4967000/0x0/0x4ffc00000, data 0x2d288a2/0x2fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:54.768873+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:55.769033+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:56.769175+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:57.769833+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:58.770004+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3357777 data_alloc: 218103808 data_used: 10575872
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:59.770147+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4967000/0x0/0x4ffc00000, data 0x2d288a2/0x2fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:00.770309+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4967000/0x0/0x4ffc00000, data 0x2d288a2/0x2fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:01.770530+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:02.770666+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936d32c000 session 0x55936dbee3c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:03.770774+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3357777 data_alloc: 218103808 data_used: 10575872
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e435c00 session 0x55936dbeef00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:04.770907+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e8c0400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e8c0400 session 0x55936c0b3c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.738245010s of 20.857685089s, submitted: 20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936be67400 session 0x55936c0b2000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:05.772054+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c596000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:06.772782+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:07.772943+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173924352 unmapped: 68780032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:08.773124+0000)
Nov 22 04:25:13 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19327 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360255 data_alloc: 218103808 data_used: 10604544
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173940736 unmapped: 68763648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:09.774099+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173940736 unmapped: 68763648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:10.774256+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173940736 unmapped: 68763648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:11.774523+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173940736 unmapped: 68763648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:12.774690+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173940736 unmapped: 68763648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:13.774824+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360415 data_alloc: 218103808 data_used: 10629120
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173940736 unmapped: 68763648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:14.774959+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173940736 unmapped: 68763648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:15.775120+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173940736 unmapped: 68763648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:16.775233+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173940736 unmapped: 68763648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:17.775491+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173940736 unmapped: 68763648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:18.775622+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360415 data_alloc: 218103808 data_used: 10629120
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173940736 unmapped: 68763648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.185776711s of 14.190197945s, submitted: 1
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:19.775776+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172843008 unmapped: 69861376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:20.775971+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172859392 unmapped: 69844992 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:21.776152+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172859392 unmapped: 69844992 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:22.776291+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172859392 unmapped: 69844992 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:23.776596+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366735 data_alloc: 234881024 data_used: 11104256
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172859392 unmapped: 69844992 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:24.776736+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172949504 unmapped: 69754880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:25.776872+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172949504 unmapped: 69754880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:26.777051+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172949504 unmapped: 69754880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:27.777187+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172949504 unmapped: 69754880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:28.777378+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3367439 data_alloc: 234881024 data_used: 11100160
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172949504 unmapped: 69754880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:29.777518+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172949504 unmapped: 69754880 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:30.777651+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.469251633s of 11.514067650s, submitted: 6
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:31.778187+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:32.778339+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:33.778481+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366735 data_alloc: 234881024 data_used: 11100160
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:34.778607+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:35.778753+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:36.778874+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:37.778990+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:38.779146+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366575 data_alloc: 218103808 data_used: 11096064
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:39.779363+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:40.779553+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:41.779724+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:42.779887+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:43.780067+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3366575 data_alloc: 218103808 data_used: 11096064
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 69672960 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4966000/0x0/0x4ffc00000, data 0x2d288b2/0x2fe8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:44.780266+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173228032 unmapped: 69476352 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936c596000 session 0x55936f353860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936d32c000 session 0x55936c44cd20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.453919411s of 14.486362457s, submitted: 5
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e435c00 session 0x55936f3523c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:45.780400+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173228032 unmapped: 69476352 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:46.780574+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173228032 unmapped: 69476352 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f4967000/0x0/0x4ffc00000, data 0x2d288a2/0x2fe7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:47.780763+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173228032 unmapped: 69476352 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:48.780976+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3374170 data_alloc: 234881024 data_used: 12460032
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173228032 unmapped: 69476352 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:49.781187+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173228032 unmapped: 69476352 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:50.781404+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 173228032 unmapped: 69476352 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e884800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 ms_handle_reset con 0x55936e884800 session 0x55936f3532c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:51.781693+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:52.782008+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:53.782170+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053810 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:54.782361+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:55.782549+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:56.782688+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:57.782867+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:58.783171+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053810 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:59.783306+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:00.784027+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:01.784211+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:02.784470+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:03.784693+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053810 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:04.784873+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:05.785012+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:06.785188+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:07.785489+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:08.785874+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053810 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:09.786042+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:10.786273+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:11.786491+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:12.786653+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:13.786815+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053810 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:14.786958+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:15.787115+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:16.787330+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:17.787525+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:18.787708+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053810 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:19.787844+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:20.787984+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:21.788212+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:22.788476+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:23.788622+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053810 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:24.788781+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:25.788967+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:26.789119+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:27.789269+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:28.789471+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053810 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:29.789619+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:30.789797+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:31.790046+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:32.790285+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:33.790512+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053810 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:34.790656+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:35.790891+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:36.791114+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:37.791296+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 heartbeat osd_stat(store_statfs(0x4f6d41000/0x0/0x4ffc00000, data 0x94e8a2/0xc0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:38.791517+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053810 data_alloc: 218103808 data_used: 2129920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:39.791767+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 522 handle_osd_map epochs [523,523], i have 522, src has [1,523]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 54.457309723s of 54.526065826s, submitted: 14
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936be67400 session 0x55936fc1f680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:40.791902+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c596000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:41.792121+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936c596000 session 0x55936fc1f860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f6d3e000/0x0/0x4ffc00000, data 0x95041f/0xc10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:42.792272+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:43.792397+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058601 data_alloc: 218103808 data_used: 2138112
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:44.792568+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:45.792716+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:46.792946+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:47.793084+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f6d3e000/0x0/0x4ffc00000, data 0x95041f/0xc10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:48.793222+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058601 data_alloc: 218103808 data_used: 2138112
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936d32c000 session 0x55936d867860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169320448 unmapped: 73383936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936e435c00 session 0x55936cad30e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d7c5800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936d7c5800 session 0x55936e6ad860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936be67400 session 0x55936e6ac780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c596000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:49.793397+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936c596000 session 0x55936ef24b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936d32c000 session 0x559371159e00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936e435c00 session 0x55936d368f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e834c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936e834c00 session 0x55936ef7e780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936be67400 session 0x55936d867860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169828352 unmapped: 72876032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:50.793526+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169828352 unmapped: 72876032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:51.793717+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169828352 unmapped: 72876032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:52.793876+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169828352 unmapped: 72876032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:53.794077+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126217 data_alloc: 218103808 data_used: 2138112
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f649c000/0x0/0x4ffc00000, data 0x11f241f/0x14b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169828352 unmapped: 72876032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:54.794280+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169828352 unmapped: 72876032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:55.794555+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169828352 unmapped: 72876032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:56.794704+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169828352 unmapped: 72876032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c596000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936c596000 session 0x55936fc1f860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:57.794869+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f649c000/0x0/0x4ffc00000, data 0x11f241f/0x14b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936d32c000 session 0x55936fc1f680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169828352 unmapped: 72876032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:58.795034+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126217 data_alloc: 218103808 data_used: 2138112
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936e435c00 session 0x55936f3532c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ebed800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169828352 unmapped: 72876032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:59.795256+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936ebed800 session 0x55936f3523c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169828352 unmapped: 72876032 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c596000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:00.795378+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 169869312 unmapped: 72835072 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:01.795569+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172122112 unmapped: 70582272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:02.795782+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172122112 unmapped: 70582272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:03.795941+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187657 data_alloc: 218103808 data_used: 10833920
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f649c000/0x0/0x4ffc00000, data 0x11f241f/0x14b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172122112 unmapped: 70582272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:04.796213+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172122112 unmapped: 70582272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:05.796381+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172122112 unmapped: 70582272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f649c000/0x0/0x4ffc00000, data 0x11f241f/0x14b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:06.796749+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172122112 unmapped: 70582272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:07.797079+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172122112 unmapped: 70582272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:08.797623+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187817 data_alloc: 218103808 data_used: 10838016
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172122112 unmapped: 70582272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:09.797863+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172122112 unmapped: 70582272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:10.797993+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 172122112 unmapped: 70582272 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:11.798466+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f649c000/0x0/0x4ffc00000, data 0x11f241f/0x14b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.603864670s of 31.928466797s, submitted: 22
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 176226304 unmapped: 66478080 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:12.798972+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177102848 unmapped: 65601536 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:13.799094+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3264291 data_alloc: 234881024 data_used: 11710464
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177102848 unmapped: 65601536 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:14.799253+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f4b56000/0x0/0x4ffc00000, data 0x198a41f/0x1c4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177102848 unmapped: 65601536 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:15.799719+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177102848 unmapped: 65601536 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:16.800028+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177102848 unmapped: 65601536 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:17.800257+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177102848 unmapped: 65601536 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:18.800396+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3258619 data_alloc: 234881024 data_used: 11710464
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177381376 unmapped: 65323008 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:19.800936+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f4b42000/0x0/0x4ffc00000, data 0x19ac41f/0x1c6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177381376 unmapped: 65323008 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:20.801146+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177381376 unmapped: 65323008 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:21.801346+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f4b42000/0x0/0x4ffc00000, data 0x19ac41f/0x1c6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177381376 unmapped: 65323008 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:22.801570+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177381376 unmapped: 65323008 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:23.801883+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f4b42000/0x0/0x4ffc00000, data 0x19ac41f/0x1c6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3258939 data_alloc: 234881024 data_used: 11718656
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.272868156s of 12.611395836s, submitted: 85
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177389568 unmapped: 65314816 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:24.802140+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177389568 unmapped: 65314816 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:25.802390+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 177389568 unmapped: 65314816 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:26.802645+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 178438144 unmapped: 64266240 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:27.802790+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936d32c000 session 0x55936c6150e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 178438144 unmapped: 64266240 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:28.802965+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f4b3d000/0x0/0x4ffc00000, data 0x19b141f/0x1c71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f4b3d000/0x0/0x4ffc00000, data 0x19b141f/0x1c71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3259167 data_alloc: 234881024 data_used: 11718656
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936be67400 session 0x55936c44cd20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936c596000 session 0x55936d2292c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 178438144 unmapped: 64266240 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:29.803095+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936e435c00 session 0x55936f0f7c20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171663360 unmapped: 71041024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:30.803282+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171663360 unmapped: 71041024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:31.803501+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171663360 unmapped: 71041024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:32.803715+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b9e000/0x0/0x4ffc00000, data 0x95041f/0xc10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171663360 unmapped: 71041024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:33.803934+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b9e000/0x0/0x4ffc00000, data 0x95041f/0xc10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3066969 data_alloc: 218103808 data_used: 2142208
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171663360 unmapped: 71041024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:34.804156+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b9e000/0x0/0x4ffc00000, data 0x95041f/0xc10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936ebed800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.954183578s of 10.992171288s, submitted: 12
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936ebed800 session 0x559370bc8780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171663360 unmapped: 71041024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:35.804314+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936be67400 session 0x55936c0b21e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b9e000/0x0/0x4ffc00000, data 0x95041f/0xc10000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171671552 unmapped: 71032832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:36.804492+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c596000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936c596000 session 0x55936e6a4d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 71024640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:37.804654+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b9d000/0x0/0x4ffc00000, data 0x95042f/0xc11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 71024640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:38.804833+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3068221 data_alloc: 218103808 data_used: 2142208
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:39.804959+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 71024640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:40.805089+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 71024640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:41.805296+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 71024640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:42.805495+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 71024640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b9d000/0x0/0x4ffc00000, data 0x95042f/0xc11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:43.805680+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 71024640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3068221 data_alloc: 218103808 data_used: 2142208
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:44.805869+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 71024640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b9d000/0x0/0x4ffc00000, data 0x95042f/0xc11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:45.806054+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 71024640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:46.806172+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 71024640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b9d000/0x0/0x4ffc00000, data 0x95042f/0xc11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:47.806338+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 71024640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:48.806468+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 71024640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b9d000/0x0/0x4ffc00000, data 0x95042f/0xc11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3068221 data_alloc: 218103808 data_used: 2142208
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:49.806743+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.072954178s of 14.123879433s, submitted: 17
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171679744 unmapped: 71024640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936d32c000 session 0x55936f0e01e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936e435c00 session 0x55936fe38b40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:50.806881+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170885120 unmapped: 71819264 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:51.807065+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170885120 unmapped: 71819264 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:52.807217+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170885120 unmapped: 71819264 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:53.807363+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170885120 unmapped: 71819264 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3068045 data_alloc: 218103808 data_used: 2142208
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b9d000/0x0/0x4ffc00000, data 0x95042f/0xc11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:54.807507+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170885120 unmapped: 71819264 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:55.807637+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170885120 unmapped: 71819264 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c422c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:56.807793+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171147264 unmapped: 71557120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b5d000/0x0/0x4ffc00000, data 0x99042f/0xc51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b5d000/0x0/0x4ffc00000, data 0x99042f/0xc51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [0,0,0,0,0,2])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936c422c00 session 0x559370865a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:57.808393+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171147264 unmapped: 71557120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b5d000/0x0/0x4ffc00000, data 0x99042f/0xc51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:58.808515+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171147264 unmapped: 71557120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073028 data_alloc: 218103808 data_used: 2142208
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:59.808716+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171147264 unmapped: 71557120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:00.808859+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171147264 unmapped: 71557120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:01.809051+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171147264 unmapped: 71557120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:02.809534+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171147264 unmapped: 71557120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:03.809750+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171147264 unmapped: 71557120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b5d000/0x0/0x4ffc00000, data 0x99042f/0xc51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073028 data_alloc: 218103808 data_used: 2142208
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:04.809929+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171147264 unmapped: 71557120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:05.810123+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171147264 unmapped: 71557120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f5b5d000/0x0/0x4ffc00000, data 0x99042f/0xc51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x944f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:06.810338+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171147264 unmapped: 71557120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:07.810536+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171147264 unmapped: 71557120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:08.810695+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 171147264 unmapped: 71557120 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.124326706s of 19.426313400s, submitted: 11
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936be67400 session 0x55936c0b2000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c596000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936c596000 session 0x55936d1c4d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3073122 data_alloc: 218103808 data_used: 2142208
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:09.810844+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 182771712 unmapped: 59932672 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936d32c000 session 0x55936dbe7a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936e435c00 session 0x55936dbeef00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:10.810983+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170213376 unmapped: 72491008 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479c000/0x0/0x4ffc00000, data 0x2d90459/0x3052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479c000/0x0/0x4ffc00000, data 0x2d90459/0x3052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:11.811268+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170213376 unmapped: 72491008 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479c000/0x0/0x4ffc00000, data 0x2d90491/0x3052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:12.811508+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170213376 unmapped: 72491008 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:13.811677+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170213376 unmapped: 72491008 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3329500 data_alloc: 218103808 data_used: 2142208
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:14.811809+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479c000/0x0/0x4ffc00000, data 0x2d90491/0x3052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170229760 unmapped: 72474624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:15.811962+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170229760 unmapped: 72474624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:16.812206+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170229760 unmapped: 72474624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479c000/0x0/0x4ffc00000, data 0x2d90491/0x3052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:17.812355+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170229760 unmapped: 72474624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479c000/0x0/0x4ffc00000, data 0x2d90491/0x3052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:18.812499+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170229760 unmapped: 72474624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3329500 data_alloc: 218103808 data_used: 2142208
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:19.812672+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170229760 unmapped: 72474624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479c000/0x0/0x4ffc00000, data 0x2d90491/0x3052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:20.812828+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170229760 unmapped: 72474624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479c000/0x0/0x4ffc00000, data 0x2d90491/0x3052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:21.813063+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170229760 unmapped: 72474624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55937164c800
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55937164c800 session 0x55936fe38d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479c000/0x0/0x4ffc00000, data 0x2d90491/0x3052000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:22.813202+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170229760 unmapped: 72474624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:23.813408+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170229760 unmapped: 72474624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936be67400 session 0x55936d1c4000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c596000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936c596000 session 0x559371158780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3329500 data_alloc: 218103808 data_used: 2142208
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:24.813602+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170229760 unmapped: 72474624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.893125534s of 15.805371284s, submitted: 73
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936d32c000 session 0x55936d868d20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:25.813739+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 72441856 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479b000/0x0/0x4ffc00000, data 0x2d904a1/0x3053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:26.813874+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32ac00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 72441856 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:27.814026+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 72441856 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479b000/0x0/0x4ffc00000, data 0x2d904a1/0x3053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:28.814173+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 72441856 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3333275 data_alloc: 218103808 data_used: 2408448
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:29.814325+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 170262528 unmapped: 72441856 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:30.814466+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479b000/0x0/0x4ffc00000, data 0x2d904a1/0x3053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 66379776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479b000/0x0/0x4ffc00000, data 0x2d904a1/0x3053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:31.814631+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 66379776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:32.814875+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 66379776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:33.815095+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 66379776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3430715 data_alloc: 234881024 data_used: 16158720
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:34.815353+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 66379776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:35.815543+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 66379776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479b000/0x0/0x4ffc00000, data 0x2d904a1/0x3053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f479b000/0x0/0x4ffc00000, data 0x2d904a1/0x3053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:36.815721+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 66379776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:37.815944+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 66379776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:38.816143+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 66379776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3430715 data_alloc: 234881024 data_used: 16158720
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:39.816346+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 176324608 unmapped: 66379776 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.150213242s of 15.610229492s, submitted: 8
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:40.816496+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187031552 unmapped: 55672832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:41.816683+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f441b000/0x0/0x4ffc00000, data 0x2d904a1/0x3053000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 188219392 unmapped: 54484992 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:42.816852+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 185950208 unmapped: 56754176 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:43.817019+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 185950208 unmapped: 56754176 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554815 data_alloc: 234881024 data_used: 17448960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:44.817226+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 185950208 unmapped: 56754176 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:45.817383+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 185982976 unmapped: 56721408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f3901000/0x0/0x4ffc00000, data 0x3c2a4a1/0x3eed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:46.817556+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 185982976 unmapped: 56721408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:47.817743+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 185982976 unmapped: 56721408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:48.817913+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f3901000/0x0/0x4ffc00000, data 0x3c2a4a1/0x3eed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 185982976 unmapped: 56721408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554815 data_alloc: 234881024 data_used: 17448960
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:49.818069+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 185982976 unmapped: 56721408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f3901000/0x0/0x4ffc00000, data 0x3c2a4a1/0x3eed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:50.818214+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 185982976 unmapped: 56721408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f3901000/0x0/0x4ffc00000, data 0x3c2a4a1/0x3eed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:51.818383+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.825826645s of 11.371797562s, submitted: 114
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 56713216 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:52.818531+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 56713216 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f3901000/0x0/0x4ffc00000, data 0x3c2a4a1/0x3eed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:53.818661+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 185991168 unmapped: 56713216 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:54.818796+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3552479 data_alloc: 234881024 data_used: 17453056
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 185999360 unmapped: 56705024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f3901000/0x0/0x4ffc00000, data 0x3c2a4a1/0x3eed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936e435c00 session 0x55936f0f6000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936d32ac00 session 0x559371159680
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 ms_handle_reset con 0x55936be67400 session 0x55936ef7ed20
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:55.818955+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 56696832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:56.819089+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 56696832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:57.819217+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 56696832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:58.819369+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 56696832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 heartbeat osd_stat(store_statfs(0x4f3902000/0x0/0x4ffc00000, data 0x3c2a491/0x3eec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:59.819562+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3548094 data_alloc: 234881024 data_used: 17453056
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 56696832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:00.819716+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 56696832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:01.819906+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 56696832 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c596000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.583123207s of 10.352546692s, submitted: 34
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:02.820105+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186015744 unmapped: 56688640 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _renew_subs
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 523 handle_osd_map epochs [524,524], i have 523, src has [1,524]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:03.820312+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 ms_handle_reset con 0x55936c596000 session 0x55936f0f6000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 ms_handle_reset con 0x55936d32c000 session 0x559370bc8780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:04.826740+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554363 data_alloc: 234881024 data_used: 17457152
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fe000/0x0/0x4ffc00000, data 0x3c2c070/0x3ef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:05.827064+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:06.827333+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:07.827563+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:08.827924+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:09.828183+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554523 data_alloc: 234881024 data_used: 17461248
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fe000/0x0/0x4ffc00000, data 0x3c2c070/0x3ef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:10.828385+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:11.828678+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fe000/0x0/0x4ffc00000, data 0x3c2c070/0x3ef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:12.828901+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:13.829063+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:14.829203+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554523 data_alloc: 234881024 data_used: 17461248
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:15.829391+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 ms_handle_reset con 0x55936e435c00 session 0x55936f3532c0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:16.829520+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186040320 unmapped: 56664064 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fe000/0x0/0x4ffc00000, data 0x3c2c070/0x3ef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d7c6000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 ms_handle_reset con 0x55936d7c6000 session 0x55936fc1f860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:17.829673+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186056704 unmapped: 56647680 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 ms_handle_reset con 0x55936be67400 session 0x55936d867860
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c596000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.650804520s of 16.239063263s, submitted: 21
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 ms_handle_reset con 0x55936c596000 session 0x55936ef7e780
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:18.829787+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 186056704 unmapped: 56647680 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fd000/0x0/0x4ffc00000, data 0x3c2c093/0x3ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:19.830063+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558225 data_alloc: 234881024 data_used: 17469440
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187146240 unmapped: 55558144 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:20.830219+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187146240 unmapped: 55558144 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:21.830417+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187146240 unmapped: 55558144 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:22.830603+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187179008 unmapped: 55525376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:23.830768+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187179008 unmapped: 55525376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:24.830945+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559345 data_alloc: 234881024 data_used: 17596416
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187179008 unmapped: 55525376 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fd000/0x0/0x4ffc00000, data 0x3c2c093/0x3ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:25.831057+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fd000/0x0/0x4ffc00000, data 0x3c2c093/0x3ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 55492608 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:26.831167+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 55492608 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fd000/0x0/0x4ffc00000, data 0x3c2c093/0x3ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:27.831265+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 55492608 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fd000/0x0/0x4ffc00000, data 0x3c2c093/0x3ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fd000/0x0/0x4ffc00000, data 0x3c2c093/0x3ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:28.831538+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 55492608 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:29.831681+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559345 data_alloc: 234881024 data_used: 17596416
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 55492608 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:30.831848+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 55492608 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:31.832015+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 55492608 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:32.832220+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.398906708s of 14.444002151s, submitted: 12
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 54239232 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:33.832377+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f2cfd000/0x0/0x4ffc00000, data 0x482c093/0x4af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 189685760 unmapped: 53018624 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:34.832694+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3676269 data_alloc: 234881024 data_used: 23511040
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 52871168 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:35.832838+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 52871168 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:36.833058+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 52871168 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:37.833273+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 52871168 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:38.833477+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 190513152 unmapped: 52191232 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:39.833651+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3706045 data_alloc: 234881024 data_used: 23515136
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f28fd000/0x0/0x4ffc00000, data 0x4c2c093/0x4ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 190513152 unmapped: 52191232 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:40.833875+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 190513152 unmapped: 52191232 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:41.834318+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 190513152 unmapped: 52191232 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:42.834498+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f28fd000/0x0/0x4ffc00000, data 0x4c2c093/0x4ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 190513152 unmapped: 52191232 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:43.834666+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 190513152 unmapped: 52191232 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:44.834913+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.217733383s of 11.776284218s, submitted: 17
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3704461 data_alloc: 234881024 data_used: 23515136
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 189005824 unmapped: 53698560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:45.835053+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 189005824 unmapped: 53698560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f28fd000/0x0/0x4ffc00000, data 0x4c2c093/0x4ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:46.835209+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 189005824 unmapped: 53698560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:47.835631+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 189005824 unmapped: 53698560 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:48.835784+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193028096 unmapped: 49676288 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:49.835951+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3716125 data_alloc: 234881024 data_used: 27578368
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193110016 unmapped: 49594368 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:50.836110+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193110016 unmapped: 49594368 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f28fd000/0x0/0x4ffc00000, data 0x4c2c093/0x4ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:51.836311+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193110016 unmapped: 49594368 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:52.836505+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193110016 unmapped: 49594368 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:53.836646+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193110016 unmapped: 49594368 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:54.836921+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3714541 data_alloc: 234881024 data_used: 27578368
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193110016 unmapped: 49594368 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f28fd000/0x0/0x4ffc00000, data 0x4c2c093/0x4ef1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:55.837823+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193110016 unmapped: 49594368 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:56.838030+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193110016 unmapped: 49594368 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:57.838334+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193110016 unmapped: 49594368 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:58.838591+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193110016 unmapped: 49594368 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:59.838905+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.065845490s of 15.164588928s, submitted: 15
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 ms_handle_reset con 0x55936d32c000 session 0x55936cad30e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 ms_handle_reset con 0x55936e435c00 session 0x55936d2290e0
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3714633 data_alloc: 234881024 data_used: 27598848
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193118208 unmapped: 49586176 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e994c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 ms_handle_reset con 0x55936e994c00 session 0x55936d718f00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:00.839110+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f28fd000/0x0/0x4ffc00000, data 0x4c2c070/0x4ef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193126400 unmapped: 49577984 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:01.839288+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193126400 unmapped: 49577984 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:02.839613+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193126400 unmapped: 49577984 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f28fd000/0x0/0x4ffc00000, data 0x4c2c070/0x4ef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:03.839797+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936be67400
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 ms_handle_reset con 0x55936be67400 session 0x55936f353a40
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936c596000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193134592 unmapped: 49569792 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 ms_handle_reset con 0x55936c596000 session 0x55936c44c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:04.839991+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3590689 data_alloc: 234881024 data_used: 26222592
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fe000/0x0/0x4ffc00000, data 0x3c2c070/0x3ef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193142784 unmapped: 49561600 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fe000/0x0/0x4ffc00000, data 0x3c2c070/0x3ef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:05.840304+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936d32c000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 ms_handle_reset con 0x55936d32c000 session 0x55936ef24000
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: handle_auth_request added challenge on 0x55936e435c00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193142784 unmapped: 49561600 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 ms_handle_reset con 0x55936e435c00 session 0x55936ef7ef00
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:06.840483+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193142784 unmapped: 49561600 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:07.840771+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193142784 unmapped: 49561600 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fe000/0x0/0x4ffc00000, data 0x3c2c070/0x3ef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:08.841006+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193142784 unmapped: 49561600 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:09.841199+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3590849 data_alloc: 234881024 data_used: 26226688
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:10.841361+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:11.841518+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:12.841654+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:13.841805+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fe000/0x0/0x4ffc00000, data 0x3c2c070/0x3ef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:14.841976+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3590849 data_alloc: 234881024 data_used: 26226688
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:15.842139+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:16.842328+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:17.842472+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fe000/0x0/0x4ffc00000, data 0x3c2c070/0x3ef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:18.842629+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:19.842819+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3590849 data_alloc: 234881024 data_used: 26226688
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:20.842982+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:21.843190+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:22.843368+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: osd.2 524 heartbeat osd_stat(store_statfs(0x4f38fe000/0x0/0x4ffc00000, data 0x3c2c070/0x3ef0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x840f9c6), peers [0,1] op hist [])
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:23.843527+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193150976 unmapped: 49553408 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:24.843669+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:13 compute-0 ceph-osd[90752]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:13 compute-0 ceph-osd[90752]: bluestore.MempoolThread(0x55936b181b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3590849 data_alloc: 234881024 data_used: 26226688
[89 near-identical ceph-osd[90752] lines elided: the monclient tick / _check_auth_tickets / _check_auth_rotating cycle repeats once per second (rotating-key expiries 2025-11-22T04:23:25 through 04:23:43), interleaved with identical "prioritycache tune_memory" lines (mapped: 193150976), four identical osd.2 heartbeat lines, and three repeats of the rocksdb / bluestore.MempoolThread block above.]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193159168 unmapped: 49545216 heap: 242704384 old mem: 2845415832 new mem: 2845415832
[55 more lines elided: the same one-second cycle, expiries 2025-11-22T04:23:44 through 04:23:54, with three identical osd.2 heartbeats and three identical rocksdb / MempoolThread blocks; tune_memory repeats the mapped: 193159168 figure above.]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193167360 unmapped: 49537024 heap: 242704384 old mem: 2845415832 new mem: 2845415832
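mapped has crept up by a few pages over the block (193150976 → 193159168 → 193167360) while the target and the tuner's answer never move; mapped and unmapped are tcmalloc heap statistics and always sum to the reported heap:

    # tcmalloc heap accounting from the tune_memory lines: the mapped and
    # unmapped figures always sum to the reported heap size.
    mapped, unmapped, heap = 193167360, 49537024, 242704384
    assert mapped + unmapped == heap
    target = 4294967296  # osd_memory_target (4 GiB)
    print(f"heap {heap / 2**20:.0f} MiB vs target {target / 2**30:.0f} GiB; "
          f"{unmapped / 2**20:.0f} MiB of freed pages already returned to the OS")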
[223 more lines elided: the cycle continues, expiries 2025-11-22T04:23:55 through 04:24:39, with seventeen identical osd.2 heartbeats and nine identical rocksdb / MempoolThread blocks; tune_memory repeats mapped: 193167360 throughout.]
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193396736 unmapped: 49307648 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:40.856601+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: do_command 'config diff' '{prefix=config diff}'
Nov 22 04:25:13 compute-0 ceph-osd[90752]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 04:25:13 compute-0 ceph-osd[90752]: do_command 'config show' '{prefix=config show}'
Nov 22 04:25:13 compute-0 ceph-osd[90752]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 04:25:13 compute-0 ceph-osd[90752]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 04:25:13 compute-0 ceph-osd[90752]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 04:25:13 compute-0 ceph-osd[90752]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 04:25:13 compute-0 ceph-osd[90752]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
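The do_command pairs record the OSD servicing admin-socket requests, logging each command and its reply size. The same queries can be run from the host; a sketch assuming the ceph CLI (e.g., inside a cephadm shell) can reach osd.2's admin socket:

    import json
    import subprocess

    def daemon_cmd(daemon, *cmd):
        """Issue an admin-socket command via the ceph CLI (the same path
        the do_command entries above are servicing)."""
        out = subprocess.run(["ceph", "daemon", daemon, *cmd],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    diff = daemon_cmd("osd.2", "config", "diff")       # settings changed from defaults
    counters = daemon_cmd("osd.2", "counter", "dump")  # perf-counter dump (recent releases)
    print(f"{len(diff)} diff entries, {len(counters)} counter groups")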
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193773568 unmapped: 48930816 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:41.856761+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: prioritycache tune_memory target: 4294967296 mapped: 193896448 unmapped: 48807936 heap: 242704384 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: tick
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_tickets
Nov 22 04:25:13 compute-0 ceph-osd[90752]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:42.856916+0000)
Nov 22 04:25:13 compute-0 ceph-osd[90752]: do_command 'log dump' '{prefix=log dump}'
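'log dump' asks the daemon to write its recent in-memory log entries out through the normal log path, which would explain the timestamp pattern of this whole block: roughly 80 seconds of internal activity landing in the journal within a single second:

    from datetime import datetime
    # First and last rotating-key expiries embedded in the flushed block.
    first = datetime.fromisoformat("2025-11-22T04:23:22.843368+00:00")
    last = datetime.fromisoformat("2025-11-22T04:24:42.856916+00:00")
    print(f"buffered window: {(last - first).total_seconds():.0f} s of debug lines, "
          "all stamped Nov 22 04:25:13 on flush")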
Nov 22 04:25:13 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Nov 22 04:25:13 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/146308894' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 04:25:14 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19331 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Nov 22 04:25:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1780819068' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 04:25:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
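The monitor runs the same priority-cache tuner; here the leader's OSDMonitor splits a ~0.95 GiB cache three ways. Reading inc_alloc/full_alloc as the incremental- and full-osdmap caches (my interpretation; kv_alloc is RocksDB):

    # Split of the mon's cache from the _set_new_cache_sizes line above.
    cache = 1020054731
    parts = {"inc_alloc": 343932928, "full_alloc": 348127232, "kv_alloc": 318767104}
    for name, nbytes in parts.items():
        print(f"{name:10s} {nbytes / 2**20:4.0f} MiB ({nbytes / cache:.0%})")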
Nov 22 04:25:14 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19335 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:14 compute-0 nova_compute[253461]: 2025-11-22 04:25:14.542 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:14 compute-0 ceph-mon[75011]: from='client.19323 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3672681399' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Nov 22 04:25:14 compute-0 ceph-mon[75011]: from='client.19327 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/146308894' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Nov 22 04:25:14 compute-0 ceph-mon[75011]: from='client.19331 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:14 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1780819068' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Nov 22 04:25:14 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19339 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:14 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Nov 22 04:25:14 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/171367018' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 04:25:15 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
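pgmap v2333 is the mgr's cluster-wide digest: all 305 placement groups active+clean across the three ~20 GiB OSDs decoded earlier. Quick arithmetic on the reported figures (the ~2.4x raw-to-logical ratio is plausible for replicated pools plus metadata, though the pool layout isn't shown here):

    # Numbers from the pgmap line: 271 MiB data, 654 MiB used, 60 GiB total.
    data_mib, used_mib, total_gib = 271, 654, 60
    print(f"raw used: {used_mib / (total_gib * 1024):.2%} of the cluster")
    print(f"amplification: {used_mib / data_mib:.2f}x raw bytes per logical byte")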
Nov 22 04:25:15 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19341 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Nov 22 04:25:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/255322193' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 04:25:15 compute-0 podman[312853]: 2025-11-22 04:25:15.410544121 +0000 UTC m=+0.084924434 container health_status 66b4dc3bf3c89dc4e1e08ee00dff75d14dcca88f14b5077d745ba02e2aeab706 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
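The podman event records a timer-driven healthcheck pass for the multipathd container: health_status=healthy with a zero failing streak, using the /openstack/healthcheck test named in its config_data. The same probe can be triggered on demand (assumes podman on the host):

    import subprocess
    # Run the container's configured healthcheck once, outside the timer.
    r = subprocess.run(["podman", "healthcheck", "run", "multipathd"],
                       capture_output=True, text=True)
    print("healthy" if r.returncode == 0 else f"unhealthy: {r.stdout or r.stderr}")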
Nov 22 04:25:15 compute-0 nova_compute[253461]: 2025-11-22 04:25:15.429 253465 DEBUG oslo_service.periodic_task [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 22 04:25:15 compute-0 nova_compute[253461]: 2025-11-22 04:25:15.430 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 22 04:25:15 compute-0 nova_compute[253461]: 2025-11-22 04:25:15.439 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:15 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Nov 22 04:25:15 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654836685' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 22 04:25:15 compute-0 ceph-mon[75011]: from='client.19335 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:15 compute-0 ceph-mon[75011]: from='client.19339 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/171367018' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Nov 22 04:25:15 compute-0 ceph-mon[75011]: pgmap v2333: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:15 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/255322193' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Nov 22 04:25:15 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19349 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:15 compute-0 ceph-mgr[75294]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Nov 22 04:25:15 compute-0 ceph-7adcc38b-6484-5de6-b879-33a0309153df-mgr-compute-0-wbwfxq[75290]: 2025-11-22T04:25:15.976+0000 7fd3dfb26640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
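The mgr log and the container's stderr carry the same refusal twice: 'healthcheck history ls' is served by the prometheus mgr module, which is not loaded, and the error text names the fix itself. A sketch assuming an admin keyring on this host:

    import json
    import subprocess

    def ceph(*args):
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    ceph("mgr", "module", "enable", "prometheus")  # the command quoted in the log above
    mods = json.loads(ceph("mgr", "module", "ls", "--format", "json"))
    assert "prometheus" in mods["enabled_modules"]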
Nov 22 04:25:16 compute-0 nova_compute[253461]: 2025-11-22 04:25:16.202 253465 DEBUG nova.compute.manager [None req-016c3498-10b2-4705-9bf3-3d475e01d08f - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
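req-016c3498-10b2-4705-9bf3-3d475e01d08f ties the nova-compute lines together: the _run_pending_deletes periodic task began at 04:25:15.429, queried for deleted instances awaiting cleanup, and finished here with none found. Nova drives this through oslo.service periodic tasks; a hand-rolled, hypothetical stand-in for that pattern (the decorator name and spacing are illustrative, not nova's actual API):

    import threading
    import time

    def every(spacing):
        """Hypothetical stand-in for oslo.service periodic tasks: run the
        decorated function every `spacing` seconds on a daemon thread."""
        def wrap(fn):
            def loop():
                while True:
                    fn()
                    time.sleep(spacing)
            threading.Thread(target=loop, daemon=True).start()
            return fn
        return wrap

    @every(spacing=600)
    def _run_pending_deletes():
        instances = []  # nova would query deleted-but-uncleaned instances here
        print(f"There are {len(instances)} instances to clean")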
Nov 22 04:25:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Nov 22 04:25:16 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/515835157' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 22 04:25:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Nov 22 04:25:16 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4006845599' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 22 04:25:16 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Nov 22 04:25:16 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/464182847' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Nov 22 04:25:16 compute-0 ceph-mon[75011]: from='client.19341 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3654836685' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Nov 22 04:25:16 compute-0 ceph-mon[75011]: from='client.19349 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/515835157' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Nov 22 04:25:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4006845599' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Nov 22 04:25:16 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/464182847' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
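Every audit line in this burst is the monitor dispatching a JSON "prefix" command, either handled by the mon itself (mon stat, node ls, mgr dump) or forwarded to the active mgr when the command carries "target": ["mon-mgr", ""] (balancer status, orch upgrade status). The same wire format is reachable from Python through the librados binding; a hedged sketch, assuming python3-rados is installed and /etc/ceph/ceph.conf plus an admin keyring exist on the calling host:

    # Send the same JSON commands the audit log shows being dispatched.
    # Assumes python3-rados and local admin credentials (hypothetical here).
    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        # Plain mon command, as in cmd=[{"prefix": "mon stat"}].
        ret, outbuf, errs = cluster.mon_command(
            json.dumps({"prefix": "mon stat"}), b"")
        print(ret, outbuf.decode())

        # mgr-targeted command, mirroring "target": ["mon-mgr", ""].
        ret, outbuf, errs = cluster.mgr_command(
            json.dumps({"prefix": "balancer status",
                        "format": "json-pretty"}), b"")
        print(ret, outbuf.decode())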
Nov 22 04:25:17 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Nov 22 04:25:17 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1557668744' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 22 04:25:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Nov 22 04:25:17 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/587679225' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 22 04:25:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Nov 22 04:25:17 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2632540742' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 22 04:25:17 compute-0 crontab[313207]: (root) LIST (root)
Nov 22 04:25:17 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Nov 22 04:25:17 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1933185865' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 22 04:25:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Nov 22 04:25:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3251852599' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:45.303730+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 285 heartbeat osd_stat(store_statfs(0x4f80f6000/0x0/0x4ffc00000, data 0x263334a/0x27c7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
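The osd_stat heartbeat encodes store_statfs as hex byte counts, which read as available / internally reserved / total. Decoding the first heartbeat above shows why the cluster-wide pgmap earlier reports roughly 59 GiB free of 60 GiB across the three OSDs; a quick check of the arithmetic:

    # Decode the store_statfs fields from the heartbeat line above.
    avail = 0x4F80F6000      # bytes available on this OSD
    total = 0x4FFC00000      # bytes total on this OSD
    gib = 1 << 30
    print(f"total {total / gib:.2f} GiB, avail {avail / gib:.2f} GiB")
    # -> total 20.00 GiB, avail 19.88 GiB; three such OSDs account for
    #    the "59 GiB / 60 GiB avail" summary in pgmap v2333 above.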
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 286 ms_handle_reset con 0x5609544e6c00 session 0x560950de2960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130154496 unmapped: 48816128 heap: 178970624 old mem: 2845415832 new mem: 2845415832
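The tune_memory lines are the OSD's priority-cache manager comparing its configured memory target against the allocator's heap statistics (mapped/unmapped/heap) and adjusting the aggregate cache budget it hands out ("old mem"/"new mem"). In this trace the heap sits far below the target, so the budget never moves. Decoding one such line, with the field interpretation above taken as a working assumption:

    # Decode one prioritycache tune_memory line from the trace above.
    mib = 1 << 20
    target = 4294967296      # 4096 MiB memory target for this OSD
    mapped = 130154496       # ~124 MiB actually mapped by the heap
    cache  = 2845415832      # ~2714 MiB cache budget, left unchanged
    print(f"target {target / mib:.0f} MiB, mapped {mapped / mib:.0f} MiB, "
          f"cache {cache / mib:.0f} MiB")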
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:46.303927+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 286 ms_handle_reset con 0x56094f84e800 session 0x560950de3860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f9000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 286 ms_handle_reset con 0x5609525cd800 session 0x56094f3f72c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950511c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 286 ms_handle_reset con 0x560950511c00 session 0x5609517c63c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130154496 unmapped: 48816128 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:47.304102+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 286 heartbeat osd_stat(store_statfs(0x4f80f6000/0x0/0x4ffc00000, data 0x2634bf4/0x27c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 286 handle_osd_map epochs [287,287], i have 287, src has [1,287]
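The handle_osd_map pairs trace the OSD catching up on monitor-published map epochs: each message advertises a range, the OSD compares it with the newest epoch it already has, and applies only the missing tail (here 286 to 287, with the second line confirming it is already at 287 so nothing remains). The decision the log implies reduces to a small rule; a toy sketch of that rule, not Ceph's actual implementation:

    # Toy model of the epoch catch-up decision visible in handle_osd_map.
    def epochs_to_apply(have: int, first: int, last: int) -> range:
        """Map epochs still missing from a message covering [first, last],
        given everything up to 'have' is already applied."""
        return range(max(have + 1, first), last + 1)

    # "epochs [287,287], i have 286" -> apply 287
    print(list(epochs_to_apply(286, 287, 287)))   # [287]
    # "epochs [287,287], i have 287" -> nothing left to apply
    print(list(epochs_to_apply(287, 287, 287)))   # []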
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 287 ms_handle_reset con 0x56094f8f9000 session 0x560950de1680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519af800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2108195 data_alloc: 218103808 data_used: 7782400
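_resize_shards shows how the 2845415832-byte budget from tune_memory is split across bluestore's caches: about 1.125 GiB for the rocksdb block cache (kv), 224 MiB for onodes, 1.06 GiB for other metadata, and 208 MiB for data, with actual usage far below each allocation on this quiet OSD. The rocksdb ratio lines just above it print as decimals that are consistent with the fractions 2/7 and 1/18. Checking the split:

    # Relate the _resize_shards allocations back to the cache budget.
    gib = 1 << 30
    cache_size = 2845415832
    allocs = {"kv": 1207959552, "kv_onode": 234881024,
              "meta": 1140850688, "data": 218103808}
    for name, alloc in allocs.items():
        print(f"{name}: {alloc / gib:.3f} GiB ({alloc / cache_size:.1%})")
    print(f"sum = {sum(allocs.values()) / cache_size:.1%} of cache_size")
    # High-pri pool ratios above: 2/7 = 0.285714..., 1/18 = 0.0555556...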
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130211840 unmapped: 48758784 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 288 ms_handle_reset con 0x5609519af800 session 0x56095183b4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:48.304325+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 288 ms_handle_reset con 0x56094f84e800 session 0x560951226f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954598c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 288 ms_handle_reset con 0x560954598c00 session 0x56094f897e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609501e3400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 288 heartbeat osd_stat(store_statfs(0x4f80f0000/0x0/0x4ffc00000, data 0x263835c/0x27cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 48750592 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:49.304489+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 288 ms_handle_reset con 0x5609501e3400 session 0x560950de1a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 48734208 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:50.304699+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 48734208 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:51.304899+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 48734208 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:52.305147+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 288 heartbeat osd_stat(store_statfs(0x4f80f1000/0x0/0x4ffc00000, data 0x263834c/0x27cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2109426 data_alloc: 218103808 data_used: 7778304
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 48734208 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:53.305339+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130236416 unmapped: 48734208 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:54.305579+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609525cc800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.454835892s of 10.463539124s, submitted: 120
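bluestore's _kv_sync_thread periodically reports how much of the last window it spent idle versus committing transactions. Here it was idle for 9.45 s of a 10.46 s window while committing 120 submitted transactions, so the WAL/commit path is roughly 90% idle:

    # Idle fraction from the _kv_sync_thread utilization line above.
    idle, window, submitted = 9.454835892, 10.463539124, 120
    print(f"idle {idle / window:.1%}, "
          f"~{submitted / window:.1f} txns/s committed")
    # -> idle 90.4%, ~11.5 txns/s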
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 288 ms_handle_reset con 0x5609525cc800 session 0x560951228f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130170880 unmapped: 48799744 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:55.305780+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 288 heartbeat osd_stat(store_statfs(0x4f80f1000/0x0/0x4ffc00000, data 0x26383be/0x27cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 48783360 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:56.305927+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609543d4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 289 ms_handle_reset con 0x56094f7b2c00 session 0x56094f7785a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 290 ms_handle_reset con 0x56094f7b2c00 session 0x56094f77c5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 48783360 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:57.306096+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2119487 data_alloc: 218103808 data_used: 7794688
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130203648 unmapped: 48766976 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 291 ms_handle_reset con 0x5609543d4000 session 0x5609516f85a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:58.306250+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130203648 unmapped: 48766976 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:59.306489+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c58400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f30800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 291 ms_handle_reset con 0x560951f30800 session 0x5609516f8b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130187264 unmapped: 48783360 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:00.306647+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954165800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 292 ms_handle_reset con 0x560954165800 session 0x56095183ba40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 293 ms_handle_reset con 0x56094f7b3400 session 0x5609519e85a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130220032 unmapped: 48750592 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:01.306797+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 293 ms_handle_reset con 0x56094f7b3400 session 0x560952189680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 293 ms_handle_reset con 0x560951c58400 session 0x560953c90960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 293 heartbeat osd_stat(store_statfs(0x4f80df000/0x0/0x4ffc00000, data 0x2641205/0x27df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130228224 unmapped: 48742400 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:02.307046+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f30800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2134270 data_alloc: 218103808 data_used: 7806976
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 293 ms_handle_reset con 0x560951f30800 session 0x560950de3e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130252800 unmapped: 48717824 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:03.307228+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 294 ms_handle_reset con 0x56094f7b2c00 session 0x560953c90000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 294 ms_handle_reset con 0x56094f7b0800 session 0x56095183b2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130301952 unmapped: 48668672 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:04.307474+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.267668724s of 10.546594620s, submitted: 119
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 294 ms_handle_reset con 0x56094f7b3400 session 0x56095190a000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130301952 unmapped: 48668672 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:05.307694+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c58400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f30800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 295 ms_handle_reset con 0x560951f30800 session 0x560950de0d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 296 ms_handle_reset con 0x560951c58400 session 0x56094f364d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130326528 unmapped: 48644096 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:06.307884+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954598000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 296 ms_handle_reset con 0x560954598000 session 0x56094f77c000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 296 ms_handle_reset con 0x56094f7b2c00 session 0x560952188960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c58400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 297 heartbeat osd_stat(store_statfs(0x4f80d3000/0x0/0x4ffc00000, data 0x2646518/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130359296 unmapped: 48611328 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:07.308004+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 297 ms_handle_reset con 0x560951c58400 session 0x560953c910e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f30800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 297 ms_handle_reset con 0x56094f7b3400 session 0x560952188000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 297 ms_handle_reset con 0x560951f30800 session 0x5609512292c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954598000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 297 ms_handle_reset con 0x560954598000 session 0x5609500c7e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 297 heartbeat osd_stat(store_statfs(0x4f80d3000/0x0/0x4ffc00000, data 0x2646518/0x27e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146357 data_alloc: 218103808 data_used: 7827456
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:08.308222+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 48562176 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095289e400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 297 ms_handle_reset con 0x56095289e400 session 0x560951227680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:09.308373+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 48562176 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 297 heartbeat osd_stat(store_statfs(0x4f80d4000/0x0/0x4ffc00000, data 0x2647d9b/0x27ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fd800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:10.308501+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 130408448 unmapped: 48562176 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 298 handle_osd_map epochs [298,298], i have 298, src has [1,298]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 298 ms_handle_reset con 0x56094f7b3800 session 0x5609516f9680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:11.308652+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 129572864 unmapped: 49397760 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c34800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 299 ms_handle_reset con 0x560951c34800 session 0x56095183a960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 299 ms_handle_reset con 0x56094f7b1c00 session 0x56094f3645a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 299 ms_handle_reset con 0x5609545fd800 session 0x56095122cf00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:12.308806+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 129581056 unmapped: 49389568 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2164709 data_alloc: 218103808 data_used: 7847936
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:13.308952+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 129581056 unmapped: 49389568 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 300 ms_handle_reset con 0x5609517f9800 session 0x56095183a3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:14.309087+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 129622016 unmapped: 49348608 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 300 ms_handle_reset con 0x56094f7b1c00 session 0x56094f896b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 300 heartbeat osd_stat(store_statfs(0x4f80c9000/0x0/0x4ffc00000, data 0x264d14a/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:15.309215+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 129622016 unmapped: 49348608 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 300 ms_handle_reset con 0x56094f7b3800 session 0x5609519c4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c34800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 300 ms_handle_reset con 0x560951c34800 session 0x56095015d680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fd800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 300 ms_handle_reset con 0x5609545fd800 session 0x56095190ba40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 300 ms_handle_reset con 0x56094f7b2400 session 0x56095183a5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 300 ms_handle_reset con 0x56094f7b1c00 session 0x5609519aad20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 300 handle_osd_map epochs [300,301], i have 300, src has [1,301]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.425191879s of 10.856781960s, submitted: 121
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 301 ms_handle_reset con 0x56094f7b3800 session 0x56095183a1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c34800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fd800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 301 ms_handle_reset con 0x5609545fd800 session 0x56095122c960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:16.309314+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 301 ms_handle_reset con 0x560951c34800 session 0x56094ec4bc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 132145152 unmapped: 46825472 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 301 ms_handle_reset con 0x560951ba3c00 session 0x5609519aa960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 301 ms_handle_reset con 0x56094f7b1c00 session 0x56095195f2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 301 heartbeat osd_stat(store_statfs(0x4f80c9000/0x0/0x4ffc00000, data 0x264d14a/0x27f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 301 ms_handle_reset con 0x56094f7b3800 session 0x560953cee3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c34800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 301 ms_handle_reset con 0x560951c34800 session 0x5609515e2000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fd800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 301 ms_handle_reset con 0x5609545fd800 session 0x56094f778d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:17.309397+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 47718400 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2279988 data_alloc: 218103808 data_used: 7856128
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:18.309512+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 131252224 unmapped: 47718400 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953a64000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c56800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 301 ms_handle_reset con 0x560951c56800 session 0x560953c90780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:19.309679+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 131260416 unmapped: 47710208 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 302 ms_handle_reset con 0x56094f7b1c00 session 0x560950de1e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c34800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 302 ms_handle_reset con 0x560951c34800 session 0x560952189e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c56800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 302 ms_handle_reset con 0x560951c56800 session 0x56095122cd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fd800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:20.309840+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 39837696 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 303 ms_handle_reset con 0x56094f7b3800 session 0x56094f3652c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 303 ms_handle_reset con 0x5609545fd800 session 0x56094f8d5a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 303 ms_handle_reset con 0x560953a64000 session 0x5609517acf00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 303 ms_handle_reset con 0x56094f7b1c00 session 0x56095183b680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 304 heartbeat osd_stat(store_statfs(0x4f698c000/0x0/0x4ffc00000, data 0x3d7f52d/0x3f30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:21.310014+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 132407296 unmapped: 46563328 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:22.310164+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c34800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 132423680 unmapped: 46546944 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c56800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 305 ms_handle_reset con 0x56094f7b3800 session 0x56095122d0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519ae800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 305 ms_handle_reset con 0x560951c56800 session 0x56094f392d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 305 ms_handle_reset con 0x5609519ae800 session 0x5609500c63c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 305 ms_handle_reset con 0x56094f7b1c00 session 0x56095121e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2380173 data_alloc: 218103808 data_used: 7872512
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:23.310314+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 131883008 unmapped: 47087616 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c56800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953a64000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 306 ms_handle_reset con 0x560953a64000 session 0x56095121f860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a1800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 306 ms_handle_reset con 0x56094f9a1800 session 0x56094f60c5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:24.310456+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 132268032 unmapped: 46702592 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 307 ms_handle_reset con 0x560951c56800 session 0x56094f392960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 307 ms_handle_reset con 0x560951c34800 session 0x560951221a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:25.310606+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 140304384 unmapped: 38666240 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 307 ms_handle_reset con 0x56094f7b1c00 session 0x56094f60da40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a1800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c56800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953a64000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 307 ms_handle_reset con 0x560953a64000 session 0x56095053b0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.391288757s of 10.167940140s, submitted: 212
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:26.310825+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 140320768 unmapped: 38649856 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 308 ms_handle_reset con 0x560951c56800 session 0x56095053a3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 308 ms_handle_reset con 0x56094f9a1800 session 0x56094f60d2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f94ac00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 308 heartbeat osd_stat(store_statfs(0x4f64b1000/0x0/0x4ffc00000, data 0x4250ed1/0x440c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c56000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 308 ms_handle_reset con 0x560951c56000 session 0x560950de32c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a1800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:27.310954+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 140812288 unmapped: 38158336 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 308 ms_handle_reset con 0x56094f7b1c00 session 0x560951227c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2555495 data_alloc: 234881024 data_used: 21811200
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:28.311096+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 140820480 unmapped: 38150144 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 308 handle_osd_map epochs [308,309], i have 308, src has [1,309]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 309 ms_handle_reset con 0x56094f8f8000 session 0x5609515d3860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c58400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 309 ms_handle_reset con 0x56094f9a1800 session 0x56094f8972c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84fc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 309 ms_handle_reset con 0x56094f84fc00 session 0x56095053af00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 309 ms_handle_reset con 0x560951c58400 session 0x560953c90b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 309 ms_handle_reset con 0x56094f94ac00 session 0x56095183a1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:29.311282+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 140943360 unmapped: 38027264 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c58400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:30.311396+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 140967936 unmapped: 38002688 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 310 ms_handle_reset con 0x560951c58400 session 0x5609500c7e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:31.311538+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 140984320 unmapped: 37986304 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:32.311666+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 37953536 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 312 ms_handle_reset con 0x56094f7b1c00 session 0x560953c91c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 312 heartbeat osd_stat(store_statfs(0x4f63dd000/0x0/0x4ffc00000, data 0x4322c01/0x44df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2564267 data_alloc: 234881024 data_used: 21803008
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:33.311788+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 37953536 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:34.311931+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 141017088 unmapped: 37953536 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954164000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 313 ms_handle_reset con 0x560954164000 session 0x56094f43f2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 313 heartbeat osd_stat(store_statfs(0x4f63db000/0x0/0x4ffc00000, data 0x43247fe/0x44e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [0,0,0,0,3,12])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:35.312063+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 146227200 unmapped: 32743424 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954164c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 315 ms_handle_reset con 0x560954164c00 session 0x56095190b2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.167194366s of 10.004605293s, submitted: 285
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 315 ms_handle_reset con 0x56094f7b1c00 session 0x5609521892c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:36.312194+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149626880 unmapped: 29343744 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:37.312331+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150994944 unmapped: 27975680 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2671171 data_alloc: 234881024 data_used: 24039424
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:38.312507+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150994944 unmapped: 27975680 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:39.312648+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 315 heartbeat osd_stat(store_statfs(0x4f58c2000/0x0/0x4ffc00000, data 0x4e34fa4/0x4ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 151044096 unmapped: 27926528 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:40.312773+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 151076864 unmapped: 27893760 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 316 ms_handle_reset con 0x56094f7b4000 session 0x5609504b0960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:41.312900+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150069248 unmapped: 28901376 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 316 ms_handle_reset con 0x560953ad9c00 session 0x56094f3970e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 316 ms_handle_reset con 0x560951ba5400 session 0x56094f778780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095048bc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f58c7000/0x0/0x4ffc00000, data 0x4e36a6f/0x4ff6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 316 ms_handle_reset con 0x56095048bc00 session 0x5609515d3c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:42.313052+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150085632 unmapped: 28884992 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 316 ms_handle_reset con 0x56094f7b1c00 session 0x56094f60de00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2673109 data_alloc: 234881024 data_used: 24047616
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:43.313223+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150233088 unmapped: 28737536 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b4400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c34000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 316 ms_handle_reset con 0x560951c34000 session 0x56094f3f61e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:44.313345+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150724608 unmapped: 28246016 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c56800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609523ffc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 317 ms_handle_reset con 0x560951c56800 session 0x56095183bc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609543d5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:45.313508+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150732800 unmapped: 28237824 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 318 ms_handle_reset con 0x5609523ffc00 session 0x5609515e30e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 318 ms_handle_reset con 0x56094f7b4400 session 0x560951220f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:46.313659+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150773760 unmapped: 28196864 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.272058487s of 10.483208656s, submitted: 105
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:47.313778+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150781952 unmapped: 28188672 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 319 ms_handle_reset con 0x56094f7b1c00 session 0x56094f3934a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2697662 data_alloc: 234881024 data_used: 24715264
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 319 heartbeat osd_stat(store_statfs(0x4f585e000/0x0/0x4ffc00000, data 0x4e9ad4a/0x505f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:48.313910+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150994944 unmapped: 27975680 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 319 ms_handle_reset con 0x5609543d5000 session 0x56095053b0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f763c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:49.314133+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 151019520 unmapped: 27951104 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 320 ms_handle_reset con 0x56094f763c00 session 0x56094f396d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954164000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 320 ms_handle_reset con 0x560954164000 session 0x56095121e960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad9800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:50.314304+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 320 handle_osd_map epochs [321,321], i have 321, src has [1,321]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 151044096 unmapped: 27926528 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 321 ms_handle_reset con 0x560953ad9800 session 0x560950de05a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:51.314489+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 151044096 unmapped: 27926528 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:52.314654+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149856256 unmapped: 29114368 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f763c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 321 ms_handle_reset con 0x56094f763c00 session 0x56095183ba40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2698302 data_alloc: 234881024 data_used: 24723456
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:53.314833+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149856256 unmapped: 29114368 heap: 178970624 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 321 heartbeat osd_stat(store_statfs(0x4f5857000/0x0/0x4ffc00000, data 0x4e9e53e/0x5065000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 322 ms_handle_reset con 0x56094f7b1c00 session 0x5609516f85a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954164000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 322 ms_handle_reset con 0x560954164000 session 0x56094f77c5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519af400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 322 ms_handle_reset con 0x5609519af400 session 0x560951228f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:54.314966+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163995648 unmapped: 22937600 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 322 ms_handle_reset con 0x56094f7b3c00 session 0x5609519e94a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f763c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:55.315108+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 167493632 unmapped: 19439616 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 324 ms_handle_reset con 0x56094f763c00 session 0x560951207860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:56.315238+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 167608320 unmapped: 19324928 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 325 ms_handle_reset con 0x56094f7b1c00 session 0x56095190bc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519af400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 325 ms_handle_reset con 0x5609519af400 session 0x56095122c960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954164000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 325 ms_handle_reset con 0x560954164000 session 0x5609500c6d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.163507462s of 10.207719803s, submitted: 202
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:57.315368+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163749888 unmapped: 23183360 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2945776 data_alloc: 251658240 data_used: 34787328
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 325 heartbeat osd_stat(store_statfs(0x4f40ec000/0x0/0x4ffc00000, data 0x66d1384/0x67d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c35c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:58.315531+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 325 ms_handle_reset con 0x560951c35c00 session 0x560951228000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 325 ms_handle_reset con 0x560951ba4000 session 0x560950de1a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163790848 unmapped: 23142400 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:59.315676+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163872768 unmapped: 23060480 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:00.315823+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163872768 unmapped: 23060480 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:01.315952+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163872768 unmapped: 23060480 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:02.316133+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163872768 unmapped: 23060480 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2952442 data_alloc: 251658240 data_used: 34799616
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:03.316372+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad8800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 ms_handle_reset con 0x560953ad8800 session 0x5609517c63c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 ms_handle_reset con 0x56094f84e800 session 0x560950de3860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095289e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 ms_handle_reset con 0x56095289e000 session 0x560950de2960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163889152 unmapped: 23044096 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba4800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 ms_handle_reset con 0x560951ba4800 session 0x560951225860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 ms_handle_reset con 0x56094f7b4000 session 0x56094f392000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 ms_handle_reset con 0x560951ba5400 session 0x56094f43e5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 ms_handle_reset con 0x560951ba4000 session 0x5609515d23c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 ms_handle_reset con 0x56094f84e800 session 0x56094f7790e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba4800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 ms_handle_reset con 0x560951ba4800 session 0x5609515d25a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 ms_handle_reset con 0x56094f7b4000 session 0x5609504b05a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 heartbeat osd_stat(store_statfs(0x4f40c2000/0x0/0x4ffc00000, data 0x66f6e59/0x67fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 ms_handle_reset con 0x56094f84e800 session 0x56094f77d2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:04.316503+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163930112 unmapped: 23003136 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 ms_handle_reset con 0x560951ba4000 session 0x56094f77cd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:05.316671+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163938304 unmapped: 22994944 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 heartbeat osd_stat(store_statfs(0x4f40e7000/0x0/0x4ffc00000, data 0x66d2e49/0x67d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 ms_handle_reset con 0x560951ba5400 session 0x56095053ba40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 heartbeat osd_stat(store_statfs(0x4f40e7000/0x0/0x4ffc00000, data 0x66d2e59/0x67d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:06.316796+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163938304 unmapped: 22994944 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095289e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:07.316923+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 heartbeat osd_stat(store_statfs(0x4f40e7000/0x0/0x4ffc00000, data 0x66d2e59/0x67d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.323715210s of 10.621623993s, submitted: 52
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 327 ms_handle_reset con 0x56095289e000 session 0x56095053a960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163971072 unmapped: 22962176 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2951483 data_alloc: 251658240 data_used: 34693120
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:08.317068+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 327 ms_handle_reset con 0x560951ba5400 session 0x560952189860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163979264 unmapped: 22953984 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad8800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 327 ms_handle_reset con 0x560951ba4000 session 0x560950de14a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094ef10000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 327 ms_handle_reset con 0x56094ef10000 session 0x56095183af00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:09.317251+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 165044224 unmapped: 21889024 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 328 ms_handle_reset con 0x56094f9a1c00 session 0x56094f8da1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 328 ms_handle_reset con 0x56094f7b4000 session 0x56094f8dad20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094ef10000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 328 ms_handle_reset con 0x560953ad8800 session 0x560952188000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 328 ms_handle_reset con 0x560951ba4000 session 0x56094f43e5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 328 ms_handle_reset con 0x560951ba5400 session 0x560951225860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 328 ms_handle_reset con 0x56094f84e800 session 0x56094f60cd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 328 ms_handle_reset con 0x56094f9a1c00 session 0x56095190b860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 328 ms_handle_reset con 0x56094ef10000 session 0x5609521acb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:10.317367+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 165634048 unmapped: 21299200 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 328 ms_handle_reset con 0x560951ba5400 session 0x5609517ac1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad8800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 328 ms_handle_reset con 0x560953ad8800 session 0x560951228960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:11.317540+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 167051264 unmapped: 19881984 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:12.330977+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c57000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 328 ms_handle_reset con 0x560951c57000 session 0x56094f8d4d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 172023808 unmapped: 14909440 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 328 heartbeat osd_stat(store_statfs(0x4f4579000/0x0/0x4ffc00000, data 0x4fd569a/0x51a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2840331 data_alloc: 251658240 data_used: 41558016
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:13.331122+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f31c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 172056576 unmapped: 14876672 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 329 ms_handle_reset con 0x560951f31c00 session 0x560951220780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:14.331281+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 172097536 unmapped: 14835712 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952401c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 329 ms_handle_reset con 0x560952401c00 session 0x56094f8961e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609543d4c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:15.331481+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 172097536 unmapped: 14835712 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 329 ms_handle_reset con 0x5609543d4c00 session 0x56095121e3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 329 handle_osd_map epochs [330,331], i have 329, src has [1,331]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c35c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:16.331624+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 172146688 unmapped: 14786560 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 331 ms_handle_reset con 0x560951c35c00 session 0x56094f393680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095386ec00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 331 ms_handle_reset con 0x56095386ec00 session 0x560951226f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:17.331744+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba2c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.120322227s of 10.071544647s, submitted: 148
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 331 ms_handle_reset con 0x560951ba2c00 session 0x56094f3923c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 172146688 unmapped: 14786560 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 331 heartbeat osd_stat(store_statfs(0x4f4570000/0x0/0x4ffc00000, data 0x4fda983/0x51ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2854881 data_alloc: 251658240 data_used: 41562112
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:18.331838+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 172195840 unmapped: 14737408 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f2e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 332 heartbeat osd_stat(store_statfs(0x4f415e000/0x0/0x4ffc00000, data 0x4fda9f5/0x51af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 332 ms_handle_reset con 0x560951f2e800 session 0x56094f43f4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:19.331957+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 172212224 unmapped: 14721024 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951b93800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:20.332098+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 332 ms_handle_reset con 0x560951b93800 session 0x56094f396f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095321dc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 172220416 unmapped: 14712832 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 332 handle_osd_map epochs [332,333], i have 332, src has [1,333]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 333 ms_handle_reset con 0x56095321dc00 session 0x560951221e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:21.332410+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 333 heartbeat osd_stat(store_statfs(0x4f4158000/0x0/0x4ffc00000, data 0x4fde16f/0x51b5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 172228608 unmapped: 14704640 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 333 ms_handle_reset con 0x560952400800 session 0x56094f397e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609523fec00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 333 ms_handle_reset con 0x5609523fec00 session 0x5609512294a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:22.332561+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 174358528 unmapped: 12574720 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951b93800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2884205 data_alloc: 251658240 data_used: 44838912
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 333 ms_handle_reset con 0x560951b93800 session 0x56094f60d680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:23.332662+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 174710784 unmapped: 12222464 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:24.332784+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 174727168 unmapped: 12206080 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:25.332956+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 174792704 unmapped: 12140544 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 334 ms_handle_reset con 0x560951c59400 session 0x5609519ab0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c35800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 334 ms_handle_reset con 0x560951c35800 session 0x5609504b1680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519af000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 335 heartbeat osd_stat(store_statfs(0x4f40a4000/0x0/0x4ffc00000, data 0x50940fd/0x5269000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:26.333122+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 174825472 unmapped: 12107776 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 335 handle_osd_map epochs [335,336], i have 335, src has [1,336]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 336 heartbeat osd_stat(store_statfs(0x4f409c000/0x0/0x4ffc00000, data 0x5097729/0x526f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:27.333260+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.131002426s of 10.034239769s, submitted: 123
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 174841856 unmapped: 12091392 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 336 ms_handle_reset con 0x5609519af000 session 0x56094f8db2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2894135 data_alloc: 251658240 data_used: 44851200
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:28.333374+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951b93000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 175079424 unmapped: 11853824 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 336 ms_handle_reset con 0x560951b93000 session 0x56095122d0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519af000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:29.333545+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 336 ms_handle_reset con 0x5609519af000 session 0x56094f396b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 175079424 unmapped: 11853824 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 336 heartbeat osd_stat(store_statfs(0x4f409b000/0x0/0x4ffc00000, data 0x50992fa/0x5272000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:30.333659+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 175095808 unmapped: 11837440 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 336 heartbeat osd_stat(store_statfs(0x4f409b000/0x0/0x4ffc00000, data 0x50992fa/0x5272000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095289fc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 336 ms_handle_reset con 0x56095289fc00 session 0x56094f60de00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:31.333793+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 175104000 unmapped: 11829248 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:32.333936+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba2800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 175104000 unmapped: 11829248 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2894431 data_alloc: 251658240 data_used: 44855296
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:33.334069+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 337 ms_handle_reset con 0x560951ba2800 session 0x56094f779c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 175136768 unmapped: 11796480 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 337 ms_handle_reset con 0x56094f8f8c00 session 0x56095121e960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 337 heartbeat osd_stat(store_statfs(0x4f409c000/0x0/0x4ffc00000, data 0x50992fa/0x5272000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609544f7400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:34.334220+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 338 ms_handle_reset con 0x5609544f7400 session 0x5609515d2b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 175349760 unmapped: 11583488 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:35.334411+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 338 ms_handle_reset con 0x56094f8f8c00 session 0x56094f77dc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519af000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 175423488 unmapped: 11509760 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 338 ms_handle_reset con 0x5609519af000 session 0x56095053b0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:36.334583+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 175472640 unmapped: 11460608 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba2800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 339 ms_handle_reset con 0x560951ba2800 session 0x560951228000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:37.334711+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176119808 unmapped: 10813440 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095289fc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.121606827s of 10.666492462s, submitted: 70
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2912984 data_alloc: 268435456 data_used: 46604288
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 340 ms_handle_reset con 0x56095289fc00 session 0x56095122d2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:38.334838+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176144384 unmapped: 10788864 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:39.334986+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 340 heartbeat osd_stat(store_statfs(0x4f408e000/0x0/0x4ffc00000, data 0x50a008c/0x527e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f9400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176177152 unmapped: 10756096 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 340 ms_handle_reset con 0x56094f8f9400 session 0x5609500c63c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 340 ms_handle_reset con 0x56094f8f8c00 session 0x56095053b2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:40.335113+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176201728 unmapped: 10731520 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519af000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 341 ms_handle_reset con 0x5609519af000 session 0x56094f5f01e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba2800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 341 ms_handle_reset con 0x560951ba2800 session 0x56095015dc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:41.335266+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176209920 unmapped: 10723328 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 341 heartbeat osd_stat(store_statfs(0x4f408c000/0x0/0x4ffc00000, data 0x50a1c6d/0x5281000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:42.335399+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095289fc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 341 ms_handle_reset con 0x56095289fc00 session 0x56094f396f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176209920 unmapped: 10723328 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2916554 data_alloc: 268435456 data_used: 46612480
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:43.335531+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176209920 unmapped: 10723328 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b4c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:44.335683+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176209920 unmapped: 10723328 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f94a000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 342 ms_handle_reset con 0x56094f7b4c00 session 0x5609519e92c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:45.335887+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 10698752 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b4c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:46.336007+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 343 ms_handle_reset con 0x56094f7b4c00 session 0x5609517c63c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176267264 unmapped: 10665984 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 343 ms_handle_reset con 0x56094f94a000 session 0x5609517ac1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:47.336206+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176267264 unmapped: 10665984 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 343 heartbeat osd_stat(store_statfs(0x4f4083000/0x0/0x4ffc00000, data 0x50a52b9/0x5289000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2926346 data_alloc: 268435456 data_used: 46628864
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:48.336357+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176308224 unmapped: 10625024 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 343 heartbeat osd_stat(store_statfs(0x4f4083000/0x0/0x4ffc00000, data 0x50a52b9/0x5289000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:49.336506+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176308224 unmapped: 10625024 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 343 heartbeat osd_stat(store_statfs(0x4f4083000/0x0/0x4ffc00000, data 0x50a52b9/0x5289000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 343 handle_osd_map epochs [344,344], i have 344, src has [1,344]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.401639938s of 11.586874962s, submitted: 35
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:50.336680+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 344 ms_handle_reset con 0x56094f8f8c00 session 0x56094f77d860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176390144 unmapped: 10543104 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951b93c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 344 ms_handle_reset con 0x560951b93c00 session 0x56094f60cd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f94bc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:51.336884+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176390144 unmapped: 10543104 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:52.337043+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 344 ms_handle_reset con 0x56094f94bc00 session 0x56094f43e5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176398336 unmapped: 10534912 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 344 heartbeat osd_stat(store_statfs(0x4f4082000/0x0/0x4ffc00000, data 0x50a6e8a/0x528c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2927196 data_alloc: 268435456 data_used: 46624768
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:53.337203+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176472064 unmapped: 10461184 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095226cc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 344 ms_handle_reset con 0x56095226cc00 session 0x56094f8da1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:54.337349+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 344 heartbeat osd_stat(store_statfs(0x4f4082000/0x0/0x4ffc00000, data 0x50a6edc/0x528c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 344 ms_handle_reset con 0x560951c59000 session 0x5609521894a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176496640 unmapped: 10436608 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f2e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:55.337486+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 10387456 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 345 ms_handle_reset con 0x560951f2e000 session 0x56094f7785a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 346 ms_handle_reset con 0x56094f7b5400 session 0x560951221e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:56.337618+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953838c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 346 ms_handle_reset con 0x56094f84e800 session 0x560951228f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 346 ms_handle_reset con 0x560951ba4000 session 0x56095122cf00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 177668096 unmapped: 9265152 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 347 ms_handle_reset con 0x560953838c00 session 0x56094f364780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 347 ms_handle_reset con 0x56094f7b5400 session 0x560950de14a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:57.337798+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 347 ms_handle_reset con 0x560951c59000 session 0x5609517ac1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f2e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 9125888 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 347 ms_handle_reset con 0x560951f2e000 session 0x56094f396f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2922601 data_alloc: 268435456 data_used: 46522368
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:58.337902+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 347 heartbeat osd_stat(store_statfs(0x4f4155000/0x0/0x4ffc00000, data 0x4fd2004/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 177856512 unmapped: 9076736 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609544f7800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f7c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 347 ms_handle_reset con 0x5609517f7c00 session 0x5609500c6f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:59.338023+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 347 ms_handle_reset con 0x5609505ff800 session 0x56094f897e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 347 ms_handle_reset con 0x560950484800 session 0x56094f3654a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176472064 unmapped: 10461184 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095226dc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:00.338185+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176488448 unmapped: 10444800 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad8000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 347 handle_osd_map epochs [347,348], i have 347, src has [1,348]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.384457588s of 11.017684937s, submitted: 168
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 348 ms_handle_reset con 0x56095226dc00 session 0x56095183a780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:01.338350+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176488448 unmapped: 10444800 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f94b800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 349 ms_handle_reset con 0x56094f94b800 session 0x5609500c6b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 349 ms_handle_reset con 0x560953ad8000 session 0x56095121e5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:02.338499+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 349 ms_handle_reset con 0x5609505ff800 session 0x5609517ac3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f7c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 169033728 unmapped: 17899520 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 349 handle_osd_map epochs [349,350], i have 349, src has [1,350]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 349 handle_osd_map epochs [350,350], i have 350, src has [1,350]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 350 ms_handle_reset con 0x560950484800 session 0x56094f3934a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 350 ms_handle_reset con 0x5609517f7c00 session 0x560951229e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:03.338654+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2718505 data_alloc: 251658240 data_used: 30937088
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 350 ms_handle_reset con 0x5609544f7800 session 0x56095122d0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 168198144 unmapped: 18735104 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 350 ms_handle_reset con 0x560950484800 session 0x5609504b1680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519af000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 350 ms_handle_reset con 0x5609519af000 session 0x56094f896b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954598c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:04.338985+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 350 heartbeat osd_stat(store_statfs(0x4f51af000/0x0/0x4ffc00000, data 0x3f772f3/0x415e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 350 ms_handle_reset con 0x560954598c00 session 0x56094f897860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 168280064 unmapped: 18653184 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 350 ms_handle_reset con 0x56094f7b3800 session 0x56095190a780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 350 heartbeat osd_stat(store_statfs(0x4f51af000/0x0/0x4ffc00000, data 0x3f772f3/0x415e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:05.339183+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 350 ms_handle_reset con 0x56094f7b1c00 session 0x5609512274a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 148267008 unmapped: 38666240 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:06.339341+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 148267008 unmapped: 38666240 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 351 ms_handle_reset con 0x56094f7b3800 session 0x56094f77dc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 351 ms_handle_reset con 0x560950484800 session 0x56094f43eb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:07.339493+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 147382272 unmapped: 39550976 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 351 heartbeat osd_stat(store_statfs(0x4f6a83000/0x0/0x4ffc00000, data 0x26a4d09/0x288a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 351 heartbeat osd_stat(store_statfs(0x4f6a84000/0x0/0x4ffc00000, data 0x26a4cf9/0x2889000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:08.339647+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2391053 data_alloc: 218103808 data_used: 5623808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 147382272 unmapped: 39550976 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:09.339799+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ed6000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 147390464 unmapped: 39542784 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 351 ms_handle_reset con 0x560953ed6000 session 0x56094f43ed20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ed7400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 351 ms_handle_reset con 0x560953ed7400 session 0x56094f43e1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:10.339947+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 147423232 unmapped: 39510016 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f94a000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.600321770s of 10.361361504s, submitted: 145
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 352 ms_handle_reset con 0x56094f94a000 session 0x56095183bc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:11.340112+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 352 ms_handle_reset con 0x56094f7b3800 session 0x56094f3f70e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 148979712 unmapped: 37953536 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:12.340275+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 352 heartbeat osd_stat(store_statfs(0x4f6a82000/0x0/0x4ffc00000, data 0x26a67a4/0x288b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 148979712 unmapped: 37953536 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 352 ms_handle_reset con 0x560950484800 session 0x5609516f8d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:13.340484+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2395445 data_alloc: 218103808 data_used: 8056832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 148987904 unmapped: 37945344 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:14.340665+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 148987904 unmapped: 37945344 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954598400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 352 ms_handle_reset con 0x560954598400 session 0x560952189e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 352 heartbeat osd_stat(store_statfs(0x4f6a81000/0x0/0x4ffc00000, data 0x26a6816/0x288d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:15.340847+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 352 ms_handle_reset con 0x560951ba2400 session 0x5609521883c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954165800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 148996096 unmapped: 37937152 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 352 ms_handle_reset con 0x560954165800 session 0x56094f3923c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951b93400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 352 ms_handle_reset con 0x560951b93400 session 0x56094f8d50e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:16.340985+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149757952 unmapped: 37175296 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 352 heartbeat osd_stat(store_statfs(0x4f64d5000/0x0/0x4ffc00000, data 0x2c537b4/0x2e39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:17.341081+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 352 heartbeat osd_stat(store_statfs(0x4f64d5000/0x0/0x4ffc00000, data 0x2c537b4/0x2e39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149757952 unmapped: 37175296 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 353 ms_handle_reset con 0x56094f7b3800 session 0x56094f778b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:18.341239+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2448783 data_alloc: 218103808 data_used: 8065024
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149012480 unmapped: 37920768 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:19.341344+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149012480 unmapped: 37920768 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:20.341522+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149012480 unmapped: 37920768 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:21.341671+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149012480 unmapped: 37920768 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:22.341811+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519ae000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.568221092s of 11.333637238s, submitted: 76
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 353 ms_handle_reset con 0x5609519ae000 session 0x56094f77cf00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149012480 unmapped: 37920768 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:23.341998+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 353 heartbeat osd_stat(store_statfs(0x4f64d1000/0x0/0x4ffc00000, data 0x2c55393/0x2e3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2449685 data_alloc: 218103808 data_used: 8065024
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149012480 unmapped: 37920768 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:24.342161+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149012480 unmapped: 37920768 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:25.342355+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609544e7c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 353 ms_handle_reset con 0x5609544e7c00 session 0x56094f8da1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149012480 unmapped: 37920768 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609525cd400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 353 ms_handle_reset con 0x5609525cd400 session 0x560951206d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 353 ms_handle_reset con 0x560952400c00 session 0x56094f8da960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 353 handle_osd_map epochs [353,354], i have 353, src has [1,354]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:26.342473+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 354 ms_handle_reset con 0x56094f7b3800 session 0x5609515e2b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519ae000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149053440 unmapped: 37879808 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 354 heartbeat osd_stat(store_statfs(0x4f64d0000/0x0/0x4ffc00000, data 0x2c553f5/0x2e3e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 355 ms_handle_reset con 0x5609519ae000 session 0x5609500c63c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:27.342590+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 355 ms_handle_reset con 0x560952400800 session 0x5609500c7860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149053440 unmapped: 37879808 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:28.342709+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2463703 data_alloc: 218103808 data_used: 8085504
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149053440 unmapped: 37879808 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951b92000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 355 ms_handle_reset con 0x560951b92000 session 0x56094f43e1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 355 ms_handle_reset con 0x56094f7b3800 session 0x56094f3f61e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 355 heartbeat osd_stat(store_statfs(0x4f64c7000/0x0/0x4ffc00000, data 0x2c58bb3/0x2e46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519ae000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 355 ms_handle_reset con 0x5609519ae000 session 0x56094f8da1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:29.342849+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 355 ms_handle_reset con 0x560952400800 session 0x56094f778b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149135360 unmapped: 37797888 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 355 ms_handle_reset con 0x560952400c00 session 0x56094f3923c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:30.342970+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149143552 unmapped: 37789696 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 355 heartbeat osd_stat(store_statfs(0x4f64c3000/0x0/0x4ffc00000, data 0x2c58c44/0x2e4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c35400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 355 ms_handle_reset con 0x560951c35400 session 0x5609519e92c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:31.343284+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 356 ms_handle_reset con 0x56094f84f000 session 0x560952189e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150233088 unmapped: 36700160 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:32.343449+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519ae000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150233088 unmapped: 36700160 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.111255646s of 10.382912636s, submitted: 63
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 356 ms_handle_reset con 0x560952400800 session 0x5609515e2960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 356 ms_handle_reset con 0x560952400c00 session 0x5609517c6f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 356 heartbeat osd_stat(store_statfs(0x4f64c3000/0x0/0x4ffc00000, data 0x2c5a751/0x2e4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:33.343555+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2482178 data_alloc: 218103808 data_used: 8822784
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149962752 unmapped: 36970496 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c57000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 356 ms_handle_reset con 0x560951c57000 session 0x56095195e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:34.343659+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 356 ms_handle_reset con 0x560953ad9c00 session 0x56095195fc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 151601152 unmapped: 35332096 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 356 ms_handle_reset con 0x56094f7b3800 session 0x56095183bc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 356 ms_handle_reset con 0x5609519ae000 session 0x560953c905a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:35.343789+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 356 ms_handle_reset con 0x560953ad9c00 session 0x56094f77dc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 356 ms_handle_reset con 0x56094f84f000 session 0x56094f3654a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c57000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 356 ms_handle_reset con 0x560951c57000 session 0x56094f897e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 151601152 unmapped: 35332096 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:36.344162+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 151601152 unmapped: 35332096 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 357 ms_handle_reset con 0x56094f84f000 session 0x560951225860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519ae000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 357 ms_handle_reset con 0x56094f7b3800 session 0x5609517ac1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c57000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 357 ms_handle_reset con 0x5609519ae000 session 0x5609519ab860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 357 ms_handle_reset con 0x560951c57000 session 0x560951229e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:37.344296+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 151633920 unmapped: 35299328 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 357 ms_handle_reset con 0x560952400800 session 0x56094f43f4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 357 ms_handle_reset con 0x560953ad9c00 session 0x56095122c960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 357 ms_handle_reset con 0x56094f7b3800 session 0x56095122de00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:38.344505+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2514515 data_alloc: 234881024 data_used: 12193792
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 357 heartbeat osd_stat(store_statfs(0x4f64c2000/0x0/0x4ffc00000, data 0x2c5c133/0x2e4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 151658496 unmapped: 35274752 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 358 ms_handle_reset con 0x56094f84f000 session 0x56094f43e5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:39.344661+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ed7000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 358 ms_handle_reset con 0x560953ed7000 session 0x56094f3f6d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954151400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 358 ms_handle_reset con 0x560954151400 session 0x56095183ba40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149102592 unmapped: 37830656 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 359 ms_handle_reset con 0x56094f7b2400 session 0x56094f7781e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:40.344825+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 359 ms_handle_reset con 0x56094f7b3800 session 0x56095183ba40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149102592 unmapped: 37830656 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:41.344979+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 359 heartbeat osd_stat(store_statfs(0x4f6a6c000/0x0/0x4ffc00000, data 0x26b27f1/0x28a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149102592 unmapped: 37830656 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:42.345148+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 359 heartbeat osd_stat(store_statfs(0x4f6a6c000/0x0/0x4ffc00000, data 0x26b27f1/0x28a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 149102592 unmapped: 37830656 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 359 ms_handle_reset con 0x56094f84f000 session 0x56095122c960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:43.345296+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2438234 data_alloc: 218103808 data_used: 8617984
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 359 ms_handle_reset con 0x560953ad9c00 session 0x560951229e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150421504 unmapped: 36511744 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:44.345476+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150421504 unmapped: 36511744 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519ae400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.653697968s of 12.358178139s, submitted: 186
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 359 ms_handle_reset con 0x5609519ae400 session 0x5609519ab860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:45.345671+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150421504 unmapped: 36511744 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:46.345827+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 360 heartbeat osd_stat(store_statfs(0x4f6a6c000/0x0/0x4ffc00000, data 0x26b27f1/0x28a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150413312 unmapped: 36519936 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:47.345972+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 360 ms_handle_reset con 0x56094f7b2400 session 0x5609517ac1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 360 ms_handle_reset con 0x56094f7b3800 session 0x560951225860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150421504 unmapped: 36511744 heap: 186933248 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:48.346103+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 360 ms_handle_reset con 0x56094f84f000 session 0x56094f77dc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2538691 data_alloc: 218103808 data_used: 8622080
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519ae400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 360 ms_handle_reset con 0x5609519ae400 session 0x560952189e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150519808 unmapped: 46424064 heap: 196943872 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:49.346291+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150519808 unmapped: 46424064 heap: 196943872 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:50.346576+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150519808 unmapped: 46424064 heap: 196943872 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 360 heartbeat osd_stat(store_statfs(0x4f5dd8000/0x0/0x4ffc00000, data 0x3347254/0x3536000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:51.346853+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150519808 unmapped: 46424064 heap: 196943872 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:52.346986+0000)
Nov 22 04:25:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150519808 unmapped: 46424064 heap: 196943872 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 360 heartbeat osd_stat(store_statfs(0x4f5dd8000/0x0/0x4ffc00000, data 0x3347254/0x3536000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/413808923' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:53.347230+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f6800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 360 ms_handle_reset con 0x5609517f6800 session 0x56094f3f61e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2541289 data_alloc: 218103808 data_used: 8622080
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150519808 unmapped: 46424064 heap: 196943872 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:54.347401+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150519808 unmapped: 46424064 heap: 196943872 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:55.347650+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150519808 unmapped: 46424064 heap: 196943872 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.351531982s of 10.709509850s, submitted: 40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:56.347846+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 361 handle_osd_map epochs [361,361], i have 361, src has [1,361]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519af800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 361 ms_handle_reset con 0x5609519af800 session 0x56095183b680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095226c400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150536192 unmapped: 46407680 heap: 196943872 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 361 heartbeat osd_stat(store_statfs(0x4f5dd3000/0x0/0x4ffc00000, data 0x3348de1/0x353a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,0,1,6])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 361 ms_handle_reset con 0x56095226c400 session 0x560950de34a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609543d4800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:57.348028+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 361 ms_handle_reset con 0x5609543d4800 session 0x5609515d34a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952401c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 362 ms_handle_reset con 0x560952401c00 session 0x56094f43ed20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c57800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 362 ms_handle_reset con 0x560951c57800 session 0x56094f60cd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 362 ms_handle_reset con 0x560952400400 session 0x5609500c7860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 151281664 unmapped: 45662208 heap: 196943872 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:58.348229+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2642813 data_alloc: 218103808 data_used: 8642560
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954164000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 362 ms_handle_reset con 0x560954164000 session 0x560950de1c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fd400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 151281664 unmapped: 45662208 heap: 196943872 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609544e6c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:59.348487+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f94ac00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 362 ms_handle_reset con 0x56094f94ac00 session 0x5609512072c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150716416 unmapped: 63029248 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:00.348663+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095386e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 362 ms_handle_reset con 0x56095386e800 session 0x560951207680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 154886144 unmapped: 58859520 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 362 ms_handle_reset con 0x560952400c00 session 0x560951207e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:01.348780+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f94ac00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 54583296 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:02.348963+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 362 heartbeat osd_stat(store_statfs(0x4ec397000/0x0/0x4ffc00000, data 0xcd80f23/0xcf77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163487744 unmapped: 50257920 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:03.349125+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4402875 data_alloc: 218103808 data_used: 8142848
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 150429696 unmapped: 63315968 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 362 heartbeat osd_stat(store_statfs(0x4e5397000/0x0/0x4ffc00000, data 0x13d80f23/0x13f77000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:04.349269+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 168738816 unmapped: 45006848 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:05.349476+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.438160419s of 10.047687531s, submitted: 156
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 157745152 unmapped: 56000512 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:06.349609+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 175554560 unmapped: 38191104 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 362 ms_handle_reset con 0x5609544e6c00 session 0x56094f396000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952b33000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:07.349722+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 362 ms_handle_reset con 0x560952b33000 session 0x56095195e5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 362 ms_handle_reset con 0x5609545fd400 session 0x56094f364b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 162979840 unmapped: 50765824 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:08.349816+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5647986 data_alloc: 234881024 data_used: 27029504
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 363 ms_handle_reset con 0x560951ba5800 session 0x56094f3f72c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 363 ms_handle_reset con 0x56094f762800 session 0x56095190bc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 162996224 unmapped: 50749440 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:09.349963+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 363 ms_handle_reset con 0x560951ba5800 session 0x5609519e9e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 363 heartbeat osd_stat(store_statfs(0x4db795000/0x0/0x4ffc00000, data 0x1d9825a1/0x1db78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163004416 unmapped: 50741248 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:10.350099+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163004416 unmapped: 50741248 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:11.350287+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952b33000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609544e6c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 363 ms_handle_reset con 0x5609544e6c00 session 0x560952189680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163012608 unmapped: 50733056 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:12.350404+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fd400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609544e6000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 364 ms_handle_reset con 0x5609544e6000 session 0x56094f60c3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 364 heartbeat osd_stat(store_statfs(0x4db78f000/0x0/0x4ffc00000, data 0x1d98420e/0x1db7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163037184 unmapped: 50708480 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:13.350610+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 365 ms_handle_reset con 0x5609545fd400 session 0x5609519e92c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953839000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 365 ms_handle_reset con 0x560953839000 session 0x56094f43f4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 365 ms_handle_reset con 0x560952b33000 session 0x56094f396b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5665348 data_alloc: 234881024 data_used: 27049984
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 163078144 unmapped: 50667520 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:14.350771+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 167616512 unmapped: 46129152 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:15.350921+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095321c800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 365 ms_handle_reset con 0x56095321c800 session 0x56094f77c5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519ae400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519af800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 366 ms_handle_reset con 0x5609519af800 session 0x5609512241e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 366 heartbeat osd_stat(store_statfs(0x4daf2b000/0x0/0x4ffc00000, data 0x1e1dee31/0x1e3dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.574152946s of 10.045818329s, submitted: 93
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 166969344 unmapped: 46776320 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 367 ms_handle_reset con 0x5609519ae400 session 0x56094f43e5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 367 ms_handle_reset con 0x560950484800 session 0x56095195fc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:16.351047+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 367 heartbeat osd_stat(store_statfs(0x4daf0a000/0x0/0x4ffc00000, data 0x1e1f59e6/0x1e3f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 167698432 unmapped: 46047232 heap: 213745664 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:17.351148+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519ae400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 367 ms_handle_reset con 0x5609519ae400 session 0x56095183a000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519af800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205832192 unmapped: 20520960 heap: 226353152 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:18.351307+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6148988 data_alloc: 251658240 data_used: 29487104
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 180944896 unmapped: 49610752 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:19.351477+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 367 heartbeat osd_stat(store_statfs(0x4d4c72000/0x0/0x4ffc00000, data 0x2449d9e6/0x2469c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [0,2] op hist [0,0,0,0,0,1,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189112320 unmapped: 41443328 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:20.351660+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185573376 unmapped: 44982272 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:21.351849+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186998784 unmapped: 43556864 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:22.352058+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 178757632 unmapped: 51798016 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:23.353194+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7612600 data_alloc: 251658240 data_used: 29720576
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 47022080 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:24.353353+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 367 heartbeat osd_stat(store_statfs(0x4c9c26000/0x0/0x4ffc00000, data 0x2f0d99e6/0x2f2d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,5])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c57c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188153856 unmapped: 42401792 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:25.353531+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 368 ms_handle_reset con 0x560951c57c00 session 0x5609515d3c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.663234711s of 10.039048195s, submitted: 537
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 181444608 unmapped: 49111040 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 368 heartbeat osd_stat(store_statfs(0x4c43fb000/0x0/0x4ffc00000, data 0x34900591/0x34b02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:26.353715+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 369 ms_handle_reset con 0x5609519af800 session 0x560950de10e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 369 ms_handle_reset con 0x560951ba5000 session 0x560951225860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 369 ms_handle_reset con 0x560950484800 session 0x5609515e3c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 181829632 unmapped: 48726016 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:27.353855+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 181829632 unmapped: 48726016 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519ae400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:28.354087+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 370 ms_handle_reset con 0x5609519ae400 session 0x560951228f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8562286 data_alloc: 251658240 data_used: 29745152
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 181846016 unmapped: 48709632 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:29.354306+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 371 ms_handle_reset con 0x560951c59800 session 0x5609515e23c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c34000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 371 ms_handle_reset con 0x560951c34000 session 0x56095183ba40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 182935552 unmapped: 47620096 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:30.354493+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 182943744 unmapped: 47611904 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:31.354678+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 371 heartbeat osd_stat(store_statfs(0x4c1ff5000/0x0/0x4ffc00000, data 0x36d0534d/0x36f08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 182951936 unmapped: 47603712 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:32.354848+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f31400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 371 ms_handle_reset con 0x560951f31400 session 0x560952189a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609525cc000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 371 ms_handle_reset con 0x5609525cc000 session 0x560950de1c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 47063040 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:33.355041+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8562042 data_alloc: 251658240 data_used: 30793728
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 47063040 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:34.355279+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 371 ms_handle_reset con 0x56094f7b1c00 session 0x560950de3a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953839000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609543d5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 47063040 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:35.355517+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 371 ms_handle_reset con 0x5609543d5000 session 0x560952189680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 371 ms_handle_reset con 0x560953839000 session 0x5609519e92c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 47054848 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.848671913s of 10.230717659s, submitted: 90
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:36.355649+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 372 heartbeat osd_stat(store_statfs(0x4c1ff6000/0x0/0x4ffc00000, data 0x36d0534d/0x36f08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f30800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 372 ms_handle_reset con 0x560951f30800 session 0x56095183b860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950485400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183508992 unmapped: 47046656 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 372 heartbeat osd_stat(store_statfs(0x4c1fef000/0x0/0x4ffc00000, data 0x36d09db0/0x36f0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:37.355854+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 372 ms_handle_reset con 0x560950485400 session 0x5609512061e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187269120 unmapped: 43286528 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 372 ms_handle_reset con 0x56094f7b1c00 session 0x560950de14a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:38.355999+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f30800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 372 ms_handle_reset con 0x560951f30800 session 0x5609516f9e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8607682 data_alloc: 251658240 data_used: 30801920
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 372 heartbeat osd_stat(store_statfs(0x4c1c6a000/0x0/0x4ffc00000, data 0x3708ee12/0x37294000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183230464 unmapped: 47325184 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:39.356123+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954599800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 372 ms_handle_reset con 0x560954599800 session 0x56094f365680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095225e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba7000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 372 ms_handle_reset con 0x560951ba7000 session 0x56094f8d4b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183230464 unmapped: 47325184 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:40.356270+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f7000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 373 ms_handle_reset con 0x5609517f7000 session 0x56095121ef00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b1c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 373 ms_handle_reset con 0x56094f7b1c00 session 0x5609519aab40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba7000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190046208 unmapped: 40509440 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:41.356466+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 373 heartbeat osd_stat(store_statfs(0x4c13fd000/0x0/0x4ffc00000, data 0x378f999f/0x37b01000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f30800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 373 ms_handle_reset con 0x560951f30800 session 0x5609516f8000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183271424 unmapped: 47284224 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 373 ms_handle_reset con 0x560951ba7000 session 0x5609519ab0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:42.356614+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 373 ms_handle_reset con 0x560953ad9c00 session 0x5609516f85a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954599800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954598000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 374 ms_handle_reset con 0x56095225e800 session 0x5609515d25a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183304192 unmapped: 47251456 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:43.356761+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8690733 data_alloc: 251658240 data_used: 30957568
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184172544 unmapped: 46383104 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:44.356868+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184393728 unmapped: 46161920 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:45.357000+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095321cc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 374 heartbeat osd_stat(store_statfs(0x4c13f2000/0x0/0x4ffc00000, data 0x379005b1/0x37b0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184442880 unmapped: 46112768 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:46.357124+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184442880 unmapped: 46112768 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:47.357237+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.812884331s of 11.150787354s, submitted: 95
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184180736 unmapped: 46374912 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:48.357352+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8681372 data_alloc: 251658240 data_used: 31477760
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184180736 unmapped: 46374912 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:49.357547+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5000
Nov 22 04:25:18 compute-0 ceph-mon[75011]: pgmap v2334: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:18 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1557668744' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Nov 22 04:25:18 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/587679225' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Nov 22 04:25:18 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2632540742' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184180736 unmapped: 46374912 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1933185865' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:50.357752+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 374 heartbeat osd_stat(store_statfs(0x4c13d7000/0x0/0x4ffc00000, data 0x3791b5b1/0x37b27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,3])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 374 heartbeat osd_stat(store_statfs(0x4c13d6000/0x0/0x4ffc00000, data 0x3791c5b1/0x37b28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,3,0,3])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184229888 unmapped: 46325760 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:51.357951+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184229888 unmapped: 46325760 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:52.358215+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184238080 unmapped: 46317568 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:53.358480+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8686954 data_alloc: 251658240 data_used: 31490048
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 46301184 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:54.358639+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 375 heartbeat osd_stat(store_statfs(0x4c13c5000/0x0/0x4ffc00000, data 0x3792b12e/0x37b38000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,2,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184287232 unmapped: 46268416 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:55.359617+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184287232 unmapped: 46268416 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:56.359792+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186155008 unmapped: 44400640 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:57.359931+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 375 ms_handle_reset con 0x56094f7b5000 session 0x56095015d4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.752749920s of 10.267893791s, submitted: 24
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185139200 unmapped: 45416448 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:58.360119+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8708939 data_alloc: 251658240 data_used: 31645696
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184385536 unmapped: 46170112 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:59.360302+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 375 heartbeat osd_stat(store_statfs(0x4c1159000/0x0/0x4ffc00000, data 0x37b97190/0x37da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184385536 unmapped: 46170112 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:00.360439+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 375 heartbeat osd_stat(store_statfs(0x4c1159000/0x0/0x4ffc00000, data 0x37b97190/0x37da5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184459264 unmapped: 46096384 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:01.360588+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184459264 unmapped: 46096384 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:02.360759+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 375 heartbeat osd_stat(store_statfs(0x4c1157000/0x0/0x4ffc00000, data 0x37b98190/0x37da6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184459264 unmapped: 46096384 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:03.360925+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8737867 data_alloc: 251658240 data_used: 32120832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185581568 unmapped: 44974080 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:04.361073+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185581568 unmapped: 44974080 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:05.361279+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fdc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 375 ms_handle_reset con 0x5609545fdc00 session 0x56094f3f7c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185778176 unmapped: 44777472 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609543d5800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:06.361405+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 375 ms_handle_reset con 0x56094f84e800 session 0x5609504b05a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952b33400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 375 ms_handle_reset con 0x560952b33400 session 0x5609504b1e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 375 ms_handle_reset con 0x560951ba2400 session 0x56094f396f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 375 heartbeat osd_stat(store_statfs(0x4c1139000/0x0/0x4ffc00000, data 0x37be4190/0x37dc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185794560 unmapped: 44761088 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:07.361640+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.308856964s of 10.146277428s, submitted: 33
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 heartbeat osd_stat(store_statfs(0x4c1138000/0x0/0x4ffc00000, data 0x37be41a0/0x37dc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 ms_handle_reset con 0x56094f7b5000 session 0x56094f397e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185942016 unmapped: 44613632 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 ms_handle_reset con 0x56094f84e800 session 0x5609519e85a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:08.361786+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952b33400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 ms_handle_reset con 0x560952b33400 session 0x560951226b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fdc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 ms_handle_reset con 0x5609543d5800 session 0x56094f778960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8790971 data_alloc: 251658240 data_used: 32509952
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 ms_handle_reset con 0x5609545fdc00 session 0x56094f396f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 ms_handle_reset con 0x56094f7b5000 session 0x560951229c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185942016 unmapped: 44613632 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:09.361916+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 ms_handle_reset con 0x56094f84e800 session 0x56095195f4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185942016 unmapped: 44613632 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:10.362028+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952b33400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 ms_handle_reset con 0x560952b33400 session 0x560953c90960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 44597248 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:11.362133+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 heartbeat osd_stat(store_statfs(0x4c0ad5000/0x0/0x4ffc00000, data 0x38245d1d/0x38429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:12.362269+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 44597248 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:13.362490+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 44597248 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8789427 data_alloc: 251658240 data_used: 32514048
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:14.362662+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 44597248 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:15.362858+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 44597248 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:16.363048+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 44597248 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 heartbeat osd_stat(store_statfs(0x4c0ad5000/0x0/0x4ffc00000, data 0x38245d1d/0x38429000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:17.363213+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 44597248 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:18.363332+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185958400 unmapped: 44597248 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 heartbeat osd_stat(store_statfs(0x4c0ad2000/0x0/0x4ffc00000, data 0x38248d1d/0x3842c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8789327 data_alloc: 251658240 data_used: 32514048
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 ms_handle_reset con 0x560951c59400 session 0x5609504b05a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:19.363489+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.535270691s of 11.482724190s, submitted: 40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186007552 unmapped: 44548096 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 ms_handle_reset con 0x560954599800 session 0x560951229e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 376 ms_handle_reset con 0x56094f8f8800 session 0x56094f3f7c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 377 ms_handle_reset con 0x560954598000 session 0x56094f77da40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 377 ms_handle_reset con 0x560950484400 session 0x560951224b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952b33400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:20.363612+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186392576 unmapped: 44163072 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 377 ms_handle_reset con 0x560951c59400 session 0x560951226d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 377 ms_handle_reset con 0x56095321cc00 session 0x560952188960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:21.363728+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186392576 unmapped: 44163072 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 377 heartbeat osd_stat(store_statfs(0x4c0aa5000/0x0/0x4ffc00000, data 0x382748ec/0x38459000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:22.363832+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191062016 unmapped: 39493632 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 377 ms_handle_reset con 0x56094f8f8800 session 0x5609515d34a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954598000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 377 ms_handle_reset con 0x560950484400 session 0x56094f392d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 377 ms_handle_reset con 0x560954598000 session 0x56095053a3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 377 ms_handle_reset con 0x560951c59400 session 0x5609519aad20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:23.363984+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191086592 unmapped: 39469056 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8824925 data_alloc: 251658240 data_used: 39120896
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:24.364583+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095321d000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191119360 unmapped: 39436288 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 377 ms_handle_reset con 0x56095321d000 session 0x56095195fc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:25.364777+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191119360 unmapped: 39436288 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 377 handle_osd_map epochs [377,378], i have 377, src has [1,378]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 377 handle_osd_map epochs [378,378], i have 378, src has [1,378]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 378 ms_handle_reset con 0x56094f8f8800 session 0x56094f8d4b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 378 heartbeat osd_stat(store_statfs(0x4c0d10000/0x0/0x4ffc00000, data 0x3800992b/0x381ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:26.364926+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191119360 unmapped: 39436288 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 378 ms_handle_reset con 0x560952400c00 session 0x56094f60d2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 378 ms_handle_reset con 0x56094f94ac00 session 0x56094f77c780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:27.365062+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 378 ms_handle_reset con 0x560951c59400 session 0x5609512281e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191143936 unmapped: 39411712 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:28.365229+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191168512 unmapped: 39387136 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 8823939 data_alloc: 251658240 data_used: 39153664
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:29.365364+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191193088 unmapped: 39362560 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.110128403s of 10.147725105s, submitted: 102
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 378 ms_handle_reset con 0x560950484400 session 0x56094f60d860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 378 ms_handle_reset con 0x56094f8f8800 session 0x56095053be00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f94ac00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 378 ms_handle_reset con 0x56094f94ac00 session 0x56094f778b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:30.365565+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183115776 unmapped: 47439872 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 378 ms_handle_reset con 0x560951c59400 session 0x56094f392780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 378 ms_handle_reset con 0x560950484400 session 0x56094f8d52c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:31.365725+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 378 heartbeat osd_stat(store_statfs(0x4c221e000/0x0/0x4ffc00000, data 0x36acd54e/0x36ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183123968 unmapped: 47431680 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:32.365921+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183123968 unmapped: 47431680 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 379 heartbeat osd_stat(store_statfs(0x4c221a000/0x0/0x4ffc00000, data 0x36acefb1/0x36ce3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:33.366198+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190291968 unmapped: 40263680 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8751153 data_alloc: 234881024 data_used: 27168768
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:34.366352+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190431232 unmapped: 40124416 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 380 ms_handle_reset con 0x560952400c00 session 0x56094f60d680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 380 heartbeat osd_stat(store_statfs(0x4c1118000/0x0/0x4ffc00000, data 0x37f34fb1/0x37de3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,9,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:35.366533+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189562880 unmapped: 40992768 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:36.366671+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189546496 unmapped: 41009152 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 380 ms_handle_reset con 0x560951c59c00 session 0x56095053ab40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:37.366796+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189546496 unmapped: 41009152 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:38.367004+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189546496 unmapped: 41009152 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954150400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8762681 data_alloc: 234881024 data_used: 26693632
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 380 ms_handle_reset con 0x560950484000 session 0x5609516f8b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 380 ms_handle_reset con 0x560954150400 session 0x560950de0d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:39.367165+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189554688 unmapped: 41000960 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.204234123s of 10.386863708s, submitted: 296
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 380 heartbeat osd_stat(store_statfs(0x4c10ce000/0x0/0x4ffc00000, data 0x37f82b54/0x37e30000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:40.367374+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189554688 unmapped: 41000960 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609525ccc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 380 ms_handle_reset con 0x5609525ccc00 session 0x560953c905a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad9400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 380 ms_handle_reset con 0x560953ad9400 session 0x56095190ba40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:41.367526+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188841984 unmapped: 41713664 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:42.367708+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188841984 unmapped: 41713664 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 ms_handle_reset con 0x560951c59c00 session 0x5609516f8b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:43.367932+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188850176 unmapped: 41705472 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8775216 data_alloc: 234881024 data_used: 26710016
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:44.368070+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189153280 unmapped: 41402368 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609543d4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 ms_handle_reset con 0x560950484000 session 0x56095122d2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 ms_handle_reset con 0x5609543d4000 session 0x56094f778b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950510400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:45.368225+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189161472 unmapped: 41394176 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 ms_handle_reset con 0x560950510400 session 0x56094f60d860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba6400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 heartbeat osd_stat(store_statfs(0x4c103d000/0x0/0x4ffc00000, data 0x38010619/0x37ec0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,0,1,0,0,2])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 ms_handle_reset con 0x560951ba6400 session 0x56094f8d4b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 ms_handle_reset con 0x560950484000 session 0x560952188960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:46.368376+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188882944 unmapped: 41672704 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 heartbeat osd_stat(store_statfs(0x4c098f000/0x0/0x4ffc00000, data 0x386c05b7/0x3856f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950510400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 ms_handle_reset con 0x560950510400 session 0x560951229e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 ms_handle_reset con 0x560951c59c00 session 0x56095195f4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:47.368566+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188899328 unmapped: 41656320 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 heartbeat osd_stat(store_statfs(0x4c0a18000/0x0/0x4ffc00000, data 0x386375b7/0x384e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609543d4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 ms_handle_reset con 0x5609543d4000 session 0x56094f896d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba6800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:48.368716+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188915712 unmapped: 41639936 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 ms_handle_reset con 0x560951ba6800 session 0x560950de1e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8819711 data_alloc: 234881024 data_used: 26705920
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:49.368846+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188915712 unmapped: 41639936 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c58c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 ms_handle_reset con 0x560951c58c00 session 0x56095183b860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.464101791s of 10.058191299s, submitted: 91
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 ms_handle_reset con 0x56094f8f8800 session 0x56094f43e960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:50.369005+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188923904 unmapped: 41631744 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 ms_handle_reset con 0x56094f84e000 session 0x56094f60cf00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 heartbeat osd_stat(store_statfs(0x4c0a24000/0x0/0x4ffc00000, data 0x3862b4f3/0x384d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad9400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:51.369140+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188932096 unmapped: 41623552 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 382 ms_handle_reset con 0x560953ad9400 session 0x56095183ab40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095048b000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:52.369302+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 382 ms_handle_reset con 0x56095048b000 session 0x56094f60da40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188948480 unmapped: 41607168 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 383 ms_handle_reset con 0x56094f84e000 session 0x56095015cb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:53.369516+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188964864 unmapped: 41590784 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 383 ms_handle_reset con 0x56094f8f8800 session 0x560951220780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c58c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8825032 data_alloc: 234881024 data_used: 26701824
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:54.369677+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188973056 unmapped: 41582592 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 383 heartbeat osd_stat(store_statfs(0x4c0a20000/0x0/0x4ffc00000, data 0x3862ebd1/0x384dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 384 handle_osd_map epochs [384,384], i have 384, src has [1,384]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 384 ms_handle_reset con 0x560951c58c00 session 0x560951226f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 384 ms_handle_reset con 0x56094f7b5000 session 0x5609516f8000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 384 ms_handle_reset con 0x56094f84e800 session 0x560951206d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 384 ms_handle_reset con 0x560952b33400 session 0x5609519aab40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:55.369872+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 41533440 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 384 ms_handle_reset con 0x56094f7b5000 session 0x560950de10e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 384 ms_handle_reset con 0x56094f84e800 session 0x5609517c7a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:56.369990+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 384 ms_handle_reset con 0x56094f84e000 session 0x5609517c6f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185065472 unmapped: 45490176 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 384 ms_handle_reset con 0x56094f8f8800 session 0x56094f8da5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:57.371633+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185090048 unmapped: 45465600 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 384 ms_handle_reset con 0x560952400400 session 0x560951221860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 384 ms_handle_reset con 0x56094f84e000 session 0x560951221e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:58.371782+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176693248 unmapped: 53862400 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 384 heartbeat osd_stat(store_statfs(0x4c2e3a000/0x0/0x4ffc00000, data 0x362176e1/0x360c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8420587 data_alloc: 218103808 data_used: 8450048
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:59.372041+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176693248 unmapped: 53862400 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fcc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f2f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.113045692s of 10.135205269s, submitted: 150
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 385 ms_handle_reset con 0x5609545fcc00 session 0x56094f8d4780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:00.372233+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 385 heartbeat osd_stat(store_statfs(0x4c2280000/0x0/0x4ffc00000, data 0x376352dc/0x36c7c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 176971776 unmapped: 53583872 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 385 handle_osd_map epochs [385,386], i have 385, src has [1,386]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 386 ms_handle_reset con 0x560951f2f000 session 0x56095195e960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 387 ms_handle_reset con 0x56094f7b3000 session 0x56095015dc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 387 ms_handle_reset con 0x560950484800 session 0x56094f896b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:01.372377+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 177037312 unmapped: 53518336 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:02.372530+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 177037312 unmapped: 53518336 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 387 handle_osd_map epochs [387,388], i have 387, src has [1,388]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 388 ms_handle_reset con 0x56094f7b3000 session 0x5609517ad4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 388 ms_handle_reset con 0x56094f84e000 session 0x56095190a5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:03.372648+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 177053696 unmapped: 53501952 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8642647 data_alloc: 234881024 data_used: 15355904
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:04.372788+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 177152000 unmapped: 53403648 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:05.373013+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 177152000 unmapped: 53403648 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 388 heartbeat osd_stat(store_statfs(0x4c2274000/0x0/0x4ffc00000, data 0x3763a4a9/0x36c85000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 388 handle_osd_map epochs [389,389], i have 389, src has [1,389]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:06.373140+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7ce000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 389 ms_handle_reset con 0x56094f7ce000 session 0x5609521883c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 177242112 unmapped: 53313536 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 389 heartbeat osd_stat(store_statfs(0x4c2eff000/0x0/0x4ffc00000, data 0x3614b07a/0x35ffe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:07.373301+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 177250304 unmapped: 53305344 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7cec00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 389 ms_handle_reset con 0x56094f7cec00 session 0x56094f3654a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:08.373766+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 389 heartbeat osd_stat(store_statfs(0x4c3693000/0x0/0x4ffc00000, data 0x359b708a/0x3586b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 177250304 unmapped: 53305344 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8440839 data_alloc: 234881024 data_used: 15360000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f6400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:09.373974+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 53288960 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 390 ms_handle_reset con 0x5609517f6400 session 0x5609515e3c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:10.374110+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 53288960 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954150400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.272590637s of 10.755624771s, submitted: 133
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 390 ms_handle_reset con 0x5609505ff800 session 0x5609512281e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 390 ms_handle_reset con 0x560954150400 session 0x5609519c4960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:11.374260+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 181862400 unmapped: 48693248 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:12.375005+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 181960704 unmapped: 48594944 heap: 230555648 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504db400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:13.375168+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 182181888 unmapped: 90349568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 9087345 data_alloc: 234881024 data_used: 17121280
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:14.375327+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 heartbeat osd_stat(store_statfs(0x4c006b000/0x0/0x4ffc00000, data 0x38fdb6e8/0x38e93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [0,2] op hist [0,0,0,0,1,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 183525376 unmapped: 89006080 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:15.375576+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 heartbeat osd_stat(store_statfs(0x4b9ecb000/0x0/0x4ffc00000, data 0x3dfdb6e8/0x3de93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,2] op hist [0,0,0,0,1,2])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 192290816 unmapped: 80240640 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:16.375724+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184434688 unmapped: 88096768 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:17.375876+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184762368 unmapped: 87769088 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:18.375991+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193306624 unmapped: 79224832 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 heartbeat osd_stat(store_statfs(0x4b42cb000/0x0/0x4ffc00000, data 0x43bdb6e8/0x43a93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,2] op hist [0,0,0,0,0,1,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 10161441 data_alloc: 234881024 data_used: 17121280
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:19.376130+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186310656 unmapped: 86220800 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:20.376270+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186548224 unmapped: 85983232 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.453081369s of 10.010639191s, submitted: 372
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 ms_handle_reset con 0x56094f7b5000 session 0x560950de2960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0e400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:21.376486+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191234048 unmapped: 81297408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 heartbeat osd_stat(store_statfs(0x4adecb000/0x0/0x4ffc00000, data 0x49fdb6e8/0x49e93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7e9f9c6), peers [0,2] op hist [0,0,0,0,0,0,4])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 ms_handle_reset con 0x5609504db400 session 0x56094f43ed20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 ms_handle_reset con 0x560957a0e400 session 0x560951220f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 ms_handle_reset con 0x56094f7b5c00 session 0x5609517ada40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:22.376661+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 ms_handle_reset con 0x56094f7b5000 session 0x5609504b14a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 ms_handle_reset con 0x5609505ff800 session 0x5609519e8780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187277312 unmapped: 85254144 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:23.376814+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f6400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187277312 unmapped: 85254144 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8792617 data_alloc: 234881024 data_used: 17145856
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 ms_handle_reset con 0x5609517f6400 session 0x56094f60d680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:24.376992+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 ms_handle_reset con 0x56094f7b5000 session 0x56095190bc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188456960 unmapped: 84074496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 391 handle_osd_map epochs [392,392], i have 392, src has [1,392]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 392 ms_handle_reset con 0x56094f7b5c00 session 0x560950de1c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:25.377210+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188473344 unmapped: 84058112 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 392 ms_handle_reset con 0x5609505ff800 session 0x56095121eb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0e400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 392 ms_handle_reset con 0x560957a0e400 session 0x56094f43eb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:26.377379+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188489728 unmapped: 84041728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095225f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 392 heartbeat osd_stat(store_statfs(0x4c1ab9000/0x0/0x4ffc00000, data 0x35fdd224/0x35e93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,2] op hist [0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:27.377492+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 392 heartbeat osd_stat(store_statfs(0x4c1ab9000/0x0/0x4ffc00000, data 0x35fdd224/0x35e93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 84033536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 393 heartbeat osd_stat(store_statfs(0x4c1ab9000/0x0/0x4ffc00000, data 0x35fdd224/0x35e93000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:28.377625+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 393 heartbeat osd_stat(store_statfs(0x4c1ad7000/0x0/0x4ffc00000, data 0x35c59da3/0x35e76000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188522496 unmapped: 84008960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 393 handle_osd_map epochs [394,394], i have 394, src has [1,394]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 394 ms_handle_reset con 0x56095225f000 session 0x56094f60da40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8585921 data_alloc: 234881024 data_used: 17125376
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:29.377796+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 394 ms_handle_reset con 0x56094f7b5000 session 0x5609519aad20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186392576 unmapped: 86138880 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:30.377985+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186392576 unmapped: 86138880 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.266202450s of 10.036149025s, submitted: 291
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:31.378176+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186408960 unmapped: 86122496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 396 heartbeat osd_stat(store_statfs(0x4c2f75000/0x0/0x4ffc00000, data 0x347ba54b/0x349d8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x82af9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 396 ms_handle_reset con 0x56094f7b5c00 session 0x560953c905a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fd000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:32.378298+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186908672 unmapped: 85622784 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 396 ms_handle_reset con 0x5609505ff800 session 0x560951229e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:33.378409+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0e400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185049088 unmapped: 87482368 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5869290 data_alloc: 234881024 data_used: 14925824
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:34.378621+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 396 handle_osd_map epochs [396,397], i have 396, src has [1,397]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 396 handle_osd_map epochs [397,397], i have 397, src has [1,397]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 397 ms_handle_reset con 0x560957a0e400 session 0x560952188000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505fe400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185090048 unmapped: 87441408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:35.378812+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185122816 unmapped: 87408640 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 397 ms_handle_reset con 0x5609505fe400 session 0x560951226780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:36.378973+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 87375872 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 397 heartbeat osd_stat(store_statfs(0x4f4bb1000/0x0/0x4ffc00000, data 0x33bdcbb/0x35dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 397 handle_osd_map epochs [398,398], i have 398, src has [1,398]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:37.379107+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 87375872 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:38.379345+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 87375872 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3096639 data_alloc: 234881024 data_used: 14921728
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:39.379580+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 87375872 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:40.379711+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185155584 unmapped: 87375872 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:41.379867+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.543312073s of 10.524272919s, submitted: 294
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f53ae000/0x0/0x4ffc00000, data 0x33bf74a/0x35df000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185163776 unmapped: 87367680 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:42.380019+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185163776 unmapped: 87367680 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:43.380150+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185163776 unmapped: 87367680 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3103657 data_alloc: 234881024 data_used: 15171584
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:44.380314+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f53ac000/0x0/0x4ffc00000, data 0x33c11cd/0x35e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185171968 unmapped: 87359488 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f53ac000/0x0/0x4ffc00000, data 0x33c11cd/0x35e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:45.380510+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185171968 unmapped: 87359488 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:46.380668+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185171968 unmapped: 87359488 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:47.380857+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185171968 unmapped: 87359488 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:48.381109+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185171968 unmapped: 87359488 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3109577 data_alloc: 234881024 data_used: 15839232
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:49.381504+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950511800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 399 ms_handle_reset con 0x560950511800 session 0x56094f896780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185188352 unmapped: 87343104 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 399 heartbeat osd_stat(store_statfs(0x4f53ab000/0x0/0x4ffc00000, data 0x33c11dd/0x35e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:50.381703+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095226dc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185188352 unmapped: 87343104 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:51.381867+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.040049553s of 10.123568535s, submitted: 24
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 400 ms_handle_reset con 0x56095226dc00 session 0x5609512252c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185188352 unmapped: 87343104 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:52.382015+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095240d400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954165000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 400 ms_handle_reset con 0x560954165000 session 0x56095195f4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 400 ms_handle_reset con 0x56095240d400 session 0x56094f77d680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 87334912 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 400 heartbeat osd_stat(store_statfs(0x4f53a5000/0x0/0x4ffc00000, data 0x33c2e2e/0x35e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:53.382150+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c57400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 400 handle_osd_map epochs [400,401], i have 400, src has [1,401]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186253312 unmapped: 86278144 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 401 ms_handle_reset con 0x560951c57400 session 0x56094f8d52c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3126764 data_alloc: 234881024 data_used: 15859712
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 401 heartbeat osd_stat(store_statfs(0x4f53a1000/0x0/0x4ffc00000, data 0x33c49ab/0x35ec000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:54.382330+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186261504 unmapped: 86269952 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f30800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f30400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:55.382522+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 401 ms_handle_reset con 0x560951f30400 session 0x560953c91680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 401 ms_handle_reset con 0x560951f30800 session 0x56095121f2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186269696 unmapped: 86261760 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:56.382701+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504ddc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186261504 unmapped: 86269952 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:57.382855+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 401 handle_osd_map epochs [401,402], i have 401, src has [1,402]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 401 handle_osd_map epochs [402,402], i have 402, src has [1,402]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 402 ms_handle_reset con 0x5609504ddc00 session 0x56094f3923c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f53a1000/0x0/0x4ffc00000, data 0x33c49bb/0x35ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186310656 unmapped: 86220800 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:58.382989+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186318848 unmapped: 86212608 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0f400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 402 ms_handle_reset con 0x560957a0f400 session 0x560951220f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 402 ms_handle_reset con 0x56094f7b3c00 session 0x5609515e3c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3132183 data_alloc: 234881024 data_used: 15859712
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:59.383196+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 86163456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 402 ms_handle_reset con 0x56094f7b3c00 session 0x56094f896b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:00.383378+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504ddc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 402 ms_handle_reset con 0x5609504ddc00 session 0x5609519aa960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186368000 unmapped: 86163456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:01.383566+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f30400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.951952934s of 10.225536346s, submitted: 66
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 402 ms_handle_reset con 0x560951f30400 session 0x560953c91860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f30800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186384384 unmapped: 86147072 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 402 ms_handle_reset con 0x560951f30800 session 0x56094f60c780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 402 heartbeat osd_stat(store_statfs(0x4f539e000/0x0/0x4ffc00000, data 0x33c6538/0x35f0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:02.383716+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186392576 unmapped: 86138880 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0f400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 402 ms_handle_reset con 0x560957a0f400 session 0x560950de01e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504ddc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 402 ms_handle_reset con 0x5609504ddc00 session 0x560951221a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f30400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:03.383870+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187465728 unmapped: 85065728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 403 ms_handle_reset con 0x560951f30400 session 0x5609517c6000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3134555 data_alloc: 234881024 data_used: 15872000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:04.384033+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 403 ms_handle_reset con 0x56094f7b3c00 session 0x5609519aad20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187482112 unmapped: 85049344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:05.384133+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 403 ms_handle_reset con 0x56094f7b2400 session 0x5609517c6d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7ce000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 403 ms_handle_reset con 0x56094f7ce000 session 0x5609512212c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187564032 unmapped: 84967424 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 404 ms_handle_reset con 0x56094f7b2400 session 0x5609512292c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:06.384238+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 404 ms_handle_reset con 0x560957a0f000 session 0x56095015d680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187645952 unmapped: 84885504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 404 heartbeat osd_stat(store_statfs(0x4f539b000/0x0/0x4ffc00000, data 0x33c9be6/0x35f1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:07.384371+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187662336 unmapped: 84869120 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 404 ms_handle_reset con 0x56094f7b3c00 session 0x56094f392960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504ddc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:08.384543+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 404 ms_handle_reset con 0x5609504ddc00 session 0x56095015dc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187695104 unmapped: 84836352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3132833 data_alloc: 234881024 data_used: 15859712
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:09.384686+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ed7c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 404 ms_handle_reset con 0x560953ed7c00 session 0x56094f43eb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187703296 unmapped: 84828160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:10.384842+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 24K writes, 102K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 24K writes, 8410 syncs, 2.93 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 13K writes, 56K keys, 13K commit groups, 1.0 writes per commit group, ingest: 36.91 MB, 0.06 MB/s
                                           Interval WAL: 13K writes, 5413 syncs, 2.47 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187711488 unmapped: 84819968 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 405 ms_handle_reset con 0x56094f7b2400 session 0x56095190b2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:11.385002+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187703296 unmapped: 84828160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 405 heartbeat osd_stat(store_statfs(0x4f539c000/0x0/0x4ffc00000, data 0x33c9c48/0x35f2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.455002785s of 10.280631065s, submitted: 137
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 405 handle_osd_map epochs [406,406], i have 406, src has [1,406]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 406 ms_handle_reset con 0x56094f7b3c00 session 0x560951221e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504ddc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:12.385138+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ed7c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 406 ms_handle_reset con 0x560953ed7c00 session 0x56094f43e780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 406 ms_handle_reset con 0x5609504ddc00 session 0x56094f778780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187736064 unmapped: 84795392 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:13.385265+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f6c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187768832 unmapped: 84762624 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 406 ms_handle_reset con 0x56094f762000 session 0x560951225c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 406 ms_handle_reset con 0x5609517f6c00 session 0x560951221860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 407 ms_handle_reset con 0x560957a0f000 session 0x56094f8d50e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3150804 data_alloc: 234881024 data_used: 15896576
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:14.385458+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187785216 unmapped: 84746240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504da800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 407 ms_handle_reset con 0x56094f762000 session 0x560951fcde00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 407 ms_handle_reset con 0x5609504da800 session 0x5609519c54a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:15.385640+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187785216 unmapped: 84746240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 407 heartbeat osd_stat(store_statfs(0x4f538f000/0x0/0x4ffc00000, data 0x33cefcb/0x35fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:16.385798+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 407 ms_handle_reset con 0x56094f7b2400 session 0x560950de0b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 407 ms_handle_reset con 0x56094f762000 session 0x56094f896780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187785216 unmapped: 84746240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:17.385946+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187785216 unmapped: 84746240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:18.386127+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 407 ms_handle_reset con 0x56094f7b2400 session 0x560951225a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187785216 unmapped: 84746240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504da000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 407 ms_handle_reset con 0x5609504da000 session 0x5609504b01e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3150068 data_alloc: 234881024 data_used: 15904768
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:19.386255+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 407 ms_handle_reset con 0x5609517f9c00 session 0x5609512214a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fcc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951b92000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 407 ms_handle_reset con 0x560951b92000 session 0x560951fcd4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187785216 unmapped: 84746240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:20.386391+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187785216 unmapped: 84746240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 408 ms_handle_reset con 0x56094f762000 session 0x5609521acd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 408 ms_handle_reset con 0x5609545fcc00 session 0x56095190b4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:21.386536+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187793408 unmapped: 84738048 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 408 heartbeat osd_stat(store_statfs(0x4f538d000/0x0/0x4ffc00000, data 0x33d0b3a/0x3600000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 408 ms_handle_reset con 0x56094f7b2400 session 0x5609517ad4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504da000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 408 ms_handle_reset con 0x5609517f9c00 session 0x5609521ac5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0f800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:22.386699+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7ce800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 408 ms_handle_reset con 0x56094f7ce800 session 0x5609515e30e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187793408 unmapped: 84738048 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 408 ms_handle_reset con 0x560957a0f800 session 0x5609521ad860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.762922287s of 10.958837509s, submitted: 44
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 409 ms_handle_reset con 0x56094f762000 session 0x5609521ac780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 409 ms_handle_reset con 0x5609504da000 session 0x56094f60cf00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:23.386857+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 409 ms_handle_reset con 0x56094f7b2400 session 0x560953c91680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187801600 unmapped: 84729856 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 409 ms_handle_reset con 0x5609517f9c00 session 0x560953c905a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f538b000/0x0/0x4ffc00000, data 0x33d26b9/0x3602000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:24.386998+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3155470 data_alloc: 234881024 data_used: 15904768
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 409 ms_handle_reset con 0x56094f762000 session 0x560950de1c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504da000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187809792 unmapped: 84721664 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 409 ms_handle_reset con 0x56094f7b2400 session 0x560951226780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:25.387157+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187842560 unmapped: 84688896 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 409 ms_handle_reset con 0x5609517f9c00 session 0x56094f3652c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:26.387315+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187842560 unmapped: 84688896 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:27.387524+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187842560 unmapped: 84688896 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:28.387656+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0f800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187842560 unmapped: 84688896 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:29.387811+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156738 data_alloc: 234881024 data_used: 15912960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 409 heartbeat osd_stat(store_statfs(0x4f538b000/0x0/0x4ffc00000, data 0x33d26c9/0x3603000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 410 ms_handle_reset con 0x560957a0f800 session 0x56094f897e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187842560 unmapped: 84688896 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:30.387957+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ed7800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 410 ms_handle_reset con 0x560953ed7800 session 0x560951225a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 410 ms_handle_reset con 0x560951c59800 session 0x56095122c960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187858944 unmapped: 84672512 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:31.388128+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187867136 unmapped: 84664320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:32.388272+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 411 ms_handle_reset con 0x56094f762000 session 0x56094f8d50e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 411 heartbeat osd_stat(store_statfs(0x4f5382000/0x0/0x4ffc00000, data 0x33d5d67/0x360b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 411 ms_handle_reset con 0x56094f7b2400 session 0x56094f778780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187867136 unmapped: 84664320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 411 ms_handle_reset con 0x5609517f9c00 session 0x56094f43e780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0f800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:33.388402+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dd800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 411 ms_handle_reset con 0x5609504dd800 session 0x56094f43eb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.354811668s of 10.574703217s, submitted: 41
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187867136 unmapped: 84664320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 411 heartbeat osd_stat(store_statfs(0x4f5382000/0x0/0x4ffc00000, data 0x33d5d67/0x360b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:34.388579+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3171462 data_alloc: 234881024 data_used: 15933440
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 412 ms_handle_reset con 0x56094f762000 session 0x56094f392960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187908096 unmapped: 84623360 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 412 ms_handle_reset con 0x560957a0f800 session 0x56095190b2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:35.389097+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 412 ms_handle_reset con 0x560952400400 session 0x560950de01e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187908096 unmapped: 84623360 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:36.389416+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 412 heartbeat osd_stat(store_statfs(0x4f5380000/0x0/0x4ffc00000, data 0x33d78d6/0x360d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187908096 unmapped: 84623360 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:37.389618+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f6000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188145664 unmapped: 84385792 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:38.389975+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 413 ms_handle_reset con 0x5609517f6000 session 0x560953c91860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188219392 unmapped: 84312064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:39.390182+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197880 data_alloc: 234881024 data_used: 17190912
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 413 heartbeat osd_stat(store_statfs(0x4f5281000/0x0/0x4ffc00000, data 0x34d546f/0x370c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188219392 unmapped: 84312064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095226d400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 413 ms_handle_reset con 0x56095226d400 session 0x56095015d680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:40.390666+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188243968 unmapped: 84287488 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:41.390847+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 413 ms_handle_reset con 0x56094f762000 session 0x5609500c7e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 413 heartbeat osd_stat(store_statfs(0x4f5282000/0x0/0x4ffc00000, data 0x34d545f/0x370b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188243968 unmapped: 84287488 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:42.390974+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f6000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 413 ms_handle_reset con 0x5609517f6000 session 0x56094f60cd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188243968 unmapped: 84287488 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:43.391198+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.201793671s of 10.004981995s, submitted: 49
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 413 heartbeat osd_stat(store_statfs(0x4f5283000/0x0/0x4ffc00000, data 0x34d545f/0x370b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 84148224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:44.391345+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3196792 data_alloc: 234881024 data_used: 17190912
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 413 ms_handle_reset con 0x560952400400 session 0x56094f77d680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 84148224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:45.391564+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095225f400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 413 ms_handle_reset con 0x56095225f400 session 0x5609519c41e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba4800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188383232 unmapped: 84148224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 413 ms_handle_reset con 0x560951ba4800 session 0x560951fcc5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:46.391809+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 84131840 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:47.391948+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 84131840 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:48.392188+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 414 heartbeat osd_stat(store_statfs(0x4f5280000/0x0/0x4ffc00000, data 0x34d6eb2/0x370d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 84131840 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:49.392515+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199142 data_alloc: 234881024 data_used: 17195008
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 414 heartbeat osd_stat(store_statfs(0x4f5280000/0x0/0x4ffc00000, data 0x34d6eb2/0x370d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 84131840 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:50.392658+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 84131840 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:51.392796+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188399616 unmapped: 84131840 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:52.392952+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 84082688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:53.393107+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 414 heartbeat osd_stat(store_statfs(0x4f5281000/0x0/0x4ffc00000, data 0x34d6eb2/0x370d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 84082688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.997742653s of 10.614993095s, submitted: 43
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:54.393313+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199318 data_alloc: 234881024 data_used: 17195008
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 84082688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:55.393551+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 84082688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba2800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 414 ms_handle_reset con 0x560951ba2800 session 0x56094f77c5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:56.393740+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 84082688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:57.393934+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954598400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 414 heartbeat osd_stat(store_statfs(0x4f5281000/0x0/0x4ffc00000, data 0x34d6eb2/0x370d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 84082688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 414 ms_handle_reset con 0x560954598400 session 0x560952189e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:58.394109+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 84082688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:59.394214+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3199516 data_alloc: 234881024 data_used: 17195008
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dc000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 415 ms_handle_reset con 0x5609504dc000 session 0x560951243860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 84082688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:00.394407+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f2e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 84082688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 415 ms_handle_reset con 0x560951f2e800 session 0x56094f8970e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:01.394590+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f527c000/0x0/0x4ffc00000, data 0x34d8a91/0x3711000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 84082688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:02.394750+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 84082688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:03.394948+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 416 ms_handle_reset con 0x56094f7b5000 session 0x560951221860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188448768 unmapped: 84082688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:04.395121+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3206823 data_alloc: 234881024 data_used: 17211392
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 416 ms_handle_reset con 0x560951ba3400 session 0x5609519c54a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954151800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.776989937s of 10.735375404s, submitted: 24
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 84066304 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:05.395299+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 416 ms_handle_reset con 0x5609504da000 session 0x5609521adc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 84066304 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 416 heartbeat osd_stat(store_statfs(0x4f527a000/0x0/0x4ffc00000, data 0x34da662/0x3714000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:06.395475+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 84066304 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:07.395661+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 84066304 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:08.395804+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 416 ms_handle_reset con 0x560954151800 session 0x5609515d34a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 84066304 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0e400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:09.395952+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 416 ms_handle_reset con 0x560957a0e400 session 0x56095053b4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504db800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3205231 data_alloc: 234881024 data_used: 17211392
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 416 ms_handle_reset con 0x560952400400 session 0x56094f778d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 84066304 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:10.396092+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188465152 unmapped: 84066304 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:11.396595+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 416 heartbeat osd_stat(store_statfs(0x4f527b000/0x0/0x4ffc00000, data 0x34da600/0x3713000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 84033536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:12.396752+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 84033536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 417 heartbeat osd_stat(store_statfs(0x4f5277000/0x0/0x4ffc00000, data 0x34dc063/0x3716000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:13.396913+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 84033536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 417 ms_handle_reset con 0x5609504db800 session 0x56095121fa40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:14.397053+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3208525 data_alloc: 234881024 data_used: 17219584
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.756429195s of 10.036939621s, submitted: 56
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 84033536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:15.397258+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188497920 unmapped: 84033536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:16.397404+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188506112 unmapped: 84025344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:17.397594+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ed7c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 417 ms_handle_reset con 0x560953ed7c00 session 0x56095122c5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954151400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188522496 unmapped: 84008960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:18.397777+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 417 heartbeat osd_stat(store_statfs(0x4f5278000/0x0/0x4ffc00000, data 0x34dc063/0x3716000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188522496 unmapped: 84008960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:19.397970+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3207645 data_alloc: 234881024 data_used: 17215488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188522496 unmapped: 84008960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:20.398175+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188547072 unmapped: 83984384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:21.398350+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 417 ms_handle_reset con 0x560954151400 session 0x5609521ac3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188547072 unmapped: 83984384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:22.398498+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dd000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 417 heartbeat osd_stat(store_statfs(0x4f5375000/0x0/0x4ffc00000, data 0x33e0053/0x3619000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188612608 unmapped: 83918848 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 417 ms_handle_reset con 0x5609504dd000 session 0x560953c90960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 417 ms_handle_reset con 0x560951ba3800 session 0x56094f60cd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:23.398709+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188653568 unmapped: 83877888 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:24.398891+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3197175 data_alloc: 234881024 data_used: 17088512
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f7400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188653568 unmapped: 83877888 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7cfc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.648281097s of 10.396991730s, submitted: 53
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:25.399086+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 418 ms_handle_reset con 0x560951ba3800 session 0x56094f779680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 418 ms_handle_reset con 0x56094f7cfc00 session 0x56095183a3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 83804160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:26.399252+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 83804160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:27.399400+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954150400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 419 ms_handle_reset con 0x5609517f9400 session 0x56094f3f61e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 419 heartbeat osd_stat(store_statfs(0x4f536c000/0x0/0x4ffc00000, data 0x33e37cd/0x3620000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 419 handle_osd_map epochs [419,420], i have 419, src has [1,420]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 419 handle_osd_map epochs [420,420], i have 420, src has [1,420]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 420 ms_handle_reset con 0x5609545fd000 session 0x56094f392000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188784640 unmapped: 83746816 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:28.399575+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188801024 unmapped: 83730432 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:29.399738+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3209573 data_alloc: 234881024 data_used: 17100800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 420 ms_handle_reset con 0x560954150400 session 0x56094f360960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7cfc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 420 heartbeat osd_stat(store_statfs(0x4f536a000/0x0/0x4ffc00000, data 0x33e585b/0x3624000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188801024 unmapped: 83730432 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:30.399877+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 420 ms_handle_reset con 0x56094f7cfc00 session 0x5609515d23c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188801024 unmapped: 83730432 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:31.400028+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 420 ms_handle_reset con 0x5609517f7400 session 0x56095190b2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 420 handle_osd_map epochs [420,421], i have 420, src has [1,421]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 83738624 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:32.400167+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188801024 unmapped: 83730432 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:33.400379+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188801024 unmapped: 83730432 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:34.400551+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3215119 data_alloc: 234881024 data_used: 17108992
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188817408 unmapped: 83714048 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:35.400787+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.528978348s of 10.462098122s, submitted: 73
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7cf800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f5366000/0x0/0x4ffc00000, data 0x33e72be/0x3627000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188817408 unmapped: 83714048 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:36.401285+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 421 ms_handle_reset con 0x5609505ff400 session 0x56094f8da960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188817408 unmapped: 83714048 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:37.401524+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 421 ms_handle_reset con 0x56094f7cf800 session 0x56094f3652c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188817408 unmapped: 83714048 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f5366000/0x0/0x4ffc00000, data 0x33e72ce/0x3628000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:38.401665+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f7800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f2fc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 422 ms_handle_reset con 0x5609517f7800 session 0x56094f60c1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188858368 unmapped: 83673088 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:39.401779+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3222201 data_alloc: 234881024 data_used: 17121280
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188858368 unmapped: 83673088 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:40.401934+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 423 ms_handle_reset con 0x560951f2fc00 session 0x560953c905a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188866560 unmapped: 83664896 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 423 heartbeat osd_stat(store_statfs(0x4f535d000/0x0/0x4ffc00000, data 0x33eaa8c/0x3630000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:41.402090+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 423 ms_handle_reset con 0x56094f762c00 session 0x56095195f680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7cf800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 423 ms_handle_reset con 0x56094f7cf800 session 0x5609515d2b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7cfc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 423 ms_handle_reset con 0x56094f7cfc00 session 0x5609521ad860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184385536 unmapped: 88145920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:42.402236+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f7400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 423 ms_handle_reset con 0x5609517f7400 session 0x5609519e8780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f6027000/0x0/0x4ffc00000, data 0x2720a8c/0x2966000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 424 ms_handle_reset con 0x5609505ff400 session 0x56095053a780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184385536 unmapped: 88145920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:43.402405+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184385536 unmapped: 88145920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:44.402617+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a1000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3056589 data_alloc: 218103808 data_used: 6356992
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184385536 unmapped: 88145920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:45.402859+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.523688793s of 10.124675751s, submitted: 85
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184393728 unmapped: 88137728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 425 ms_handle_reset con 0x560951ba3800 session 0x56094f360b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:46.403048+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 426 ms_handle_reset con 0x56094f9a1000 session 0x560951228f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184393728 unmapped: 88137728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:47.403226+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184393728 unmapped: 88137728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:48.403403+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f601e000/0x0/0x4ffc00000, data 0x2725de1/0x296f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 427 ms_handle_reset con 0x560951ba3000 session 0x5609515e34a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba2c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184393728 unmapped: 88137728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:49.403609+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 427 ms_handle_reset con 0x560951ba2c00 session 0x56094f8d4780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063118 data_alloc: 218103808 data_used: 6361088
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a1000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 427 ms_handle_reset con 0x56094f9a1000 session 0x5609521ac000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184393728 unmapped: 88137728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:50.403798+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184393728 unmapped: 88137728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:51.403937+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184393728 unmapped: 88137728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 428 heartbeat osd_stat(store_statfs(0x4f601e000/0x0/0x4ffc00000, data 0x2727aa4/0x2970000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:52.404120+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 428 ms_handle_reset con 0x560951ba3000 session 0x5609519aad20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 429 ms_handle_reset con 0x560951ba3800 session 0x56094f3610e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184393728 unmapped: 88137728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:53.404276+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0fc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184401920 unmapped: 88129536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 430 ms_handle_reset con 0x560957a0fc00 session 0x56094f392000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:54.404465+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 430 ms_handle_reset con 0x5609505ff400 session 0x56095195e5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3076400 data_alloc: 218103808 data_used: 6369280
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 430 heartbeat osd_stat(store_statfs(0x4f6011000/0x0/0x4ffc00000, data 0x272cd0b/0x297a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184401920 unmapped: 88129536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:55.404683+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a1000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 430 ms_handle_reset con 0x56094f9a1000 session 0x5609512432c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.558703423s of 10.175553322s, submitted: 67
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184401920 unmapped: 88129536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 430 ms_handle_reset con 0x5609505ff400 session 0x560950de12c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:56.404817+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 184401920 unmapped: 88129536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:57.404974+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 431 heartbeat osd_stat(store_statfs(0x4f6014000/0x0/0x4ffc00000, data 0x272cd0b/0x297a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x726f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 432 ms_handle_reset con 0x560951ba3000 session 0x5609519c4960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0fc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185450496 unmapped: 87080960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:58.405129+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095240d000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 432 ms_handle_reset con 0x56095240d000 session 0x56095190b4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185450496 unmapped: 87080960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:59.405257+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 432 ms_handle_reset con 0x560951ba3800 session 0x56094f3654a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3083622 data_alloc: 218103808 data_used: 6381568
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a1000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 433 ms_handle_reset con 0x5609505ff400 session 0x5609521acd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185466880 unmapped: 87064576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:00.405376+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 433 ms_handle_reset con 0x560951ba3000 session 0x560951242780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 433 handle_osd_map epochs [434,434], i have 434, src has [1,434]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 434 ms_handle_reset con 0x56094f9a1000 session 0x5609516f8b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 434 ms_handle_reset con 0x560957a0fc00 session 0x56094f3f61e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185507840 unmapped: 87023616 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:01.405542+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 434 heartbeat osd_stat(store_statfs(0x4f5bf4000/0x0/0x4ffc00000, data 0x2733b1b/0x2988000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095240d000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185516032 unmapped: 87015424 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:02.405731+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 185516032 unmapped: 87015424 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:03.405892+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186580992 unmapped: 85950464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 435 ms_handle_reset con 0x560957a0f000 session 0x5609521acb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:04.406066+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099603 data_alloc: 218103808 data_used: 6410240
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186580992 unmapped: 85950464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:05.406316+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 435 handle_osd_map epochs [435,436], i have 435, src has [1,436]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.281704426s of 10.011121750s, submitted: 95
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 436 ms_handle_reset con 0x560957a0f000 session 0x56095122d680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a1000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186613760 unmapped: 85917696 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:06.406468+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 436 ms_handle_reset con 0x56095240d000 session 0x560953c912c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 436 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186630144 unmapped: 85901312 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:07.406607+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 438 ms_handle_reset con 0x56094f9a1000 session 0x5609512292c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 438 ms_handle_reset con 0x5609505ff400 session 0x5609516f85a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 438 heartbeat osd_stat(store_statfs(0x4f5be5000/0x0/0x4ffc00000, data 0x273a9b5/0x2996000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 438 ms_handle_reset con 0x56094f762c00 session 0x56095121fa40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a1000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186630144 unmapped: 85901312 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:08.406817+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 438 ms_handle_reset con 0x56094f9a1000 session 0x560951242f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186630144 unmapped: 85901312 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:09.407029+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3112092 data_alloc: 218103808 data_used: 6430720
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 438 handle_osd_map epochs [439,439], i have 439, src has [1,439]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186638336 unmapped: 85893120 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:10.407186+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f94bc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 439 ms_handle_reset con 0x56094f84e000 session 0x5609512425a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dd800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f753400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 439 ms_handle_reset con 0x56094f753400 session 0x5609521ac960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 85819392 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:11.407388+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 439 handle_osd_map epochs [439,440], i have 439, src has [1,440]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186712064 unmapped: 85819392 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ed7c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f94a000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 440 ms_handle_reset con 0x56094f94a000 session 0x56094f396b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:12.407580+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 440 ms_handle_reset con 0x5609504dd800 session 0x5609512281e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 85770240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:13.407943+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 441 heartbeat osd_stat(store_statfs(0x4f5bdf000/0x0/0x4ffc00000, data 0x273e668/0x299d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 441 ms_handle_reset con 0x560953ed7c00 session 0x560950de1e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f753400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 441 ms_handle_reset con 0x56094f753400 session 0x56094f3645a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 441 ms_handle_reset con 0x56094f94bc00 session 0x56095053b4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186761216 unmapped: 85770240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:14.408145+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 441 ms_handle_reset con 0x56094f7b3800 session 0x56094f360b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3128369 data_alloc: 218103808 data_used: 6434816
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 85753856 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:15.408378+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c35c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dc000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f7c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 441 ms_handle_reset con 0x5609517f7c00 session 0x56094f778780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f7c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f753400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.671449661s of 10.164894104s, submitted: 88
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 441 ms_handle_reset con 0x5609517f7c00 session 0x5609517c65a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186834944 unmapped: 85696512 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:16.408505+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 443 ms_handle_reset con 0x56094f753400 session 0x56094f3923c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 443 ms_handle_reset con 0x56094f7b3800 session 0x560951242f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f94bc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 443 ms_handle_reset con 0x56094f94bc00 session 0x560951fcde00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 443 ms_handle_reset con 0x5609504dc000 session 0x5609521adc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186941440 unmapped: 85590016 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:17.408677+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f5bd5000/0x0/0x4ffc00000, data 0x2743eb1/0x29a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 443 heartbeat osd_stat(store_statfs(0x4f5bd5000/0x0/0x4ffc00000, data 0x2743eb1/0x29a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,2])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 186941440 unmapped: 85590016 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:18.408827+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 443 ms_handle_reset con 0x560951c35c00 session 0x56094f60d2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f753400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 443 ms_handle_reset con 0x56094f753400 session 0x56095190b680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 443 handle_osd_map epochs [443,444], i have 443, src has [1,444]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 443 handle_osd_map epochs [444,444], i have 444, src has [1,444]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 444 ms_handle_reset con 0x56094f7b3800 session 0x56095053a960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952b32400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 444 ms_handle_reset con 0x5609517f9400 session 0x5609516f8b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187023360 unmapped: 85508096 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:19.409010+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3135621 data_alloc: 218103808 data_used: 6443008
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187064320 unmapped: 85467136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:20.409221+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 444 ms_handle_reset con 0x560952b32400 session 0x56094f3605a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187064320 unmapped: 85467136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:21.409473+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f753400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 444 heartbeat osd_stat(store_statfs(0x4f5bd6000/0x0/0x4ffc00000, data 0x274559c/0x29a8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187080704 unmapped: 85450752 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:22.409701+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 444 handle_osd_map epochs [444,445], i have 444, src has [1,445]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187080704 unmapped: 85450752 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:23.409874+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 445 ms_handle_reset con 0x5609517f9400 session 0x56094f778960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187113472 unmapped: 85417984 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:24.410096+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3139227 data_alloc: 218103808 data_used: 6455296
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187113472 unmapped: 85417984 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:25.410281+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 446 ms_handle_reset con 0x56094f753400 session 0x56094f8d50e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 446 ms_handle_reset con 0x56094f7b3800 session 0x5609521acd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504da800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.174537659s of 10.016278267s, submitted: 122
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 446 ms_handle_reset con 0x5609504da800 session 0x56094f43f0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187113472 unmapped: 85417984 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:26.410403+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f7000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187138048 unmapped: 85393408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:27.410574+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f5bcd000/0x0/0x4ffc00000, data 0x274a3f0/0x29af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187138048 unmapped: 85393408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:28.410736+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b0400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187154432 unmapped: 85377024 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:29.410867+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3143471 data_alloc: 218103808 data_used: 6451200
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187154432 unmapped: 85377024 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:30.411014+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187154432 unmapped: 85377024 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:31.411175+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f5bce000/0x0/0x4ffc00000, data 0x274a38e/0x29ae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 447 handle_osd_map epochs [448,448], i have 448, src has [1,448]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:32.411317+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187195392 unmapped: 85336064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 448 handle_osd_map epochs [448,449], i have 448, src has [1,449]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:33.411502+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 85319680 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 449 heartbeat osd_stat(store_statfs(0x4f5bc8000/0x0/0x4ffc00000, data 0x274d9ee/0x29b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:34.411689+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 85319680 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3149491 data_alloc: 218103808 data_used: 6451200
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:35.411905+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187211776 unmapped: 85319680 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:36.412078+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187236352 unmapped: 85295104 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ed7c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 450 ms_handle_reset con 0x560953ed7c00 session 0x56095195ed20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:37.412243+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187244544 unmapped: 85286912 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:38.412461+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187244544 unmapped: 85286912 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 450 heartbeat osd_stat(store_statfs(0x4f5bc6000/0x0/0x4ffc00000, data 0x274f587/0x29b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.687645912s of 12.244584084s, submitted: 54
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 451 heartbeat osd_stat(store_statfs(0x4f5bc6000/0x0/0x4ffc00000, data 0x274f587/0x29b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:39.412635+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187244544 unmapped: 85286912 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3154767 data_alloc: 218103808 data_used: 6451200
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:40.412819+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187244544 unmapped: 85286912 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 451 ms_handle_reset con 0x5609517f7000 session 0x560953c914a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 451 handle_osd_map epochs [452,452], i have 452, src has [1,452]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 451 heartbeat osd_stat(store_statfs(0x4f5bc3000/0x0/0x4ffc00000, data 0x2750fea/0x29ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:41.413020+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187252736 unmapped: 85278720 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 452 ms_handle_reset con 0x56094f7b0400 session 0x560950de12c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950511800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:42.413221+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187269120 unmapped: 85262336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:43.413374+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187269120 unmapped: 85262336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 452 heartbeat osd_stat(store_statfs(0x4f5bc0000/0x0/0x4ffc00000, data 0x2752b67/0x29bd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:44.413702+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187301888 unmapped: 85229568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3160866 data_alloc: 218103808 data_used: 6451200
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:45.413897+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187301888 unmapped: 85229568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba4400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:46.414293+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187301888 unmapped: 85229568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 454 ms_handle_reset con 0x56094f7b5c00 session 0x560950de2780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 454 ms_handle_reset con 0x560950511800 session 0x56094f779e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f6000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 454 ms_handle_reset con 0x5609517f6000 session 0x56094f60c1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:47.414490+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187326464 unmapped: 85204992 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b0400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 455 ms_handle_reset con 0x56094f7b0400 session 0x5609519c5c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 455 ms_handle_reset con 0x560951ba4400 session 0x56094f392000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 455 heartbeat osd_stat(store_statfs(0x4f5bb7000/0x0/0x4ffc00000, data 0x2757e56/0x29c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:48.414802+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187351040 unmapped: 85180416 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 455 handle_osd_map epochs [455,456], i have 455, src has [1,456]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.479719639s of 10.277643204s, submitted: 61
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:49.415243+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187367424 unmapped: 85164032 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3170177 data_alloc: 218103808 data_used: 6455296
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:50.415492+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095240c800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187400192 unmapped: 85131264 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 457 ms_handle_reset con 0x56094f7b5c00 session 0x56094f397860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:51.415886+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187400192 unmapped: 85131264 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 457 heartbeat osd_stat(store_statfs(0x4f5bb3000/0x0/0x4ffc00000, data 0x275b5ce/0x29ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 457 handle_osd_map epochs [458,458], i have 458, src has [1,458]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 457 handle_osd_map epochs [458,458], i have 458, src has [1,458]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 458 handle_osd_map epochs [458,459], i have 458, src has [1,459]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:52.416142+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187400192 unmapped: 85131264 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 459 heartbeat osd_stat(store_statfs(0x4f5baf000/0x0/0x4ffc00000, data 0x275d183/0x29cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 459 ms_handle_reset con 0x5609517f9c00 session 0x560951221a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:53.416365+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187400192 unmapped: 85131264 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:54.416572+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187400192 unmapped: 85131264 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3177027 data_alloc: 218103808 data_used: 6459392
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 459 ms_handle_reset con 0x56094f7b4000 session 0x56095121e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:55.416843+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187400192 unmapped: 85131264 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b0400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:56.417020+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 460 ms_handle_reset con 0x56094f7b0400 session 0x5609515d30e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187400192 unmapped: 85131264 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 460 ms_handle_reset con 0x56095240c800 session 0x56094f3610e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 460 heartbeat osd_stat(store_statfs(0x4f5ba9000/0x0/0x4ffc00000, data 0x2760c2a/0x29d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:57.417372+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187400192 unmapped: 85131264 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:58.417606+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187400192 unmapped: 85131264 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 460 heartbeat osd_stat(store_statfs(0x4f5baa000/0x0/0x4ffc00000, data 0x2760c2a/0x29d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:59.417776+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187400192 unmapped: 85131264 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954598400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.767468929s of 10.975623131s, submitted: 82
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 461 ms_handle_reset con 0x560954598400 session 0x56094f392d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182903 data_alloc: 218103808 data_used: 6459392
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:00.417977+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187416576 unmapped: 85114880 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:01.418183+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187416576 unmapped: 85114880 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:02.418328+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187416576 unmapped: 85114880 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095226c000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:03.418480+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187416576 unmapped: 85114880 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 462 ms_handle_reset con 0x56095226c000 session 0x560951243680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:04.418614+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187416576 unmapped: 85114880 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3185319 data_alloc: 218103808 data_used: 6463488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f5ba6000/0x0/0x4ffc00000, data 0x2763a3e/0x29d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:05.418798+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187416576 unmapped: 85114880 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f5ba6000/0x0/0x4ffc00000, data 0x2763a3e/0x29d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:06.418958+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187416576 unmapped: 85114880 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 462 handle_osd_map epochs [462,463], i have 462, src has [1,463]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f5ba6000/0x0/0x4ffc00000, data 0x2763a3e/0x29d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:07.419132+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187432960 unmapped: 85098496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 463 handle_osd_map epochs [463,464], i have 463, src has [1,464]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:08.419260+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187449344 unmapped: 85082112 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:09.419409+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187449344 unmapped: 85082112 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3191267 data_alloc: 218103808 data_used: 6463488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:10.419636+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187449344 unmapped: 85082112 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:11.419849+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187449344 unmapped: 85082112 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:12.420021+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187449344 unmapped: 85082112 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 464 heartbeat osd_stat(store_statfs(0x4f5ba0000/0x0/0x4ffc00000, data 0x27670ae/0x29dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.028909683s of 13.425889969s, submitted: 87
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:13.420161+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187473920 unmapped: 85057536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:14.420301+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187473920 unmapped: 85057536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194241 data_alloc: 218103808 data_used: 6463488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:15.420499+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187473920 unmapped: 85057536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:16.420636+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187473920 unmapped: 85057536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:17.420794+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187473920 unmapped: 85057536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 465 heartbeat osd_stat(store_statfs(0x4f5b9d000/0x0/0x4ffc00000, data 0x2768b2d/0x29e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 465 handle_osd_map epochs [466,467], i have 466, src has [1,467]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:18.420961+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187498496 unmapped: 85032960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dd000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:19.421129+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187514880 unmapped: 85016576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f5b96000/0x0/0x4ffc00000, data 0x276c18d/0x29e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3204579 data_alloc: 218103808 data_used: 6463488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f5b92000/0x0/0x4ffc00000, data 0x276dd7a/0x29e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:20.421296+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187531264 unmapped: 85000192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:21.421453+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187539456 unmapped: 84992000 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:22.421631+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 187539456 unmapped: 84992000 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f94bc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:23.421739+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.752764702s of 10.190410614s, submitted: 84
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189661184 unmapped: 82870272 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 471 ms_handle_reset con 0x56094f94bc00 session 0x5609515e34a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:24.422145+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 472 ms_handle_reset con 0x5609504dd000 session 0x5609519e8780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0ec00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189718528 unmapped: 82812928 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3221295 data_alloc: 218103808 data_used: 6475776
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:25.422295+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 472 heartbeat osd_stat(store_statfs(0x4f49e5000/0x0/0x4ffc00000, data 0x2774c26/0x29f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ed6c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189734912 unmapped: 82796544 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 472 ms_handle_reset con 0x560953ed6c00 session 0x5609515e34a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095226dc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:26.422464+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189759488 unmapped: 82771968 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 472 ms_handle_reset con 0x560957a0ec00 session 0x560952188b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:27.422646+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 472 ms_handle_reset con 0x56095226dc00 session 0x56094f3610e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188522496 unmapped: 84008960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 472 handle_osd_map epochs [472,473], i have 472, src has [1,473]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f49e9000/0x0/0x4ffc00000, data 0x2774bb4/0x29f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:28.422824+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188514304 unmapped: 84017152 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0fc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f49e6000/0x0/0x4ffc00000, data 0x27765e1/0x29f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:29.422964+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188514304 unmapped: 84017152 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3225509 data_alloc: 218103808 data_used: 6479872
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 474 ms_handle_reset con 0x5609505ff400 session 0x56094f778d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dd000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 474 ms_handle_reset con 0x560957a0fc00 session 0x56094f43f0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:30.423146+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188514304 unmapped: 84017152 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:31.423284+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 474 ms_handle_reset con 0x5609505ff400 session 0x56094f8d50e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095226dc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188514304 unmapped: 84017152 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:32.423503+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188514304 unmapped: 84017152 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 474 ms_handle_reset con 0x5609504dd000 session 0x560950de14a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 475 heartbeat osd_stat(store_statfs(0x4f49e0000/0x0/0x4ffc00000, data 0x2779c0f/0x29fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:33.423725+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188538880 unmapped: 83992576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952849000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.670568943s of 10.932803154s, submitted: 120
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:34.423887+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188547072 unmapped: 83984384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 475 ms_handle_reset con 0x56095226dc00 session 0x56094f778960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227258 data_alloc: 218103808 data_used: 6483968
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:35.424061+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188547072 unmapped: 83984384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:36.424169+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188547072 unmapped: 83984384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:37.424316+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 476 ms_handle_reset con 0x560952849000 session 0x560951220780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dd000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188588032 unmapped: 83943424 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 476 handle_osd_map epochs [476,477], i have 476, src has [1,477]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 477 heartbeat osd_stat(store_statfs(0x4f49e2000/0x0/0x4ffc00000, data 0x2779c0f/0x29fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:38.424501+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 83910656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:39.424647+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 477 ms_handle_reset con 0x5609504dd000 session 0x5609516f8d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 83910656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3232711 data_alloc: 218103808 data_used: 6492160
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:40.424825+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 477 heartbeat osd_stat(store_statfs(0x4f49db000/0x0/0x4ffc00000, data 0x277d26f/0x2a02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 83910656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 477 heartbeat osd_stat(store_statfs(0x4f49db000/0x0/0x4ffc00000, data 0x277d26f/0x2a02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:41.424952+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 83910656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:42.425093+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 83910656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:43.425250+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7cfc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 477 ms_handle_reset con 0x56094f7cfc00 session 0x56094f3603c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 83910656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:44.425494+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 83910656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951b93000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 477 ms_handle_reset con 0x560951b93000 session 0x560950de10e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.654681206s of 10.599888802s, submitted: 46
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 477 ms_handle_reset con 0x56094f7b3800 session 0x5609512061e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3231831 data_alloc: 218103808 data_used: 6492160
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:45.425703+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188612608 unmapped: 83918848 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:46.425825+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188612608 unmapped: 83918848 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 477 heartbeat osd_stat(store_statfs(0x4f49dc000/0x0/0x4ffc00000, data 0x277d26f/0x2a02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 477 ms_handle_reset con 0x560951ba5c00 session 0x5609521ade00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:47.425981+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188612608 unmapped: 83918848 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7cfc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 478 ms_handle_reset con 0x56094f7cfc00 session 0x56095053a960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dd000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:48.426129+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188620800 unmapped: 83910656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:49.426314+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188628992 unmapped: 83902464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 478 ms_handle_reset con 0x5609504dd000 session 0x56095190b680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244285 data_alloc: 218103808 data_used: 6504448
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 ms_handle_reset con 0x56094f7b3800 session 0x56094f360b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:50.426498+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f7000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188645376 unmapped: 83886080 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 ms_handle_reset con 0x5609517f7000 session 0x56095053a780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 heartbeat osd_stat(store_statfs(0x4f49d3000/0x0/0x4ffc00000, data 0x2780c72/0x2a0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:51.427378+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188645376 unmapped: 83886080 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954599c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 ms_handle_reset con 0x560954599c00 session 0x56094f360f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:52.427515+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188645376 unmapped: 83886080 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7cfc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 ms_handle_reset con 0x56094f7cfc00 session 0x56094f3923c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dd000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f7000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:53.427609+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188645376 unmapped: 83886080 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 ms_handle_reset con 0x5609517f7000 session 0x560951242780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 ms_handle_reset con 0x5609504dd000 session 0x560951228000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f7c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:54.427786+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 ms_handle_reset con 0x56094f7b3800 session 0x56094f60d680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188645376 unmapped: 83886080 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3243005 data_alloc: 218103808 data_used: 6504448
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 ms_handle_reset con 0x5609517f7c00 session 0x56094f77d0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 heartbeat osd_stat(store_statfs(0x4f49d4000/0x0/0x4ffc00000, data 0x2780c72/0x2a0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:55.427953+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188645376 unmapped: 83886080 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.381533623s of 11.260029793s, submitted: 47
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:56.428131+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 ms_handle_reset con 0x56094f7b3800 session 0x56095122cd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188653568 unmapped: 83877888 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7cfc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 ms_handle_reset con 0x56094f7cfc00 session 0x5609521acb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dd000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:57.428271+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f7000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 ms_handle_reset con 0x5609504dd000 session 0x5609519e83c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188669952 unmapped: 83861504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 ms_handle_reset con 0x5609517f7000 session 0x56094f896d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dc400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:58.428451+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188678144 unmapped: 83853312 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 heartbeat osd_stat(store_statfs(0x4f49d3000/0x0/0x4ffc00000, data 0x2780c72/0x2a0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 ms_handle_reset con 0x560951ba4000 session 0x56094f396000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:59.428736+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188702720 unmapped: 83828736 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f49d0000/0x0/0x4ffc00000, data 0x2782843/0x2a0d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,2])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247487 data_alloc: 218103808 data_used: 6512640
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:00.428887+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188710912 unmapped: 83820544 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:01.429038+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188719104 unmapped: 83812352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f49d1000/0x0/0x4ffc00000, data 0x2782443/0x2a0c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 480 ms_handle_reset con 0x5609504dc400 session 0x56094f361680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:02.429186+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188719104 unmapped: 83812352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:03.429352+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188719104 unmapped: 83812352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:04.430568+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188719104 unmapped: 83812352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3244287 data_alloc: 218103808 data_used: 6512640
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:05.432333+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188719104 unmapped: 83812352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:06.432514+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188719104 unmapped: 83812352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f49d3000/0x0/0x4ffc00000, data 0x2782420/0x2a0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:07.432676+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188719104 unmapped: 83812352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:08.432968+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.929510117s of 12.223282814s, submitted: 61
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 480 handle_osd_map epochs [481,481], i have 481, src has [1,481]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f49d3000/0x0/0x4ffc00000, data 0x2782420/0x2a0b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,2])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188719104 unmapped: 83812352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:09.433142+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 heartbeat osd_stat(store_statfs(0x4f49cf000/0x0/0x4ffc00000, data 0x2783e83/0x2a0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 83804160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3248461 data_alloc: 218103808 data_used: 6520832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:10.433351+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 83804160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 heartbeat osd_stat(store_statfs(0x4f49cf000/0x0/0x4ffc00000, data 0x2783e83/0x2a0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:11.433536+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 83804160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:12.433738+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 83804160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:13.433885+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 83804160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:14.434076+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 83804160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3248461 data_alloc: 218103808 data_used: 6520832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f763400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:15.434253+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x56094f763400 session 0x560950de0960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095048bc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x56095048bc00 session 0x56095015d4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 83804160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0e400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 heartbeat osd_stat(store_statfs(0x4f49cd000/0x0/0x4ffc00000, data 0x2783f02/0x2a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x560957a0e400 session 0x5609515d30e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c34800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x560951c34800 session 0x56094f392d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:16.434500+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f763400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x56094f763400 session 0x560951220f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188735488 unmapped: 83795968 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095048bc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x56095048bc00 session 0x56094f3f6780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dc400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x5609504dc400 session 0x560951207680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:17.434822+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188719104 unmapped: 83812352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0e400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x560957a0e400 session 0x56094f897680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:18.434965+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x56094f7b5c00 session 0x560953c90960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188719104 unmapped: 83812352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:19.435159+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fd000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x5609545fd000 session 0x56094f43e780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f2fc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.599933624s of 11.193282127s, submitted: 37
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x560951f2fc00 session 0x56094f361a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188719104 unmapped: 83812352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3257272 data_alloc: 218103808 data_used: 6520832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:20.435325+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188719104 unmapped: 83812352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x56094f9a0800 session 0x5609504b14a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x560951ba3400 session 0x56094f896b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 heartbeat osd_stat(store_statfs(0x4f49cd000/0x0/0x4ffc00000, data 0x2783f02/0x2a11000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:21.435504+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188702720 unmapped: 83828736 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504da400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x5609504da400 session 0x56094f60d680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:22.435637+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x56094f9a0800 session 0x56094f360f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188776448 unmapped: 83755008 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:23.435807+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 heartbeat osd_stat(store_statfs(0x4f49d0000/0x0/0x4ffc00000, data 0x2783e83/0x2a0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188776448 unmapped: 83755008 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:24.435958+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 83738624 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3254001 data_alloc: 218103808 data_used: 6520832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:25.436135+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 83738624 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:26.436258+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 83738624 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:27.436484+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 heartbeat osd_stat(store_statfs(0x4f49d0000/0x0/0x4ffc00000, data 0x2783e83/0x2a0e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188792832 unmapped: 83738624 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x56094f762000 session 0x56094f43f0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:28.436645+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188833792 unmapped: 83697664 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:29.436785+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c35000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.785275459s of 10.004982948s, submitted: 48
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504db800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 ms_handle_reset con 0x560951c35000 session 0x56094f3610e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188858368 unmapped: 83673088 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 482 ms_handle_reset con 0x5609504db800 session 0x560951207860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3262958 data_alloc: 218103808 data_used: 6533120
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:30.436902+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188866560 unmapped: 83664896 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609543d4800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f763400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 482 ms_handle_reset con 0x56094f763400 session 0x5609519c54a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:31.437084+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 188923904 unmapped: 83607552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 482 ms_handle_reset con 0x56094f762000 session 0x56095053a3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 482 handle_osd_map epochs [482,483], i have 482, src has [1,483]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 482 ms_handle_reset con 0x56094f9a0800 session 0x5609504b01e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x5609543d4800 session 0x5609515e34a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:32.437221+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504db800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x5609504db800 session 0x56094f3652c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189038592 unmapped: 83492864 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c35000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x560951c35000 session 0x560951fcde00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:33.437320+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f49c4000/0x0/0x4ffc00000, data 0x278761f/0x2a19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189038592 unmapped: 83492864 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x56094f9a0800 session 0x5609500c7e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x56094f762000 session 0x5609519abe00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f31000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x560951f31000 session 0x56094f3605a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609525cd000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:34.437642+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x5609525cd000 session 0x5609519c5c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189120512 unmapped: 83410944 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f8c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3279781 data_alloc: 218103808 data_used: 6541312
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x5609517f8c00 session 0x5609521ac3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:35.437807+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x56094f762000 session 0x56094f365680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x56094f9a0800 session 0x56095183b680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f31000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x560951f31000 session 0x56094f397e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189366272 unmapped: 83165184 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:36.438017+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609525cd000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x5609525cd000 session 0x5609515e3680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dc400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189399040 unmapped: 83132416 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x5609504dc400 session 0x560951fcc000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951b93400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x560951ba3c00 session 0x5609512281e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 ms_handle_reset con 0x560951b93400 session 0x56094f8d52c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f49c5000/0x0/0x4ffc00000, data 0x278761f/0x2a19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 484 ms_handle_reset con 0x56094f762000 session 0x560951224b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:37.438199+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 484 ms_handle_reset con 0x56094f84e000 session 0x56095190ba40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189644800 unmapped: 82886656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 484 ms_handle_reset con 0x56094f9a0800 session 0x56094f8da780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 484 ms_handle_reset con 0x56094f84e000 session 0x56095015cb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 484 ms_handle_reset con 0x56094f762000 session 0x560951fcd2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:38.438349+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 484 ms_handle_reset con 0x56094f9a0800 session 0x5609521ac780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 189702144 unmapped: 82829312 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f49c2000/0x0/0x4ffc00000, data 0x278918e/0x2a1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:39.438490+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 484 ms_handle_reset con 0x560951ba3400 session 0x56094f8da960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 484 ms_handle_reset con 0x560951c59c00 session 0x56094f8da1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190480384 unmapped: 82051072 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3275466 data_alloc: 218103808 data_used: 6549504
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:40.438626+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190480384 unmapped: 82051072 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.631992340s of 11.278617859s, submitted: 140
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 484 ms_handle_reset con 0x56094f762000 session 0x5609517c65a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 485 ms_handle_reset con 0x56094f84e000 session 0x560952189680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:41.438732+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f49c6000/0x0/0x4ffc00000, data 0x278915e/0x2a18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190529536 unmapped: 82001920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:42.438893+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 485 ms_handle_reset con 0x56094f9a0800 session 0x56095183ab40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190603264 unmapped: 81928192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:43.439031+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f49c2000/0x0/0x4ffc00000, data 0x278ad2f/0x2a1b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 485 ms_handle_reset con 0x560951ba3400 session 0x560951fcc000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190603264 unmapped: 81928192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:44.439196+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190619648 unmapped: 81911808 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3278168 data_alloc: 218103808 data_used: 6553600
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:45.439386+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190619648 unmapped: 81911808 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:46.442406+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190619648 unmapped: 81911808 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:47.442790+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190619648 unmapped: 81911808 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 485 handle_osd_map epochs [485,486], i have 485, src has [1,486]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:48.442970+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 heartbeat osd_stat(store_statfs(0x4f49c0000/0x0/0x4ffc00000, data 0x278c782/0x2a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190652416 unmapped: 81879040 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba2c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x560951ba2c00 session 0x56095183bc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:49.443113+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190652416 unmapped: 81879040 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3281142 data_alloc: 218103808 data_used: 6553600
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:50.443232+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x56094f762000 session 0x56095183b0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x56094f84e000 session 0x56095183a3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0e400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x560957a0e400 session 0x56094f360960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190660608 unmapped: 81870848 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.011940956s of 10.133857727s, submitted: 59
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 heartbeat osd_stat(store_statfs(0x4f49c0000/0x0/0x4ffc00000, data 0x278c7e4/0x2a1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x56094f9a0800 session 0x5609515d34a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x560951ba3400 session 0x5609517ad4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:51.443352+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x56094f762000 session 0x56094f361680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190701568 unmapped: 81829888 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x56094f84e000 session 0x560952188960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:52.443504+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190709760 unmapped: 81821696 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x56094f9a0800 session 0x560951226b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:53.443671+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x560951ba3400 session 0x56095190b860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190726144 unmapped: 81805312 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:54.443790+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x56094f8f9c00 session 0x5609512212c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 heartbeat osd_stat(store_statfs(0x4f49bf000/0x0/0x4ffc00000, data 0x278c7e4/0x2a1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190726144 unmapped: 81805312 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x56094f762000 session 0x56095121eb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285151 data_alloc: 218103808 data_used: 6553600
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:55.443984+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x56094f84e000 session 0x5609519e8780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190734336 unmapped: 81797120 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x56094f9a0800 session 0x560951fcc1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:56.444137+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190734336 unmapped: 81797120 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x560951ba3400 session 0x560951228960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950511800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x560950511800 session 0x5609519330e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:57.444316+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190767104 unmapped: 81764352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x56094f762000 session 0x560951fcd860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:58.444460+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x56094f84e000 session 0x560951fcd0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190767104 unmapped: 81764352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 heartbeat osd_stat(store_statfs(0x4f49c1000/0x0/0x4ffc00000, data 0x278c782/0x2a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:59.444692+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190767104 unmapped: 81764352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3283885 data_alloc: 218103808 data_used: 6553600
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:00.444878+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190767104 unmapped: 81764352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:01.445004+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190767104 unmapped: 81764352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 heartbeat osd_stat(store_statfs(0x4f49c1000/0x0/0x4ffc00000, data 0x278c782/0x2a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:02.445152+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190767104 unmapped: 81764352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:03.445314+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190767104 unmapped: 81764352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:04.445555+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f2e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.066365242s of 13.672158241s, submitted: 72
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x560951f2e800 session 0x560951fcd0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190783488 unmapped: 81747968 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285713 data_alloc: 218103808 data_used: 6553600
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:05.445760+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190783488 unmapped: 81747968 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:06.445927+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095226d000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 ms_handle_reset con 0x56095226d000 session 0x5609519c5c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190791680 unmapped: 81739776 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:07.446085+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190791680 unmapped: 81739776 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 heartbeat osd_stat(store_statfs(0x4f49bf000/0x0/0x4ffc00000, data 0x278c7a2/0x2a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:08.446281+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dc800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094ef10c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 487 ms_handle_reset con 0x56094ef10c00 session 0x560950de0b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190832640 unmapped: 81698816 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:09.446515+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 487 handle_osd_map epochs [487,488], i have 487, src has [1,488]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 488 ms_handle_reset con 0x5609504dc800 session 0x56094f3605a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 488 ms_handle_reset con 0x560951ba5400 session 0x5609519abe00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 488 ms_handle_reset con 0x560951c59800 session 0x5609519330e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190881792 unmapped: 81649664 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3300873 data_alloc: 218103808 data_used: 6569984
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:10.446630+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84ec00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190881792 unmapped: 81649664 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets getting new tickets!
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:11.446900+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _finish_auth 0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:11.447926+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 489 ms_handle_reset con 0x56094f84ec00 session 0x5609519e8780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f8400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 489 ms_handle_reset con 0x5609517f8400 session 0x5609512212c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190898176 unmapped: 81633280 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84ec00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 489 heartbeat osd_stat(store_statfs(0x4f49b5000/0x0/0x4ffc00000, data 0x2791abf/0x2a28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:12.447090+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 489 handle_osd_map epochs [489,490], i have 489, src has [1,490]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 490 ms_handle_reset con 0x56094f84ec00 session 0x56095190b860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dc800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 490 ms_handle_reset con 0x5609504dc800 session 0x560952188960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190922752 unmapped: 81608704 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:13.447324+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f49b3000/0x0/0x4ffc00000, data 0x279363a/0x2a29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190930944 unmapped: 81600512 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:14.447480+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 490 heartbeat osd_stat(store_statfs(0x4f49b3000/0x0/0x4ffc00000, data 0x279363a/0x2a29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190930944 unmapped: 81600512 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:15.447679+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301417 data_alloc: 218103808 data_used: 6561792
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190930944 unmapped: 81600512 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:16.447825+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190930944 unmapped: 81600512 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:17.448039+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190930944 unmapped: 81600512 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 490 handle_osd_map epochs [490,491], i have 490, src has [1,491]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.238730431s of 13.446644783s, submitted: 41
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:18.448205+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fcc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 491 ms_handle_reset con 0x5609545fcc00 session 0x5609515d34a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 81575936 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:19.448463+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 81575936 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 491 heartbeat osd_stat(store_statfs(0x4f49b1000/0x0/0x4ffc00000, data 0x27950d5/0x2a2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:20.448616+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3303719 data_alloc: 218103808 data_used: 6561792
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951b92400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190971904 unmapped: 81559552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 491 ms_handle_reset con 0x56094f8f8400 session 0x560953c90d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 491 ms_handle_reset con 0x560952400000 session 0x5609515e3a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f8000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:21.448802+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 491 ms_handle_reset con 0x56095048a400 session 0x56095122da40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952400000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 190980096 unmapped: 81551360 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:22.449006+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504db800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 492 ms_handle_reset con 0x5609504db800 session 0x5609500c7e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191029248 unmapped: 81502208 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:23.449216+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191029248 unmapped: 81502208 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f31800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:24.449412+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 493 ms_handle_reset con 0x560951f31800 session 0x560953c90960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 81485824 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:25.449690+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3311483 data_alloc: 218103808 data_used: 6561792
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f49aa000/0x0/0x4ffc00000, data 0x2798831/0x2a33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 493 ms_handle_reset con 0x560951b92400 session 0x56094f360960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191045632 unmapped: 81485824 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f49aa000/0x0/0x4ffc00000, data 0x2798831/0x2a33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:26.449861+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191062016 unmapped: 81469440 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84ec00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 493 ms_handle_reset con 0x56094f84ec00 session 0x56095183ab40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:27.449999+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191062016 unmapped: 81469440 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:28.450170+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191062016 unmapped: 81469440 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504da800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.754658699s of 11.036609650s, submitted: 33
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f2fc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 493 ms_handle_reset con 0x560951f2fc00 session 0x56094f897680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:29.450292+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191062016 unmapped: 81469440 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 493 handle_osd_map epochs [494,494], i have 493, src has [1,494]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950510000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:30.450704+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 494 ms_handle_reset con 0x560950484400 session 0x560950de32c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3320310 data_alloc: 218103808 data_used: 6574080
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191135744 unmapped: 81395712 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 495 ms_handle_reset con 0x560950510000 session 0x560951207680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 495 heartbeat osd_stat(store_statfs(0x4f49a6000/0x0/0x4ffc00000, data 0x279a410/0x2a37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:31.450883+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 495 ms_handle_reset con 0x5609504da800 session 0x5609517c65a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191184896 unmapped: 81346560 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:32.451045+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84ec00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191184896 unmapped: 81346560 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 495 handle_osd_map epochs [495,496], i have 495, src has [1,496]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 496 handle_osd_map epochs [496,496], i have 496, src has [1,496]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 496 ms_handle_reset con 0x56094f84ec00 session 0x560951224b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950484400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:33.451194+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 496 ms_handle_reset con 0x560950484400 session 0x5609516f85a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191315968 unmapped: 81215488 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 496 heartbeat osd_stat(store_statfs(0x4f49a0000/0x0/0x4ffc00000, data 0x279db5e/0x2a3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:34.451397+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191315968 unmapped: 81215488 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:35.451700+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3325452 data_alloc: 218103808 data_used: 6586368
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 496 ms_handle_reset con 0x56094f9a0800 session 0x56094f8d5680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191315968 unmapped: 81215488 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:36.451850+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191315968 unmapped: 81215488 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f94b400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f753400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 496 ms_handle_reset con 0x56094f753400 session 0x56094f77cd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:37.451972+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 496 handle_osd_map epochs [497,497], i have 496, src has [1,497]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f753400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 497 ms_handle_reset con 0x56094f753400 session 0x5609519aad20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84ec00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191389696 unmapped: 81141760 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 497 handle_osd_map epochs [498,498], i have 497, src has [1,498]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 498 ms_handle_reset con 0x56094f84ec00 session 0x56094f896d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 498 heartbeat osd_stat(store_statfs(0x4f499b000/0x0/0x4ffc00000, data 0x279f7bb/0x2a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 498 ms_handle_reset con 0x56094f94b400 session 0x56094f43f4a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:38.452114+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 498 heartbeat osd_stat(store_statfs(0x4f499b000/0x0/0x4ffc00000, data 0x279f7bb/0x2a42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b2000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191455232 unmapped: 81076224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:39.452272+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 498 handle_osd_map epochs [499,499], i have 498, src has [1,499]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.229943275s of 10.668668747s, submitted: 84
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 81043456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 499 ms_handle_reset con 0x56094f7b2000 session 0x56094f8da3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954151800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:40.452373+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340585 data_alloc: 218103808 data_used: 6606848
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 499 ms_handle_reset con 0x560954151800 session 0x56094f896d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191520768 unmapped: 81010688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 499 heartbeat osd_stat(store_statfs(0x4f4994000/0x0/0x4ffc00000, data 0x27a2edf/0x2a47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:41.452580+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191520768 unmapped: 81010688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f753400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:42.452741+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 499 heartbeat osd_stat(store_statfs(0x4f4995000/0x0/0x4ffc00000, data 0x27a2edf/0x2a47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191553536 unmapped: 80977920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:43.452916+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191553536 unmapped: 80977920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:44.453067+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 499 handle_osd_map epochs [500,500], i have 499, src has [1,500]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191569920 unmapped: 80961536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:45.453253+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3341743 data_alloc: 218103808 data_used: 6610944
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 500 ms_handle_reset con 0x56094f753400 session 0x56095122da40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191569920 unmapped: 80961536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:46.453544+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191578112 unmapped: 80953344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b3400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:47.453744+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 500 heartbeat osd_stat(store_statfs(0x4f4994000/0x0/0x4ffc00000, data 0x27a4a6a/0x2a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 80936960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:48.453901+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 80936960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:49.454114+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 80936960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:50.454292+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340606 data_alloc: 218103808 data_used: 6615040
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 80936960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:51.454510+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 80936960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:52.454715+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 80936960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:53.454861+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 500 heartbeat osd_stat(store_statfs(0x4f4995000/0x0/0x4ffc00000, data 0x27a4a6a/0x2a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191594496 unmapped: 80936960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:54.455054+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191602688 unmapped: 80928768 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:55.455242+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340606 data_alloc: 218103808 data_used: 6615040
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191610880 unmapped: 80920576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:56.455488+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191610880 unmapped: 80920576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:57.456008+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 500 heartbeat osd_stat(store_statfs(0x4f4995000/0x0/0x4ffc00000, data 0x27a4a6a/0x2a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191610880 unmapped: 80920576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:58.456242+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191610880 unmapped: 80920576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:59.456519+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191610880 unmapped: 80920576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:00.456752+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340606 data_alloc: 218103808 data_used: 6615040
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191610880 unmapped: 80920576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:01.456941+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191610880 unmapped: 80920576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:02.457147+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191619072 unmapped: 80912384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 500 heartbeat osd_stat(store_statfs(0x4f4995000/0x0/0x4ffc00000, data 0x27a4a6a/0x2a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:03.457330+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191619072 unmapped: 80912384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:04.457486+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191619072 unmapped: 80912384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:05.457715+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3340606 data_alloc: 218103808 data_used: 6615040
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 500 heartbeat osd_stat(store_statfs(0x4f4995000/0x0/0x4ffc00000, data 0x27a4a6a/0x2a49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191619072 unmapped: 80912384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:06.457976+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 17.921861649s, txc = 0x560950676000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_commit, latency = 17.921804428s
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency slow operation observed for kv_sync, latency = 17.921804428s
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.752054691s of 26.635204315s, submitted: 93
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191627264 unmapped: 80904192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:07.458137+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 500 handle_osd_map epochs [501,501], i have 500, src has [1,501]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191627264 unmapped: 80904192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:08.458352+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191668224 unmapped: 80863232 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 501 handle_osd_map epochs [501,502], i have 501, src has [1,502]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:09.458533+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 501 handle_osd_map epochs [502,502], i have 502, src has [1,502]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191700992 unmapped: 80830464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 502 heartbeat osd_stat(store_statfs(0x4f498d000/0x0/0x4ffc00000, data 0x27a80ba/0x2a4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:10.458683+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3348426 data_alloc: 218103808 data_used: 6623232
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191700992 unmapped: 80830464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:11.458843+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191700992 unmapped: 80830464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:12.459139+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191700992 unmapped: 80830464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:13.459350+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191700992 unmapped: 80830464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 502 handle_osd_map epochs [503,503], i have 502, src has [1,503]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:14.459558+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191717376 unmapped: 80814080 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:15.459753+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3361032 data_alloc: 218103808 data_used: 6631424
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191717376 unmapped: 80814080 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 503 heartbeat osd_stat(store_statfs(0x4f498b000/0x0/0x4ffc00000, data 0x27a9b39/0x2a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:16.459945+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.450307131s of 10.100664139s, submitted: 58
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 503 heartbeat osd_stat(store_statfs(0x4f498b000/0x0/0x4ffc00000, data 0x27a9b39/0x2a52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191741952 unmapped: 80789504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:17.460055+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191741952 unmapped: 80789504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:18.460212+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191741952 unmapped: 80789504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 503 ms_handle_reset con 0x56094f7b3400 session 0x560953c90d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:19.460340+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191741952 unmapped: 80789504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:20.460534+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3358393 data_alloc: 218103808 data_used: 6627328
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191741952 unmapped: 80789504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:21.460754+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 503 heartbeat osd_stat(store_statfs(0x4f498d000/0x0/0x4ffc00000, data 0x27a9ad7/0x2a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b4000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191741952 unmapped: 80789504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:22.460949+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191741952 unmapped: 80789504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:23.461130+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 503 handle_osd_map epochs [504,504], i have 503, src has [1,504]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191758336 unmapped: 80773120 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:24.461269+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 504 handle_osd_map epochs [505,505], i have 504, src has [1,505]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191766528 unmapped: 80764928 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:25.461482+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3365499 data_alloc: 218103808 data_used: 6635520
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191791104 unmapped: 80740352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:26.461645+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191791104 unmapped: 80740352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 505 heartbeat osd_stat(store_statfs(0x4f4986000/0x0/0x4ffc00000, data 0x27ad0a9/0x2a56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:27.461846+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191799296 unmapped: 80732160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:28.461982+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191799296 unmapped: 80732160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:29.462117+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191799296 unmapped: 80732160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:30.462274+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3365499 data_alloc: 218103808 data_used: 6635520
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191799296 unmapped: 80732160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:31.462386+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 505 heartbeat osd_stat(store_statfs(0x4f4986000/0x0/0x4ffc00000, data 0x27ad0a9/0x2a56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191799296 unmapped: 80732160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:32.462557+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191799296 unmapped: 80732160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:33.462713+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 505 handle_osd_map epochs [506,506], i have 505, src has [1,506]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.931665421s of 16.849456787s, submitted: 43
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 506 ms_handle_reset con 0x56094f7b4000 session 0x560951fccb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191807488 unmapped: 80723968 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:34.462841+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191815680 unmapped: 80715776 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:35.463052+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3367801 data_alloc: 218103808 data_used: 6635520
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191815680 unmapped: 80715776 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:36.463467+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 506 heartbeat osd_stat(store_statfs(0x4f4984000/0x0/0x4ffc00000, data 0x27aeb0c/0x2a59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191815680 unmapped: 80715776 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 506 handle_osd_map epochs [507,507], i have 506, src has [1,507]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:37.463586+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 507 ms_handle_reset con 0x560951ba3800 session 0x560950de1a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609525cc000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191832064 unmapped: 80699392 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:38.463679+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191840256 unmapped: 80691200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:39.463809+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191840256 unmapped: 80691200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:40.463931+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370775 data_alloc: 218103808 data_used: 6635520
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191840256 unmapped: 80691200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 507 ms_handle_reset con 0x5609525cc000 session 0x560950de01e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:41.464068+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 507 heartbeat osd_stat(store_statfs(0x4f4981000/0x0/0x4ffc00000, data 0x27b06dd/0x2a5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191840256 unmapped: 80691200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:42.464162+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191856640 unmapped: 80674816 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:43.464249+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191873024 unmapped: 80658432 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:44.464351+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 507 heartbeat osd_stat(store_statfs(0x4f4981000/0x0/0x4ffc00000, data 0x27b06dd/0x2a5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191873024 unmapped: 80658432 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:45.464542+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370775 data_alloc: 218103808 data_used: 6635520
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191873024 unmapped: 80658432 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:46.464711+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191873024 unmapped: 80658432 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:47.464885+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 507 handle_osd_map epochs [508,508], i have 507, src has [1,508]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.046224594s of 14.649478912s, submitted: 32
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 508 ms_handle_reset con 0x56094f8f8c00 session 0x56094f43ed20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191905792 unmapped: 80625664 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:48.465245+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191905792 unmapped: 80625664 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:49.465480+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954164800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 508 heartbeat osd_stat(store_statfs(0x4f497d000/0x0/0x4ffc00000, data 0x27b21a2/0x2a60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191905792 unmapped: 80625664 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:50.465653+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 508 handle_osd_map epochs [509,509], i have 508, src has [1,509]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3378416 data_alloc: 218103808 data_used: 6635520
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 509 ms_handle_reset con 0x560954164800 session 0x560951242b40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191922176 unmapped: 80609280 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 509 heartbeat osd_stat(store_statfs(0x4f497a000/0x0/0x4ffc00000, data 0x27b3d1f/0x2a63000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x881f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:51.465826+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504da000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191922176 unmapped: 80609280 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:52.466074+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 509 ms_handle_reset con 0x5609504da000 session 0x5609512265a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191922176 unmapped: 80609280 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:53.466277+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 509 handle_osd_map epochs [510,510], i have 509, src has [1,510]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 510 ms_handle_reset con 0x5609517f9c00 session 0x5609512425a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191946752 unmapped: 80584704 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:54.466487+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7cfc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 510 ms_handle_reset con 0x56094f7cfc00 session 0x5609500c6d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 510 ms_handle_reset con 0x56094f8f8c00 session 0x560950de3e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191963136 unmapped: 80568320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:55.466661+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3380606 data_alloc: 218103808 data_used: 6635520
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504da000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 510 ms_handle_reset con 0x5609504da000 session 0x56094f77c3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 191987712 unmapped: 80543744 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:56.466836+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 510 heartbeat osd_stat(store_statfs(0x4f4568000/0x0/0x4ffc00000, data 0x27b588e/0x2a65000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 192004096 unmapped: 80527360 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:57.467076+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 510 handle_osd_map epochs [511,511], i have 510, src has [1,511]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.582767487s of 10.015730858s, submitted: 48
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 511 ms_handle_reset con 0x5609517f9c00 session 0x56094f361c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 192020480 unmapped: 80510976 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:58.467328+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954164800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 511 ms_handle_reset con 0x560954164800 session 0x56095121e5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c34400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 511 handle_osd_map epochs [512,512], i have 511, src has [1,512]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 512 ms_handle_reset con 0x560951c34400 session 0x56094f897680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 192094208 unmapped: 80437248 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:59.467514+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 512 ms_handle_reset con 0x56094f8f8c00 session 0x560953c90780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504da000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 512 ms_handle_reset con 0x5609504da000 session 0x560951fcd2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 192143360 unmapped: 80388096 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:00.467700+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387968 data_alloc: 218103808 data_used: 6647808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 512 heartbeat osd_stat(store_statfs(0x4f4562000/0x0/0x4ffc00000, data 0x27b906a/0x2a6c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 192143360 unmapped: 80388096 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:01.467848+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 512 ms_handle_reset con 0x5609517f9c00 session 0x56095183af00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 192176128 unmapped: 80355328 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:02.468068+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954164800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 512 handle_osd_map epochs [513,513], i have 512, src has [1,513]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f455e000/0x0/0x4ffc00000, data 0x27baae9/0x2a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 192192512 unmapped: 80338944 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:03.468401+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 513 heartbeat osd_stat(store_statfs(0x4f455e000/0x0/0x4ffc00000, data 0x27baae9/0x2a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 192208896 unmapped: 80322560 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:04.468677+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 513 handle_osd_map epochs [514,514], i have 513, src has [1,514]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 514 ms_handle_reset con 0x560954164800 session 0x56094f7790e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095289fc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 514 ms_handle_reset con 0x56095289fc00 session 0x56095183a960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193265664 unmapped: 79265792 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:05.468906+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 514 handle_osd_map epochs [515,515], i have 514, src has [1,515]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3400191 data_alloc: 218103808 data_used: 6668288
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 515 ms_handle_reset con 0x56094f7b0800 session 0x560951228000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193298432 unmapped: 79233024 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:06.469217+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 515 heartbeat osd_stat(store_statfs(0x4f4558000/0x0/0x4ffc00000, data 0x27be263/0x2a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 515 ms_handle_reset con 0x56094f8f8c00 session 0x5609519aad20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504da000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 515 ms_handle_reset con 0x5609504da000 session 0x56094f43f0e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193306624 unmapped: 79224832 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:07.469407+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 515 handle_osd_map epochs [516,516], i have 515, src has [1,516]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 516 ms_handle_reset con 0x5609517f9c00 session 0x56095183a780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193331200 unmapped: 79200256 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:08.469849+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84f800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.973458290s of 11.184897423s, submitted: 121
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193347584 unmapped: 79183872 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:09.470012+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193347584 unmapped: 79183872 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:10.470407+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 516 heartbeat osd_stat(store_statfs(0x4f4556000/0x0/0x4ffc00000, data 0x27bfc90/0x2a78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 516 handle_osd_map epochs [517,517], i have 516, src has [1,517]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3408959 data_alloc: 218103808 data_used: 6672384
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 517 ms_handle_reset con 0x56094f84f800 session 0x56094f77d2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193355776 unmapped: 79175680 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:11.470667+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 517 ms_handle_reset con 0x56094f7b0800 session 0x5609519325a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193445888 unmapped: 79085568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:12.470851+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 517 heartbeat osd_stat(store_statfs(0x4f4553000/0x0/0x4ffc00000, data 0x27c1829/0x2a7b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 517 handle_osd_map epochs [518,518], i have 517, src has [1,518]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193445888 unmapped: 79085568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:13.471073+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193478656 unmapped: 79052800 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:14.471236+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 518 ms_handle_reset con 0x56094f8f8c00 session 0x560951933680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193503232 unmapped: 79028224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:15.471533+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095289f400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412482 data_alloc: 218103808 data_used: 6688768
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 518 ms_handle_reset con 0x56095289f400 session 0x56094f361680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095386f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 518 heartbeat osd_stat(store_statfs(0x4f4550000/0x0/0x4ffc00000, data 0x27c3432/0x2a7e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 79020032 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:16.471697+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 79020032 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:17.471824+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193511424 unmapped: 79020032 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:18.472035+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 518 handle_osd_map epochs [519,519], i have 518, src has [1,519]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 519 heartbeat osd_stat(store_statfs(0x4f454d000/0x0/0x4ffc00000, data 0x27c4e85/0x2a80000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.939658642s of 10.036760330s, submitted: 61
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 519 ms_handle_reset con 0x56095386f000 session 0x56094f397e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193576960 unmapped: 78954496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:19.472239+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952849c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 519 ms_handle_reset con 0x560952849c00 session 0x5609512250e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193576960 unmapped: 78954496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:20.472369+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415954 data_alloc: 218103808 data_used: 6692864
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:21.472589+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193593344 unmapped: 78938112 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 519 handle_osd_map epochs [520,520], i have 519, src has [1,520]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 520 ms_handle_reset con 0x56094f7b0800 session 0x56094f43e5a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:22.472707+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193642496 unmapped: 78888960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 520 ms_handle_reset con 0x56094f8f8c00 session 0x5609516f8d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095289f400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 520 handle_osd_map epochs [521,521], i have 520, src has [1,521]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 521 ms_handle_reset con 0x56095289f400 session 0x560951225860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:23.472883+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193675264 unmapped: 78856192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095386f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 521 ms_handle_reset con 0x56095386f000 session 0x5609515e23c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 521 ms_handle_reset con 0x56094f9a0800 session 0x560950de14a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:24.473172+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193675264 unmapped: 78856192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 521 heartbeat osd_stat(store_statfs(0x4f4548000/0x0/0x4ffc00000, data 0x27c85d3/0x2a86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:25.473390+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193675264 unmapped: 78856192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420149 data_alloc: 218103808 data_used: 6692864
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:26.473556+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193675264 unmapped: 78856192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 521 heartbeat osd_stat(store_statfs(0x4f4548000/0x0/0x4ffc00000, data 0x27c85d3/0x2a86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:27.473823+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193675264 unmapped: 78856192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:28.474084+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193675264 unmapped: 78856192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:29.474297+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193675264 unmapped: 78856192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 521 heartbeat osd_stat(store_statfs(0x4f4548000/0x0/0x4ffc00000, data 0x27c85d3/0x2a86000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 521 handle_osd_map epochs [522,522], i have 521, src has [1,522]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.880600929s of 11.047994614s, submitted: 40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:30.474570+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193708032 unmapped: 78823424 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:31.474749+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193716224 unmapped: 78815232 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:32.474937+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193716224 unmapped: 78815232 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:33.475132+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193716224 unmapped: 78815232 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:34.475524+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193716224 unmapped: 78815232 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:35.475718+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193716224 unmapped: 78815232 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:36.475909+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193716224 unmapped: 78815232 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:37.476145+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193716224 unmapped: 78815232 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:38.476277+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193724416 unmapped: 78807040 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:39.476477+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193724416 unmapped: 78807040 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:40.476621+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193724416 unmapped: 78807040 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:41.476765+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193724416 unmapped: 78807040 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:42.476989+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193724416 unmapped: 78807040 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:43.477185+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193724416 unmapped: 78807040 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:44.477491+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193724416 unmapped: 78807040 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:45.477733+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193724416 unmapped: 78807040 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:46.478070+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193724416 unmapped: 78807040 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:47.478467+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193732608 unmapped: 78798848 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:48.478615+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193732608 unmapped: 78798848 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:49.478823+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193732608 unmapped: 78798848 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:50.478991+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193732608 unmapped: 78798848 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:51.479176+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193732608 unmapped: 78798848 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:52.479338+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193732608 unmapped: 78798848 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:53.479511+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 78790656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:54.479692+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 78790656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:55.479906+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 78790656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:56.480036+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193748992 unmapped: 78782464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:57.480193+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193748992 unmapped: 78782464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:58.480365+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193748992 unmapped: 78782464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:59.480665+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193748992 unmapped: 78782464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:00.480916+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193748992 unmapped: 78782464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:01.481064+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193748992 unmapped: 78782464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:02.481232+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193748992 unmapped: 78782464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:03.481390+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193748992 unmapped: 78782464 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:04.481625+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193757184 unmapped: 78774272 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:05.481897+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193757184 unmapped: 78774272 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:06.482093+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193757184 unmapped: 78774272 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:07.482263+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193757184 unmapped: 78774272 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:08.482569+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193757184 unmapped: 78774272 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:09.482743+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193757184 unmapped: 78774272 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:10.482907+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193757184 unmapped: 78774272 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:11.483154+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193765376 unmapped: 78766080 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:12.483505+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193765376 unmapped: 78766080 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:13.483796+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193773568 unmapped: 78757888 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:14.484070+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193773568 unmapped: 78757888 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:15.484376+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193773568 unmapped: 78757888 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:16.484593+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193781760 unmapped: 78749696 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:17.484803+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193781760 unmapped: 78749696 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:18.485033+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193781760 unmapped: 78749696 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:19.485334+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193789952 unmapped: 78741504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:20.485553+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193789952 unmapped: 78741504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:21.485789+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193789952 unmapped: 78741504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:22.486073+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193789952 unmapped: 78741504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:23.486499+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193789952 unmapped: 78741504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:24.486757+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193789952 unmapped: 78741504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:25.487092+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193789952 unmapped: 78741504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:26.487319+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193789952 unmapped: 78741504 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:27.487549+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 78733312 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:28.487863+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 78733312 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:29.488015+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 78733312 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:30.488159+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 78733312 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:31.488500+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 78733312 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:32.488794+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193798144 unmapped: 78733312 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:33.489065+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193806336 unmapped: 78725120 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:34.489344+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193806336 unmapped: 78725120 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:35.489749+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193806336 unmapped: 78725120 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424323 data_alloc: 218103808 data_used: 6701056
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:36.489957+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095240d000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 193806336 unmapped: 78725120 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x56095240d000 session 0x56094f8db860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951b92000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560951b92000 session 0x560950de1680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:37.490357+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 194543616 unmapped: 77987840 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:38.490649+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 194543616 unmapped: 77987840 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:39.490952+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 194543616 unmapped: 77987840 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:40.491139+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 194543616 unmapped: 77987840 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3426083 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:41.491350+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 194551808 unmapped: 77979648 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:42.491600+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 194551808 unmapped: 77979648 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:43.491754+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950511000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 73.052391052s of 73.129081726s, submitted: 26
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 76652544 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:44.492036+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 76652544 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560950511000 session 0x56094f397e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:45.492215+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 76652544 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3430518 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:46.492362+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 76652544 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:47.492518+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 76652544 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4504000/0x0/0x4ffc00000, data 0x280a099/0x2aca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:48.492697+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 76652544 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4504000/0x0/0x4ffc00000, data 0x280a099/0x2aca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:49.492895+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 76652544 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:50.493105+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 76652544 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3430518 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4504000/0x0/0x4ffc00000, data 0x280a099/0x2aca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:51.493247+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 76652544 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:52.493409+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4504000/0x0/0x4ffc00000, data 0x280a099/0x2aca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 76652544 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:53.493647+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4504000/0x0/0x4ffc00000, data 0x280a099/0x2aca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195878912 unmapped: 76652544 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:54.493848+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195887104 unmapped: 76644352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:55.494020+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195887104 unmapped: 76644352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3430518 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:56.494163+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4504000/0x0/0x4ffc00000, data 0x280a099/0x2aca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195887104 unmapped: 76644352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba3c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560951ba3c00 session 0x5609519325a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f31000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:57.494352+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.956257820s of 13.996915817s, submitted: 6
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4504000/0x0/0x4ffc00000, data 0x280a099/0x2aca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195887104 unmapped: 76644352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:58.494543+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195895296 unmapped: 76636160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:59.494750+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195895296 unmapped: 76636160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:00.494940+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195895296 unmapped: 76636160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca099/0x2a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3427208 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560951f31000 session 0x56094f77d2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950511000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:01.495127+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195895296 unmapped: 76636160 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:02.495549+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195903488 unmapped: 76627968 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:03.495779+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195682304 unmapped: 76849152 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4143000/0x0/0x4ffc00000, data 0x2bca060/0x2e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,3])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:04.497401+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195682304 unmapped: 76849152 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:05.497758+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195682304 unmapped: 76849152 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457413 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:06.528819+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204079104 unmapped: 68452352 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4144000/0x0/0x4ffc00000, data 0x2bca060/0x2e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [0,0,0,0,0,0,1,0,1,3,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:07.528993+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.098222256s of 10.003301620s, submitted: 40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195698688 unmapped: 76832768 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:08.529220+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560950511000 session 0x560951228000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195706880 unmapped: 76824576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504db400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x5609504db400 session 0x560950de3e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:09.529530+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:10.529855+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3640309 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:11.530202+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:12.530478+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:13.530720+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:14.530929+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:15.531166+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3640309 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:16.531377+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:17.531512+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:18.531692+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:19.531905+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:20.532085+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3640309 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:21.532294+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:22.532495+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:23.532695+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:24.532857+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:25.533017+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3640309 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:26.533232+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:27.533415+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609505ff400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x5609505ff400 session 0x5609512425a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:28.533614+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:29.533836+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:30.533978+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3640309 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:31.534187+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:32.534381+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:33.534503+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:34.534719+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:35.534911+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3640309 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:36.535078+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:37.535308+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:38.535528+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:39.535770+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:40.535966+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3640309 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:41.536223+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ed7400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560953ed7400 session 0x5609512265a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195657728 unmapped: 76873728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0ec00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560957a0ec00 session 0x56094f43ed20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504db400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:42.536548+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 33.954441071s of 35.283782959s, submitted: 15
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:43.536742+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x5609504db400 session 0x560950de01e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094ef10000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095289f800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:44.536889+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:45.537093+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3641762 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:46.537300+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:47.537493+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:48.537715+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:49.537854+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:50.537969+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3641762 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:51.538080+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:52.538207+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:53.538473+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:54.538643+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:55.538871+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:56.539038+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3641762 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f274b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:57.539226+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 195649536 unmapped: 76881920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.253245354s of 15.682510376s, submitted: 6
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:58.539371+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205004800 unmapped: 67526656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:59.539513+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211558400 unmapped: 60973056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:00.539688+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211558400 unmapped: 60973056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f239b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:01.539833+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3663498 data_alloc: 218103808 data_used: 8146944
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f239b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211558400 unmapped: 60973056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:02.540019+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211558400 unmapped: 60973056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:03.540158+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f239b000/0x0/0x4ffc00000, data 0x45c3098/0x4883000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211558400 unmapped: 60973056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:04.540281+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204718080 unmapped: 67813376 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:05.540476+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205766656 unmapped: 66764800 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:06.540637+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3715554 data_alloc: 218103808 data_used: 8376320
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 67878912 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:07.540841+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 67878912 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:08.541039+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.243215084s of 10.583021164s, submitted: 102
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 67878912 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:09.541202+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f1e46000/0x0/0x4ffc00000, data 0x4ec8098/0x5188000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 67878912 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:10.541355+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 29K writes, 114K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.86 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4453 writes, 12K keys, 4453 commit groups, 1.0 writes per commit group, ingest: 10.10 MB, 0.02 MB/s
                                           Interval WAL: 4453 writes, 1741 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204652544 unmapped: 67878912 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:11.541540+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769730 data_alloc: 218103808 data_used: 8372224
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204955648 unmapped: 67575808 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x56094ef10000 session 0x560953c90d20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x56095289f800 session 0x560951242780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560957a0e000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:12.541648+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560957a0e000 session 0x56095195fc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f3000/0x0/0x4ffc00000, data 0x561b098/0x58db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:13.541791+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f3000/0x0/0x4ffc00000, data 0x561b098/0x58db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:14.541951+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:15.542234+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:16.542849+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3766934 data_alloc: 218103808 data_used: 8376320
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:17.543356+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:18.543924+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:19.544110+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f3000/0x0/0x4ffc00000, data 0x561b098/0x58db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:20.544273+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f3000/0x0/0x4ffc00000, data 0x561b098/0x58db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:21.544678+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3766934 data_alloc: 218103808 data_used: 8376320
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:22.545475+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f3000/0x0/0x4ffc00000, data 0x561b098/0x58db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:23.545761+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:24.546112+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:25.546403+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:26.547236+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3766934 data_alloc: 218103808 data_used: 8376320
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f3000/0x0/0x4ffc00000, data 0x561b098/0x58db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:27.547486+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:28.547775+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:29.547976+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:30.548125+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:31.548455+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3766934 data_alloc: 218103808 data_used: 8376320
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:32.548701+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f3000/0x0/0x4ffc00000, data 0x561b098/0x58db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:33.549023+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:34.549468+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:35.549681+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:36.549875+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f3000/0x0/0x4ffc00000, data 0x561b098/0x58db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3766934 data_alloc: 218103808 data_used: 8376320
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954164000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560954164000 session 0x560951228780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:37.550328+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094ef10000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x56094ef10000 session 0x560951227c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:38.550561+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:39.550769+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x56094f762000 session 0x56095195e1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.861545563s of 30.615922928s, submitted: 53
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560951ba5000 session 0x56095121fe00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f84ec00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609543d5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:40.550965+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:41.551159+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770729 data_alloc: 218103808 data_used: 8376320
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:42.551336+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:43.551481+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:44.551601+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:45.551743+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:46.551860+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770729 data_alloc: 218103808 data_used: 8376320
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204906496 unmapped: 67624960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:47.552015+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204914688 unmapped: 67616768 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:48.552177+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204914688 unmapped: 67616768 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:49.552334+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204914688 unmapped: 67616768 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:50.552473+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204914688 unmapped: 67616768 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:51.552628+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3770729 data_alloc: 218103808 data_used: 8376320
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204914688 unmapped: 67616768 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:52.552765+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204914688 unmapped: 67616768 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:53.553099+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.128628731s of 14.159121513s, submitted: 5
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204922880 unmapped: 67608576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:54.553965+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204931072 unmapped: 67600384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:55.554292+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204931072 unmapped: 67600384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:56.555029+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3790481 data_alloc: 234881024 data_used: 10235904
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204931072 unmapped: 67600384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:57.555880+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204931072 unmapped: 67600384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:58.556336+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204939264 unmapped: 67592192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:59.556878+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204939264 unmapped: 67592192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:00.557134+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204939264 unmapped: 67592192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:01.557603+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791313 data_alloc: 234881024 data_used: 10240000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204939264 unmapped: 67592192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:02.558065+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204939264 unmapped: 67592192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:03.558535+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204939264 unmapped: 67592192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:04.558848+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204939264 unmapped: 67592192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:05.559139+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204939264 unmapped: 67592192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:06.559307+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791313 data_alloc: 234881024 data_used: 10240000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204939264 unmapped: 67592192 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:07.559700+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 67584000 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:08.559875+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.407077789s of 15.433992386s, submitted: 6
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 67584000 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:09.560253+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 67584000 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:10.560639+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 67584000 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:11.560898+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3790449 data_alloc: 234881024 data_used: 10235904
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 67584000 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:12.561217+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204947456 unmapped: 67584000 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:13.561400+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204955648 unmapped: 67575808 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:14.561597+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204955648 unmapped: 67575808 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:15.564556+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204972032 unmapped: 67559424 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:16.564704+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3790097 data_alloc: 234881024 data_used: 10235904
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204972032 unmapped: 67559424 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:17.564853+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204980224 unmapped: 67551232 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:18.565011+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.461207390s of 10.007097244s, submitted: 54
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204996608 unmapped: 67534848 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:19.565155+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205004800 unmapped: 67526656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:20.565330+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205004800 unmapped: 67526656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:21.565452+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f1000/0x0/0x4ffc00000, data 0x561b0cb/0x58dd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3790097 data_alloc: 234881024 data_used: 10235904
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205004800 unmapped: 67526656 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:22.565643+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x56094f84ec00 session 0x56094f3f7a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x5609543d5400 session 0x56094f8daf00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205234176 unmapped: 67297280 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:23.565745+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094ef10000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:24.565939+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205234176 unmapped: 67297280 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f2000/0x0/0x4ffc00000, data 0x561b0bb/0x58dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x56094ef10000 session 0x5609519e90e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:25.566163+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205234176 unmapped: 67297280 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:26.566362+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205234176 unmapped: 67297280 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3801244 data_alloc: 234881024 data_used: 12267520
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f16f2000/0x0/0x4ffc00000, data 0x561b098/0x58db000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:27.566532+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205234176 unmapped: 67297280 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:28.566688+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205234176 unmapped: 67297280 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:29.566827+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205234176 unmapped: 67297280 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:30.566989+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952b32c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 205234176 unmapped: 67297280 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560952b32c00 session 0x5609521883c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609519af400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.502923012s of 11.320322037s, submitted: 54
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x5609519af400 session 0x560953c90000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:31.567121+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203513856 unmapped: 69017600 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447788 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:32.567278+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203513856 unmapped: 69017600 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:33.567462+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203513856 unmapped: 69017600 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:34.567645+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203513856 unmapped: 69017600 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:35.567837+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203513856 unmapped: 69017600 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:36.567956+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203513856 unmapped: 69017600 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447788 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:37.568101+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203513856 unmapped: 69017600 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:38.568305+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203513856 unmapped: 69017600 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:39.568543+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203513856 unmapped: 69017600 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:40.568729+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203513856 unmapped: 69017600 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:41.568949+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203513856 unmapped: 69017600 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447788 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:42.569180+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203513856 unmapped: 69017600 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560951c59800 session 0x56095195e960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba2800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560951ba2800 session 0x560951fcda40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:43.569367+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:44.569581+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:45.569842+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:46.570090+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448588 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:47.570256+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:48.570403+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095386f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.337261200s of 18.435108185s, submitted: 25
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:49.570631+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca046/0x2a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:50.570858+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x56095386f000 session 0x560951fccd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca046/0x2a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:51.571055+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca046/0x2a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450240 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:52.571230+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:53.571394+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:54.571642+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:55.571891+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:56.572145+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca046/0x2a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3450240 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:57.572401+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:58.572622+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:59.572802+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:00.573035+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:01.573214+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203522048 unmapped: 69009408 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f2e800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560951f2e800 session 0x5609500c63c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.801617622s of 12.871981621s, submitted: 2
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4544000/0x0/0x4ffc00000, data 0x27ca046/0x2a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560951ba5000 session 0x5609512281e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3448412 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609544f6400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:02.573412+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 216129536 unmapped: 56401920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x5609544f6400 session 0x5609516f9e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609517f6400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:03.573615+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x5609517f6400 session 0x56094f3923c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203571200 unmapped: 68960256 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f1544000/0x0/0x4ffc00000, data 0x57ca046/0x5a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f1544000/0x0/0x4ffc00000, data 0x57ca046/0x5a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:04.573807+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:05.574032+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:06.574217+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f1544000/0x0/0x4ffc00000, data 0x57ca046/0x5a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788313 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:07.574527+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f1544000/0x0/0x4ffc00000, data 0x57ca046/0x5a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:08.574714+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f1544000/0x0/0x4ffc00000, data 0x57ca046/0x5a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:09.574905+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:10.575048+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:11.575235+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788313 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:12.575465+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f1544000/0x0/0x4ffc00000, data 0x57ca046/0x5a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:13.575657+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:14.575822+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:15.576068+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:16.576306+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3788313 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:17.576509+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:18.576788+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f1544000/0x0/0x4ffc00000, data 0x57ca046/0x5a8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203579392 unmapped: 68952064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:19.576966+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609523ff400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.677957535s of 17.731662750s, submitted: 26
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x5609523ff400 session 0x560954cc9860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203898880 unmapped: 68632576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560954164800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:20.577151+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203898880 unmapped: 68632576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:21.577311+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203898880 unmapped: 68632576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793174 data_alloc: 218103808 data_used: 7491584
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:22.577571+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 203898880 unmapped: 68632576 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f151f000/0x0/0x4ffc00000, data 0x57ee069/0x5aaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:23.577725+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 204513280 unmapped: 68018176 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:24.577863+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 207478784 unmapped: 65052672 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:25.578022+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f151f000/0x0/0x4ffc00000, data 0x57ee069/0x5aaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 207478784 unmapped: 65052672 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f151f000/0x0/0x4ffc00000, data 0x57ee069/0x5aaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:26.578148+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 207478784 unmapped: 65052672 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3861174 data_alloc: 234881024 data_used: 17129472
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:27.578295+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 207478784 unmapped: 65052672 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:28.578444+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 207478784 unmapped: 65052672 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:29.578573+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 207478784 unmapped: 65052672 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f151f000/0x0/0x4ffc00000, data 0x57ee069/0x5aaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:30.578724+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 207478784 unmapped: 65052672 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:31.578849+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 207478784 unmapped: 65052672 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3861174 data_alloc: 234881024 data_used: 17129472
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:32.578953+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 207478784 unmapped: 65052672 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:33.579114+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 207478784 unmapped: 65052672 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.682613373s of 14.719784737s, submitted: 6
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f140f000/0x0/0x4ffc00000, data 0x57ee069/0x5aaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [0,0,0,1,1])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:34.579291+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214745088 unmapped: 57786368 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f13cf000/0x0/0x4ffc00000, data 0x57ee069/0x5aaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:35.579507+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215908352 unmapped: 56623104 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:36.579632+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214876160 unmapped: 57655296 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3978514 data_alloc: 234881024 data_used: 17903616
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:37.579748+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215171072 unmapped: 57360384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:38.579874+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215171072 unmapped: 57360384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f0687000/0x0/0x4ffc00000, data 0x6536069/0x67f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:39.580033+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215171072 unmapped: 57360384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f0687000/0x0/0x4ffc00000, data 0x6536069/0x67f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:40.580195+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215171072 unmapped: 57360384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:41.580303+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215171072 unmapped: 57360384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3978514 data_alloc: 234881024 data_used: 17903616
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:42.580542+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215171072 unmapped: 57360384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:43.580672+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215171072 unmapped: 57360384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f0687000/0x0/0x4ffc00000, data 0x6536069/0x67f7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:44.580841+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215171072 unmapped: 57360384 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560951ba5400 session 0x5609504b14a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.356616020s of 10.802846909s, submitted: 73
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560954164800 session 0x56095121eb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:45.580997+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560952b32000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560952b32000 session 0x560953c90780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:46.581147+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3966430 data_alloc: 234881024 data_used: 17813504
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:47.581314+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07fb000/0x0/0x4ffc00000, data 0x6512046/0x67d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:48.581487+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:49.581621+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:50.581766+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:51.581898+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07fb000/0x0/0x4ffc00000, data 0x6512046/0x67d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3966430 data_alloc: 234881024 data_used: 17813504
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:52.582069+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:53.582202+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:54.582512+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07fb000/0x0/0x4ffc00000, data 0x6512046/0x67d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:55.583123+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07fb000/0x0/0x4ffc00000, data 0x6512046/0x67d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:56.583241+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07fb000/0x0/0x4ffc00000, data 0x6512046/0x67d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3966430 data_alloc: 234881024 data_used: 17813504
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:57.583357+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07fb000/0x0/0x4ffc00000, data 0x6512046/0x67d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:58.583495+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:59.583625+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:00.583721+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:01.583840+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07fb000/0x0/0x4ffc00000, data 0x6512046/0x67d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3966430 data_alloc: 234881024 data_used: 17813504
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:02.583985+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:03.584618+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07fb000/0x0/0x4ffc00000, data 0x6512046/0x67d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:04.584778+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213467136 unmapped: 59064320 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953839000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 20.156423569s of 20.248016357s, submitted: 17
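`_kv_sync_thread utilization` is BlueStore's periodic duty-cycle report for the thread that commits key/value transactions: over this ~20.2 s window it was busy for only ~0.09 s while flushing 17 submitted batches, i.e. the OSD is essentially idle. The arithmetic, with the values copied from the line above:

```python
# Duty cycle of the kv sync thread (numbers copied from the log line above).
idle, window, submitted = 20.156423569, 20.248016357, 17
busy = window - idle
print(f"busy {busy:.3f}s ({busy / window:.2%}), "
      f"~{busy / submitted * 1000:.1f} ms per submitted batch")
# -> busy 0.092s (0.45%), ~5.4 ms per submitted batch
```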
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560953839000 session 0x560954cc85a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:05.585703+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609523ff400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213630976 unmapped: 58900480 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:06.585833+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213630976 unmapped: 58900480 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07d8000/0x0/0x4ffc00000, data 0x6536046/0x67f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3970931 data_alloc: 234881024 data_used: 17813504
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:07.586012+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213630976 unmapped: 58900480 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:08.586166+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213630976 unmapped: 58900480 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:09.586306+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213655552 unmapped: 58875904 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:10.586416+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213655552 unmapped: 58875904 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:11.586600+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213655552 unmapped: 58875904 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3971571 data_alloc: 234881024 data_used: 17891328
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:12.586767+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213655552 unmapped: 58875904 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07d8000/0x0/0x4ffc00000, data 0x6536046/0x67f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:13.587009+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07d8000/0x0/0x4ffc00000, data 0x6536046/0x67f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213655552 unmapped: 58875904 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:14.587206+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213655552 unmapped: 58875904 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:15.587499+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213655552 unmapped: 58875904 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:16.587635+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213655552 unmapped: 58875904 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3971571 data_alloc: 234881024 data_used: 17891328
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:17.587774+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213655552 unmapped: 58875904 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07d8000/0x0/0x4ffc00000, data 0x6536046/0x67f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:18.587919+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213655552 unmapped: 58875904 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:19.588071+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.111109734s of 14.149739265s, submitted: 9
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 213671936 unmapped: 58859520 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:20.588221+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:21.588343+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3985263 data_alloc: 234881024 data_used: 19206144
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07d8000/0x0/0x4ffc00000, data 0x6536046/0x67f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:22.588481+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:23.588652+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:24.588820+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:25.589009+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07d8000/0x0/0x4ffc00000, data 0x6536046/0x67f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:26.589140+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3985263 data_alloc: 234881024 data_used: 19206144
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:27.589364+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:28.589525+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:29.589705+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:30.589928+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07d8000/0x0/0x4ffc00000, data 0x6536046/0x67f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:31.590104+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3985263 data_alloc: 234881024 data_used: 19206144
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:32.590248+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:33.590395+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:34.590529+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:35.590729+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07d8000/0x0/0x4ffc00000, data 0x6536046/0x67f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07d8000/0x0/0x4ffc00000, data 0x6536046/0x67f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:36.591052+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3985263 data_alloc: 234881024 data_used: 19206144
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:37.591209+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:38.591359+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:39.591546+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07d8000/0x0/0x4ffc00000, data 0x6536046/0x67f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:40.591726+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:41.591916+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3985263 data_alloc: 234881024 data_used: 19206144
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:42.592114+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:43.592256+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214016000 unmapped: 58515456 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07d8000/0x0/0x4ffc00000, data 0x6536046/0x67f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:44.592488+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214048768 unmapped: 58482688 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.643457413s of 25.657037735s, submitted: 2
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560951ba5400 session 0x560951932960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x5609523ff400 session 0x56094f365680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b4400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:45.592639+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x56094f7b4400 session 0x56095122d680
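The `handle_auth_request added challenge` / `ms_handle_reset` messages scattered through this span come in pairs sharing a connection address (0x56094f7b4400 just above, and 0x560951ba5400 / 0x5609523ff400 slightly earlier): an inbound connection authenticates and is then reset when the peer hangs up. At this debug level the resets read as routine teardown of short-lived connections (most likely probes or peer check-ins) rather than failures. A sketch that pairs the two message types by address (sample lines copied from this log):

```python
import re

# Pair "added challenge on <addr>" with the later "ms_handle_reset con <addr>".
CHAL = re.compile(r"handle_auth_request added challenge on (0x[0-9a-f]+)")
RESET = re.compile(r"ms_handle_reset con (0x[0-9a-f]+)")

lines = [
    "monclient: handle_auth_request added challenge on 0x56094f7b4400",
    "osd.1 522 ms_handle_reset con 0x56094f7b4400 session 0x56095122d680",
]

pending = set()
for line in lines:
    if m := CHAL.search(line):
        pending.add(m[1])
    elif (m := RESET.search(line)) and m[1] in pending:
        pending.discard(m[1])
        print(f"{m[1]}: challenged, then reset -- one short-lived connection")
```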
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214056960 unmapped: 58474496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:46.592795+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214056960 unmapped: 58474496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3980399 data_alloc: 234881024 data_used: 19607552
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:47.592978+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214056960 unmapped: 58474496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07fc000/0x0/0x4ffc00000, data 0x6512046/0x67d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:48.593189+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214056960 unmapped: 58474496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:49.593400+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214056960 unmapped: 58474496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f07fc000/0x0/0x4ffc00000, data 0x6512046/0x67d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:50.593629+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f8000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 214056960 unmapped: 58474496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x56094f8f8000 session 0x56094f393680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ed7400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560953ed7400 session 0x56094f5f05a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:51.593767+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472631 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:52.593936+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:53.594111+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:54.594278+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:55.594495+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:56.594688+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:57.594899+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472631 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:58.595086+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:59.595311+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:00.595489+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:01.595647+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:02.595853+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472631 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:03.596112+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:04.596321+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:05.596550+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:06.596722+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:07.596944+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472631 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:08.597185+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:09.597379+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:10.597629+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:11.597792+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:12.598015+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472631 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:13.598218+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:14.598415+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:15.598690+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:16.598921+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:17.599073+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472631 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:18.599326+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:19.599547+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:20.599725+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:21.599932+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:22.600187+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472631 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:23.600403+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:24.600648+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:25.600891+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:26.601047+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:27.601225+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472631 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:28.601416+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:29.601635+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:30.601826+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:31.602031+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:32.602211+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472631 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4545000/0x0/0x4ffc00000, data 0x27ca036/0x2a89000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:33.602451+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:34.602666+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209248256 unmapped: 63283200 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951b92000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 49.882171631s of 50.097713470s, submitted: 58
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:35.602925+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:36.603084+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 ms_handle_reset con 0x560951b92000 session 0x560950de0960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:37.603274+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 heartbeat osd_stat(store_statfs(0x4f4504000/0x0/0x4ffc00000, data 0x280a099/0x2aca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3477946 data_alloc: 218103808 data_used: 7487488
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:38.603542+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950511800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:39.603745+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _renew_subs
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 522 handle_osd_map epochs [523,523], i have 522, src has [1,523]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560950511800 session 0x56095190bc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f44ff000/0x0/0x4ffc00000, data 0x280bc79/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:40.603900+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f44ff000/0x0/0x4ffc00000, data 0x280bc79/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:41.604033+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c59000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560951c59000 session 0x5609521ada40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:42.604208+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482813 data_alloc: 218103808 data_used: 7499776
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:43.604345+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:44.604580+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:45.604790+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:46.604965+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4500000/0x0/0x4ffc00000, data 0x280bc79/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:47.605111+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3482813 data_alloc: 218103808 data_used: 7499776
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:48.605269+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209510400 unmapped: 63021056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:49.605396+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.198623657s of 14.383108139s, submitted: 14
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x56094f7b5400 session 0x5609521ac1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f30800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560951f30800 session 0x5609519c50e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b5400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x56094f7b5400 session 0x56094f896780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950511800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209887232 unmapped: 62644224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560950511800 session 0x56094f3601e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951b92000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560951b92000 session 0x560951243a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:50.605586+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209887232 unmapped: 62644224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:51.605807+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f3984000/0x0/0x4ffc00000, data 0x3387c79/0x364a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f3984000/0x0/0x4ffc00000, data 0x3387c79/0x364a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209887232 unmapped: 62644224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:52.606000+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f3984000/0x0/0x4ffc00000, data 0x3387c79/0x364a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3574296 data_alloc: 218103808 data_used: 7499776
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209887232 unmapped: 62644224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:53.606200+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209887232 unmapped: 62644224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:54.606509+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209887232 unmapped: 62644224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:55.606718+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209887232 unmapped: 62644224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:56.606930+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f3984000/0x0/0x4ffc00000, data 0x3387c79/0x364a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209887232 unmapped: 62644224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:57.607511+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094ef10c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x56094ef10c00 session 0x5609515d3860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3574296 data_alloc: 218103808 data_used: 7499776
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209887232 unmapped: 62644224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f2fc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f3984000/0x0/0x4ffc00000, data 0x3387c79/0x364a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560951f2fc00 session 0x56094f60d680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:58.607692+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209887232 unmapped: 62644224 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951c34000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560951c34000 session 0x5609516f85a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:59.607955+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.618270874s of 10.128105164s, submitted: 32
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x56094f9a0000 session 0x560951932f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209895424 unmapped: 62636032 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:00.608175+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095048a400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f2f000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 209895424 unmapped: 62636032 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:01.608346+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 212361216 unmapped: 60170240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:02.608572+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654855 data_alloc: 234881024 data_used: 18317312
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 212361216 unmapped: 60170240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:03.608726+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f3983000/0x0/0x4ffc00000, data 0x3387c89/0x364b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 212361216 unmapped: 60170240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:04.610003+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 212361216 unmapped: 60170240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:05.610658+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 212361216 unmapped: 60170240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:06.611135+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 212361216 unmapped: 60170240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:07.611298+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3654855 data_alloc: 234881024 data_used: 18317312
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 212361216 unmapped: 60170240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:08.611484+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f3983000/0x0/0x4ffc00000, data 0x3387c89/0x364b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 212361216 unmapped: 60170240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:09.611624+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 212361216 unmapped: 60170240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:10.611846+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f3983000/0x0/0x4ffc00000, data 0x3387c89/0x364b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 212361216 unmapped: 60170240 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:11.612259+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.168493271s of 12.213139534s, submitted: 4
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 216481792 unmapped: 56049664 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:12.612531+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3711693 data_alloc: 234881024 data_used: 19374080
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215539712 unmapped: 56991744 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:13.612782+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f32b8000/0x0/0x4ffc00000, data 0x3a51c89/0x3d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:14.612966+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:15.613247+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f32a4000/0x0/0x4ffc00000, data 0x3a65c89/0x3d29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:16.613447+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f32a4000/0x0/0x4ffc00000, data 0x3a65c89/0x3d29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:17.613757+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3723969 data_alloc: 234881024 data_used: 19513344
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:18.613961+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:19.614117+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:20.614316+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:21.614478+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f32a4000/0x0/0x4ffc00000, data 0x3a65c89/0x3d29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:22.614710+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3723969 data_alloc: 234881024 data_used: 19513344
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:23.614871+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:24.615003+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950511800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:25.615172+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:26.615304+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:27.615509+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3724609 data_alloc: 234881024 data_used: 19771392
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f32a4000/0x0/0x4ffc00000, data 0x3a65c89/0x3d29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560950511800 session 0x5609500c65a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:28.615662+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 215547904 unmapped: 56983552 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:29.615812+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.275606155s of 17.562976837s, submitted: 68
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x56095048a400 session 0x5609521adc20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560951f2f000 session 0x56094f3f6780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f9a0000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f32a5000/0x0/0x4ffc00000, data 0x3a65c89/0x3d29000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x56094f9a0000 session 0x56095183a780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:30.616003+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:31.616215+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:32.616393+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498912 data_alloc: 218103808 data_used: 7757824
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:33.616612+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:34.616821+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:35.617036+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f8f9c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x56094f8f9c00 session 0x56094f393c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095289f800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4500000/0x0/0x4ffc00000, data 0x280bc79/0x2ace000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x56095289f800 session 0x56094f8da3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:36.617236+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950485400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560950485400 session 0x56094f43e960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609544f7800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x5609544f7800 session 0x5609517c6f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:37.617380+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498360 data_alloc: 218103808 data_used: 7761920
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:38.617527+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:39.617632+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4501000/0x0/0x4ffc00000, data 0x280bc16/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:40.617792+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:41.617988+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:42.618138+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498360 data_alloc: 218103808 data_used: 7761920
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:43.618284+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:44.618416+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:45.618639+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4501000/0x0/0x4ffc00000, data 0x280bc16/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:46.618812+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:47.618988+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3498360 data_alloc: 218103808 data_used: 7761920
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:48.619157+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4501000/0x0/0x4ffc00000, data 0x280bc16/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:49.619340+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210616320 unmapped: 61915136 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094ef10c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x56094ef10c00 session 0x56094f779680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950485800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560950485800 session 0x56094f60cd20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4501000/0x0/0x4ffc00000, data 0x280bc16/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:50.619501+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:51.619669+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:52.619797+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497560 data_alloc: 218103808 data_used: 7761920
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:53.619938+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:54.620075+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:55.620276+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4501000/0x0/0x4ffc00000, data 0x280bc16/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609525ccc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 26.091075897s of 26.317043304s, submitted: 66
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:56.620631+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4501000/0x0/0x4ffc00000, data 0x280bc16/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x5609525ccc00 session 0x56094f364f00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:57.621000+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4501000/0x0/0x4ffc00000, data 0x280bc16/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497560 data_alloc: 218103808 data_used: 7761920
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:58.621186+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:59.621347+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4501000/0x0/0x4ffc00000, data 0x280bc16/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:00.621532+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4501000/0x0/0x4ffc00000, data 0x280bc16/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:01.621718+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:02.621936+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497560 data_alloc: 218103808 data_used: 7761920
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:03.622123+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:04.622300+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4501000/0x0/0x4ffc00000, data 0x280bc16/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:05.622526+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:06.622721+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:07.622994+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4501000/0x0/0x4ffc00000, data 0x280bc16/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3497560 data_alloc: 218103808 data_used: 7761920
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:08.623224+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4501000/0x0/0x4ffc00000, data 0x280bc16/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56095048a800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x56095048a800 session 0x56094f43e1e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609545fc800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.414994240s of 13.426722527s, submitted: 1
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f4501000/0x0/0x4ffc00000, data 0x280bc16/0x2acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x5609545fc800 session 0x5609519c4960
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094ef10c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:09.623356+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210821120 unmapped: 61710336 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x56094ef10c00 session 0x56095053b2c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950485800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:10.623524+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560950485800 session 0x56094f8da3c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210853888 unmapped: 61677568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:11.623701+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210853888 unmapped: 61677568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:12.623860+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210862080 unmapped: 61669376 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834656 data_alloc: 218103808 data_used: 7761920
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:13.624101+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210862080 unmapped: 61669376 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:14.624302+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210878464 unmapped: 61652992 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f1501000/0x0/0x4ffc00000, data 0x580bc16/0x5acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:15.624560+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210878464 unmapped: 61652992 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:16.624742+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210878464 unmapped: 61652992 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:17.624946+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210878464 unmapped: 61652992 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f1501000/0x0/0x4ffc00000, data 0x580bc16/0x5acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834656 data_alloc: 218103808 data_used: 7761920
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:18.625203+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210878464 unmapped: 61652992 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:19.626064+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210886656 unmapped: 61644800 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:20.626299+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210886656 unmapped: 61644800 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f1501000/0x0/0x4ffc00000, data 0x580bc16/0x5acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:21.626524+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210886656 unmapped: 61644800 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:22.626762+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210886656 unmapped: 61644800 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3834656 data_alloc: 218103808 data_used: 7761920
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:23.626912+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210886656 unmapped: 61644800 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f1501000/0x0/0x4ffc00000, data 0x580bc16/0x5acd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:24.627055+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 210886656 unmapped: 61644800 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951f2f800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.518033028s of 16.168064117s, submitted: 14
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:25.627240+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560951f2f800 session 0x56094f393c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211042304 unmapped: 61489152 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:26.627350+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211042304 unmapped: 61489152 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609525ccc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad9400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f14dd000/0x0/0x4ffc00000, data 0x582fc16/0x5af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:27.627558+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211042304 unmapped: 61489152 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3837572 data_alloc: 218103808 data_used: 7761920
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f14dd000/0x0/0x4ffc00000, data 0x582fc16/0x5af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:28.627712+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211042304 unmapped: 61489152 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:29.627859+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211042304 unmapped: 61489152 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:30.628000+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211050496 unmapped: 61480960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:31.628167+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211050496 unmapped: 61480960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f14dd000/0x0/0x4ffc00000, data 0x582fc16/0x5af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:32.628308+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211050496 unmapped: 61480960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3849892 data_alloc: 218103808 data_used: 8368128
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:33.628496+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211050496 unmapped: 61480960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:34.628615+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211050496 unmapped: 61480960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:35.628786+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211050496 unmapped: 61480960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f14dd000/0x0/0x4ffc00000, data 0x582fc16/0x5af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:36.628911+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211050496 unmapped: 61480960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:37.629026+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211050496 unmapped: 61480960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3849892 data_alloc: 218103808 data_used: 8368128
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:38.629162+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211050496 unmapped: 61480960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:39.629298+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 211050496 unmapped: 61480960 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:40.629470+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.066954613s of 15.131612778s, submitted: 1
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 226164736 unmapped: 46366720 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:41.629589+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 48472064 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f118d000/0x0/0x4ffc00000, data 0x582fc16/0x5af1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:42.629781+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 224165888 unmapped: 48365568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3912520 data_alloc: 234881024 data_used: 10039296
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:43.629999+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 224165888 unmapped: 48365568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:44.630136+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 224165888 unmapped: 48365568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:45.630349+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 224165888 unmapped: 48365568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:46.630512+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 224165888 unmapped: 48365568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:47.630749+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 224165888 unmapped: 48365568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f0f4b000/0x0/0x4ffc00000, data 0x5a71c16/0x5d33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3912520 data_alloc: 234881024 data_used: 10039296
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:48.630977+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 224165888 unmapped: 48365568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:49.631138+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 224165888 unmapped: 48365568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:50.631333+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 224165888 unmapped: 48365568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:51.631502+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 224165888 unmapped: 48365568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:52.631713+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 224165888 unmapped: 48365568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f0f4b000/0x0/0x4ffc00000, data 0x5a71c16/0x5d33000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3912520 data_alloc: 234881024 data_used: 10039296
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:53.631880+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 224165888 unmapped: 48365568 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.010009766s of 13.330776215s, submitted: 93
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:54.632060+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218177536 unmapped: 54353920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x5609525ccc00 session 0x56095183a780
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x560953ad9400 session 0x5609515e23c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094ef10c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:55.632252+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 ms_handle_reset con 0x56094ef10c00 session 0x56094f5f05a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218177536 unmapped: 54353920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:56.632529+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218177536 unmapped: 54353920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:57.632720+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218177536 unmapped: 54353920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 heartbeat osd_stat(store_statfs(0x4f12bf000/0x0/0x4ffc00000, data 0x5a4dc16/0x5d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3884540 data_alloc: 234881024 data_used: 9953280
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:58.632929+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218177536 unmapped: 54353920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:59.633108+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218177536 unmapped: 54353920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:00.633256+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218177536 unmapped: 54353920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:01.633458+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218177536 unmapped: 54353920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953838c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:02.633659+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218177536 unmapped: 54353920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 523 handle_osd_map epochs [523,524], i have 523, src has [1,524]
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3888714 data_alloc: 234881024 data_used: 9961472
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:03.633807+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bf000/0x0/0x4ffc00000, data 0x5a4dc16/0x5d0f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218177536 unmapped: 54353920 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 ms_handle_reset con 0x560953838c00 session 0x56094f365680
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:04.633975+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560955455000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.334465981s of 10.724861145s, submitted: 12
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 ms_handle_reset con 0x560955455000 session 0x56095121eb40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b9000/0x0/0x4ffc00000, data 0x5a4f805/0x5d14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:05.634211+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:06.634350+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:07.634539+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3891987 data_alloc: 234881024 data_used: 9965568
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:08.634703+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:09.634936+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b9000/0x0/0x4ffc00000, data 0x5a4f805/0x5d14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:10.635173+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:11.635382+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:12.635606+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3891987 data_alloc: 234881024 data_used: 9965568
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:13.635757+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:14.635936+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b9000/0x0/0x4ffc00000, data 0x5a4f805/0x5d14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:15.636131+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953ad9000
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 ms_handle_reset con 0x560953ad9000 session 0x56094f3923c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:16.636308+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:17.636478+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b4c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 ms_handle_reset con 0x56094f7b4c00 session 0x5609516f9e00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094ef10c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 ms_handle_reset con 0x56094ef10c00 session 0x5609512281e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3891987 data_alloc: 234881024 data_used: 9965568
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f7b4c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.866265297s of 13.874635696s, submitted: 2
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:18.636613+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 ms_handle_reset con 0x56094f7b4c00 session 0x56094f3601e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x5609504dcc00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:19.636780+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:20.636923+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b8000/0x0/0x4ffc00000, data 0x5a4f815/0x5d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:21.637065+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218185728 unmapped: 54345728 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b8000/0x0/0x4ffc00000, data 0x5a4f815/0x5d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:22.637166+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218202112 unmapped: 54329344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3895105 data_alloc: 234881024 data_used: 10088448
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:23.637389+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218202112 unmapped: 54329344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:24.637541+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218202112 unmapped: 54329344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b8000/0x0/0x4ffc00000, data 0x5a4f815/0x5d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:25.637718+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218202112 unmapped: 54329344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b8000/0x0/0x4ffc00000, data 0x5a4f815/0x5d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:26.637913+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218202112 unmapped: 54329344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:27.638071+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b8000/0x0/0x4ffc00000, data 0x5a4f815/0x5d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218202112 unmapped: 54329344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b8000/0x0/0x4ffc00000, data 0x5a4f815/0x5d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3895105 data_alloc: 234881024 data_used: 10088448
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:28.638233+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218202112 unmapped: 54329344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:29.638395+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218202112 unmapped: 54329344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b8000/0x0/0x4ffc00000, data 0x5a4f815/0x5d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:30.638614+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218202112 unmapped: 54329344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:31.638762+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218202112 unmapped: 54329344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:32.638976+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 218202112 unmapped: 54329344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.485795021s of 14.490269661s, submitted: 1
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3902505 data_alloc: 234881024 data_used: 12906496
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:33.639101+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219570176 unmapped: 52961280 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:34.639265+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b9000/0x0/0x4ffc00000, data 0x5a4f815/0x5d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:35.639510+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:36.639647+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:37.639821+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b9000/0x0/0x4ffc00000, data 0x5a4f815/0x5d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:38.639995+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3915625 data_alloc: 234881024 data_used: 14413824
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:39.640130+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:40.640297+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:41.640731+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:42.640906+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b9000/0x0/0x4ffc00000, data 0x5a4f815/0x5d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:43.641197+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3915657 data_alloc: 234881024 data_used: 14413824
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b9000/0x0/0x4ffc00000, data 0x5a4f815/0x5d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:44.641408+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:45.641648+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b9000/0x0/0x4ffc00000, data 0x5a4f815/0x5d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:46.641846+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12b9000/0x0/0x4ffc00000, data 0x5a4f815/0x5d15000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:47.642065+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 219750400 unmapped: 52781056 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.097948074s of 15.333056450s, submitted: 5
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:48.642275+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3952025 data_alloc: 234881024 data_used: 14413824
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 220733440 unmapped: 51798016 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f0db2000/0x0/0x4ffc00000, data 0x5f56815/0x621c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:49.642502+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 220733440 unmapped: 51798016 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:50.642682+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 220733440 unmapped: 51798016 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:51.642953+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 220733440 unmapped: 51798016 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f0db2000/0x0/0x4ffc00000, data 0x5f56815/0x621c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:52.643094+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 220733440 unmapped: 51798016 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:53.643283+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3952025 data_alloc: 234881024 data_used: 14413824
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 220733440 unmapped: 51798016 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:54.643453+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 220733440 unmapped: 51798016 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:55.643752+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 220733440 unmapped: 51798016 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f0db2000/0x0/0x4ffc00000, data 0x5f56815/0x621c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:56.643936+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 220733440 unmapped: 51798016 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:57.644195+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 220733440 unmapped: 51798016 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f0db2000/0x0/0x4ffc00000, data 0x5f56815/0x621c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:58.644485+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3952025 data_alloc: 234881024 data_used: 14413824
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 220733440 unmapped: 51798016 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f0db2000/0x0/0x4ffc00000, data 0x5f56815/0x621c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:59.644645+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 ms_handle_reset con 0x5609504dcc00 session 0x560951243a40
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.467883110s of 11.582426071s, submitted: 4
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 ms_handle_reset con 0x56094f762400 session 0x5609515d3c20
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221265920 unmapped: 51265536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560950485400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 ms_handle_reset con 0x560950485400 session 0x5609500c65a0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f0db2000/0x0/0x4ffc00000, data 0x5f56815/0x621c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:00.644800+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221265920 unmapped: 51265536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:01.644977+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221265920 unmapped: 51265536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:02.645303+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221265920 unmapped: 51265536 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:03.645571+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3963164 data_alloc: 234881024 data_used: 16506880
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f0db3000/0x0/0x4ffc00000, data 0x5f56805/0x621b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094ef10c00
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 ms_handle_reset con 0x56094ef10c00 session 0x5609519ab860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x56094f762400
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221274112 unmapped: 51257344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 ms_handle_reset con 0x56094f762400 session 0x56094f7792c0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:04.645795+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:05.646116+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560951ba7800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 ms_handle_reset con 0x560951ba7800 session 0x560954cc90e0
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: handle_auth_request added challenge on 0x560953838800
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 ms_handle_reset con 0x560953838800 session 0x560950de1860
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:06.646539+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:07.646759+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:08.646941+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:09.647113+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:10.647246+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:11.647452+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:12.647650+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:13.647776+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:14.647954+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:15.648147+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:16.648385+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:17.648557+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:18.648837+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:19.649082+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:20.649394+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:21.649632+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:22.649915+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:23.650118+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:24.650332+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:25.650617+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:26.650776+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:27.651120+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:28.651540+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:29.651792+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:30.651962+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:31.652171+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:32.652466+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:33.652662+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:34.652885+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:35.653099+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:36.653285+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:37.653456+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:38.653652+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:39.653809+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:40.653969+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:41.654102+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:42.654275+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:43.654520+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:44.654747+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:45.654995+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:46.655224+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:47.655381+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:48.655560+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:49.655735+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:50.655910+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:51.656077+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:52.656264+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:53.656626+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:54.656813+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:55.657034+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:56.657197+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:57.657352+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:58.657478+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:59.657629+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:00.657804+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:01.657940+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:02.658110+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:03.658262+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:04.658514+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:05.658753+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:06.658968+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:07.659145+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:08.659268+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:09.659381+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:10.659527+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:11.659668+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:12.659839+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:13.659996+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:14.660159+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:15.660370+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:16.660499+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:17.660741+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:18.660903+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:19.661048+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:20.661225+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:21.661390+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:22.661488+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:23.661632+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:24.661769+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:25.662001+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:26.662144+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:27.662280+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:28.662417+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:29.662577+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:30.662708+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:31.662860+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:32.663038+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:33.663232+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:34.663466+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:35.663686+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:36.663826+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:37.663933+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:38.664075+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:39.664240+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:40.664393+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:41.664595+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:42.664726+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:43.664856+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:18 compute-0 ceph-osd[89585]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:18 compute-0 ceph-osd[89585]: bluestore.MempoolThread(0x56094df4fb60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3923288 data_alloc: 234881024 data_used: 15351808
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221224960 unmapped: 51306496 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:44.664995+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: osd.1 524 heartbeat osd_stat(store_statfs(0x4f12bb000/0x0/0x4ffc00000, data 0x5a4f7a2/0x5d13000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x8c2f9c6), peers [0,2] op hist [])
Nov 22 04:25:18 compute-0 ceph-osd[89585]: do_command 'config diff' '{prefix=config diff}'
Nov 22 04:25:18 compute-0 ceph-osd[89585]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221274112 unmapped: 51257344 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: do_command 'config show' '{prefix=config show}'
Nov 22 04:25:18 compute-0 ceph-osd[89585]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:45.665168+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 04:25:18 compute-0 ceph-osd[89585]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 04:25:18 compute-0 ceph-osd[89585]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 04:25:18 compute-0 ceph-osd[89585]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221192192 unmapped: 51339264 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:46.665297+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: prioritycache tune_memory target: 4294967296 mapped: 221003776 unmapped: 51527680 heap: 272531456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: tick
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_tickets
Nov 22 04:25:18 compute-0 ceph-osd[89585]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:47.665437+0000)
Nov 22 04:25:18 compute-0 ceph-osd[89585]: do_command 'log dump' '{prefix=log dump}'
Nov 22 04:25:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Nov 22 04:25:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1018277905' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 22 04:25:18 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:25:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Nov 22 04:25:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3134093698' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 22 04:25:18 compute-0 rsyslogd[1007]: imjournal from <np0005531666:ceph-osd>: begin to drop messages due to rate-limiting
Nov 22 04:25:18 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Nov 22 04:25:18 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2390333919' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 22 04:25:19 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Nov 22 04:25:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2928694874' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 22 04:25:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Nov 22 04:25:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3773593037' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 22 04:25:19 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3251852599' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Nov 22 04:25:19 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/413808923' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Nov 22 04:25:19 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1018277905' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Nov 22 04:25:19 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3134093698' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Nov 22 04:25:19 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2390333919' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Nov 22 04:25:19 compute-0 ceph-mon[75011]: pgmap v2335: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:19 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2928694874' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Nov 22 04:25:19 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19381 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:19 compute-0 nova_compute[253461]: 2025-11-22 04:25:19.543 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:19 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Nov 22 04:25:19 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1961346892' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 04:25:19 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19385 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:20 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Nov 22 04:25:20 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183162032' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 22 04:25:20 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19389 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3773593037' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Nov 22 04:25:20 compute-0 ceph-mon[75011]: from='client.19381 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1961346892' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Nov 22 04:25:20 compute-0 ceph-mon[75011]: from='client.19385 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:20 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2183162032' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Nov 22 04:25:20 compute-0 ceph-mon[75011]: from='client.19389 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:20 compute-0 nova_compute[253461]: 2025-11-22 04:25:20.441 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:20 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19391 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:20 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19393 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:20 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19395 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:20 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19397 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:21 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:21 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19401 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:21 compute-0 ceph-mon[75011]: from='client.19391 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:21 compute-0 ceph-mon[75011]: from='client.19393 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:21 compute-0 ceph-mon[75011]: from='client.19395 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:21 compute-0 ceph-mon[75011]: from='client.19397 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:21 compute-0 ceph-mon[75011]: pgmap v2336: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:21 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19403 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:21 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Nov 22 04:25:21 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/405739760' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 22 04:25:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0) v1
Nov 22 04:25:22 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446708326' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:24.351758+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371d5b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 274 ms_handle_reset con 0x560371d5b400 session 0x560370adf0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ecfe400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 274 ms_handle_reset con 0x56036ecfe400 session 0x56036ef32960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 274 heartbeat osd_stat(store_statfs(0x4f851e000/0x0/0x4ffc00000, data 0x2219309/0x239f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136749056 unmapped: 43212800 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:25.352019+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 275 ms_handle_reset con 0x56036f0d5400 session 0x56036fb77e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d25400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136757248 unmapped: 43204608 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 276 ms_handle_reset con 0x560370d25400 session 0x5603719892c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371135c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 276 ms_handle_reset con 0x56036f0d5800 session 0x560371943a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 276 ms_handle_reset con 0x560371135c00 session 0x560371989e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:26.352137+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136781824 unmapped: 43180032 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:27.352284+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ecfe400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 276 ms_handle_reset con 0x56036ecfe400 session 0x5603719425a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 276 ms_handle_reset con 0x56036f0d5400 session 0x56036f8145a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2124690 data_alloc: 234881024 data_used: 11096064
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136806400 unmapped: 43155456 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:28.352519+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 276 handle_osd_map epochs [276,277], i have 276, src has [1,277]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 277 ms_handle_reset con 0x56036f0d5800 session 0x56037199d0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 277 heartbeat osd_stat(store_statfs(0x4f8518000/0x0/0x4ffc00000, data 0x221c9ad/0x23a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d25400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 277 ms_handle_reset con 0x560370d25400 session 0x560373340d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 43147264 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:29.352679+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136814592 unmapped: 43147264 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ecfe400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:30.352915+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 277 ms_handle_reset con 0x56036ecfe400 session 0x5603733414a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 278 ms_handle_reset con 0x56036f0d5400 session 0x560373341e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136830976 unmapped: 43130880 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:31.353125+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 278 heartbeat osd_stat(store_statfs(0x4f8516000/0x0/0x4ffc00000, data 0x221e546/0x23a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 278 ms_handle_reset con 0x56036f0d5800 session 0x5603726f74a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371135c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371d5b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 278 ms_handle_reset con 0x560371135c00 session 0x560370af5680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136855552 unmapped: 43106304 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:32.353246+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2127667 data_alloc: 234881024 data_used: 11100160
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.318467140s of 10.092455864s, submitted: 205
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 278 ms_handle_reset con 0x560371421800 session 0x56036fcbfc20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136855552 unmapped: 43106304 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ecfe400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:33.353351+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 279 ms_handle_reset con 0x56036ecfe400 session 0x56036f814f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136863744 unmapped: 43098112 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:34.353498+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 279 ms_handle_reset con 0x56036f0d5400 session 0x5603719432c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 279 ms_handle_reset con 0x56036f0d5800 session 0x5603726f74a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136871936 unmapped: 43089920 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:35.353786+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371135c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 279 ms_handle_reset con 0x560371135c00 session 0x56036ef323c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 279 handle_osd_map epochs [279,280], i have 279, src has [1,280]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136880128 unmapped: 43081728 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:36.353974+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 280 ms_handle_reset con 0x560371d5b400 session 0x5603708ba960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371d5b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136880128 unmapped: 43081728 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:37.354104+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 280 heartbeat osd_stat(store_statfs(0x4f8510000/0x0/0x4ffc00000, data 0x222372d/0x23ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2134539 data_alloc: 234881024 data_used: 11112448
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 281 ms_handle_reset con 0x560371d5b400 session 0x560370adfa40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136888320 unmapped: 43073536 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:38.354304+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 281 ms_handle_reset con 0x5603711ac400 session 0x56036fc69c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 281 ms_handle_reset con 0x5603711ad000 session 0x5603726f6f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ecfe400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 282 ms_handle_reset con 0x56036f0d5400 session 0x56036ef32780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 282 ms_handle_reset con 0x56036ecfe400 session 0x560372529860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136142848 unmapped: 43819008 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:39.354456+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136159232 unmapped: 43802624 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:40.354709+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 282 heartbeat osd_stat(store_statfs(0x4f9118000/0x0/0x4ffc00000, data 0x1615f23/0x17a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 283 ms_handle_reset con 0x56036f0d5400 session 0x56036f887860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136167424 unmapped: 43794432 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:41.354932+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136167424 unmapped: 43794432 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 283 ms_handle_reset con 0x5603711ac400 session 0x56036ef3eb40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:42.355094+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 283 heartbeat osd_stat(store_statfs(0x4f9116000/0x0/0x4ffc00000, data 0x1617abc/0x17a7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2033428 data_alloc: 218103808 data_used: 7856128
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ad000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.646157265s of 10.068873405s, submitted: 123
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136167424 unmapped: 43794432 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 284 ms_handle_reset con 0x5603711ad000 session 0x5603726e83c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371d5b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:43.355251+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 284 ms_handle_reset con 0x56036f0d5800 session 0x56036eeb01e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371135c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 284 ms_handle_reset con 0x560371135c00 session 0x56036eeb0780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 284 ms_handle_reset con 0x56036f0d5400 session 0x56036fca2780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 285 ms_handle_reset con 0x56036f0d5800 session 0x5603719892c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 285 ms_handle_reset con 0x560371d5b400 session 0x56036fb77c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 43720704 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:44.355638+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 285 ms_handle_reset con 0x5603711ac400 session 0x5603719e2f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ad000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370950400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136241152 unmapped: 43720704 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:45.355778+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 286 ms_handle_reset con 0x560370950400 session 0x56036fc68960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 286 heartbeat osd_stat(store_statfs(0x4f910b000/0x0/0x4ffc00000, data 0x161d0d4/0x17af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136273920 unmapped: 43687936 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:46.355967+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 286 ms_handle_reset con 0x56036f0d5400 session 0x56036ef3fa40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 286 ms_handle_reset con 0x5603711ad000 session 0x5603709af680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 286 handle_osd_map epochs [287,287], i have 287, src has [1,287]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 43663360 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:47.356147+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 287 ms_handle_reset con 0x56036f0d5800 session 0x560371a00780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2042679 data_alloc: 218103808 data_used: 7880704
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 43663360 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:48.356280+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 288 ms_handle_reset con 0x5603711ac400 session 0x560372d905a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371d5b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 288 ms_handle_reset con 0x560371d5b400 session 0x560372d90b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371d5b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136339456 unmapped: 43622400 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:49.356483+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 288 ms_handle_reset con 0x560371d5b400 session 0x560372d910e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 43614208 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:50.356707+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 43614208 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:51.356915+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 288 heartbeat osd_stat(store_statfs(0x4f910b000/0x0/0x4ffc00000, data 0x16207ea/0x17b3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 43614208 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:52.357124+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2044501 data_alloc: 218103808 data_used: 7888896
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 43614208 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:53.357307+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 43614208 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:54.357645+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.040554047s of 11.408233643s, submitted: 187
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 288 ms_handle_reset con 0x56036f0d5400 session 0x560372529e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136347648 unmapped: 43614208 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:55.357915+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136364032 unmapped: 43597824 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:56.358098+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 289 ms_handle_reset con 0x5603711ac400 session 0x5603719e2d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ad000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec0400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 290 ms_handle_reset con 0x5603711ad000 session 0x5603725290e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 43573248 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 290 heartbeat osd_stat(store_statfs(0x4f9102000/0x0/0x4ffc00000, data 0x1623e64/0x17ba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:57.358306+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2059574 data_alloc: 218103808 data_used: 7897088
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 291 ms_handle_reset con 0x560373ec0400 session 0x56036ef223c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 43573248 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 291 ms_handle_reset con 0x56036f0d5800 session 0x560372528000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:58.358468+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 291 heartbeat osd_stat(store_statfs(0x4f9101000/0x0/0x4ffc00000, data 0x1623ec6/0x17bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 43573248 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:58:59.358717+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 291 heartbeat osd_stat(store_statfs(0x4f9100000/0x0/0x4ffc00000, data 0x1625a43/0x17be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 43565056 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:00.358895+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 292 ms_handle_reset con 0x5603711ac400 session 0x56036ef3f0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 43565056 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:01.362019+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 293 ms_handle_reset con 0x56036f0d5400 session 0x5603719e23c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 43532288 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:02.362199+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ad000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371d5b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 293 ms_handle_reset con 0x560371d5b400 session 0x5603709aef00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2071204 data_alloc: 218103808 data_used: 7913472
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136462336 unmapped: 43499520 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:03.362338+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 294 ms_handle_reset con 0x5603711ad000 session 0x560373341a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136462336 unmapped: 43499520 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:04.362472+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.346848488s of 10.536837578s, submitted: 104
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 294 ms_handle_reset con 0x56036f0d5800 session 0x56036fd343c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 43491328 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371d5b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:05.362584+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 295 heartbeat osd_stat(store_statfs(0x4f90f3000/0x0/0x4ffc00000, data 0x162ad80/0x17c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 295 ms_handle_reset con 0x5603711ac400 session 0x5603719e3e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136519680 unmapped: 43442176 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 296 ms_handle_reset con 0x560371d5b400 session 0x56036e9d2d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:06.362720+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 296 ms_handle_reset con 0x56036f0d5400 session 0x56036fb772c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec0800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec0c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 43401216 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 297 ms_handle_reset con 0x560373ec0c00 session 0x5603719e21e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:07.362853+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 297 ms_handle_reset con 0x560373ec0800 session 0x56036ef3c1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 297 ms_handle_reset con 0x56036f0d5400 session 0x560371942960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2081675 data_alloc: 218103808 data_used: 7921664
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 43368448 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:08.362986+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 43368448 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:09.363131+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 43368448 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:10.363306+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 297 ms_handle_reset con 0x56036f0d5800 session 0x560370ade1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371d5b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 298 ms_handle_reset con 0x5603711ac400 session 0x560372d90780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 298 heartbeat osd_stat(store_statfs(0x4f90ee000/0x0/0x4ffc00000, data 0x16301c7/0x17d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136617984 unmapped: 43343872 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:11.363473+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 299 ms_handle_reset con 0x560371d5b400 session 0x5603719c0b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 43327488 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:12.363622+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 299 heartbeat osd_stat(store_statfs(0x4f90e6000/0x0/0x4ffc00000, data 0x1633969/0x17d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2091152 data_alloc: 218103808 data_used: 7933952
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136634368 unmapped: 43327488 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:13.363794+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 43302912 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:14.363931+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 300 ms_handle_reset con 0x56036f0d5400 session 0x5603725290e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 43302912 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:15.364073+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 300 ms_handle_reset con 0x56036f0d5800 session 0x560372d90b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 300 ms_handle_reset con 0x5603711ac400 session 0x560372d905a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 300 heartbeat osd_stat(store_statfs(0x4f90e4000/0x0/0x4ffc00000, data 0x1635566/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec0800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.599867821s of 11.006446838s, submitted: 118
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 138231808 unmapped: 41730048 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 301 ms_handle_reset con 0x560373ec0800 session 0x5603709af680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec1000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:16.364216+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 301 ms_handle_reset con 0x560373ec1000 session 0x5603719892c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec1000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 301 ms_handle_reset con 0x560373ec1000 session 0x56036eeb0780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 301 ms_handle_reset con 0x56036f0d5400 session 0x5603726e83c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 301 ms_handle_reset con 0x56036f0d5800 session 0x5603726f6f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 301 heartbeat osd_stat(store_statfs(0x4f90e4000/0x0/0x4ffc00000, data 0x1635566/0x17d9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 301 ms_handle_reset con 0x5603711ac400 session 0x560370adfa40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 138248192 unmapped: 41713664 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:17.364338+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2126565 data_alloc: 218103808 data_used: 7999488
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 138248192 unmapped: 41713664 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:18.364503+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec0800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 301 ms_handle_reset con 0x560373ec0800 session 0x56036ef323c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 301 heartbeat osd_stat(store_statfs(0x4f8dc2000/0x0/0x4ffc00000, data 0x195508f/0x1afc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 138248192 unmapped: 41713664 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:19.364644+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 302 ms_handle_reset con 0x56036f0d5400 session 0x5603719432c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 302 ms_handle_reset con 0x56036f0d5800 session 0x5603726e94a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 302 ms_handle_reset con 0x5603711ac400 session 0x5603726f7a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec1000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 40828928 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:20.364831+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 303 ms_handle_reset con 0x560373ec1000 session 0x560371942000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 303 ms_handle_reset con 0x560373ec1400 session 0x560373ceb680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 303 ms_handle_reset con 0x56036f0d5400 session 0x560372528f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 40828928 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:21.364950+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 304 heartbeat osd_stat(store_statfs(0x4f8bc0000/0x0/0x4ffc00000, data 0x1b52c54/0x1cfc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x533f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 304 ms_handle_reset con 0x56036f0d5800 session 0x5603709aed20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 304 ms_handle_reset con 0x5603711ac400 session 0x56036fcbe000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139132928 unmapped: 40828928 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec1000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:22.365080+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec1c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 305 ms_handle_reset con 0x560373ec1c00 session 0x560370b17e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 305 ms_handle_reset con 0x56037388a800 session 0x56036fca3c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 305 ms_handle_reset con 0x56036f0d5400 session 0x560372528f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 305 ms_handle_reset con 0x56036f0d5800 session 0x560373ceb680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2165807 data_alloc: 218103808 data_used: 8015872
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139059200 unmapped: 40902656 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec1c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:23.365275+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388b000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 306 ms_handle_reset con 0x56037388b000 session 0x560370adfa40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139206656 unmapped: 40755200 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:24.365469+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 307 ms_handle_reset con 0x56037388a400 session 0x5603709ae1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388ac00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 307 ms_handle_reset con 0x56037388ac00 session 0x56036f8145a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 307 ms_handle_reset con 0x560373ec1000 session 0x56037199d680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139706368 unmapped: 40255488 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:25.365603+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 307 ms_handle_reset con 0x56036f0d5400 session 0x56036ef530e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 307 heartbeat osd_stat(store_statfs(0x4f7c39000/0x0/0x4ffc00000, data 0x26c200f/0x2872000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 307 ms_handle_reset con 0x56037388a400 session 0x56036eeb1860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388b000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 307 ms_handle_reset con 0x56036f0d5800 session 0x56036f0d94a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371132000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 307 ms_handle_reset con 0x560371132000 session 0x56036fca2780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139706368 unmapped: 40255488 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:26.365711+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.801262856s of 10.336345673s, submitted: 141
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 308 ms_handle_reset con 0x56036f0d5400 session 0x5603708bb680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 308 ms_handle_reset con 0x56037388b000 session 0x56036eeb0f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 308 heartbeat osd_stat(store_statfs(0x4f7c38000/0x0/0x4ffc00000, data 0x26c201f/0x2873000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec1000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 308 ms_handle_reset con 0x560373ec1000 session 0x56036f0d9a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 308 ms_handle_reset con 0x56037388a400 session 0x56036ee2fe00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371971c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139558912 unmapped: 40402944 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:27.366927+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 308 ms_handle_reset con 0x560371971c00 session 0x56036fca34a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2317042 data_alloc: 234881024 data_used: 11210752
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139657216 unmapped: 40304640 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:28.367105+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 308 ms_handle_reset con 0x560370d73800 session 0x56036ef33e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371971c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 309 ms_handle_reset con 0x56036f0d5400 session 0x560370af5c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 309 ms_handle_reset con 0x560371971c00 session 0x56036ef32960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 309 ms_handle_reset con 0x56037388a400 session 0x56036eeb14a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 309 ms_handle_reset con 0x56036f0d5800 session 0x560373cea5a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 40280064 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:29.367240+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139681792 unmapped: 40280064 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:30.367469+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 310 ms_handle_reset con 0x56036f0d5800 session 0x560372528960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 310 ms_handle_reset con 0x56036f0d5400 session 0x5603708ed2c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139706368 unmapped: 40255488 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:31.367575+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139706368 unmapped: 40255488 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:32.367753+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 311 handle_osd_map epochs [311,312], i have 311, src has [1,312]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 311 handle_osd_map epochs [312,312], i have 312, src has [1,312]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 312 heartbeat osd_stat(store_statfs(0x4f7900000/0x0/0x4ffc00000, data 0x29f893c/0x2bae000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 312 ms_handle_reset con 0x560370d73800 session 0x5603726f70e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2328396 data_alloc: 234881024 data_used: 11239424
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 40206336 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:33.367878+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371971c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 139755520 unmapped: 40206336 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:34.368108+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 312 handle_osd_map epochs [312,313], i have 312, src has [1,313]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 312 handle_osd_map epochs [313,313], i have 313, src has [1,313]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 313 ms_handle_reset con 0x560371971c00 session 0x5603708ecd20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 313 heartbeat osd_stat(store_statfs(0x4f78fc000/0x0/0x4ffc00000, data 0x29fa529/0x2bb1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 140992512 unmapped: 38969344 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:35.368210+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 314 handle_osd_map epochs [314,315], i have 314, src has [1,315]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 315 ms_handle_reset con 0x56037388a400 session 0x560373cead20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 315 ms_handle_reset con 0x56037388a400 session 0x56036f0d83c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 144662528 unmapped: 35299328 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:36.368364+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.075366020s of 10.094217300s, submitted: 277
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 144867328 unmapped: 35094528 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:37.368545+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2402840 data_alloc: 234881024 data_used: 11231232
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:38.368704+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 144867328 unmapped: 35094528 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 315 heartbeat osd_stat(store_statfs(0x4f7181000/0x0/0x4ffc00000, data 0x31698dc/0x3322000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:39.368841+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 144867328 unmapped: 35094528 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:40.369092+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 144867328 unmapped: 35094528 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 315 handle_osd_map epochs [315,316], i have 315, src has [1,316]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:41.369334+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 144113664 unmapped: 35848192 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:42.369516+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 144113664 unmapped: 35848192 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 316 ms_handle_reset con 0x56036f0d5400 session 0x56036fd34f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2404100 data_alloc: 234881024 data_used: 11235328
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:43.369661+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 144146432 unmapped: 35815424 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371971c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 316 ms_handle_reset con 0x560371971c00 session 0x560371942960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 316 heartbeat osd_stat(store_statfs(0x4f7185000/0x0/0x4ffc00000, data 0x316d3ba/0x3328000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:44.369826+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145580032 unmapped: 34381824 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388b000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec1000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 317 ms_handle_reset con 0x560373ec1000 session 0x560370ade960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371131c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:45.369930+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145588224 unmapped: 34373632 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 317 handle_osd_map epochs [317,318], i have 317, src has [1,318]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 317 handle_osd_map epochs [318,318], i have 318, src has [1,318]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 318 ms_handle_reset con 0x56037388b000 session 0x56036ef3c1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:46.370089+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145596416 unmapped: 34365440 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.896649361s of 10.056362152s, submitted: 72
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:47.370211+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145661952 unmapped: 34299904 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 319 ms_handle_reset con 0x56036f0d5400 session 0x56036efdef00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 319 heartbeat osd_stat(store_statfs(0x4f7180000/0x0/0x4ffc00000, data 0x3170ab4/0x332e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2453959 data_alloc: 234881024 data_used: 14508032
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 319 heartbeat osd_stat(store_statfs(0x4f6f78000/0x0/0x4ffc00000, data 0x3375885/0x3535000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:48.370357+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145711104 unmapped: 34250752 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 319 ms_handle_reset con 0x560371131c00 session 0x56036fd35c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:49.370497+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145711104 unmapped: 34250752 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371971c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 320 ms_handle_reset con 0x560371971c00 session 0x56036f8150e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 320 heartbeat osd_stat(store_statfs(0x4f6f78000/0x0/0x4ffc00000, data 0x3375885/0x3535000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:50.371069+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145711104 unmapped: 34250752 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:51.371200+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145711104 unmapped: 34250752 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:52.371350+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145711104 unmapped: 34250752 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 321 ms_handle_reset con 0x56037388a400 session 0x56036f8be000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec1000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2458727 data_alloc: 234881024 data_used: 14508032
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:53.371499+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145711104 unmapped: 34250752 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 322 ms_handle_reset con 0x560373ec1000 session 0x56036ee2f4a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371131c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 322 ms_handle_reset con 0x560371131c00 session 0x560373340d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 322 ms_handle_reset con 0x56036f0d5400 session 0x5603719881e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371971c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 322 heartbeat osd_stat(store_statfs(0x4f6f72000/0x0/0x4ffc00000, data 0x3379027/0x353b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 322 ms_handle_reset con 0x560371971c00 session 0x560371988780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:54.371657+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 156696576 unmapped: 23265280 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 322 handle_osd_map epochs [322,323], i have 322, src has [1,323]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 323 ms_handle_reset con 0x56037388a400 session 0x56036f0d8f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec1000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:55.371802+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 158998528 unmapped: 20963328 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 323 handle_osd_map epochs [323,324], i have 323, src has [1,324]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 324 ms_handle_reset con 0x560373ec1000 session 0x56036fc68b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:56.371927+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 159006720 unmapped: 20955136 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 324 heartbeat osd_stat(store_statfs(0x4f5abf000/0x0/0x4ffc00000, data 0x4b3a418/0x49e9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.158513069s of 10.242209435s, submitted: 216
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:57.372057+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151986176 unmapped: 27975680 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2686984 data_alloc: 234881024 data_used: 22740992
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 325 heartbeat osd_stat(store_statfs(0x4f5ab1000/0x0/0x4ffc00000, data 0x4b4aecf/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:58.372159+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151986176 unmapped: 27975680 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T03:59:59.372281+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151986176 unmapped: 27975680 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:00.372492+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151994368 unmapped: 27967488 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 325 heartbeat osd_stat(store_statfs(0x4f5ab1000/0x0/0x4ffc00000, data 0x4b4aecf/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:01.372659+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 27951104 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 325 heartbeat osd_stat(store_statfs(0x4f5ab1000/0x0/0x4ffc00000, data 0x4b4aecf/0x49fb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:02.372805+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 27951104 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2690582 data_alloc: 234881024 data_used: 22757376
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 326 heartbeat osd_stat(store_statfs(0x4f5aaf000/0x0/0x4ffc00000, data 0x4b4c932/0x49fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:03.372918+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 152010752 unmapped: 27951104 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 326 ms_handle_reset con 0x56036f0d5800 session 0x560372e78780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 326 ms_handle_reset con 0x560370d73800 session 0x560372e78b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 326 ms_handle_reset con 0x56036f0d5400 session 0x5603719e23c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371131c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 326 ms_handle_reset con 0x560371131c00 session 0x56036fca2b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371971c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 326 ms_handle_reset con 0x560371971c00 session 0x5603708ec1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 326 ms_handle_reset con 0x56036f0d5400 session 0x56036fd34b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:04.373062+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 152027136 unmapped: 27934720 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 326 ms_handle_reset con 0x56036f0d5800 session 0x5603708bb2c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:05.373205+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 152027136 unmapped: 27934720 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 326 heartbeat osd_stat(store_statfs(0x4f5aaf000/0x0/0x4ffc00000, data 0x4b4c971/0x49fe000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 326 ms_handle_reset con 0x560370d73800 session 0x560372e78f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:06.373352+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 152035328 unmapped: 27926528 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371131c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.646925926s of 10.024452209s, submitted: 57
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:07.373678+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 326 handle_osd_map epochs [326,327], i have 326, src has [1,327]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 327 handle_osd_map epochs [327,327], i have 327, src has [1,327]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 327 ms_handle_reset con 0x560371131c00 session 0x560373cea000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 152043520 unmapped: 27918336 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2696809 data_alloc: 234881024 data_used: 22761472
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:08.373777+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d6e800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037196dc00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 152051712 unmapped: 27910144 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 327 ms_handle_reset con 0x56037196dc00 session 0x5603726f74a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:09.373881+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 152059904 unmapped: 27901952 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 ms_handle_reset con 0x56036f0d5400 session 0x5603719c1a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 ms_handle_reset con 0x56037388a400 session 0x56036ef530e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 ms_handle_reset con 0x560370d18000 session 0x5603719c1680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371131c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 ms_handle_reset con 0x560371131c00 session 0x56036fd354a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 ms_handle_reset con 0x560370d6e800 session 0x56037199d680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 ms_handle_reset con 0x56036f0d5400 session 0x56036fcbf0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 ms_handle_reset con 0x56036f0d5800 session 0x560371943680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 ms_handle_reset con 0x560370d73800 session 0x560371942b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:10.374033+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 150405120 unmapped: 29556736 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371131c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 ms_handle_reset con 0x560371131c00 session 0x5603726f7860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 heartbeat osd_stat(store_statfs(0x4f7195000/0x0/0x4ffc00000, data 0x314ec60/0x3318000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 ms_handle_reset con 0x56037388a400 session 0x56036fd343c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:11.374154+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 150421504 unmapped: 29540352 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 ms_handle_reset con 0x56037388a400 session 0x56036ef32960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:12.374284+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151699456 unmapped: 28262400 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2508215 data_alloc: 234881024 data_used: 23535616
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:13.374402+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151699456 unmapped: 28262400 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 handle_osd_map epochs [328,329], i have 328, src has [1,329]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 328 handle_osd_map epochs [329,329], i have 329, src has [1,329]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 329 ms_handle_reset con 0x56036f0d5400 session 0x56036ef23a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:14.374516+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151699456 unmapped: 28262400 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 329 ms_handle_reset con 0x56036f0d5800 session 0x560373340000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 329 heartbeat osd_stat(store_statfs(0x4f7190000/0x0/0x4ffc00000, data 0x31508a7/0x331d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:15.374630+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151699456 unmapped: 28262400 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 329 handle_osd_map epochs [329,330], i have 329, src has [1,330]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 329 handle_osd_map epochs [330,330], i have 330, src has [1,330]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 330 ms_handle_reset con 0x560370d73800 session 0x5603708cef00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371131c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:16.374749+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151740416 unmapped: 28221440 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 331 ms_handle_reset con 0x560371131c00 session 0x5603708ec3c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 331 heartbeat osd_stat(store_statfs(0x4f7189000/0x0/0x4ffc00000, data 0x3153fbb/0x3323000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.418564796s of 10.003454208s, submitted: 237
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 331 ms_handle_reset con 0x56036f0d5400 session 0x56036f8145a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:17.374857+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151773184 unmapped: 28188672 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 331 heartbeat osd_stat(store_statfs(0x4f718b000/0x0/0x4ffc00000, data 0x3153f49/0x3321000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2515791 data_alloc: 234881024 data_used: 23543808
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:18.374970+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151773184 unmapped: 28188672 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 331 handle_osd_map epochs [331,332], i have 331, src has [1,332]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:19.375107+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151773184 unmapped: 28188672 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:20.375273+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 332 ms_handle_reset con 0x56036f0d5800 session 0x56036ee2f2c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151773184 unmapped: 28188672 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:21.375393+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 151773184 unmapped: 28188672 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 333 ms_handle_reset con 0x560370d73800 session 0x56036fca21e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:22.375505+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 156540928 unmapped: 23420928 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2571211 data_alloc: 234881024 data_used: 26279936
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 333 ms_handle_reset con 0x56037388a400 session 0x56036eeb1680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:23.375601+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 156794880 unmapped: 23166976 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 333 heartbeat osd_stat(store_statfs(0x4f6e75000/0x0/0x4ffc00000, data 0x3468725/0x3639000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x574f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:24.375761+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 156827648 unmapped: 23134208 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9ac00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:25.375907+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 153640960 unmapped: 26320896 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 334 ms_handle_reset con 0x560370d9ac00 session 0x560370ade960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9ac00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 334 ms_handle_reset con 0x560370d9ac00 session 0x56036eeb0d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:26.376028+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 153665536 unmapped: 26296320 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.035279274s of 10.328807831s, submitted: 90
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 335 handle_osd_map epochs [336,336], i have 336, src has [1,336]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:27.376146+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154746880 unmapped: 25214976 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 336 ms_handle_reset con 0x56036f0d5400 session 0x56036eeb0780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2584428 data_alloc: 234881024 data_used: 26296320
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:28.376279+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154877952 unmapped: 25083904 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 336 heartbeat osd_stat(store_statfs(0x4f5c5e000/0x0/0x4ffc00000, data 0x34db922/0x36b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:29.376415+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 336 ms_handle_reset con 0x56036f0d5800 session 0x5603708bbc20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154902528 unmapped: 25059328 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:30.376610+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154902528 unmapped: 25059328 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 336 heartbeat osd_stat(store_statfs(0x4f5c5f000/0x0/0x4ffc00000, data 0x34db8c0/0x36af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 336 ms_handle_reset con 0x560370d73800 session 0x5603719c1860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:31.376775+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154943488 unmapped: 25018368 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:32.376957+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154943488 unmapped: 25018368 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2583160 data_alloc: 234881024 data_used: 26292224
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:33.377094+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 336 handle_osd_map epochs [336,337], i have 336, src has [1,337]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 337 ms_handle_reset con 0x56037388a400 session 0x5603719c0b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154566656 unmapped: 25395200 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 337 ms_handle_reset con 0x56037388a400 session 0x560371943860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:34.377212+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 337 ms_handle_reset con 0x56036f0d5400 session 0x5603708efc20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154566656 unmapped: 25395200 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 338 heartbeat osd_stat(store_statfs(0x4f5c59000/0x0/0x4ffc00000, data 0x34de469/0x36b4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:35.377343+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 338 ms_handle_reset con 0x56036f0d5800 session 0x56036fd34f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154583040 unmapped: 25378816 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 338 ms_handle_reset con 0x560370d73800 session 0x5603733401e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:36.377472+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154583040 unmapped: 25378816 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9ac00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 339 heartbeat osd_stat(store_statfs(0x4f5c57000/0x0/0x4ffc00000, data 0x34e003a/0x36b6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 339 ms_handle_reset con 0x560370d9ac00 session 0x560371a01a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:37.377612+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.355963707s of 10.325695038s, submitted: 68
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154697728 unmapped: 25264128 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9ac00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2598749 data_alloc: 234881024 data_used: 26300416
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 340 ms_handle_reset con 0x560370d9ac00 session 0x56036ee2e780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:38.377738+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154722304 unmapped: 25239552 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 340 heartbeat osd_stat(store_statfs(0x4f5c47000/0x0/0x4ffc00000, data 0x34e86b4/0x36c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:39.377856+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154722304 unmapped: 25239552 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 340 ms_handle_reset con 0x56036f0d5400 session 0x5603708ce000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 341 ms_handle_reset con 0x56036f0d5800 session 0x560371989680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:40.378011+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154738688 unmapped: 25223168 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 341 ms_handle_reset con 0x560370d73800 session 0x56036ef33e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 341 heartbeat osd_stat(store_statfs(0x4f5c49000/0x0/0x4ffc00000, data 0x34ea233/0x36c4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 341 ms_handle_reset con 0x56037388a400 session 0x560370adfe00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:41.378194+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154755072 unmapped: 25206784 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:42.378352+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 341 ms_handle_reset con 0x56037388a400 session 0x56036f8861e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154755072 unmapped: 25206784 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2600271 data_alloc: 234881024 data_used: 26312704
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:43.378498+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154755072 unmapped: 25206784 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:44.378639+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154771456 unmapped: 25190400 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 342 ms_handle_reset con 0x56036f0d5400 session 0x5603708ede00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:45.378812+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154779648 unmapped: 25182208 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 342 heartbeat osd_stat(store_statfs(0x4f5c45000/0x0/0x4ffc00000, data 0x34ebe3e/0x36c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:46.378928+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 343 ms_handle_reset con 0x560370d73800 session 0x56036fcbf860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9ac00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154812416 unmapped: 25149440 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 343 ms_handle_reset con 0x56036f0d5800 session 0x560371943680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:47.379135+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154836992 unmapped: 25124864 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2609208 data_alloc: 234881024 data_used: 26320896
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:48.379288+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154845184 unmapped: 25116672 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 343 handle_osd_map epochs [343,344], i have 343, src has [1,344]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.422988892s of 11.155424118s, submitted: 97
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 343 heartbeat osd_stat(store_statfs(0x4f5c43000/0x0/0x4ffc00000, data 0x34ed85f/0x36ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 343 handle_osd_map epochs [344,344], i have 344, src has [1,344]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:49.379407+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154918912 unmapped: 25042944 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:50.379565+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 344 ms_handle_reset con 0x560370d9ac00 session 0x56037199d0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154918912 unmapped: 25042944 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9ac00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 344 ms_handle_reset con 0x56036f0d5400 session 0x5603719c1680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:51.379709+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154918912 unmapped: 25042944 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:52.379859+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 344 ms_handle_reset con 0x56036f0d5800 session 0x5603719c1a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154918912 unmapped: 25042944 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2613847 data_alloc: 234881024 data_used: 26796032
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:53.380100+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 157507584 unmapped: 22454272 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 344 ms_handle_reset con 0x560370d73800 session 0x560371988780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 344 heartbeat osd_stat(store_statfs(0x4f5c41000/0x0/0x4ffc00000, data 0x34ef430/0x36cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:54.380248+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 344 ms_handle_reset con 0x560370d9ac00 session 0x56037199c5a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 157540352 unmapped: 22421504 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:55.380372+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 157540352 unmapped: 22421504 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 344 heartbeat osd_stat(store_statfs(0x4f5a3e000/0x0/0x4ffc00000, data 0x36f0640/0x38d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 344 handle_osd_map epochs [345,345], i have 345, src has [1,345]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 345 ms_handle_reset con 0x56037388a400 session 0x5603726e9c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 346 ms_handle_reset con 0x56037388a400 session 0x5603726e9e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 346 ms_handle_reset con 0x560370d18000 session 0x560370af4780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:56.380543+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 346 ms_handle_reset con 0x56036f0d5800 session 0x560371943c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 157663232 unmapped: 22298624 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 346 handle_osd_map epochs [346,347], i have 346, src has [1,347]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 347 ms_handle_reset con 0x56036f0d5400 session 0x5603708ba000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 347 ms_handle_reset con 0x560370d73800 session 0x56037199c5a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:57.380694+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 347 ms_handle_reset con 0x560370d73800 session 0x5603733401e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 157736960 unmapped: 22224896 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 347 ms_handle_reset con 0x56036f0d5800 session 0x5603719c0b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 347 ms_handle_reset con 0x56036f0d5400 session 0x5603708efc20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 347 ms_handle_reset con 0x560370d18000 session 0x5603708bbc20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2616737 data_alloc: 251658240 data_used: 29179904
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:58.380800+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 347 ms_handle_reset con 0x56037388a400 session 0x56036eeb0780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 157745152 unmapped: 22216704 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.880726337s of 10.280069351s, submitted: 107
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 347 ms_handle_reset con 0x56036f0d5800 session 0x56036ee2f4a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:00:59.380898+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 347 ms_handle_reset con 0x560370d18000 session 0x56036f8be000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 347 ms_handle_reset con 0x560370d73800 session 0x56036f8150e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 157786112 unmapped: 22175744 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 347 heartbeat osd_stat(store_statfs(0x4f5dbb000/0x0/0x4ffc00000, data 0x33707ab/0x3553000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9ac00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:00.381047+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 157786112 unmapped: 22175744 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371973400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 348 ms_handle_reset con 0x560370d9ac00 session 0x56036ef33e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:01.381162+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 157802496 unmapped: 22159360 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 348 handle_osd_map epochs [348,349], i have 348, src has [1,349]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 349 ms_handle_reset con 0x560370d9b400 session 0x56036ef530e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 349 ms_handle_reset con 0x560371973400 session 0x56036eeb12c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:02.381320+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 349 ms_handle_reset con 0x560370d18000 session 0x560370af4d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 155402240 unmapped: 24559616 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 350 ms_handle_reset con 0x56036f0d5800 session 0x5603726e9680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:03.381476+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2461334 data_alloc: 234881024 data_used: 21340160
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 350 ms_handle_reset con 0x56037388a400 session 0x560370ade960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 350 ms_handle_reset con 0x56036f0d5400 session 0x56036eeb1680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 155402240 unmapped: 24559616 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 350 ms_handle_reset con 0x56036f0d5800 session 0x56036f8145a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 350 ms_handle_reset con 0x560370d18000 session 0x560373340000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371973400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:04.381642+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 350 ms_handle_reset con 0x560371973400 session 0x560371943680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 155410432 unmapped: 24551424 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 350 ms_handle_reset con 0x5603711ac400 session 0x5603726e94a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 350 ms_handle_reset con 0x560373ec1c00 session 0x5603709afa40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 350 heartbeat osd_stat(store_statfs(0x4f6e12000/0x0/0x4ffc00000, data 0x23188ec/0x24fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:05.381780+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 350 ms_handle_reset con 0x56036f0d5800 session 0x560372e790e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 149176320 unmapped: 30785536 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:06.381943+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 149176320 unmapped: 30785536 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 351 ms_handle_reset con 0x560370d18000 session 0x56036eeb0f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 351 ms_handle_reset con 0x5603711ac400 session 0x5603708ecf00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:07.382086+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 149200896 unmapped: 30760960 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:08.382223+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2335310 data_alloc: 234881024 data_used: 10293248
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 149200896 unmapped: 30760960 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 351 heartbeat osd_stat(store_statfs(0x4f789b000/0x0/0x4ffc00000, data 0x188e387/0x1a72000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:09.382379+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 149200896 unmapped: 30760960 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371973400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.255012512s of 11.141064644s, submitted: 179
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 351 ms_handle_reset con 0x560371973400 session 0x56036e9d2960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:10.382622+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147259392 unmapped: 32702464 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373ec1c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 352 ms_handle_reset con 0x560373ec1c00 session 0x56036f8bf0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:11.382780+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 352 ms_handle_reset con 0x56036f0d5800 session 0x56036fd345a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 148226048 unmapped: 31735808 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 352 heartbeat osd_stat(store_statfs(0x4f7a9a000/0x0/0x4ffc00000, data 0x168ec42/0x1873000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:12.382907+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 352 heartbeat osd_stat(store_statfs(0x4f7a9a000/0x0/0x4ffc00000, data 0x168ec42/0x1873000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 148226048 unmapped: 31735808 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 352 ms_handle_reset con 0x560370d18000 session 0x560373ceba40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:13.383019+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2322494 data_alloc: 218103808 data_used: 8720384
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 148226048 unmapped: 31735808 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 352 heartbeat osd_stat(store_statfs(0x4f7a9a000/0x0/0x4ffc00000, data 0x168ec42/0x1873000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x68ef9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:14.383155+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 148226048 unmapped: 31735808 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 352 ms_handle_reset con 0x5603711ac400 session 0x5603708ba780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371973400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:15.383280+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 352 ms_handle_reset con 0x560371973400 session 0x560370af43c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 148226048 unmapped: 31735808 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 352 ms_handle_reset con 0x56037388a400 session 0x5603708ecf00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:16.383405+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 352 heartbeat osd_stat(store_statfs(0x4f71e8000/0x0/0x4ffc00000, data 0x1b31c42/0x1d16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146784256 unmapped: 33177600 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:17.383596+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146784256 unmapped: 33177600 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:18.383703+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2364663 data_alloc: 218103808 data_used: 8720384
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146784256 unmapped: 33177600 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:19.383822+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146784256 unmapped: 33177600 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:20.384014+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146784256 unmapped: 33177600 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 353 heartbeat osd_stat(store_statfs(0x4f71e4000/0x0/0x4ffc00000, data 0x1b337bf/0x1d19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:21.384163+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146784256 unmapped: 33177600 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 353 heartbeat osd_stat(store_statfs(0x4f71e4000/0x0/0x4ffc00000, data 0x1b337bf/0x1d19000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:22.384288+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.875423431s of 12.428100586s, submitted: 59
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 353 ms_handle_reset con 0x56037388a400 session 0x5603709afa40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146792448 unmapped: 33169408 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:23.384446+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2369783 data_alloc: 218103808 data_used: 8728576
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146792448 unmapped: 33169408 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:24.384579+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146792448 unmapped: 33169408 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:25.384717+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 353 ms_handle_reset con 0x560370d18000 session 0x560372529a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146808832 unmapped: 33153024 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 353 ms_handle_reset con 0x5603711ac400 session 0x5603726f65a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371973400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 354 ms_handle_reset con 0x560370d73800 session 0x560371943e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:26.384821+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 354 heartbeat osd_stat(store_statfs(0x4f71df000/0x0/0x4ffc00000, data 0x1b353ae/0x1d1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146841600 unmapped: 33120256 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 354 handle_osd_map epochs [354,355], i have 354, src has [1,355]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 355 ms_handle_reset con 0x560371973400 session 0x560371a012c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 355 ms_handle_reset con 0x560370d18000 session 0x56036f8152c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 355 ms_handle_reset con 0x56036f0d5800 session 0x5603726e94a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:27.384993+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 33095680 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:28.385165+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2383765 data_alloc: 218103808 data_used: 8736768
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 33095680 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 355 ms_handle_reset con 0x560370d73800 session 0x56036eeb12c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 355 ms_handle_reset con 0x5603711ac400 session 0x56036ef530e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 355 heartbeat osd_stat(store_statfs(0x4f71da000/0x0/0x4ffc00000, data 0x1b37426/0x1d22000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:29.385295+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 33095680 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 355 ms_handle_reset con 0x56037388a400 session 0x56036f8150e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:30.385430+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146866176 unmapped: 33095680 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 355 ms_handle_reset con 0x56036f0d5800 session 0x560372d90f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:31.385539+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 356 ms_handle_reset con 0x56037388a400 session 0x56036f8be000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 356 heartbeat osd_stat(store_statfs(0x4f71d6000/0x0/0x4ffc00000, data 0x1b390bb/0x1d27000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 356 ms_handle_reset con 0x560370d18000 session 0x56036eeb0f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146874368 unmapped: 33087488 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:32.385694+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146874368 unmapped: 33087488 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.745751381s of 10.417073250s, submitted: 63
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 356 ms_handle_reset con 0x560370d73800 session 0x56036fd345a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:33.385808+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2397979 data_alloc: 234881024 data_used: 10358784
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 33062912 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 356 heartbeat osd_stat(store_statfs(0x4f71d9000/0x0/0x4ffc00000, data 0x1b38b4e/0x1d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9ac00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:34.385914+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 356 ms_handle_reset con 0x560370d9ac00 session 0x56036f0d8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 33062912 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 356 ms_handle_reset con 0x5603711ac400 session 0x5603709aeb40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:35.386024+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 356 ms_handle_reset con 0x56036f0d5800 session 0x56036fca4b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 356 ms_handle_reset con 0x560370d18000 session 0x56036fd35e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 33062912 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 356 heartbeat osd_stat(store_statfs(0x4f71da000/0x0/0x4ffc00000, data 0x1b38b4e/0x1d24000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:36.386183+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 33062912 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 357 ms_handle_reset con 0x560370d73800 session 0x56036ef3e780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 357 ms_handle_reset con 0x560370d9b400 session 0x56036fb774a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 357 ms_handle_reset con 0x56037388a400 session 0x56036ef33860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 357 heartbeat osd_stat(store_statfs(0x4f71d7000/0x0/0x4ffc00000, data 0x1b3a54f/0x1d26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:37.386297+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 33062912 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 357 ms_handle_reset con 0x56036f0d5800 session 0x56036fb76d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 357 ms_handle_reset con 0x560370d18000 session 0x560373cea1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:38.386469+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2421559 data_alloc: 234881024 data_used: 13524992
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 33062912 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:39.386666+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145850368 unmapped: 34111488 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 358 ms_handle_reset con 0x560370d9b400 session 0x56036eeb0000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 359 ms_handle_reset con 0x560370d73800 session 0x56036fb761e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 359 heartbeat osd_stat(store_statfs(0x4f71d5000/0x0/0x4ffc00000, data 0x1b3c0be/0x1d28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:40.386829+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 359 ms_handle_reset con 0x560370d73800 session 0x560370b16780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145866752 unmapped: 34095104 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:41.386963+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145866752 unmapped: 34095104 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:42.387136+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 145866752 unmapped: 34095104 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 359 ms_handle_reset con 0x56036f0d5800 session 0x560371a01a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:43.387281+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2356473 data_alloc: 218103808 data_used: 8785920
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 359 ms_handle_reset con 0x560370d18000 session 0x560371a01860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147464192 unmapped: 32497664 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:44.387469+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147464192 unmapped: 32497664 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.828115463s of 12.341340065s, submitted: 128
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 359 ms_handle_reset con 0x560370d9b400 session 0x560371a00b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:45.387638+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 359 heartbeat osd_stat(store_statfs(0x4f7674000/0x0/0x4ffc00000, data 0x169ac9f/0x1889000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147464192 unmapped: 32497664 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:46.387780+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147464192 unmapped: 32497664 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:47.387914+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 360 ms_handle_reset con 0x56037388a400 session 0x5603726e9860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147480576 unmapped: 32481280 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 360 ms_handle_reset con 0x56037388a400 session 0x5603726e8b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 360 heartbeat osd_stat(store_statfs(0x4f7670000/0x0/0x4ffc00000, data 0x169c764/0x188d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 360 ms_handle_reset con 0x56036f0d5800 session 0x56036fd35c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 360 heartbeat osd_stat(store_statfs(0x4f6fa3000/0x0/0x4ffc00000, data 0x1d6972b/0x1f5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:48.388036+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2418358 data_alloc: 218103808 data_used: 8269824
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 360 ms_handle_reset con 0x560370d18000 session 0x560371a01a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147537920 unmapped: 32423936 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:49.388373+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147537920 unmapped: 32423936 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:50.388673+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 360 heartbeat osd_stat(store_statfs(0x4f6fa3000/0x0/0x4ffc00000, data 0x1d69764/0x1f5a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147537920 unmapped: 32423936 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:51.388872+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147537920 unmapped: 32423936 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:52.388998+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147537920 unmapped: 32423936 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:53.389140+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 360 ms_handle_reset con 0x560370d73800 session 0x56036fb761e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2420156 data_alloc: 218103808 data_used: 8269824
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147537920 unmapped: 32423936 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:54.389293+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147537920 unmapped: 32423936 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 360 heartbeat osd_stat(store_statfs(0x4f6fa2000/0x0/0x4ffc00000, data 0x1d697c6/0x1f5b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:55.389474+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147537920 unmapped: 32423936 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.387634277s of 10.761851311s, submitted: 78
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 360 ms_handle_reset con 0x560370d9b400 session 0x56036eeb0000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:56.389620+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 360 heartbeat osd_stat(store_statfs(0x4f6fa1000/0x0/0x4ffc00000, data 0x1d69838/0x1f5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 360 ms_handle_reset con 0x56036f0d5800 session 0x56036fca34a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 361 ms_handle_reset con 0x560370d18000 session 0x560370896780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 361 ms_handle_reset con 0x560370d73800 session 0x56037199cb40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 361 ms_handle_reset con 0x560371966c00 session 0x56036ef33860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 361 ms_handle_reset con 0x5603711ac400 session 0x5603708970e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147619840 unmapped: 32342016 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 361 heartbeat osd_stat(store_statfs(0x4f6fa1000/0x0/0x4ffc00000, data 0x1d69838/0x1f5d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 361 ms_handle_reset con 0x56036f0d5800 session 0x56037199de00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 361 ms_handle_reset con 0x560370d18000 session 0x560370896b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 361 ms_handle_reset con 0x560370d73800 session 0x56036fcbe960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 361 ms_handle_reset con 0x5603711ac400 session 0x560371a010e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:57.439202+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 361 ms_handle_reset con 0x560371966c00 session 0x56036ef33680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 362 ms_handle_reset con 0x56037388a400 session 0x560372e79e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 362 ms_handle_reset con 0x560370d9b400 session 0x56036fb76d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147709952 unmapped: 32251904 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:58.439491+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2461873 data_alloc: 218103808 data_used: 8282112
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 147709952 unmapped: 32251904 heap: 179961856 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 362 ms_handle_reset con 0x560370d18000 session 0x5603719c0780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 362 ms_handle_reset con 0x56036f0d5800 session 0x56036fb774a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:01:59.439788+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 169877504 unmapped: 68861952 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 362 ms_handle_reset con 0x560371966400 session 0x56036f8bed20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:00.440015+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 362 ms_handle_reset con 0x560370d18000 session 0x56036fca4b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 157335552 unmapped: 81403904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 362 ms_handle_reset con 0x56036f0d5800 session 0x560370adf0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:01.440275+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 149331968 unmapped: 89407488 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:02.440524+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 362 ms_handle_reset con 0x56037388a400 session 0x5603708ecd20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 362 heartbeat osd_stat(store_statfs(0x4eefe8000/0x0/0x4ffc00000, data 0x9d1ef55/0x9f16000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 153862144 unmapped: 84877312 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 362 ms_handle_reset con 0x5603709c5c00 session 0x5603708ede00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:03.440632+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773953 data_alloc: 218103808 data_used: 8941568
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 155213824 unmapped: 83525632 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:04.440825+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 154157056 unmapped: 84582400 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:05.440978+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 168673280 unmapped: 70066176 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.133316517s of 10.022562981s, submitted: 329
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:06.441090+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 167337984 unmapped: 71401472 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 362 ms_handle_reset con 0x560370d73800 session 0x56036ef3e780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 362 ms_handle_reset con 0x5603711ac400 session 0x560372e79860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:07.441214+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 160088064 unmapped: 78651392 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:08.441412+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4849287 data_alloc: 234881024 data_used: 18907136
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 363 heartbeat osd_stat(store_statfs(0x4e23e7000/0x0/0x4ffc00000, data 0x1691ef78/0x16b17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 363 ms_handle_reset con 0x560370d73800 session 0x5603733403c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 160096256 unmapped: 78643200 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:09.441579+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 363 ms_handle_reset con 0x56036f0d5800 session 0x56036fb772c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 363 heartbeat osd_stat(store_statfs(0x4e23e5000/0x0/0x4ffc00000, data 0x16920ad7/0x16b18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 160112640 unmapped: 78626816 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:10.441763+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 160129024 unmapped: 78610432 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:11.441884+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 160137216 unmapped: 78602240 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:12.442059+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 364 ms_handle_reset con 0x560370d18000 session 0x560372529a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 160186368 unmapped: 78553088 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:13.442181+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 365 ms_handle_reset con 0x56037388a400 session 0x56036ef530e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 365 ms_handle_reset con 0x5603709c5c00 session 0x560370ade1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4858596 data_alloc: 234881024 data_used: 18915328
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 160219136 unmapped: 78520320 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:14.442326+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 161636352 unmapped: 77103104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:15.442646+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 366 ms_handle_reset con 0x560370d18000 session 0x56036f8150e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 366 heartbeat osd_stat(store_statfs(0x4e1d8b000/0x0/0x4ffc00000, data 0x16f7627b/0x17172000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 164823040 unmapped: 73916416 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.216381073s of 10.013324738s, submitted: 173
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 367 ms_handle_reset con 0x560370d73800 session 0x5603719430e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 367 ms_handle_reset con 0x5603711ac400 session 0x560371989e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 367 ms_handle_reset con 0x56036f0d5800 session 0x56036ee2f2c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:16.442814+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 162922496 unmapped: 75816960 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:17.442982+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 367 ms_handle_reset con 0x560370d18000 session 0x5603708ecf00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193855488 unmapped: 44883968 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:18.443096+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5256007 data_alloc: 234881024 data_used: 20680704
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 168787968 unmapped: 69951488 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:19.443258+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 164921344 unmapped: 73818112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:20.443550+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 166436864 unmapped: 72302592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 367 heartbeat osd_stat(store_statfs(0x4db337000/0x0/0x4ffc00000, data 0x1d9bdf7e/0x1dbbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:21.444305+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 170876928 unmapped: 67862528 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 367 heartbeat osd_stat(store_statfs(0x4d9f37000/0x0/0x4ffc00000, data 0x1edbdf7e/0x1efbf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:22.444481+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 175456256 unmapped: 63283200 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:23.444618+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6306951 data_alloc: 234881024 data_used: 20504576
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 175882240 unmapped: 62857216 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:24.444782+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 367 heartbeat osd_stat(store_statfs(0x4d233c000/0x0/0x4ffc00000, data 0x269c0f7e/0x26bc2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 167804928 unmapped: 70934528 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:25.444914+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 368 ms_handle_reset con 0x5603711ac400 session 0x5603719c12c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.154548645s of 10.002182007s, submitted: 189
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177831936 unmapped: 60907520 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:26.445070+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 368 heartbeat osd_stat(store_statfs(0x4ce337000/0x0/0x4ffc00000, data 0x2a9c2b5d/0x2abc6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 368 handle_osd_map epochs [369,369], i have 369, src has [1,369]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 369 ms_handle_reset con 0x560370d73800 session 0x56036ef3da40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 369 ms_handle_reset con 0x56037388a400 session 0x5603708bb0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 369 ms_handle_reset con 0x5603709c5c00 session 0x56037199cf00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 169787392 unmapped: 68952064 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:27.445196+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 169787392 unmapped: 68952064 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:28.445386+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 369 ms_handle_reset con 0x5603709c5c00 session 0x5603725281e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 370 ms_handle_reset con 0x560370d18000 session 0x56036fd35a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7382674 data_alloc: 234881024 data_used: 20508672
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 169803776 unmapped: 68935680 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:29.445595+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 371 ms_handle_reset con 0x560370d73800 session 0x56036eeb05a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 169844736 unmapped: 68894720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:30.445770+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 169844736 unmapped: 68894720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:31.445928+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 169861120 unmapped: 68878336 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:32.446052+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 371 ms_handle_reset con 0x5603711ac400 session 0x56036ef234a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037388a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 371 heartbeat osd_stat(store_statfs(0x4cc330000/0x0/0x4ffc00000, data 0x2c9c78c7/0x2cbcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 371 ms_handle_reset con 0x56037388a400 session 0x5603708ed0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 65126400 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:33.446202+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7387718 data_alloc: 234881024 data_used: 21094400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 371 heartbeat osd_stat(store_statfs(0x4cc330000/0x0/0x4ffc00000, data 0x2c9c78c7/0x2cbcc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 173613056 unmapped: 65126400 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:34.446391+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 371 ms_handle_reset con 0x5603709c5c00 session 0x560371943e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 66650112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 371 ms_handle_reset con 0x560370d73800 session 0x56036e2823c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 371 ms_handle_reset con 0x560370d18000 session 0x560370af5860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:35.446540+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172089344 unmapped: 66650112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.197322845s of 10.240974426s, submitted: 50
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:36.446676+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 372 ms_handle_reset con 0x5603711ac400 session 0x56036eeb0f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037098bc00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172097536 unmapped: 66641920 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:37.446853+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 372 ms_handle_reset con 0x56037098bc00 session 0x5603708ec1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 372 heartbeat osd_stat(store_statfs(0x4cc32d000/0x0/0x4ffc00000, data 0x2c9c933a/0x2cbd0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178405376 unmapped: 60334080 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 372 ms_handle_reset con 0x5603709c5c00 session 0x56036ef33e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:38.447036+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 372 ms_handle_reset con 0x560370d18000 session 0x5603733401e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7434997 data_alloc: 234881024 data_used: 21118976
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172318720 unmapped: 66420736 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:39.447184+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 372 ms_handle_reset con 0x5603711ac400 session 0x5603719432c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172326912 unmapped: 66412544 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 372 heartbeat osd_stat(store_statfs(0x4cbd54000/0x0/0x4ffc00000, data 0x2cfa3339/0x2d1aa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:40.447362+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037196a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 373 ms_handle_reset con 0x56037196a400 session 0x560372e78f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371968800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179126272 unmapped: 59613184 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:41.447518+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171008000 unmapped: 67731456 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 373 ms_handle_reset con 0x560371968800 session 0x56037088c000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:42.447753+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 374 ms_handle_reset con 0x5603709c5000 session 0x56037199da40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 374 ms_handle_reset con 0x560370d18000 session 0x5603726f6b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 374 ms_handle_reset con 0x560370d73800 session 0x5603709aed20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171376640 unmapped: 67362816 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:43.447888+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7534436 data_alloc: 234881024 data_used: 21217280
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171417600 unmapped: 67321856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 374 heartbeat osd_stat(store_statfs(0x4cb2de000/0x0/0x4ffc00000, data 0x2da11147/0x2dc1e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:44.448014+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171425792 unmapped: 67313664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:45.448176+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037196a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171425792 unmapped: 67313664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:46.448308+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171425792 unmapped: 67313664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.167875290s of 10.915669441s, submitted: 65
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:47.448408+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171466752 unmapped: 67272704 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 374 heartbeat osd_stat(store_statfs(0x4cb24f000/0x0/0x4ffc00000, data 0x2dfd8147/0x2dcaf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:48.448586+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7590156 data_alloc: 234881024 data_used: 21221376
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171466752 unmapped: 67272704 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:49.448766+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171466752 unmapped: 67272704 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:50.449053+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171474944 unmapped: 67264512 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:51.449284+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171474944 unmapped: 67264512 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:52.449558+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 374 handle_osd_map epochs [374,375], i have 374, src has [1,375]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 374 handle_osd_map epochs [375,375], i have 375, src has [1,375]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171483136 unmapped: 67256320 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:53.449755+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7592666 data_alloc: 234881024 data_used: 21229568
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171483136 unmapped: 67256320 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 375 heartbeat osd_stat(store_statfs(0x4cb248000/0x0/0x4ffc00000, data 0x2dfdbcc4/0x2dcb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:54.449954+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171483136 unmapped: 67256320 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:55.450186+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171573248 unmapped: 67166208 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:56.450331+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 175185920 unmapped: 63553536 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:57.450488+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.981887341s of 10.370623589s, submitted: 24
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 175185920 unmapped: 63553536 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 375 heartbeat osd_stat(store_statfs(0x4caf3a000/0x0/0x4ffc00000, data 0x2e2ebcc4/0x2dfc4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:58.450647+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7632632 data_alloc: 234881024 data_used: 23289856
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 175259648 unmapped: 63479808 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:02:59.450805+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 375 heartbeat osd_stat(store_statfs(0x4caf39000/0x0/0x4ffc00000, data 0x2e372cc4/0x2dfc5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 175259648 unmapped: 63479808 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:00.450975+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171532288 unmapped: 67207168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:01.451159+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171622400 unmapped: 67117056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:02.451287+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171630592 unmapped: 67108864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:03.451475+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 375 heartbeat osd_stat(store_statfs(0x4caf1e000/0x0/0x4ffc00000, data 0x2e38bcc4/0x2dfdf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [0,0,0,0,0,0,3])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7650212 data_alloc: 234881024 data_used: 24252416
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171638784 unmapped: 67100672 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:04.451597+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171638784 unmapped: 67100672 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:05.451936+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171638784 unmapped: 67100672 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:06.452128+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 375 heartbeat osd_stat(store_statfs(0x4caf1b000/0x0/0x4ffc00000, data 0x2e38ccc4/0x2dfe2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037196c000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171655168 unmapped: 67084288 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:07.452267+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.734245777s of 10.064086914s, submitted: 47
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172941312 unmapped: 65798144 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 376 ms_handle_reset con 0x56037196c000 session 0x56036ef32780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:08.452381+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d74c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7743631 data_alloc: 234881024 data_used: 24268800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 376 ms_handle_reset con 0x560370d74c00 session 0x5603708ee780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172941312 unmapped: 65798144 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 376 heartbeat osd_stat(store_statfs(0x4ca4bb000/0x0/0x4ffc00000, data 0x2ede9905/0x2ea42000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:09.452563+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d74c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 376 ms_handle_reset con 0x560370d74c00 session 0x56036eeb03c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 173096960 unmapped: 65642496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:10.452916+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 376 ms_handle_reset con 0x5603709c5000 session 0x5603708ba780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 173006848 unmapped: 65732608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:11.453093+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 173006848 unmapped: 65732608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:12.453356+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 173006848 unmapped: 65732608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:13.453531+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7753325 data_alloc: 234881024 data_used: 24289280
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 173006848 unmapped: 65732608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:14.453722+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 376 heartbeat osd_stat(store_statfs(0x4ca49d000/0x0/0x4ffc00000, data 0x2ee8d905/0x2ea60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 173006848 unmapped: 65732608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:15.453886+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 173006848 unmapped: 65732608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:16.454040+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 173006848 unmapped: 65732608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:17.454165+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 376 heartbeat osd_stat(store_statfs(0x4ca49d000/0x0/0x4ffc00000, data 0x2ee8d905/0x2ea60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 376 heartbeat osd_stat(store_statfs(0x4ca49d000/0x0/0x4ffc00000, data 0x2ee8d905/0x2ea60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 173015040 unmapped: 65724416 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:18.454335+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 376 heartbeat osd_stat(store_statfs(0x4ca49d000/0x0/0x4ffc00000, data 0x2ee8d905/0x2ea60000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7753325 data_alloc: 234881024 data_used: 24289280
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 376 ms_handle_reset con 0x560370d18000 session 0x5603708efc20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d73800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.790527344s of 11.651076317s, submitted: 62
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 173015040 unmapped: 65724416 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:19.454486+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 376 ms_handle_reset con 0x5603709c5c00 session 0x560372e78780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 376 handle_osd_map epochs [376,377], i have 376, src has [1,377]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 376 ms_handle_reset con 0x560370d73800 session 0x5603708efa40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 377 ms_handle_reset con 0x5603711ac400 session 0x5603719892c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d74c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 174170112 unmapped: 64569344 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:20.454691+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 377 ms_handle_reset con 0x560370d18000 session 0x5603733414a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 377 ms_handle_reset con 0x56037196a400 session 0x5603708cef00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 174186496 unmapped: 64552960 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:21.454954+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 180183040 unmapped: 58556416 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:22.455092+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037196c000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 377 heartbeat osd_stat(store_statfs(0x4ca498000/0x0/0x4ffc00000, data 0x2ee654b5/0x2ea3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 377 ms_handle_reset con 0x56037196c000 session 0x5603726f7680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037196ac00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 377 ms_handle_reset con 0x56037196ac00 session 0x5603719c14a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 377 ms_handle_reset con 0x560370d70c00 session 0x56036eeb1a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181436416 unmapped: 57303040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:23.455262+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7799374 data_alloc: 251658240 data_used: 33521664
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181436416 unmapped: 57303040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:24.455403+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 377 ms_handle_reset con 0x560370d18000 session 0x56036f0d92c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181436416 unmapped: 57303040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:25.455542+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 377 ms_handle_reset con 0x5603711ac400 session 0x560372e79680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 377 heartbeat osd_stat(store_statfs(0x4ca7d3000/0x0/0x4ffc00000, data 0x2eb55462/0x2e72b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037196a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181510144 unmapped: 57229312 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:26.455683+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 378 ms_handle_reset con 0x560370d9b400 session 0x56036eeb1c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 378 ms_handle_reset con 0x560371966400 session 0x5603709aeb40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 378 ms_handle_reset con 0x560371966400 session 0x560370b16780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181510144 unmapped: 57229312 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:27.455889+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182001664 unmapped: 56737792 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:28.456764+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 7781301 data_alloc: 251658240 data_used: 33906688
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181452800 unmapped: 57286656 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:29.456926+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.941602707s of 10.224821091s, submitted: 88
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 378 ms_handle_reset con 0x56037196a400 session 0x56036fb770e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 378 ms_handle_reset con 0x560370d18000 session 0x5603708961e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 378 ms_handle_reset con 0x560370d70c00 session 0x5603709af0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 378 heartbeat osd_stat(store_statfs(0x4cb10e000/0x0/0x4ffc00000, data 0x2d60a000/0x2d715000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:30.457099+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177758208 unmapped: 60981248 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711ac400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 378 ms_handle_reset con 0x5603711ac400 session 0x56036eeb03c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 378 ms_handle_reset con 0x560370d9b400 session 0x56036fb772c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:31.457237+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177758208 unmapped: 60981248 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:32.457383+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177102848 unmapped: 61636608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:33.457531+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178888704 unmapped: 59850752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 379 heartbeat osd_stat(store_statfs(0x4cb795000/0x0/0x4ffc00000, data 0x2d86ba11/0x2d768000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7631413 data_alloc: 234881024 data_used: 22937600
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:34.457642+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178888704 unmapped: 59850752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 380 ms_handle_reset con 0x560370d18000 session 0x56036fb77a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:35.457770+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177971200 unmapped: 60768256 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:36.457971+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177971200 unmapped: 60768256 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 380 heartbeat osd_stat(store_statfs(0x4cb2c5000/0x0/0x4ffc00000, data 0x2dd3b5e2/0x2dc39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 380 ms_handle_reset con 0x560370d70c00 session 0x56036ef33e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:37.458112+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177971200 unmapped: 60768256 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:38.458247+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177971200 unmapped: 60768256 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 380 heartbeat osd_stat(store_statfs(0x4cb2c5000/0x0/0x4ffc00000, data 0x2dd3b5e2/0x2dc39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7643796 data_alloc: 234881024 data_used: 23064576
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:39.458389+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177971200 unmapped: 60768256 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:40.458582+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177971200 unmapped: 60768256 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 380 ms_handle_reset con 0x560371966400 session 0x5603708ed680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037196a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 380 ms_handle_reset con 0x56037196a400 session 0x56036eeb12c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:41.458706+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176848896 unmapped: 61890560 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.924961090s of 12.488080025s, submitted: 93
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:42.458837+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176939008 unmapped: 61800448 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 ms_handle_reset con 0x560370d70c00 session 0x5603719430e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:43.458991+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176939008 unmapped: 61800448 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7650570 data_alloc: 234881024 data_used: 23191552
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:44.459152+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177070080 unmapped: 61669376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 heartbeat osd_stat(store_statfs(0x4cb2b5000/0x0/0x4ffc00000, data 0x2dd49054/0x2dc49000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 ms_handle_reset con 0x560370d18000 session 0x56036fcbf860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 ms_handle_reset con 0x560370d9b400 session 0x56036f8150e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:45.459301+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177070080 unmapped: 61669376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 ms_handle_reset con 0x560371966400 session 0x560373cea1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037196a400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 ms_handle_reset con 0x56037196a400 session 0x56036f0d8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 ms_handle_reset con 0x560370d18000 session 0x5603719e3680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:46.459486+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177266688 unmapped: 61472768 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 ms_handle_reset con 0x560370d70c00 session 0x5603719434a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 ms_handle_reset con 0x560370d9b400 session 0x560371988b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 heartbeat osd_stat(store_statfs(0x4ca809000/0x0/0x4ffc00000, data 0x2e7f5055/0x2e6f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:47.459638+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177274880 unmapped: 61464576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 heartbeat osd_stat(store_statfs(0x4ca809000/0x0/0x4ffc00000, data 0x2e7f5055/0x2e6f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 ms_handle_reset con 0x560371966400 session 0x560370adf0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037196c000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:48.459770+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177299456 unmapped: 61440000 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 ms_handle_reset con 0x56037196c000 session 0x56036fb77860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7730082 data_alloc: 234881024 data_used: 23187456
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:49.459920+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177299456 unmapped: 61440000 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 ms_handle_reset con 0x560370d18000 session 0x56036ef3c1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 ms_handle_reset con 0x560370d70c00 session 0x560371988b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:50.460077+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177332224 unmapped: 61407232 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 ms_handle_reset con 0x560370d9b400 session 0x56036eeb03c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:51.460212+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177332224 unmapped: 61407232 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.469083786s of 10.001662254s, submitted: 93
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 382 ms_handle_reset con 0x560371966400 session 0x5603708961e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:52.460337+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177340416 unmapped: 61399040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 383 ms_handle_reset con 0x5603711c1400 session 0x56036fb770e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 383 heartbeat osd_stat(store_statfs(0x4ca8a4000/0x0/0x4ffc00000, data 0x2e759776/0x2e659000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x6cff9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:53.460514+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177348608 unmapped: 61390848 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 383 ms_handle_reset con 0x560370d18000 session 0x560370b16780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7720699 data_alloc: 234881024 data_used: 23113728
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:54.460625+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177348608 unmapped: 61390848 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 ms_handle_reset con 0x5603709c5000 session 0x56036fca25a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 ms_handle_reset con 0x5603709c5c00 session 0x5603708cf860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 ms_handle_reset con 0x560370d74c00 session 0x5603708ced20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:55.460738+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177389568 unmapped: 61349888 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 ms_handle_reset con 0x560370d70c00 session 0x56036fb76d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 ms_handle_reset con 0x560370d70c00 session 0x56036fca43c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 ms_handle_reset con 0x5603709c5c00 session 0x560371942000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 ms_handle_reset con 0x5603709c5000 session 0x56036eeb1c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:56.460881+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 171761664 unmapped: 66977792 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 ms_handle_reset con 0x560370d18000 session 0x5603719423c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d74c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 ms_handle_reset con 0x560370d74c00 session 0x560371a012c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:57.461015+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 170352640 unmapped: 68386816 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 ms_handle_reset con 0x56037094f400 session 0x56036fb76780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d74c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 ms_handle_reset con 0x5603709c5c00 session 0x5603719c14a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 heartbeat osd_stat(store_statfs(0x4caf93000/0x0/0x4ffc00000, data 0x2db522b2/0x2db59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:58.461135+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 167690240 unmapped: 71049216 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7468030 data_alloc: 218103808 data_used: 9027584
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:03:59.461262+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 167690240 unmapped: 71049216 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 ms_handle_reset con 0x560370d70c00 session 0x560371a001e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9b400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 385 ms_handle_reset con 0x5603711c1400 session 0x5603708ba1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:00.461492+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 170565632 unmapped: 68173824 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 385 heartbeat osd_stat(store_statfs(0x4cb603000/0x0/0x4ffc00000, data 0x2d4de28f/0x2d4e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 386 ms_handle_reset con 0x560370d9b400 session 0x560371a00000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 386 ms_handle_reset con 0x560370d18000 session 0x56036ef32780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:01.461639+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172498944 unmapped: 66240512 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.136457443s of 10.001025200s, submitted: 163
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:02.461794+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 66174976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 387 ms_handle_reset con 0x56037094f400 session 0x56036ef334a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:03.461930+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 66174976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7623860 data_alloc: 234881024 data_used: 18644992
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:04.462071+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 66174976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:05.468576+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172564480 unmapped: 66174976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 388 heartbeat osd_stat(store_statfs(0x4caf31000/0x0/0x4ffc00000, data 0x2de0ff93/0x2dbbc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:06.468972+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 389 ms_handle_reset con 0x5603709c5c00 session 0x56036ef33c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 389 ms_handle_reset con 0x560370d70c00 session 0x560370af5680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172597248 unmapped: 66142208 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:07.469142+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172597248 unmapped: 66142208 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 389 ms_handle_reset con 0x5603711c1400 session 0x56037199c000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:08.469776+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172597248 unmapped: 66142208 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 389 heartbeat osd_stat(store_statfs(0x4cc03c000/0x0/0x4ffc00000, data 0x2caa6514/0x2cab0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7488404 data_alloc: 234881024 data_used: 18649088
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:09.469950+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172613632 unmapped: 66125824 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 390 ms_handle_reset con 0x56037094f400 session 0x56037199c960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:10.508399+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 172613632 unmapped: 66125824 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 390 ms_handle_reset con 0x5603709c5c00 session 0x56037199cf00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:11.508597+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176717824 unmapped: 62021632 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 heartbeat osd_stat(store_statfs(0x4cb58e000/0x0/0x4ffc00000, data 0x2d549be4/0x2d559000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x710f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.599118233s of 10.283295631s, submitted: 157
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:12.508729+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178503680 unmapped: 60235776 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 heartbeat osd_stat(store_statfs(0x4cc5aa000/0x0/0x4ffc00000, data 0x2d563bf4/0x2d574000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,4,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:13.508908+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178708480 unmapped: 60030976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8053107 data_alloc: 234881024 data_used: 19156992
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:14.509102+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 195895296 unmapped: 42844160 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:15.509281+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183672832 unmapped: 55066624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 heartbeat osd_stat(store_statfs(0x4c39ba000/0x0/0x4ffc00000, data 0x36163bf4/0x36174000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:16.509450+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 188284928 unmapped: 50454528 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:17.509642+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184352768 unmapped: 54386688 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:18.509804+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 50012160 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 9333331 data_alloc: 234881024 data_used: 19156992
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:19.509976+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 180699136 unmapped: 58040320 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:20.510175+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 ms_handle_reset con 0x560370d74c00 session 0x560371a014a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 189767680 unmapped: 48971776 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 ms_handle_reset con 0x5603709c5000 session 0x5603726f7860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:21.510287+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 heartbeat osd_stat(store_statfs(0x4b75ba000/0x0/0x4ffc00000, data 0x42563bf4/0x42574000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [0,0,0,0,0,1,0,2])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 185827328 unmapped: 52912128 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 ms_handle_reset con 0x560370d70c00 session 0x560373ceb680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 ms_handle_reset con 0x5603711c1400 session 0x560372d91c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 ms_handle_reset con 0x560370d18000 session 0x56036fcbe960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 3.611917734s of 10.125125885s, submitted: 147
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 ms_handle_reset con 0x56037094f400 session 0x560371942f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:22.510463+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 ms_handle_reset con 0x5603709c5000 session 0x560372e783c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181764096 unmapped: 56975360 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:23.510630+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181772288 unmapped: 56967168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7688581 data_alloc: 234881024 data_used: 19030016
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 ms_handle_reset con 0x5603709c5c00 session 0x5603708ba1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:24.510789+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 ms_handle_reset con 0x56037094f400 session 0x560371943860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179200000 unmapped: 59539456 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 392 ms_handle_reset con 0x5603709c5000 session 0x560370897e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:25.510960+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179200000 unmapped: 59539456 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 392 ms_handle_reset con 0x560370d18000 session 0x56036ef33680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 392 heartbeat osd_stat(store_statfs(0x4cc5dd000/0x0/0x4ffc00000, data 0x2d541743/0x2d550000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:26.511095+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179208192 unmapped: 59531264 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 392 heartbeat osd_stat(store_statfs(0x4cc5df000/0x0/0x4ffc00000, data 0x2d5416e1/0x2d54f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:27.511288+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179208192 unmapped: 59531264 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:28.511718+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 393 heartbeat osd_stat(store_statfs(0x4cc5e0000/0x0/0x4ffc00000, data 0x2d54167f/0x2d54e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179224576 unmapped: 59514880 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 393 ms_handle_reset con 0x5603711c1400 session 0x56036f8bed20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d74c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7688794 data_alloc: 234881024 data_used: 19054592
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:29.511871+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 394 ms_handle_reset con 0x560370d74c00 session 0x5603708ed4a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179240960 unmapped: 59498496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 394 ms_handle_reset con 0x56037094f400 session 0x5603708ef680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:30.512042+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179240960 unmapped: 59498496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 394 ms_handle_reset con 0x560370d18000 session 0x56036f0d92c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:31.512195+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 395 ms_handle_reset con 0x5603711c1400 session 0x560373ceb2c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179257344 unmapped: 59482112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 395 ms_handle_reset con 0x560371966400 session 0x5603708ba960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 396 ms_handle_reset con 0x5603709c5000 session 0x5603709ae780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.317027092s of 10.001580238s, submitted: 275
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:32.512345+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179642368 unmapped: 59097088 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 396 ms_handle_reset con 0x56037094f400 session 0x5603708ce000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:33.512541+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179675136 unmapped: 59064320 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5325132 data_alloc: 234881024 data_used: 19095552
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 396 heartbeat osd_stat(store_statfs(0x4e24a8000/0x0/0x4ffc00000, data 0x1746460d/0x17685000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:34.512747+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 397 ms_handle_reset con 0x560370d18000 session 0x560373ceb4a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179699712 unmapped: 59039744 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:35.512842+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179085312 unmapped: 59654144 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 398 ms_handle_reset con 0x5603711c1400 session 0x56036fca25a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:36.512988+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179118080 unmapped: 59621376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:37.513239+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179118080 unmapped: 59621376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:38.513501+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179118080 unmapped: 59621376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 398 heartbeat osd_stat(store_statfs(0x4f54a2000/0x0/0x4ffc00000, data 0x2c67c2b/0x2e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3120856 data_alloc: 234881024 data_used: 19111936
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:39.513729+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179118080 unmapped: 59621376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:40.513945+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179118080 unmapped: 59621376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 399 heartbeat osd_stat(store_statfs(0x4f54a2000/0x0/0x4ffc00000, data 0x2c67c2b/0x2e8a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:41.514199+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179118080 unmapped: 59621376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 399 heartbeat osd_stat(store_statfs(0x4f6ca0000/0x0/0x4ffc00000, data 0x2c696ae/0x2e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:42.514505+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179118080 unmapped: 59621376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:43.514732+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179118080 unmapped: 59621376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.805088997s of 11.544718742s, submitted: 242
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3131830 data_alloc: 234881024 data_used: 19361792
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:44.515068+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179150848 unmapped: 59588608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:45.515282+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179150848 unmapped: 59588608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:46.515573+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179150848 unmapped: 59588608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 399 heartbeat osd_stat(store_statfs(0x4f6ca1000/0x0/0x4ffc00000, data 0x2c696ae/0x2e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:47.515763+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179150848 unmapped: 59588608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:48.516068+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179150848 unmapped: 59588608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3135990 data_alloc: 234881024 data_used: 19709952
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:49.516264+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 399 heartbeat osd_stat(store_statfs(0x4f6ca1000/0x0/0x4ffc00000, data 0x2c696ae/0x2e8d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179150848 unmapped: 59588608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:50.516476+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179150848 unmapped: 59588608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:51.516711+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179150848 unmapped: 59588608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 400 heartbeat osd_stat(store_statfs(0x4f6c91000/0x0/0x4ffc00000, data 0x2c7722b/0x2e9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:52.516876+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179150848 unmapped: 59588608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:53.517128+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179150848 unmapped: 59588608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.194447517s of 10.245265961s, submitted: 22
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3142308 data_alloc: 234881024 data_used: 19718144
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:54.517366+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179150848 unmapped: 59588608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 401 ms_handle_reset con 0x560371966400 session 0x5603733405a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:55.517518+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179159040 unmapped: 59580416 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:56.517721+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371420800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179208192 unmapped: 59531264 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 401 heartbeat osd_stat(store_statfs(0x4f6c8d000/0x0/0x4ffc00000, data 0x2c79e0a/0x2ea1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:57.517959+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 402 ms_handle_reset con 0x560371420800 session 0x56036f8861e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179216384 unmapped: 59523072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:58.518119+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179216384 unmapped: 59523072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 402 ms_handle_reset con 0x56037094f400 session 0x5603708bb0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151445 data_alloc: 234881024 data_used: 19730432
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:04:59.518269+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179224576 unmapped: 59514880 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 402 ms_handle_reset con 0x560370d18000 session 0x56036ef232c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:00.518504+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 402 heartbeat osd_stat(store_statfs(0x4f6c88000/0x0/0x4ffc00000, data 0x2c7b9f9/0x2ea6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179224576 unmapped: 59514880 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:01.518691+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 402 ms_handle_reset con 0x5603711c1400 session 0x56036ef23e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179232768 unmapped: 59506688 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 402 ms_handle_reset con 0x560371966400 session 0x56036f814f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:02.518859+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179232768 unmapped: 59506688 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c0800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f0d5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 402 ms_handle_reset con 0x56036f0d5000 session 0x5603708cf860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:03.519030+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179232768 unmapped: 59506688 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.904194832s of 10.003091812s, submitted: 22
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 403 ms_handle_reset con 0x56037094f400 session 0x56037199c960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3155703 data_alloc: 234881024 data_used: 19738624
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:04.519185+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 403 ms_handle_reset con 0x5603711c0800 session 0x56036f0d9a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179249152 unmapped: 59490304 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:05.519379+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 403 ms_handle_reset con 0x5603711c1400 session 0x5603709ae1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 24K writes, 95K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 24K writes, 8556 syncs, 2.82 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 13K writes, 54K keys, 13K commit groups, 1.0 writes per commit group, ingest: 32.29 MB, 0.05 MB/s
                                           Interval WAL: 13K writes, 5655 syncs, 2.43 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 403 heartbeat osd_stat(store_statfs(0x4f6c83000/0x0/0x4ffc00000, data 0x2c7d5da/0x2eaa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179249152 unmapped: 59490304 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 404 ms_handle_reset con 0x560371966400 session 0x5603719e32c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:06.519541+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 404 ms_handle_reset con 0x560370d18000 session 0x56036f8863c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179265536 unmapped: 59473920 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:07.519755+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179265536 unmapped: 59473920 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 404 ms_handle_reset con 0x56037094f400 session 0x560372e78d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 404 ms_handle_reset con 0x560370d18000 session 0x560372d90780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:08.519910+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f6c80000/0x0/0x4ffc00000, data 0x2c7f1ab/0x2ead000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179273728 unmapped: 59465728 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f6c83000/0x0/0x4ffc00000, data 0x2c7f139/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c0800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3156860 data_alloc: 234881024 data_used: 19750912
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 404 ms_handle_reset con 0x5603711c0800 session 0x560371a005a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:09.520022+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 404 heartbeat osd_stat(store_statfs(0x4f6c83000/0x0/0x4ffc00000, data 0x2c7f139/0x2eab000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179281920 unmapped: 59457536 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:10.520238+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179281920 unmapped: 59457536 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 404 ms_handle_reset con 0x5603711c1400 session 0x56036fc68780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:11.520544+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179306496 unmapped: 59432960 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 405 ms_handle_reset con 0x560371966400 session 0x56036fb77e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 406 ms_handle_reset con 0x560370d18000 session 0x5603709af680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 406 ms_handle_reset con 0x56037094f400 session 0x56036f815680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:12.520697+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179306496 unmapped: 59432960 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:13.520852+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c0800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 406 ms_handle_reset con 0x560371966400 session 0x56036ee2fe00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 406 ms_handle_reset con 0x5603711c1400 session 0x560372e794a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179306496 unmapped: 59432960 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 406 handle_osd_map epochs [406,407], i have 406, src has [1,407]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.569534302s of 10.026837349s, submitted: 75
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 407 ms_handle_reset con 0x5603711c0800 session 0x56036fca4d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3184769 data_alloc: 234881024 data_used: 21569536
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:14.521042+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 407 heartbeat osd_stat(store_statfs(0x4f6c78000/0x0/0x4ffc00000, data 0x2c828ed/0x2eb4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 59375616 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 407 ms_handle_reset con 0x560370d18000 session 0x560370adfe00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 407 ms_handle_reset con 0x56037094f400 session 0x56036efde3c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:15.521202+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179372032 unmapped: 59367424 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 407 heartbeat osd_stat(store_statfs(0x4f6c73000/0x0/0x4ffc00000, data 0x2c84590/0x2eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c0800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 407 ms_handle_reset con 0x5603711c0800 session 0x56037088d860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:16.521545+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 407 ms_handle_reset con 0x5603711c1400 session 0x5603708ec960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 407 heartbeat osd_stat(store_statfs(0x4f6c73000/0x0/0x4ffc00000, data 0x2c84590/0x2eba000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179380224 unmapped: 59359232 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:17.521713+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179380224 unmapped: 59359232 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371966400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:18.521844+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 407 ms_handle_reset con 0x560371966400 session 0x5603726e92c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179380224 unmapped: 59359232 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 407 ms_handle_reset con 0x56037094f400 session 0x560372d90780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3186953 data_alloc: 234881024 data_used: 21573632
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:19.521989+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 407 ms_handle_reset con 0x560370d18000 session 0x560372e78d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c0800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 407 ms_handle_reset con 0x5603711c1400 session 0x56036fcbe000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179380224 unmapped: 59359232 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:20.522211+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 408 ms_handle_reset con 0x56036ea7c800 session 0x560372d91860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179396608 unmapped: 59342848 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 408 ms_handle_reset con 0x5603711c0800 session 0x5603719e32c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:21.522393+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179404800 unmapped: 59334656 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 408 ms_handle_reset con 0x56036ea7c800 session 0x56037199c960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 408 ms_handle_reset con 0x560370d18000 session 0x56036eeb1a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c0800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 408 ms_handle_reset con 0x5603711c1400 session 0x56037199d2c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:22.522547+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 408 heartbeat osd_stat(store_statfs(0x4f6c71000/0x0/0x4ffc00000, data 0x2c86151/0x2ebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 408 handle_osd_map epochs [408,409], i have 408, src has [1,409]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 409 ms_handle_reset con 0x5603711c0800 session 0x56036fc68b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 180469760 unmapped: 58269696 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d6f000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 409 ms_handle_reset con 0x560370d6f000 session 0x560370b170e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 409 ms_handle_reset con 0x56037094f400 session 0x56036f814f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:23.522689+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d6f000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 409 ms_handle_reset con 0x560370d6f000 session 0x5603733405a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 409 ms_handle_reset con 0x56036ea7c800 session 0x560373ceb4a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 180822016 unmapped: 57917440 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3198473 data_alloc: 234881024 data_used: 21581824
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:24.522849+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.960372925s of 10.330261230s, submitted: 82
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 409 ms_handle_reset con 0x560370d18000 session 0x5603708ef680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c0800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181895168 unmapped: 56844288 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 409 ms_handle_reset con 0x5603711c0800 session 0x56036f8bed20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:25.523177+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181936128 unmapped: 56803328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 409 ms_handle_reset con 0x56036ea7c800 session 0x5603708ce960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:26.523315+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181944320 unmapped: 56795136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:27.523467+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 409 heartbeat osd_stat(store_statfs(0x4f6c4c000/0x0/0x4ffc00000, data 0x2cabc91/0x2ee2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181944320 unmapped: 56795136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:28.523668+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181944320 unmapped: 56795136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3201466 data_alloc: 234881024 data_used: 21622784
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:29.523832+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 410 ms_handle_reset con 0x56037094f400 session 0x56036fca4b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181968896 unmapped: 56770560 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:30.523996+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d6f000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 410 ms_handle_reset con 0x560370d6f000 session 0x5603726f74a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 410 ms_handle_reset con 0x560370d18000 session 0x56036f8872c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181985280 unmapped: 56754176 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:31.524120+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d25800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181985280 unmapped: 56754176 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 411 ms_handle_reset con 0x560370d25800 session 0x5603708ed680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:32.524276+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 411 ms_handle_reset con 0x56036ea7c800 session 0x56036fca2780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 411 heartbeat osd_stat(store_statfs(0x4f6c47000/0x0/0x4ffc00000, data 0x2cad89c/0x2ee6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181985280 unmapped: 56754176 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 411 ms_handle_reset con 0x56037094f400 session 0x5603726f74a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d18000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d25800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:33.524445+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 411 ms_handle_reset con 0x560370d25800 session 0x560370b170e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d6f000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 411 heartbeat osd_stat(store_statfs(0x4f6c44000/0x0/0x4ffc00000, data 0x2caf31f/0x2ee9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181993472 unmapped: 56745984 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 412 ms_handle_reset con 0x560370d6f000 session 0x56037199c960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3213953 data_alloc: 234881024 data_used: 21639168
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:34.524592+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.772437096s of 10.002509117s, submitted: 104
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182009856 unmapped: 56729600 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 412 ms_handle_reset con 0x560370d18000 session 0x56036f8bed20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:35.524903+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 412 heartbeat osd_stat(store_statfs(0x4f6c40000/0x0/0x4ffc00000, data 0x2cf6e8e/0x2eed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182050816 unmapped: 56688640 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:36.525389+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182050816 unmapped: 56688640 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:37.525551+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 187219968 unmapped: 51519488 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 412 heartbeat osd_stat(store_statfs(0x4f6a72000/0x0/0x4ffc00000, data 0x2ec4e8e/0x30bb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:38.525700+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 413 ms_handle_reset con 0x56036ea7c800 session 0x560370b17e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184123392 unmapped: 54616064 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3271663 data_alloc: 234881024 data_used: 22007808
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:39.525893+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184123392 unmapped: 54616064 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 413 ms_handle_reset con 0x56037094f400 session 0x5603719430e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:40.526141+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184123392 unmapped: 54616064 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:41.526651+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184123392 unmapped: 54616064 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d25800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:42.526781+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 413 ms_handle_reset con 0x560370d25800 session 0x56036f0d8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d6f000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 413 heartbeat osd_stat(store_statfs(0x4f6763000/0x0/0x4ffc00000, data 0x31d1a89/0x33ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x60cf9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184123392 unmapped: 54616064 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:43.526921+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184123392 unmapped: 54616064 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:44.527061+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3270762 data_alloc: 234881024 data_used: 22007808
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.506494999s of 10.003254890s, submitted: 61
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 413 ms_handle_reset con 0x560370d6f000 session 0x56036fca21e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184123392 unmapped: 54616064 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:45.527227+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184164352 unmapped: 54575104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 414 ms_handle_reset con 0x56036fd07800 session 0x560371a00780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:46.527489+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184180736 unmapped: 54558720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 414 heartbeat osd_stat(store_statfs(0x4f634f000/0x0/0x4ffc00000, data 0x324248a/0x33cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:47.527755+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184180736 unmapped: 54558720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:48.528069+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184180736 unmapped: 54558720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:49.528324+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3281238 data_alloc: 234881024 data_used: 22011904
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184180736 unmapped: 54558720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:50.528634+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184180736 unmapped: 54558720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:51.528876+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184180736 unmapped: 54558720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:52.529065+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 414 heartbeat osd_stat(store_statfs(0x4f634f000/0x0/0x4ffc00000, data 0x324248a/0x33cd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 54550528 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:53.529204+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 54550528 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:54.529325+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3282230 data_alloc: 234881024 data_used: 22183936
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 54550528 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:55.529519+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 54550528 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.467461586s of 11.695063591s, submitted: 35
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 414 ms_handle_reset con 0x56036ea7c800 session 0x56036e282b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:56.529669+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 54550528 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:57.529886+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 414 ms_handle_reset con 0x56036fd07800 session 0x560373cea5a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 54550528 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 414 heartbeat osd_stat(store_statfs(0x4f6348000/0x0/0x4ffc00000, data 0x324249a/0x33ce000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:58.530046+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184188928 unmapped: 54550528 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:05:59.530160+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284058 data_alloc: 234881024 data_used: 22183936
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 415 ms_handle_reset con 0x56037094f400 session 0x56036e2823c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184229888 unmapped: 54509568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:00.530362+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d25800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184229888 unmapped: 54509568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 415 ms_handle_reset con 0x560370d25800 session 0x56036fca4d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:01.530558+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d6f000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184229888 unmapped: 54509568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 415 handle_osd_map epochs [415,416], i have 415, src has [1,416]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:02.530725+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184229888 unmapped: 54509568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 416 ms_handle_reset con 0x560370d6f000 session 0x560373340960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:03.530883+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 416 heartbeat osd_stat(store_statfs(0x4f6349000/0x0/0x4ffc00000, data 0x3245be8/0x33d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184229888 unmapped: 54509568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:04.531055+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291382 data_alloc: 234881024 data_used: 22233088
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 416 heartbeat osd_stat(store_statfs(0x4f6349000/0x0/0x4ffc00000, data 0x3245be8/0x33d4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 416 ms_handle_reset con 0x5603711c1400 session 0x56036f0d83c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184229888 unmapped: 54509568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 416 ms_handle_reset con 0x560371970400 session 0x5603719e3e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:05.531244+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184238080 unmapped: 54501376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.183650970s of 10.022099495s, submitted: 26
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:06.531490+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184254464 unmapped: 54484992 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:07.532018+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184262656 unmapped: 54476800 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 416 heartbeat osd_stat(store_statfs(0x4f634b000/0x0/0x4ffc00000, data 0x3245bd8/0x33d3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 416 ms_handle_reset con 0x56036ea7c800 session 0x5603708ba000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:08.532380+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184279040 unmapped: 54460416 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 416 ms_handle_reset con 0x56037094f400 session 0x56037199c5a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:09.532542+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3282289 data_alloc: 234881024 data_used: 22126592
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d25800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 416 ms_handle_reset con 0x56036fd07800 session 0x56036eeb0960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184295424 unmapped: 54444032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:10.532765+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184295424 unmapped: 54444032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:11.533032+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184311808 unmapped: 54427648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:12.533228+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 416 handle_osd_map epochs [417,417], i have 417, src has [1,417]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184311808 unmapped: 54427648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:13.533499+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 417 heartbeat osd_stat(store_statfs(0x4f636d000/0x0/0x4ffc00000, data 0x3223608/0x33b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184311808 unmapped: 54427648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 417 ms_handle_reset con 0x560370d25800 session 0x56036f0d85a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:14.533698+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285494 data_alloc: 234881024 data_used: 22130688
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184311808 unmapped: 54427648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:15.534070+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184311808 unmapped: 54427648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.098764896s of 10.043778419s, submitted: 82
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:16.534247+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 417 heartbeat osd_stat(store_statfs(0x4f636e000/0x0/0x4ffc00000, data 0x3223608/0x33b0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184352768 unmapped: 54386688 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:17.961110+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 417 ms_handle_reset con 0x56036fd07800 session 0x5603719c1a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184352768 unmapped: 54386688 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:18.961246+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184369152 unmapped: 54370304 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284631 data_alloc: 234881024 data_used: 22126592
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:19.961390+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184369152 unmapped: 54370304 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 417 heartbeat osd_stat(store_statfs(0x4f6679000/0x0/0x4ffc00000, data 0x2d4a608/0x2ed7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:20.961629+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184377344 unmapped: 54362112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:21.961779+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 417 ms_handle_reset con 0x56036ea7c800 session 0x5603709ae1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184377344 unmapped: 54362112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 417 heartbeat osd_stat(store_statfs(0x4f6847000/0x0/0x4ffc00000, data 0x2d4a5a6/0x2ed6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,2])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:22.961908+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603711c1400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 417 ms_handle_reset con 0x56037094f400 session 0x5603726f6b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184377344 unmapped: 54362112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 417 ms_handle_reset con 0x5603711c1400 session 0x56036eeb0b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:23.962053+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184377344 unmapped: 54362112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3247731 data_alloc: 234881024 data_used: 21942272
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:24.962203+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184377344 unmapped: 54362112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 418 heartbeat osd_stat(store_statfs(0x4f6847000/0x0/0x4ffc00000, data 0x2d4a5b6/0x2ed7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 418 ms_handle_reset con 0x56036ea7c800 session 0x560373ceb680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 418 ms_handle_reset con 0x56037094f400 session 0x560372d91c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:25.962352+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184418304 unmapped: 54321152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 3.720027685s of 10.172929764s, submitted: 87
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 418 handle_osd_map epochs [419,419], i have 419, src has [1,419]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:26.962544+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184434688 unmapped: 54304768 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d25800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 419 ms_handle_reset con 0x560370d25800 session 0x560370adeb40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 419 ms_handle_reset con 0x56036ea7d400 session 0x56036e9d2d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:27.962697+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 419 heartbeat osd_stat(store_statfs(0x4f6841000/0x0/0x4ffc00000, data 0x2c98de4/0x2edc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 419 ms_handle_reset con 0x5603709c5000 session 0x5603719c03c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 419 handle_osd_map epochs [420,420], i have 420, src has [1,420]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184459264 unmapped: 54280192 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:28.962945+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184459264 unmapped: 54280192 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3255954 data_alloc: 234881024 data_used: 21966848
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:29.963090+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 420 ms_handle_reset con 0x560371970400 session 0x56036fc68b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184475648 unmapped: 54263808 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:30.963296+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184475648 unmapped: 54263808 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 420 ms_handle_reset con 0x56036fd07800 session 0x56036fcbf860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:31.963458+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 420 heartbeat osd_stat(store_statfs(0x4f683f000/0x0/0x4ffc00000, data 0x2c9a96d/0x2ede000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184483840 unmapped: 54255616 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:32.963629+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 421 heartbeat osd_stat(store_statfs(0x4f683c000/0x0/0x4ffc00000, data 0x2c9c3d0/0x2ee1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [2])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184500224 unmapped: 54239232 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:33.963865+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184500224 unmapped: 54239232 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3250088 data_alloc: 234881024 data_used: 21856256
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:34.964133+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184500224 unmapped: 54239232 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:35.964301+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184500224 unmapped: 54239232 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 421 heartbeat osd_stat(store_statfs(0x4f6861000/0x0/0x4ffc00000, data 0x2c783ad/0x2ebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:36.964530+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 421 ms_handle_reset con 0x5603709c5000 session 0x560372d90b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184500224 unmapped: 54239232 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.898504257s of 10.858474731s, submitted: 56
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 421 heartbeat osd_stat(store_statfs(0x4f6861000/0x0/0x4ffc00000, data 0x2c783ad/0x2ebc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:37.964717+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184516608 unmapped: 54222848 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:38.964968+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 422 ms_handle_reset con 0x56036ea7d400 session 0x560372d910e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184541184 unmapped: 54198272 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3253549 data_alloc: 234881024 data_used: 21864448
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Nov 22 04:25:22 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3936043561' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:39.965109+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 422 heartbeat osd_stat(store_statfs(0x4f685e000/0x0/0x4ffc00000, data 0x2c79f2a/0x2ebf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 185589760 unmapped: 53149696 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:40.965285+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 185589760 unmapped: 53149696 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 423 ms_handle_reset con 0x56036ea7c800 session 0x56036f887e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 423 ms_handle_reset con 0x56036ea7c800 session 0x56036eeb10e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:41.965494+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 423 ms_handle_reset con 0x56036ea7d400 session 0x5603726e81e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176447488 unmapped: 62291968 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 423 heartbeat osd_stat(store_statfs(0x4f685c000/0x0/0x4ffc00000, data 0x2c7baa7/0x2ec2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 423 ms_handle_reset con 0x5603709c5000 session 0x5603726e85a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:42.965750+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 424 ms_handle_reset con 0x56036fd07800 session 0x56036f8861e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176398336 unmapped: 62341120 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:43.965886+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176398336 unmapped: 62341120 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037094f400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004042 data_alloc: 218103808 data_used: 8671232
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 424 ms_handle_reset con 0x56037094f400 session 0x56036fb76780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:44.966020+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176406528 unmapped: 62332928 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:45.966161+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 425 ms_handle_reset con 0x56036ea7c800 session 0x56036fca2b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176414720 unmapped: 62324736 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:46.966335+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 426 ms_handle_reset con 0x56036ea7d400 session 0x56036fcbf860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 426 ms_handle_reset con 0x56036fd07800 session 0x560370897a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 426 ms_handle_reset con 0x560371970400 session 0x5603708ce960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176422912 unmapped: 62316544 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:47.966470+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 426 heartbeat osd_stat(store_statfs(0x4f7dc1000/0x0/0x4ffc00000, data 0x170e310/0x195b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176422912 unmapped: 62316544 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.940813065s of 11.166750908s, submitted: 97
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d25800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:48.966601+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 427 ms_handle_reset con 0x560370d25800 session 0x56036ef33a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176439296 unmapped: 62300160 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 427 ms_handle_reset con 0x5603709c5000 session 0x5603726e8f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 427 ms_handle_reset con 0x56036ea7c800 session 0x560372e781e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 427 ms_handle_reset con 0x56036ea7d400 session 0x56036ef32780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013913 data_alloc: 218103808 data_used: 8687616
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:49.966745+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 427 ms_handle_reset con 0x56036fd07800 session 0x5603719881e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176447488 unmapped: 62291968 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:50.966879+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 427 heartbeat osd_stat(store_statfs(0x4f7dc3000/0x0/0x4ffc00000, data 0x170fb21/0x195a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176447488 unmapped: 62291968 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:51.967049+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176455680 unmapped: 62283776 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 428 ms_handle_reset con 0x5603710f8c00 session 0x56036ef234a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 429 heartbeat osd_stat(store_statfs(0x4f7dba000/0x0/0x4ffc00000, data 0x17131e3/0x1962000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:52.967213+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 429 ms_handle_reset con 0x56036ea7c800 session 0x56036efde780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176463872 unmapped: 62275584 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:53.967348+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 429 handle_osd_map epochs [430,430], i have 430, src has [1,430]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 430 ms_handle_reset con 0x56036ea7d400 session 0x5603719c03c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176455680 unmapped: 62283776 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 430 ms_handle_reset con 0x560371970400 session 0x56037199d680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029510 data_alloc: 218103808 data_used: 8695808
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:54.967479+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176455680 unmapped: 62283776 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 430 heartbeat osd_stat(store_statfs(0x4f7db8000/0x0/0x4ffc00000, data 0x1714d98/0x1965000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 430 ms_handle_reset con 0x56036fd07800 session 0x56036ef33680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:55.967692+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 430 ms_handle_reset con 0x5603709c5000 session 0x560371943c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176455680 unmapped: 62283776 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:56.967863+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176447488 unmapped: 62291968 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 432 ms_handle_reset con 0x56036ea7c800 session 0x560372e79860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:57.968052+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 432 heartbeat osd_stat(store_statfs(0x4f7db0000/0x0/0x4ffc00000, data 0x1718414/0x196c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176455680 unmapped: 62283776 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.589333534s of 10.016289711s, submitted: 151
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 432 ms_handle_reset con 0x560371970400 session 0x5603726f6b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:58.968193+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176463872 unmapped: 62275584 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 432 ms_handle_reset con 0x56036ea7d400 session 0x56036fca4b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3039384 data_alloc: 218103808 data_used: 8638464
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:06:59.968330+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371968400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 433 ms_handle_reset con 0x560371968400 session 0x5603709ae1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176455680 unmapped: 62283776 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:00.968724+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 433 heartbeat osd_stat(store_statfs(0x4f7dac000/0x0/0x4ffc00000, data 0x171a01f/0x1971000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 434 ms_handle_reset con 0x5603710f8c00 session 0x56036fd35c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 434 ms_handle_reset con 0x56036fd07800 session 0x56036eeb12c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176463872 unmapped: 62275584 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:01.968905+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176463872 unmapped: 62275584 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:02.969062+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176463872 unmapped: 62275584 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:03.969217+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 435 ms_handle_reset con 0x56036ea7d400 session 0x5603719c1860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176463872 unmapped: 62275584 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053231 data_alloc: 218103808 data_used: 8654848
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:04.969411+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176488448 unmapped: 62251008 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:05.969664+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371968400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 436 ms_handle_reset con 0x560371970400 session 0x5603719430e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176504832 unmapped: 62234624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 436 ms_handle_reset con 0x56036ea7c800 session 0x560373340000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 437 heartbeat osd_stat(store_statfs(0x4f7da3000/0x0/0x4ffc00000, data 0x171f35a/0x197a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:06.969818+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176504832 unmapped: 62234624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 438 ms_handle_reset con 0x560371968400 session 0x560372528b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:07.969978+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176504832 unmapped: 62234624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.837962151s of 10.057971001s, submitted: 77
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 438 ms_handle_reset con 0x56036ea7c800 session 0x560372d90000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:08.970219+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 438 heartbeat osd_stat(store_statfs(0x4f7d9b000/0x0/0x4ffc00000, data 0x172293c/0x1980000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176504832 unmapped: 62234624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3065650 data_alloc: 218103808 data_used: 8667136
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:09.970379+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176455680 unmapped: 62283776 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:10.970650+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 439 ms_handle_reset con 0x56036fd07800 session 0x5603733410e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176455680 unmapped: 62283776 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:11.970803+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 440 ms_handle_reset con 0x560370d70800 session 0x5603708970e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176472064 unmapped: 62267392 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:12.970977+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 440 ms_handle_reset con 0x560371970400 session 0x56036ef3da40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 440 handle_osd_map epochs [440,441], i have 440, src has [1,441]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176472064 unmapped: 62267392 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f756800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 441 ms_handle_reset con 0x56036f756800 session 0x5603709af860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 441 heartbeat osd_stat(store_statfs(0x4f7d96000/0x0/0x4ffc00000, data 0x172615c/0x1987000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:13.971210+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 441 ms_handle_reset con 0x56036ea7d400 session 0x5603708cf860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176496640 unmapped: 62242816 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 441 ms_handle_reset con 0x56036ea7c800 session 0x5603719c01e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3079847 data_alloc: 218103808 data_used: 8683520
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:14.971484+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 441 heartbeat osd_stat(store_statfs(0x4f7d90000/0x0/0x4ffc00000, data 0x1727d67/0x198c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176496640 unmapped: 62242816 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f756800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:15.971651+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 441 ms_handle_reset con 0x560370d70800 session 0x56036fca43c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 441 ms_handle_reset con 0x560371970c00 session 0x5603725292c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176504832 unmapped: 62234624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 441 handle_osd_map epochs [441,442], i have 441, src has [1,442]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:16.971809+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 442 ms_handle_reset con 0x560371970400 session 0x56036ef223c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 442 handle_osd_map epochs [443,443], i have 442, src has [1,443]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 443 ms_handle_reset con 0x56036fd07800 session 0x56036ef521e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176504832 unmapped: 62234624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:17.972000+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176513024 unmapped: 62226432 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 443 heartbeat osd_stat(store_statfs(0x4f7d8c000/0x0/0x4ffc00000, data 0x172b525/0x1992000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 443 ms_handle_reset con 0x56036f756800 session 0x56036f814f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.001989365s of 10.095813751s, submitted: 76
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:18.972196+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 443 ms_handle_reset con 0x56036ea7c800 session 0x560373ceab40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d70800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 444 ms_handle_reset con 0x56036ea7d400 session 0x56036fd35a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176513024 unmapped: 62226432 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3090077 data_alloc: 218103808 data_used: 8699904
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:19.972524+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 444 ms_handle_reset con 0x560370d70800 session 0x56036fd35e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176513024 unmapped: 62226432 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 444 heartbeat osd_stat(store_statfs(0x4f7d89000/0x0/0x4ffc00000, data 0x172d0b0/0x1994000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:20.973029+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7c800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176529408 unmapped: 62210048 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:21.973184+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f756800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 444 ms_handle_reset con 0x56036f756800 session 0x560373340000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 62193664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:22.973372+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176578560 unmapped: 62160896 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 445 ms_handle_reset con 0x56036fd07800 session 0x5603719c1860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:23.973543+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 176578560 unmapped: 62160896 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099834 data_alloc: 218103808 data_used: 8720384
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:24.973746+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177668096 unmapped: 61071360 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 446 heartbeat osd_stat(store_statfs(0x4f7d84000/0x0/0x4ffc00000, data 0x172eec3/0x199a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,1,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 446 ms_handle_reset con 0x560371970c00 session 0x5603719c14a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:25.973902+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 446 ms_handle_reset con 0x56036ea7d400 session 0x56036f8872c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 446 ms_handle_reset con 0x56036ea7c800 session 0x560371943a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177668096 unmapped: 61071360 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:26.974096+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 446 handle_osd_map epochs [446,447], i have 446, src has [1,447]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177668096 unmapped: 61071360 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 447 heartbeat osd_stat(store_statfs(0x4f7d81000/0x0/0x4ffc00000, data 0x1730a68/0x199c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:27.974283+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177668096 unmapped: 61071360 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:28.974571+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036f756800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.147115231s of 10.384723663s, submitted: 111
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 447 handle_osd_map epochs [447,448], i have 447, src has [1,448]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 447 handle_osd_map epochs [448,448], i have 448, src has [1,448]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177668096 unmapped: 61071360 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3109410 data_alloc: 218103808 data_used: 8732672
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:29.974873+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177668096 unmapped: 61071360 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 448 ms_handle_reset con 0x56036fd07800 session 0x56036ef33a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:30.975068+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 61063168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:31.975274+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 61063168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:32.975395+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 449 heartbeat osd_stat(store_statfs(0x4f7d77000/0x0/0x4ffc00000, data 0x1735b8f/0x19a6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 61063168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:33.975567+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 61063168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:34.975764+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115356 data_alloc: 218103808 data_used: 8744960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 449 handle_osd_map epochs [449,450], i have 449, src has [1,450]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 449 handle_osd_map epochs [450,450], i have 450, src has [1,450]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177676288 unmapped: 61063168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:35.975903+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177692672 unmapped: 61046784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 450 ms_handle_reset con 0x5603710f8000 session 0x56036e9d3680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:36.976071+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 450 heartbeat osd_stat(store_statfs(0x4f7d75000/0x0/0x4ffc00000, data 0x173777a/0x19a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177692672 unmapped: 61046784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:37.976256+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 450 heartbeat osd_stat(store_statfs(0x4f7d75000/0x0/0x4ffc00000, data 0x173777a/0x19a9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177692672 unmapped: 61046784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:38.976363+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 451 handle_osd_map epochs [451,452], i have 451, src has [1,452]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 0.597114265s of 10.076836586s, submitted: 62
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 451 handle_osd_map epochs [452,452], i have 452, src has [1,452]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 61030400 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:39.976580+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3125577 data_alloc: 218103808 data_used: 8753152
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 61030400 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 452 ms_handle_reset con 0x56036ea7d400 session 0x560373ceb2c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 452 heartbeat osd_stat(store_statfs(0x4f7d6d000/0x0/0x4ffc00000, data 0x173ad5a/0x19af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:40.976844+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 452 ms_handle_reset con 0x560371970c00 session 0x560370897a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 61030400 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:41.977020+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 452 ms_handle_reset con 0x56036f756800 session 0x56036eeb14a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 61030400 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:42.977203+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 61030400 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:43.977369+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 452 heartbeat osd_stat(store_statfs(0x4f7d6f000/0x0/0x4ffc00000, data 0x173ad5a/0x19af000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 452 handle_osd_map epochs [452,453], i have 452, src has [1,453]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 61030400 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:44.978043+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3127641 data_alloc: 218103808 data_used: 8757248
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd07800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 61030400 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 453 ms_handle_reset con 0x5603710f8000 session 0x5603726e81e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 453 heartbeat osd_stat(store_statfs(0x4f7d6c000/0x0/0x4ffc00000, data 0x173c92b/0x19b2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:45.978212+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 61030400 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:46.978395+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 454 ms_handle_reset con 0x560371970c00 session 0x560372d910e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 454 ms_handle_reset con 0x56036ea7d400 session 0x560370896000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177766400 unmapped: 60973056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 455 ms_handle_reset con 0x5603709c5400 session 0x560373cea780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:47.978608+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 455 ms_handle_reset con 0x56036fd07800 session 0x560371943680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177766400 unmapped: 60973056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:48.978780+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.793768406s of 10.013147354s, submitted: 88
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177766400 unmapped: 60973056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 456 heartbeat osd_stat(store_statfs(0x4f7d63000/0x0/0x4ffc00000, data 0x1741b86/0x19b9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:49.978970+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3137392 data_alloc: 218103808 data_used: 8769536
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 177758208 unmapped: 60981248 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 457 ms_handle_reset con 0x56036fd06800 session 0x56036fca2b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 457 ms_handle_reset con 0x5603709c5400 session 0x56036eeb1e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:50.979174+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178806784 unmapped: 59932672 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:51.979380+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371972000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178806784 unmapped: 59932672 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:52.979586+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 459 ms_handle_reset con 0x560371970c00 session 0x56036ef3fa40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178831360 unmapped: 59908096 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:53.979745+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178831360 unmapped: 59908096 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:54.980073+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 459 ms_handle_reset con 0x5603710f8000 session 0x560372528960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3145633 data_alloc: 218103808 data_used: 8777728
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 459 heartbeat osd_stat(store_statfs(0x4f7d5c000/0x0/0x4ffc00000, data 0x1746df9/0x19c2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178831360 unmapped: 59908096 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:55.980216+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 460 ms_handle_reset con 0x560371972000 session 0x560372e790e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178831360 unmapped: 59908096 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 460 ms_handle_reset con 0x56036ea7d400 session 0x5603708ce000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:56.980550+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178831360 unmapped: 59908096 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:57.980920+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178831360 unmapped: 59908096 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:58.981109+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 460 heartbeat osd_stat(store_statfs(0x4f7d59000/0x0/0x4ffc00000, data 0x1748992/0x19c5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.302993774s of 10.281996727s, submitted: 102
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178839552 unmapped: 59899904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 461 ms_handle_reset con 0x56036fd06800 session 0x56036ef3c1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:07:59.981254+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3151775 data_alloc: 218103808 data_used: 8794112
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178839552 unmapped: 59899904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:00.981498+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178839552 unmapped: 59899904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:01.981675+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 461 handle_osd_map epochs [461,462], i have 461, src has [1,462]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 461 handle_osd_map epochs [462,462], i have 462, src has [1,462]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178847744 unmapped: 59891712 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:02.981851+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 462 ms_handle_reset con 0x5603709c5400 session 0x560371a012c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178855936 unmapped: 59883520 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:03.982009+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f7d53000/0x0/0x4ffc00000, data 0x174c188/0x19cb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178864128 unmapped: 59875328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:04.982272+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3155612 data_alloc: 218103808 data_used: 8810496
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178864128 unmapped: 59875328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:05.982504+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f7d54000/0x0/0x4ffc00000, data 0x174c126/0x19ca000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178864128 unmapped: 59875328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:06.982708+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178864128 unmapped: 59875328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:07.982870+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178864128 unmapped: 59875328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:08.983037+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.920635700s of 10.076663017s, submitted: 102
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 59867136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 464 heartbeat osd_stat(store_statfs(0x4f7d52000/0x0/0x4ffc00000, data 0x174dcf7/0x19cc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:09.983215+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3161442 data_alloc: 218103808 data_used: 8822784
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 59867136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 464 ms_handle_reset con 0x5603710f8000 session 0x5603708ba000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:10.983512+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 59867136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:11.983675+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 59867136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 464 heartbeat osd_stat(store_statfs(0x4f7d4e000/0x0/0x4ffc00000, data 0x174f786/0x19cf000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:12.983808+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 465 heartbeat osd_stat(store_statfs(0x4f7d4a000/0x0/0x4ffc00000, data 0x1751205/0x19d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x64df9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 59867136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:13.983933+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 59867136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:14.984090+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3163896 data_alloc: 218103808 data_used: 8822784
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 465 heartbeat osd_stat(store_statfs(0x4f6bab000/0x0/0x4ffc00000, data 0x1751205/0x19d2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 59867136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:15.984225+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 59867136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:16.984574+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 466 ms_handle_reset con 0x560371970c00 session 0x56036ee2f2c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 466 handle_osd_map epochs [466,467], i have 466, src has [1,467]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 59867136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:17.984716+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 467 heartbeat osd_stat(store_statfs(0x4f6ba6000/0x0/0x4ffc00000, data 0x17547f3/0x19d6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178872320 unmapped: 59867136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:18.984877+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 467 handle_osd_map epochs [467,468], i have 467, src has [1,468]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178888704 unmapped: 59850752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:19.985029+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3169602 data_alloc: 218103808 data_used: 8814592
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178888704 unmapped: 59850752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f6ba5000/0x0/0x4ffc00000, data 0x175631c/0x19d7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.658248901s of 11.290443420s, submitted: 123
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:20.985202+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178888704 unmapped: 59850752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:21.985352+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178888704 unmapped: 59850752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:22.985521+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 470 heartbeat osd_stat(store_statfs(0x4f6ba0000/0x0/0x4ffc00000, data 0x1759a94/0x19dc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 470 handle_osd_map epochs [471,471], i have 471, src has [1,471]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178888704 unmapped: 59850752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 471 handle_osd_map epochs [471,472], i have 471, src has [1,472]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 471 handle_osd_map epochs [472,472], i have 472, src has [1,472]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:23.985688+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 472 heartbeat osd_stat(store_statfs(0x4f6b9b000/0x0/0x4ffc00000, data 0x175d0f4/0x19e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 472 ms_handle_reset con 0x560371970c00 session 0x56036f0d9a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 472 heartbeat osd_stat(store_statfs(0x4f6b9b000/0x0/0x4ffc00000, data 0x175d0f4/0x19e2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178888704 unmapped: 59850752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:24.985852+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3180181 data_alloc: 218103808 data_used: 8818688
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178905088 unmapped: 59834368 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:25.986001+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178913280 unmapped: 59826176 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 472 ms_handle_reset con 0x56036ea7d400 session 0x5603725294a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 472 heartbeat osd_stat(store_statfs(0x4f6b9c000/0x0/0x4ffc00000, data 0x175d0e4/0x19e1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:26.986091+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 472 ms_handle_reset con 0x56036fd06800 session 0x56036f8154a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178913280 unmapped: 59826176 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:27.986269+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178921472 unmapped: 59817984 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:28.986454+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178970624 unmapped: 59768832 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:29.986601+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 474 ms_handle_reset con 0x5603709c5400 session 0x5603709ae960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371ea5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 474 ms_handle_reset con 0x5603710f8000 session 0x56036e9d2b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3186639 data_alloc: 218103808 data_used: 8822784
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 474 heartbeat osd_stat(store_statfs(0x4f6b95000/0x0/0x4ffc00000, data 0x1760764/0x19e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 59760640 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.505320549s of 10.063724518s, submitted: 132
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:30.986833+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 474 ms_handle_reset con 0x56036ea7d400 session 0x560372e78b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 59760640 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:31.987018+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 474 heartbeat osd_stat(store_statfs(0x4f6b98000/0x0/0x4ffc00000, data 0x1760754/0x19e6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [0,0,0,1,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 178987008 unmapped: 59752448 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 475 ms_handle_reset con 0x560371ea5800 session 0x560373341680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:32.987246+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179011584 unmapped: 59727872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:33.987418+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179019776 unmapped: 59719680 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 475 ms_handle_reset con 0x56036fd06800 session 0x56037199c1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:34.987602+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187927 data_alloc: 218103808 data_used: 8830976
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179019776 unmapped: 59719680 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:35.987795+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179019776 unmapped: 59719680 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:36.987970+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 475 heartbeat osd_stat(store_statfs(0x4f6b95000/0x0/0x4ffc00000, data 0x1762191/0x19e8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 475 handle_osd_map epochs [476,476], i have 476, src has [1,476]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 476 ms_handle_reset con 0x5603709c5400 session 0x56037088d860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179044352 unmapped: 59695104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 476 handle_osd_map epochs [477,477], i have 477, src has [1,477]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:37.988091+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179077120 unmapped: 59662336 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:38.988206+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 477 ms_handle_reset con 0x5603710f8000 session 0x56037199c000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179077120 unmapped: 59662336 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:39.988367+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3192435 data_alloc: 218103808 data_used: 8835072
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179077120 unmapped: 59662336 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 477 heartbeat osd_stat(store_statfs(0x4f6b91000/0x0/0x4ffc00000, data 0x17657e1/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:40.988595+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 477 heartbeat osd_stat(store_statfs(0x4f6b91000/0x0/0x4ffc00000, data 0x17657e1/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179077120 unmapped: 59662336 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:41.988746+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179077120 unmapped: 59662336 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:42.988928+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.298949242s of 12.576763153s, submitted: 88
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 477 ms_handle_reset con 0x5603710f8000 session 0x56036fc68b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179077120 unmapped: 59662336 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:43.989081+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179077120 unmapped: 59662336 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 477 ms_handle_reset con 0x56036ea7d400 session 0x560372e781e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:44.989204+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 477 ms_handle_reset con 0x56036fd06800 session 0x560371a00960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3194249 data_alloc: 218103808 data_used: 8835072
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 477 heartbeat osd_stat(store_statfs(0x4f6b90000/0x0/0x4ffc00000, data 0x17657e1/0x19ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179093504 unmapped: 59645952 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:45.989372+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179093504 unmapped: 59645952 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:46.989564+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 477 ms_handle_reset con 0x5603709c5400 session 0x5603708baf00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179109888 unmapped: 59629568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:47.989759+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371ea5800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 478 ms_handle_reset con 0x560371ea5800 session 0x56036ef525a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-mon[75011]: from='client.19401 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:22 compute-0 ceph-mon[75011]: from='client.19403 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Nov 22 04:25:22 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/405739760' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Nov 22 04:25:22 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1446708326' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179126272 unmapped: 59613184 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:48.989902+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179150848 unmapped: 59588608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 478 handle_osd_map epochs [478,479], i have 478, src has [1,479]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 478 ms_handle_reset con 0x56036ea7d400 session 0x560371a00000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:49.990030+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3202974 data_alloc: 218103808 data_used: 8843264
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 479 ms_handle_reset con 0x56036fd06800 session 0x560371989a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179175424 unmapped: 59564032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 479 heartbeat osd_stat(store_statfs(0x4f6b89000/0x0/0x4ffc00000, data 0x1768dc1/0x19f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:50.990227+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179175424 unmapped: 59564032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:51.990378+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 479 ms_handle_reset con 0x5603709c5400 session 0x5603719890e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179191808 unmapped: 59547648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:52.990501+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.916097164s of 10.154587746s, submitted: 94
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179200000 unmapped: 59539456 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:53.990643+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 479 ms_handle_reset con 0x5603710f8000 session 0x56036f8bf4a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179200000 unmapped: 59539456 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:54.990837+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3204168 data_alloc: 218103808 data_used: 8843264
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179200000 unmapped: 59539456 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:55.991041+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 479 ms_handle_reset con 0x560371970c00 session 0x5603733412c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 479 heartbeat osd_stat(store_statfs(0x4f6b8b000/0x0/0x4ffc00000, data 0x1768dc1/0x19f3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179200000 unmapped: 59539456 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:56.991187+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 479 ms_handle_reset con 0x560371970c00 session 0x560370adfa40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 479 ms_handle_reset con 0x56036ea7d400 session 0x5603719432c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179216384 unmapped: 59523072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 479 ms_handle_reset con 0x56036fd06800 session 0x56036fd341e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:57.991309+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179232768 unmapped: 59506688 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 479 ms_handle_reset con 0x5603709c5400 session 0x560372d91680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:58.991622+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 479 handle_osd_map epochs [479,480], i have 479, src has [1,480]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179240960 unmapped: 59498496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:08:59.991814+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3210419 data_alloc: 218103808 data_used: 8851456
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f6b87000/0x0/0x4ffc00000, data 0x176a992/0x19f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179240960 unmapped: 59498496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:00.992039+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179240960 unmapped: 59498496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:01.992201+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 480 ms_handle_reset con 0x5603710f8000 session 0x5603708ba780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179240960 unmapped: 59498496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:02.992378+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179240960 unmapped: 59498496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:03.992589+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 480 heartbeat osd_stat(store_statfs(0x4f6b88000/0x0/0x4ffc00000, data 0x176a992/0x19f6000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179240960 unmapped: 59498496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:04.992727+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3209539 data_alloc: 218103808 data_used: 8851456
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179240960 unmapped: 59498496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:05.992916+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179240960 unmapped: 59498496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:06.993160+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179240960 unmapped: 59498496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 480 handle_osd_map epochs [480,481], i have 480, src has [1,481]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.920978546s of 14.320011139s, submitted: 49
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:07.993290+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 heartbeat osd_stat(store_statfs(0x4f6b84000/0x0/0x4ffc00000, data 0x176c3f5/0x19f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179257344 unmapped: 59482112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:08.993483+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179265536 unmapped: 59473920 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:09.993646+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3214033 data_alloc: 218103808 data_used: 8867840
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179265536 unmapped: 59473920 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:10.993922+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179265536 unmapped: 59473920 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 heartbeat osd_stat(store_statfs(0x4f6b84000/0x0/0x4ffc00000, data 0x176c3f5/0x19f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:11.994108+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179265536 unmapped: 59473920 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:12.994299+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179265536 unmapped: 59473920 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:13.994491+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179265536 unmapped: 59473920 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:14.994668+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3214033 data_alloc: 218103808 data_used: 8867840
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 ms_handle_reset con 0x56036ea7d400 session 0x56036ef23e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179265536 unmapped: 59473920 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:15.994890+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 ms_handle_reset con 0x56036fd06800 session 0x5603726f6f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 ms_handle_reset con 0x5603709c5400 session 0x5603709afc20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 ms_handle_reset con 0x5603710f8000 session 0x5603709ae1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179273728 unmapped: 59465728 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:16.995157+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 ms_handle_reset con 0x560371970c00 session 0x560373cea3c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 ms_handle_reset con 0x56036ea7d400 session 0x56036f8be000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 heartbeat osd_stat(store_statfs(0x4f6b82000/0x0/0x4ffc00000, data 0x176c4c9/0x19fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x767f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179306496 unmapped: 59432960 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:17.995362+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.572349548s of 10.308978081s, submitted: 36
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 ms_handle_reset con 0x56036fd06800 session 0x5603719c1c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 ms_handle_reset con 0x5603709c5400 session 0x5603708ef680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179331072 unmapped: 59408384 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:18.995530+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 ms_handle_reset con 0x5603710f8000 session 0x5603726e8f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179339264 unmapped: 59400192 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:19.995713+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3218406 data_alloc: 218103808 data_used: 8867840
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179339264 unmapped: 59400192 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:20.995866+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 ms_handle_reset con 0x560371970800 session 0x5603726f70e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 heartbeat osd_stat(store_statfs(0x4f6774000/0x0/0x4ffc00000, data 0x176c457/0x19fa000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 59375616 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:21.996018+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 ms_handle_reset con 0x560371970800 session 0x56036fcbe960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 59375616 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:22.996192+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 59375616 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:23.996334+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 59375616 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:24.996507+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3217711 data_alloc: 218103808 data_used: 8867840
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 heartbeat osd_stat(store_statfs(0x4f6775000/0x0/0x4ffc00000, data 0x176c3f5/0x19f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 59375616 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:25.996643+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 59375616 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:26.996807+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 59375616 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:27.996997+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.837226868s of 10.081180573s, submitted: 51
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 ms_handle_reset con 0x56036ea7d400 session 0x5603719c0780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179363840 unmapped: 59375616 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:28.997129+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 ms_handle_reset con 0x56036fd06800 session 0x56036efde780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179372032 unmapped: 59367424 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:29.997273+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f6772000/0x0/0x4ffc00000, data 0x176c4c9/0x19fc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 482 ms_handle_reset con 0x5603709c5400 session 0x56037199d680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3228740 data_alloc: 218103808 data_used: 8880128
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179388416 unmapped: 59351040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:30.997479+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371135800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 482 ms_handle_reset con 0x560371135800 session 0x56036ef3e000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179396608 unmapped: 59342848 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:31.997620+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 482 ms_handle_reset con 0x56036ea7d400 session 0x560371988780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 ms_handle_reset con 0x5603710f8000 session 0x5603719c1a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 ms_handle_reset con 0x56036fd06800 session 0x5603726e85a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179412992 unmapped: 59326464 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:32.997755+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 ms_handle_reset con 0x5603709c5400 session 0x5603708bad20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371135800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179421184 unmapped: 59318272 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:33.997917+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 ms_handle_reset con 0x560371135800 session 0x560371942b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371135800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 ms_handle_reset con 0x560371135800 session 0x56036ef3f0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 ms_handle_reset con 0x56036ea7d400 session 0x56036ef22d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179445760 unmapped: 59293696 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:34.998080+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 ms_handle_reset con 0x56036fd06800 session 0x56036fd35680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3238031 data_alloc: 218103808 data_used: 8892416
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f6767000/0x0/0x4ffc00000, data 0x176fcf9/0x1a06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 ms_handle_reset con 0x5603709c5400 session 0x560371a010e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 ms_handle_reset con 0x5603710f8000 session 0x5603708bbe00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179445760 unmapped: 59293696 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:35.998252+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 ms_handle_reset con 0x5603710f8000 session 0x56036ee2ef00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 ms_handle_reset con 0x56036fd06800 session 0x560372e79680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 ms_handle_reset con 0x5603709c5400 session 0x5603708ba960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371135800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179445760 unmapped: 59293696 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:36.998388+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371970800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 ms_handle_reset con 0x560371135800 session 0x560370adf4a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 483 handle_osd_map epochs [483,484], i have 483, src has [1,484]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 484 ms_handle_reset con 0x560371970800 session 0x56036fca3860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 484 ms_handle_reset con 0x56036ea7d400 session 0x5603726e90e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179453952 unmapped: 59285504 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:37.998977+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 484 ms_handle_reset con 0x56036fd06800 session 0x560372d91c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.334898949s of 10.009043694s, submitted: 111
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 484 ms_handle_reset con 0x5603709c5400 session 0x56037199de00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179453952 unmapped: 59285504 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:38.999580+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 484 ms_handle_reset con 0x5603710f8000 session 0x56036fca3860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179453952 unmapped: 59285504 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:39.999981+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236510 data_alloc: 218103808 data_used: 8896512
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f676a000/0x0/0x4ffc00000, data 0x1771784/0x1a04000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371135800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179462144 unmapped: 59277312 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:41.000299+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 484 ms_handle_reset con 0x560371135800 session 0x560372e79680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 485 ms_handle_reset con 0x56036ea7d400 session 0x5603708bbe00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:42.000623+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179478528 unmapped: 59260928 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f6767000/0x0/0x4ffc00000, data 0x17732f3/0x1a06000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:43.000875+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179478528 unmapped: 59260928 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 485 ms_handle_reset con 0x56036fd06800 session 0x56036fd35680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:44.001193+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179486720 unmapped: 59252736 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:45.001346+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179486720 unmapped: 59252736 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3238378 data_alloc: 218103808 data_used: 8904704
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:46.001535+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179486720 unmapped: 59252736 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:47.001665+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179486720 unmapped: 59252736 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f6769000/0x0/0x4ffc00000, data 0x1773291/0x1a05000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:48.001816+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179486720 unmapped: 59252736 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.024219513s of 10.422387123s, submitted: 38
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:49.002000+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179494912 unmapped: 59244544 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x5603709c5400 session 0x5603708bad20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:50.002170+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179511296 unmapped: 59228160 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 heartbeat osd_stat(store_statfs(0x4f6764000/0x0/0x4ffc00000, data 0x1774d56/0x1a09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3245808 data_alloc: 218103808 data_used: 8912896
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x5603710f8000 session 0x5603726e85a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x560370d9a800 session 0x560371988780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:51.002373+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179511296 unmapped: 59228160 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x560370d9a800 session 0x56036ef3e000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:52.002521+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179568640 unmapped: 59170816 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x56036ea7d400 session 0x56036efde780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:53.002688+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179568640 unmapped: 59170816 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x56036fd06800 session 0x5603726f70e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x5603709c5400 session 0x5603719c1c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:54.002840+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179593216 unmapped: 59146240 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:55.003039+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179593216 unmapped: 59146240 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x5603710f8000 session 0x560373cead20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 heartbeat osd_stat(store_statfs(0x4f6763000/0x0/0x4ffc00000, data 0x1774d66/0x1a0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3248218 data_alloc: 218103808 data_used: 8912896
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x5603710f8000 session 0x5603709aed20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:56.003199+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179601408 unmapped: 59138048 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:57.003351+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x56036ea7d400 session 0x5603719e3860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179601408 unmapped: 59138048 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x56036fd06800 session 0x560373cea1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:58.003526+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179609600 unmapped: 59129856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x5603709c5400 session 0x5603726f61e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x560370d9a800 session 0x56036fca4b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:09:59.003689+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179642368 unmapped: 59097088 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:00.003853+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179642368 unmapped: 59097088 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3246232 data_alloc: 218103808 data_used: 8912896
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:01.004073+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179642368 unmapped: 59097088 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 heartbeat osd_stat(store_statfs(0x4f6766000/0x0/0x4ffc00000, data 0x1774cf4/0x1a08000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:02.004223+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179642368 unmapped: 59097088 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:03.004515+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179642368 unmapped: 59097088 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:04.004698+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179642368 unmapped: 59097088 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.432947159s of 15.960399628s, submitted: 107
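
Annotation: the _kv_sync_thread line is a built-in utilization report for the thread that commits BlueStore's RocksDB transactions. Over this 15.96 s window it was idle 15.43 s and committed 107 transactions — about 3% busy at roughly 6.7 commits/s, and the earlier report (idle 10.02 s of 10.42 s, 38 submitted) reads the same way: a near-idle OSD. The arithmetic:

    window, idle, submitted = 15.960399628, 15.432947159, 107
    busy = window - idle
    print(f"busy {busy:.2f}s ({busy / window:.1%}), "
          f"{submitted / window:.1f} commits/s")
    # busy 0.53s (3.3%), 6.7 commits/s
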
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x560370d9a800 session 0x56036ef32000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:05.004843+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179675136 unmapped: 59064320 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 heartbeat osd_stat(store_statfs(0x4f6765000/0x0/0x4ffc00000, data 0x1774d56/0x1a09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3249488 data_alloc: 218103808 data_used: 8912896
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets getting new tickets!
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:06.005134+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _finish_auth 0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:06.005974+0000)
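
Annotation: this is the one place in the section where the once-per-second routine changes. _check_auth_tickets decides the service tickets are close enough to expiry to renew ("getting new tickets!"), sends the request to mon.compute-0 at v2:192.168.122.100:3300, and _finish_auth 0 reports the outcome — the 0 is an error code, so this is a successful renewal. A sketch of renewing ahead of expiry, with an assumed grace window rather than Ceph's actual threshold:

    from datetime import datetime, timedelta

    GRACE = timedelta(minutes=5)   # assumed renew-ahead window, not Ceph's value

    def should_renew_tickets(now: datetime, expiry: datetime) -> bool:
        """True once we are inside the grace window before ticket expiry."""
        return now >= expiry - GRACE

    expiry = datetime.fromisoformat("2025-11-22T04:12:00")     # hypothetical
    print(should_renew_tickets(datetime.fromisoformat("2025-11-22T04:05:00"),
                               expiry))   # False
    print(should_renew_tickets(datetime.fromisoformat("2025-11-22T04:08:30"),
                               expiry))   # True -> "getting new tickets!"
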
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179675136 unmapped: 59064320 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 heartbeat osd_stat(store_statfs(0x4f6765000/0x0/0x4ffc00000, data 0x1774d56/0x1a09000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 ms_handle_reset con 0x56036fd06800 session 0x560372e79c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:07.005294+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179683328 unmapped: 59056128 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:08.005458+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179683328 unmapped: 59056128 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 heartbeat osd_stat(store_statfs(0x4f6764000/0x0/0x4ffc00000, data 0x1774db8/0x1a0a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 487 ms_handle_reset con 0x5603710f8000 session 0x560371942000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:09.005584+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179724288 unmapped: 59015168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 488 ms_handle_reset con 0x5603709c5400 session 0x5603719881e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 488 ms_handle_reset con 0x56036ea7d400 session 0x56036eeb1a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:10.005812+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179740672 unmapped: 58998784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3260096 data_alloc: 218103808 data_used: 8933376
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:11.006056+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179740672 unmapped: 58998784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 488 handle_osd_map epochs [488,489], i have 488, src has [1,489]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: mgrc ms_handle_reset ms_handle_reset con 0x560371420c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3005905960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3005905960,v1:192.168.122.100:6801/3005905960]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: get_auth_request con 0x560371135800 auth_method 0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: mgrc handle_mgr_configure stats_period=5
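
Annotation: the mgrc lines show the OSD's mgr client riding out a ceph-mgr restart or dropped connection: the old session to v2:192.168.122.100:6800 is reset, a new one is started against the mgr's v2 and v1 addresses, and the mgr answers with handle_mgr_configure stats_period=5, instructing the OSD to report its statistics every 5 seconds. A minimal reconnect-then-reconfigure shape, folding a stream of events into the active report cadence — event names mirror the log lines, everything else is invented:

    def mgrc_events(events):
        """Fold mgr-client events into the currently active stats period."""
        stats_period = None
        for ev, arg in events:
            if ev == "ms_handle_reset":
                stats_period = None        # session gone: stop reporting
            elif ev == "handle_mgr_configure":
                stats_period = arg         # mgr dictates the report cadence
        return stats_period

    log = [("ms_handle_reset", None), ("reconnect", None),
           ("handle_mgr_configure", 5)]
    print(mgrc_events(log))   # 5 -> send stats every 5 seconds
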
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 489 ms_handle_reset con 0x56036ea7d400 session 0x56036f8be1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:12.006205+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179855360 unmapped: 58884096 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 490 ms_handle_reset con 0x56036fd06800 session 0x560370ade1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 490 ms_handle_reset con 0x5603709c5400 session 0x56037199d680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:13.006357+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179912704 unmapped: 58826752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f6759000/0x0/0x4ffc00000, data 0x177bbac/0x1a14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:14.006554+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179912704 unmapped: 58826752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:15.006790+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179912704 unmapped: 58826752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3263596 data_alloc: 218103808 data_used: 8937472
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f6759000/0x0/0x4ffc00000, data 0x177bbac/0x1a14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:16.006928+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179912704 unmapped: 58826752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:17.007074+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179912704 unmapped: 58826752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 179912704 unmapped: 58826752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:18.185692+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f6759000/0x0/0x4ffc00000, data 0x177bbac/0x1a14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 490 heartbeat osd_stat(store_statfs(0x4f6759000/0x0/0x4ffc00000, data 0x177bbac/0x1a14000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 490 handle_osd_map epochs [491,491], i have 491, src has [1,491]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.248514175s of 13.931501389s, submitted: 118
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 491 ms_handle_reset con 0x560370d9a800 session 0x560372d905a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 180985856 unmapped: 57753600 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:19.185885+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 180985856 unmapped: 57753600 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:20.186061+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3268588 data_alloc: 218103808 data_used: 8937472
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 491 heartbeat osd_stat(store_statfs(0x4f6756000/0x0/0x4ffc00000, data 0x177d647/0x1a17000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 180985856 unmapped: 57753600 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:21.186284+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 491 ms_handle_reset con 0x56036fd07400 session 0x5603726e9a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036ea7d400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 491 ms_handle_reset con 0x560370d19c00 session 0x5603726f7680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 180994048 unmapped: 57745408 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:22.186548+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 492 ms_handle_reset con 0x56036fd06800 session 0x56036fb76d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181002240 unmapped: 57737216 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:23.186747+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181010432 unmapped: 57729024 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:24.186898+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f674d000/0x0/0x4ffc00000, data 0x1780e05/0x1a1f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 493 ms_handle_reset con 0x5603709c5400 session 0x5603726f70e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181018624 unmapped: 57720832 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:25.187059+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3277792 data_alloc: 218103808 data_used: 8949760
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 493 ms_handle_reset con 0x5603710f8000 session 0x5603726e83c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181018624 unmapped: 57720832 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:26.187217+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181018624 unmapped: 57720832 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:27.187401+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 493 ms_handle_reset con 0x560370d9a800 session 0x56036eeb0960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181026816 unmapped: 57712640 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:28.187545+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 493 heartbeat osd_stat(store_statfs(0x4f674e000/0x0/0x4ffc00000, data 0x1780e15/0x1a20000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181026816 unmapped: 57712640 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:29.187721+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 493 ms_handle_reset con 0x56036fd06800 session 0x560371988780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181026816 unmapped: 57712640 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:30.187881+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 493 handle_osd_map epochs [493,494], i have 493, src has [1,494]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.764204025s of 11.605713844s, submitted: 40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 494 ms_handle_reset con 0x5603709c5400 session 0x5603726e85a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3285462 data_alloc: 218103808 data_used: 8962048
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181067776 unmapped: 57671680 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:31.188078+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 494 handle_osd_map epochs [495,495], i have 494, src has [1,495]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 495 ms_handle_reset con 0x560370d9a800 session 0x56036fca4b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181067776 unmapped: 57671680 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:32.188242+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181075968 unmapped: 57663488 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:33.188368+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 495 heartbeat osd_stat(store_statfs(0x4f6745000/0x0/0x4ffc00000, data 0x178450f/0x1a26000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 495 ms_handle_reset con 0x560370d19c00 session 0x56036f8145a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 495 handle_osd_map epochs [496,496], i have 495, src has [1,496]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 496 ms_handle_reset con 0x5603710f8000 session 0x560372d90d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181092352 unmapped: 57647104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:34.188513+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181092352 unmapped: 57647104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:35.188620+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3288714 data_alloc: 218103808 data_used: 8957952
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181092352 unmapped: 57647104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 496 heartbeat osd_stat(store_statfs(0x4f6745000/0x0/0x4ffc00000, data 0x17860d0/0x1a28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:36.188755+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181092352 unmapped: 57647104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:37.188890+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 496 ms_handle_reset con 0x5603710f8000 session 0x560373ef2d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 496 handle_osd_map epochs [496,497], i have 496, src has [1,497]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 497 ms_handle_reset con 0x560370d19c00 session 0x56036ef521e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 497 heartbeat osd_stat(store_statfs(0x4f6745000/0x0/0x4ffc00000, data 0x17860d0/0x1a28000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181108736 unmapped: 57630720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:38.188987+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 497 handle_osd_map epochs [498,498], i have 497, src has [1,498]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 498 ms_handle_reset con 0x5603709c5400 session 0x560373340000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 498 ms_handle_reset con 0x560370d9a800 session 0x560372528f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 498 ms_handle_reset con 0x56036fd06800 session 0x5603726e8f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181108736 unmapped: 57630720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:39.189104+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 498 ms_handle_reset con 0x56036fd06800 session 0x5603726e8b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 181108736 unmapped: 57630720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:40.189235+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 498 handle_osd_map epochs [499,499], i have 498, src has [1,499]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.737440109s of 10.021005630s, submitted: 54
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3301101 data_alloc: 218103808 data_used: 8966144
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 499 ms_handle_reset con 0x5603709c5400 session 0x560370897a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:41.189381+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182165504 unmapped: 56573952 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:42.189565+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182165504 unmapped: 56573952 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 499 heartbeat osd_stat(store_statfs(0x4f673b000/0x0/0x4ffc00000, data 0x178b8e3/0x1a32000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:43.189689+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182165504 unmapped: 56573952 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:44.189820+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182165504 unmapped: 56573952 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 499 handle_osd_map epochs [499,500], i have 499, src has [1,500]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 499 handle_osd_map epochs [500,500], i have 500, src has [1,500]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:45.189986+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182173696 unmapped: 56565760 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3303483 data_alloc: 218103808 data_used: 8966144
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 500 ms_handle_reset con 0x560370d19c00 session 0x56036fb77e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 500 ms_handle_reset con 0x560370d9a800 session 0x56036fc68780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:46.190151+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182173696 unmapped: 56565760 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 500 heartbeat osd_stat(store_statfs(0x4f673a000/0x0/0x4ffc00000, data 0x178cfdc/0x1a34000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:47.190329+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182173696 unmapped: 56565760 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:48.190475+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182173696 unmapped: 56565760 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 500 handle_osd_map epochs [500,501], i have 500, src has [1,501]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:49.190630+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182173696 unmapped: 56565760 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 500 heartbeat osd_stat(store_statfs(0x4f6736000/0x0/0x4ffc00000, data 0x178ea5b/0x1a37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:50.190761+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182173696 unmapped: 56565760 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306219 data_alloc: 218103808 data_used: 8978432
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:51.190943+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182173696 unmapped: 56565760 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:52.191079+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182181888 unmapped: 56557568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:53.191241+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182181888 unmapped: 56557568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:54.191404+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 56549376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 500 heartbeat osd_stat(store_statfs(0x4f6736000/0x0/0x4ffc00000, data 0x178ea5b/0x1a37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:55.191648+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 56549376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306219 data_alloc: 218103808 data_used: 8978432
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:56.191842+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 56549376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 500 heartbeat osd_stat(store_statfs(0x4f6736000/0x0/0x4ffc00000, data 0x178ea5b/0x1a37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:57.192289+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 56549376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:58.192831+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 56549376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 500 heartbeat osd_stat(store_statfs(0x4f6736000/0x0/0x4ffc00000, data 0x178ea5b/0x1a37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:10:59.193125+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 56549376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:00.193654+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 56549376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306219 data_alloc: 218103808 data_used: 8978432
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:01.194135+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 56549376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:02.194587+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 56549376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 500 heartbeat osd_stat(store_statfs(0x4f6736000/0x0/0x4ffc00000, data 0x178ea5b/0x1a37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:03.194794+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 56549376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:04.195006+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 56549376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:05.195276+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 500 heartbeat osd_stat(store_statfs(0x4f6736000/0x0/0x4ffc00000, data 0x178ea5b/0x1a37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 56549376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 500 ms_handle_reset con 0x560371d5ac00 session 0x56036fcbe5a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56036fd06800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3306219 data_alloc: 218103808 data_used: 8978432
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:06.195500+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182190080 unmapped: 56549376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 18.307771683s
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 18.307771683s
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.886304855s of 26.670303345s, submitted: 86
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 18.308208466s, txc = 0x56036f777b00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:07.195672+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 56508416 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:08.195825+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182231040 unmapped: 56508416 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:09.195977+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182239232 unmapped: 56500224 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 501 heartbeat osd_stat(store_statfs(0x4f6736000/0x0/0x4ffc00000, data 0x178ea5b/0x1a37000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 501 handle_osd_map epochs [502,502], i have 501, src has [1,502]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:10.196135+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182247424 unmapped: 56492032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3309865 data_alloc: 218103808 data_used: 8978432
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:11.196561+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182272000 unmapped: 56467456 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:12.196763+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 502 heartbeat osd_stat(store_statfs(0x4f6733000/0x0/0x4ffc00000, data 0x179062c/0x1a3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182272000 unmapped: 56467456 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:13.197107+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182272000 unmapped: 56467456 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 502 handle_osd_map epochs [502,503], i have 502, src has [1,503]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 502 heartbeat osd_stat(store_statfs(0x4f6733000/0x0/0x4ffc00000, data 0x179062c/0x1a3a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:14.197353+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182280192 unmapped: 56459264 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:15.197526+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182296576 unmapped: 56442880 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3322151 data_alloc: 218103808 data_used: 8978432
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:16.197687+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182296576 unmapped: 56442880 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:17.197885+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182296576 unmapped: 56442880 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 503 heartbeat osd_stat(store_statfs(0x4f6731000/0x0/0x4ffc00000, data 0x17920ab/0x1a3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:18.198089+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182304768 unmapped: 56434688 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 503 heartbeat osd_stat(store_statfs(0x4f6731000/0x0/0x4ffc00000, data 0x17920ab/0x1a3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.126664162s of 11.983111382s, submitted: 48
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 503 heartbeat osd_stat(store_statfs(0x4f6731000/0x0/0x4ffc00000, data 0x17920ab/0x1a3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:19.198237+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 503 ms_handle_reset con 0x5603710f8000 session 0x56036eeb1680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182304768 unmapped: 56434688 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:20.198418+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182304768 unmapped: 56434688 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3321271 data_alloc: 218103808 data_used: 8978432
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:21.198681+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182304768 unmapped: 56434688 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:22.198854+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182304768 unmapped: 56434688 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 503 heartbeat osd_stat(store_statfs(0x4f6731000/0x0/0x4ffc00000, data 0x17920ab/0x1a3d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:23.199101+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182312960 unmapped: 56426496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 503 handle_osd_map epochs [504,504], i have 503, src has [1,504]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 504 handle_osd_map epochs [504,505], i have 504, src has [1,505]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:24.199265+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182337536 unmapped: 56401920 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 505 heartbeat osd_stat(store_statfs(0x4f6729000/0x0/0x4ffc00000, data 0x17956df/0x1a43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:25.199447+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182337536 unmapped: 56401920 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3328579 data_alloc: 218103808 data_used: 8990720
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:26.199606+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 56393728 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:27.199782+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182345728 unmapped: 56393728 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:28.199981+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182353920 unmapped: 56385536 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 505 heartbeat osd_stat(store_statfs(0x4f672b000/0x0/0x4ffc00000, data 0x17956df/0x1a43000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:29.200146+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182353920 unmapped: 56385536 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.803433895s of 10.520994186s, submitted: 42
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:30.200288+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182353920 unmapped: 56385536 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3327699 data_alloc: 218103808 data_used: 8990720
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:31.200655+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182353920 unmapped: 56385536 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:32.200821+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182362112 unmapped: 56377344 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 505 handle_osd_map epochs [505,506], i have 505, src has [1,506]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 505 handle_osd_map epochs [506,506], i have 506, src has [1,506]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:33.200964+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 56360960 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 506 heartbeat osd_stat(store_statfs(0x4f6727000/0x0/0x4ffc00000, data 0x1797142/0x1a46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 506 ms_handle_reset con 0x5603709c5400 session 0x56036eeb0b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 506 heartbeat osd_stat(store_statfs(0x4f6727000/0x0/0x4ffc00000, data 0x1797142/0x1a46000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:34.201130+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 56360960 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:35.201384+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 56360960 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3329630 data_alloc: 218103808 data_used: 8994816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:36.201544+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 56360960 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 506 heartbeat osd_stat(store_statfs(0x4f672a000/0x0/0x4ffc00000, data 0x179707e/0x1a44000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 506 handle_osd_map epochs [507,507], i have 506, src has [1,507]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:37.201692+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182394880 unmapped: 56344576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 507 ms_handle_reset con 0x560370d19c00 session 0x560373ef2780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:38.201765+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182394880 unmapped: 56344576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:39.201947+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 182403072 unmapped: 56336384 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.938725948s of 10.013010025s, submitted: 56
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:40.202093+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 55246848 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3333181 data_alloc: 218103808 data_used: 9015296
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 507 heartbeat osd_stat(store_statfs(0x4f6727000/0x0/0x4ffc00000, data 0x1798c4f/0x1a47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:41.202263+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 507 ms_handle_reset con 0x560370d9a800 session 0x5603708ba780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 55246848 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:42.202496+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 55246848 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:43.202651+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183492608 unmapped: 55246848 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 507 heartbeat osd_stat(store_statfs(0x4f6727000/0x0/0x4ffc00000, data 0x1798c4f/0x1a47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:44.202809+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 55238656 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:45.202931+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 55238656 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3332589 data_alloc: 218103808 data_used: 9015296
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:46.203125+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 55238656 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:47.203275+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183508992 unmapped: 55230464 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 507 heartbeat osd_stat(store_statfs(0x4f6727000/0x0/0x4ffc00000, data 0x1798c4f/0x1a47000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 507 handle_osd_map epochs [508,508], i have 507, src has [1,508]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 508 ms_handle_reset con 0x560371421800 session 0x56036eeb01e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:48.203466+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 55205888 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:49.203620+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183533568 unmapped: 55205888 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.110692024s of 10.129683495s, submitted: 25
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:50.203784+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183541760 unmapped: 55197696 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 508 handle_osd_map epochs [509,509], i have 508, src has [1,509]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345022 data_alloc: 218103808 data_used: 9031680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 509 ms_handle_reset con 0x560371421800 session 0x56036f8be1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:51.204009+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183558144 unmapped: 55181312 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:52.204118+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183558144 unmapped: 55181312 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 509 ms_handle_reset con 0x5603709c5400 session 0x56036ee2f4a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 509 heartbeat osd_stat(store_statfs(0x4f671e000/0x0/0x4ffc00000, data 0x179c2a1/0x1a4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:53.204269+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183566336 unmapped: 55173120 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 509 heartbeat osd_stat(store_statfs(0x4f6720000/0x0/0x4ffc00000, data 0x179c23f/0x1a4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 509 handle_osd_map epochs [509,510], i have 509, src has [1,510]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 509 handle_osd_map epochs [510,510], i have 510, src has [1,510]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 510 ms_handle_reset con 0x560370d19c00 session 0x560372529860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:54.204382+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183582720 unmapped: 55156736 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 510 ms_handle_reset con 0x560370d9a800 session 0x56036f886780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603710f8000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 510 ms_handle_reset con 0x5603710f8000 session 0x56037088c960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:55.204660+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183623680 unmapped: 55115776 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3345234 data_alloc: 218103808 data_used: 9039872
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 510 ms_handle_reset con 0x5603709c5400 session 0x56036fd350e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:56.204879+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183631872 unmapped: 55107584 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 510 heartbeat osd_stat(store_statfs(0x4f671d000/0x0/0x4ffc00000, data 0x179de10/0x1a51000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:57.205059+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183631872 unmapped: 55107584 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 510 handle_osd_map epochs [510,511], i have 510, src has [1,511]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 511 ms_handle_reset con 0x560370d19c00 session 0x560372d90960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:58.205208+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183656448 unmapped: 55083008 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 511 ms_handle_reset con 0x560370d9a800 session 0x560372d90780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 511 handle_osd_map epochs [511,512], i have 511, src has [1,512]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:11:59.205335+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 512 ms_handle_reset con 0x560371421800 session 0x560373ceb860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183689216 unmapped: 55050240 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373440000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 512 ms_handle_reset con 0x560373440000 session 0x56036f887a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373440000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.933547974s of 10.417177200s, submitted: 72
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:00.205476+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 512 ms_handle_reset con 0x560373440000 session 0x560371989c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183689216 unmapped: 55050240 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3355040 data_alloc: 218103808 data_used: 9048064
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 512 heartbeat osd_stat(store_statfs(0x4f6715000/0x0/0x4ffc00000, data 0x17a157a/0x1a56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:01.205741+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183689216 unmapped: 55050240 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:02.205911+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183689216 unmapped: 55050240 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 512 heartbeat osd_stat(store_statfs(0x4f6715000/0x0/0x4ffc00000, data 0x17a157a/0x1a56000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 512 handle_osd_map epochs [512,513], i have 512, src has [1,513]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:03.206168+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183713792 unmapped: 55025664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:04.206358+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183713792 unmapped: 55025664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 513 heartbeat osd_stat(store_statfs(0x4f6714000/0x0/0x4ffc00000, data 0x17a2ff9/0x1a59000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 513 handle_osd_map epochs [514,514], i have 513, src has [1,514]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 513 handle_osd_map epochs [514,514], i have 514, src has [1,514]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:05.206578+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 514 ms_handle_reset con 0x5603709c5400 session 0x5603709af0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183746560 unmapped: 54992896 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 514 handle_osd_map epochs [514,515], i have 514, src has [1,515]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3362908 data_alloc: 218103808 data_used: 9048064
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:06.206896+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183771136 unmapped: 54968320 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 515 ms_handle_reset con 0x560370d19c00 session 0x56036f8be000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:07.207102+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183771136 unmapped: 54968320 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 515 handle_osd_map epochs [515,516], i have 515, src has [1,516]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 516 heartbeat osd_stat(store_statfs(0x4f670f000/0x0/0x4ffc00000, data 0x17a6773/0x1a5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 516 ms_handle_reset con 0x560370d9a800 session 0x560372d910e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:08.207361+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183779328 unmapped: 54960128 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:09.207514+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183779328 unmapped: 54960128 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:10.207819+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183779328 unmapped: 54960128 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 516 handle_osd_map epochs [516,517], i have 516, src has [1,517]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.264097214s of 10.594307899s, submitted: 57
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370677 data_alloc: 218103808 data_used: 9068544
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:11.208220+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 517 ms_handle_reset con 0x560371421800 session 0x56036fd345a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183795712 unmapped: 54943744 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:12.208477+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 517 ms_handle_reset con 0x560371421800 session 0x560373ef32c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183795712 unmapped: 54943744 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 517 handle_osd_map epochs [517,518], i have 517, src has [1,518]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 517 heartbeat osd_stat(store_statfs(0x4f6707000/0x0/0x4ffc00000, data 0x17a9ded/0x1a66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:13.208682+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183812096 unmapped: 54927360 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:14.208959+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 518 ms_handle_reset con 0x5603709c5400 session 0x5603708970e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183820288 unmapped: 54919168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:15.209197+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183820288 unmapped: 54919168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3373579 data_alloc: 218103808 data_used: 9068544
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:16.209376+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183820288 unmapped: 54919168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:17.209574+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183820288 unmapped: 54919168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:18.209727+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183820288 unmapped: 54919168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 518 handle_osd_map epochs [519,519], i have 518, src has [1,519]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 519 heartbeat osd_stat(store_statfs(0x4f6705000/0x0/0x4ffc00000, data 0x17ab9f6/0x1a69000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:19.210035+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 519 ms_handle_reset con 0x560370d19c00 session 0x5603708cf860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183828480 unmapped: 54910976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 519 ms_handle_reset con 0x560370d9a800 session 0x56036eeb1a40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:20.210279+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183828480 unmapped: 54910976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3377829 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:21.210499+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373440000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183828480 unmapped: 54910976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 519 handle_osd_map epochs [519,520], i have 519, src has [1,520]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.965464115s of 11.362928391s, submitted: 59
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 520 ms_handle_reset con 0x560373440000 session 0x560372e79680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:22.210680+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373440000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 520 ms_handle_reset con 0x560373440000 session 0x56036fd350e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 520 heartbeat osd_stat(store_statfs(0x4f66fe000/0x0/0x4ffc00000, data 0x17aef84/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183861248 unmapped: 54878208 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 520 ms_handle_reset con 0x5603709c5400 session 0x56037088c960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 520 heartbeat osd_stat(store_statfs(0x4f66fe000/0x0/0x4ffc00000, data 0x17aef84/0x1a6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 520 handle_osd_map epochs [521,521], i have 520, src has [1,521]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 520 handle_osd_map epochs [521,521], i have 521, src has [1,521]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:23.210822+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 54837248 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 521 ms_handle_reset con 0x560370d19c00 session 0x560372529860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 521 ms_handle_reset con 0x560370d9a800 session 0x56036f8be1e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:24.210999+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 54837248 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:25.211231+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 54837248 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3383553 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:26.211461+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 54837248 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 521 heartbeat osd_stat(store_statfs(0x4f66fb000/0x0/0x4ffc00000, data 0x17b0b45/0x1a71000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:27.211628+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183902208 unmapped: 54837248 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 521 handle_osd_map epochs [521,522], i have 521, src has [1,522]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:28.211776+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183926784 unmapped: 54812672 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:29.211952+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183934976 unmapped: 54804480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:30.212216+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183934976 unmapped: 54804480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:31.212526+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183934976 unmapped: 54804480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:32.212750+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183934976 unmapped: 54804480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:33.212969+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183934976 unmapped: 54804480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:34.213149+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183934976 unmapped: 54804480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:35.213291+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183934976 unmapped: 54804480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:36.213451+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183934976 unmapped: 54804480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:37.213570+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183943168 unmapped: 54796288 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:38.213933+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183943168 unmapped: 54796288 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:39.214105+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183943168 unmapped: 54796288 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:40.214228+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183943168 unmapped: 54796288 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:41.214414+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183943168 unmapped: 54796288 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:42.214574+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183943168 unmapped: 54796288 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:43.214709+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183943168 unmapped: 54796288 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:44.214899+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183943168 unmapped: 54796288 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:45.215345+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183951360 unmapped: 54788096 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:46.215550+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:47.215733+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:48.215894+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:49.216093+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:50.216270+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:51.216541+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:52.216655+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:53.216819+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:54.216991+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:55.217195+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:56.217402+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:57.217656+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:58.217899+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:12:59.218119+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:00.218315+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183959552 unmapped: 54779904 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:01.218531+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183975936 unmapped: 54763520 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:02.218715+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183975936 unmapped: 54763520 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:03.218928+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183975936 unmapped: 54763520 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:04.219128+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183975936 unmapped: 54763520 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:05.219327+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183984128 unmapped: 54755328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:06.219679+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183984128 unmapped: 54755328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:07.219903+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183984128 unmapped: 54755328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:08.220062+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183992320 unmapped: 54747136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:09.220258+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183992320 unmapped: 54747136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:10.220494+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183992320 unmapped: 54747136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:11.220718+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 183992320 unmapped: 54747136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:12.220891+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184000512 unmapped: 54738944 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:13.221122+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184000512 unmapped: 54738944 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:14.221291+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184000512 unmapped: 54738944 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:15.221471+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184000512 unmapped: 54738944 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:16.221630+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184000512 unmapped: 54738944 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:17.221903+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184008704 unmapped: 54730752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:18.222223+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184008704 unmapped: 54730752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:19.222357+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184008704 unmapped: 54730752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:20.222506+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184008704 unmapped: 54730752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:21.222749+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184008704 unmapped: 54730752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:22.222950+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184008704 unmapped: 54730752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:23.223105+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184008704 unmapped: 54730752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:24.223260+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184008704 unmapped: 54730752 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:25.223535+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184025088 unmapped: 54714368 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:26.223819+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184025088 unmapped: 54714368 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:27.224009+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184025088 unmapped: 54714368 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:28.224278+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184025088 unmapped: 54714368 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:29.224533+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184025088 unmapped: 54714368 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:30.224696+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184025088 unmapped: 54714368 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:31.224886+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184025088 unmapped: 54714368 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:32.225080+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184025088 unmapped: 54714368 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:33.225325+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184033280 unmapped: 54706176 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:34.225569+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184041472 unmapped: 54697984 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:35.225822+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184041472 unmapped: 54697984 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:36.226095+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385855 data_alloc: 218103808 data_used: 9072640
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560371421800 session 0x560373ef2780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 184049664 unmapped: 54689792 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560371421800 session 0x56036eeb0b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:37.226258+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 51986432 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:38.226484+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 51986432 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:39.226750+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 51986432 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:40.226939+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 51986432 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:41.227105+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3386015 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 51986432 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:42.227254+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 51986432 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:43.227399+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 81.430725098s of 81.547950745s, submitted: 44
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 51986432 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:44.227520+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x5603709c5400 session 0x56036fcbe5a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:45.227648+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:46.227819+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387843 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f8000/0x0/0x4ffc00000, data 0x17b25b8/0x1a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:47.228002+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:48.228173+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:49.228390+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:50.228501+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f8000/0x0/0x4ffc00000, data 0x17b25b8/0x1a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:51.228780+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387843 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:52.228948+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:53.229138+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:54.229375+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:55.229566+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f8000/0x0/0x4ffc00000, data 0x17b25b8/0x1a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:56.229688+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3387843 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:57.229870+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d19c00 session 0x56036ef325a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:58.230062+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.032030106s of 15.055594444s, submitted: 1
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:13:59.230273+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:00.230511+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66fa000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:01.230782+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d9a800 session 0x56036fc68780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373440000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3385135 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186744832 unmapped: 51994624 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:02.231008+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b25b8/0x1a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 188882944 unmapped: 49856512 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:03.231185+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186753024 unmapped: 51986432 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:04.231480+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190947328 unmapped: 47792128 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:05.231642+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 47783936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5cb2000/0x0/0x4ffc00000, data 0x21f95b8/0x24bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:06.231846+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487699 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 47783936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:07.232054+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f58b2000/0x0/0x4ffc00000, data 0x25f95b8/0x28bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [0,0,0,0,0,0,2])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199352320 unmapped: 39387136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:08.232238+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.237817287s of 10.076370239s, submitted: 18
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560373440000 session 0x5603726e8b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:09.232387+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x5603709c5400 session 0x56036ef521e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb2000/0x0/0x4ffc00000, data 0x41f95b8/0x44bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:10.232569+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:11.232742+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684195 data_alloc: 218103808 data_used: 9142272
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:12.232903+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:13.233065+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb2000/0x0/0x4ffc00000, data 0x41f95b8/0x44bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:14.233302+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:15.233508+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:16.233723+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684195 data_alloc: 218103808 data_used: 9142272
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb2000/0x0/0x4ffc00000, data 0x41f95b8/0x44bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:17.233859+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb2000/0x0/0x4ffc00000, data 0x41f95b8/0x44bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:18.234031+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:19.234238+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:20.234397+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:21.234650+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684195 data_alloc: 218103808 data_used: 9142272
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:22.234861+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb2000/0x0/0x4ffc00000, data 0x41f95b8/0x44bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:23.235102+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:24.235290+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb2000/0x0/0x4ffc00000, data 0x41f95b8/0x44bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:25.235444+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:26.235584+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684195 data_alloc: 218103808 data_used: 9142272
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:27.235775+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d19c00 session 0x560372d90d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:28.236108+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:29.236254+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb2000/0x0/0x4ffc00000, data 0x41f95b8/0x44bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:30.236536+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:31.236742+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684195 data_alloc: 218103808 data_used: 9142272
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:32.236955+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:33.237117+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:34.237258+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:35.237411+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb2000/0x0/0x4ffc00000, data 0x41f95b8/0x44bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:36.237605+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb2000/0x0/0x4ffc00000, data 0x41f95b8/0x44bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684195 data_alloc: 218103808 data_used: 9142272
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:37.237810+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:38.238018+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:39.238198+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:40.238495+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb2000/0x0/0x4ffc00000, data 0x41f95b8/0x44bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:41.238917+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3684195 data_alloc: 218103808 data_used: 9142272
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d9a800 session 0x56036f8145a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:42.239104+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186777600 unmapped: 51961856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560371421800 session 0x56036fca4b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371130400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 33.693531036s of 33.780479431s, submitted: 1
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:43.239322+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186785792 unmapped: 51953664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560371130400 session 0x560371988780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371130400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:44.239489+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186785792 unmapped: 51953664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:45.239659+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186785792 unmapped: 51953664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb0000/0x0/0x4ffc00000, data 0x41f95eb/0x44be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:46.239831+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186785792 unmapped: 51953664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3688630 data_alloc: 218103808 data_used: 9408512
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:47.239986+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 186785792 unmapped: 51953664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb0000/0x0/0x4ffc00000, data 0x41f95eb/0x44be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:48.240096+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 188751872 unmapped: 49987584 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:49.240350+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190652416 unmapped: 48087040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:50.240519+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190652416 unmapped: 48087040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:51.240724+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190652416 unmapped: 48087040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb0000/0x0/0x4ffc00000, data 0x41f95eb/0x44be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3764790 data_alloc: 234881024 data_used: 19005440
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:52.240912+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190652416 unmapped: 48087040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:53.241127+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190652416 unmapped: 48087040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:54.241318+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190652416 unmapped: 48087040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:55.246476+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190652416 unmapped: 48087040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:56.246627+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190652416 unmapped: 48087040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3764790 data_alloc: 234881024 data_used: 19005440
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cb0000/0x0/0x4ffc00000, data 0x41f95eb/0x44be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:57.246862+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190652416 unmapped: 48087040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:58.247018+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190652416 unmapped: 48087040 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.481119156s of 16.257036209s, submitted: 6
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:14:59.247210+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197017600 unmapped: 41721856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:00.247355+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 201498624 unmapped: 37240832 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3950000/0x0/0x4ffc00000, data 0x41f95eb/0x44be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:01.247549+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 201506816 unmapped: 37232640 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3771774 data_alloc: 234881024 data_used: 19472384
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:02.247850+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 196558848 unmapped: 42180608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:03.248011+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 196558848 unmapped: 42180608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:04.248177+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 196558848 unmapped: 42180608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3cab000/0x0/0x4ffc00000, data 0x41fe5eb/0x44c3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:05.248336+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 196829184 unmapped: 41910272 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 29K writes, 108K keys, 29K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 29K writes, 10K syncs, 2.75 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5128 writes, 13K keys, 5128 commit groups, 1.0 writes per commit group, ingest: 8.17 MB, 0.01 MB/s
                                           Interval WAL: 5128 writes, 2073 syncs, 2.47 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:06.248510+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 196829184 unmapped: 41910272 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3781254 data_alloc: 234881024 data_used: 19472384
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:07.248693+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3b68000/0x0/0x4ffc00000, data 0x43415eb/0x4606000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [0,0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 41648128 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f3b68000/0x0/0x4ffc00000, data 0x43415eb/0x4606000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:08.248852+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 198139904 unmapped: 40599552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.580132961s of 10.038257599s, submitted: 48
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:09.249038+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 198131712 unmapped: 40607744 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:10.249214+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 41648128 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:11.249389+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f33ef000/0x0/0x4ffc00000, data 0x4aba5eb/0x4d7f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 41648128 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3874574 data_alloc: 234881024 data_used: 19472384
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560371130400 session 0x5603726e83c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x5603709c5400 session 0x5603708bad20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:12.250321+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197091328 unmapped: 41648128 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb0000/0x0/0x4ffc00000, data 0x4ff95eb/0x52be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d19c00 session 0x56036fb76d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:13.250466+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:14.250647+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:15.251197+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb1000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:16.251387+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3872993 data_alloc: 234881024 data_used: 19464192
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:17.251817+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:18.252087+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:19.253080+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:20.253504+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb1000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:21.253758+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3872993 data_alloc: 234881024 data_used: 19464192
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:22.253956+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:23.254281+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:24.254540+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb1000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:25.254834+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:26.255042+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3872993 data_alloc: 234881024 data_used: 19464192
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:27.255357+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb1000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:28.255545+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:29.255679+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb1000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:30.255889+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb1000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:31.256113+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3872993 data_alloc: 234881024 data_used: 19464192
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb1000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:32.256268+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb1000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:33.256413+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:34.256686+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:35.256921+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb1000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:36.257120+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3872993 data_alloc: 234881024 data_used: 19464192
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:37.257275+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb1000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:38.257486+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:39.257659+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197099520 unmapped: 41639936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb1000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 30.061391830s of 30.944547653s, submitted: 26
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d9a800 session 0x56036ef3e3c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037196f000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:40.257853+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 41295872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2e8e000/0x0/0x4ffc00000, data 0x501d5b8/0x52e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:41.258075+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 41295872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3876482 data_alloc: 234881024 data_used: 19464192
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:42.258240+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 41295872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:43.258552+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 41295872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:44.258696+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 41295872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:45.258828+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 41295872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:46.259016+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2e8e000/0x0/0x4ffc00000, data 0x501d5b8/0x52e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 41295872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3877442 data_alloc: 234881024 data_used: 19562496
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:47.259229+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 41295872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:48.259378+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 41295872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:49.259859+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 41295872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:50.260240+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 41295872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2e8e000/0x0/0x4ffc00000, data 0x501d5b8/0x52e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:51.260620+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 41295872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3877442 data_alloc: 234881024 data_used: 19562496
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:52.260945+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2e8e000/0x0/0x4ffc00000, data 0x501d5b8/0x52e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197443584 unmapped: 41295872 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:53.261256+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197451776 unmapped: 41287680 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:54.261443+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197533696 unmapped: 41205760 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:55.261802+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 41132032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:56.262260+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 41132032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3890402 data_alloc: 234881024 data_used: 20639744
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2e8e000/0x0/0x4ffc00000, data 0x501d5b8/0x52e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:57.262489+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 41132032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:58.262752+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 41132032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:15:59.263041+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 41132032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:00.263265+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 41132032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:01.263526+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 41132032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2e8e000/0x0/0x4ffc00000, data 0x501d5b8/0x52e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3890402 data_alloc: 234881024 data_used: 20639744
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:02.263713+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 41132032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:03.263890+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 41132032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:04.264016+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 41132032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2e8e000/0x0/0x4ffc00000, data 0x501d5b8/0x52e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:05.264165+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197607424 unmapped: 41132032 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:06.264558+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 41115648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3890402 data_alloc: 234881024 data_used: 20639744
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:07.264714+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 41115648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:08.264881+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 41115648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:09.265086+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 41115648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:10.265294+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2e8e000/0x0/0x4ffc00000, data 0x501d5b8/0x52e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 41115648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:11.266651+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 41115648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3890402 data_alloc: 234881024 data_used: 20639744
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2e8e000/0x0/0x4ffc00000, data 0x501d5b8/0x52e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:12.266840+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 41115648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:13.266992+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197623808 unmapped: 41115648 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.320518494s of 34.355377197s, submitted: 8
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:14.267407+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197648384 unmapped: 41091072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:15.267650+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197648384 unmapped: 41091072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:16.267837+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2e8e000/0x0/0x4ffc00000, data 0x501d5b8/0x52e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [0,0,0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197648384 unmapped: 41091072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3890402 data_alloc: 234881024 data_used: 20639744
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:17.268030+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197664768 unmapped: 41074688 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:18.268659+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197672960 unmapped: 41066496 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2e8e000/0x0/0x4ffc00000, data 0x501d5b8/0x52e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:19.268799+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197689344 unmapped: 41050112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:20.268946+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197689344 unmapped: 41050112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:21.269092+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197689344 unmapped: 41050112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3890402 data_alloc: 234881024 data_used: 20639744
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:22.269246+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2e8e000/0x0/0x4ffc00000, data 0x501d5b8/0x52e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197689344 unmapped: 41050112 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:23.269398+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560371421800 session 0x560370adf0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x56037196f000 session 0x56036ef3fc20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197844992 unmapped: 40894464 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.855199814s of 10.003141403s, submitted: 114
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:24.269548+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197869568 unmapped: 40869888 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560371421800 session 0x560373ceaf00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:25.269719+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197885952 unmapped: 40853504 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb2000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:26.269895+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197885952 unmapped: 40853504 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3885762 data_alloc: 234881024 data_used: 21041152
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:27.270029+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197885952 unmapped: 40853504 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb2000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:28.270234+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197885952 unmapped: 40853504 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f2eb2000/0x0/0x4ffc00000, data 0x4ff95b8/0x52bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:29.270407+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197885952 unmapped: 40853504 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:30.270563+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x5603709c5400 session 0x56036fca43c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190570496 unmapped: 48168960 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d19c00 session 0x560372e79e00
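
Note: the handle_auth_request / ms_handle_reset pairs trace short-lived inbound connections, likely periodic probes from peers or the mgr (the log does not say which): a cephx challenge is issued for a connection, and shortly afterwards the messenger reports that connection reset, with the shared con pointer (e.g. 0x560370d19c00 just above) tying the pair together while the session pointer differs each time. Pairing them by pointer is straightforward (the pairing heuristic is mine):

    import re

    CHAL_RE = re.compile(r"handle_auth_request added challenge on (0x[0-9a-f]+)")
    RESET_RE = re.compile(r"ms_handle_reset con (0x[0-9a-f]+) session (0x[0-9a-f]+)")

    def pair_resets(lines):
        """Yield (con, session) for each reset preceded by a challenge on that con."""
        pending = set()
        for line in lines:
            if m := CHAL_RE.search(line):
                pending.add(m[1])
            elif m := RESET_RE.search(line):
                if m[1] in pending:
                    pending.discard(m[1])
                    yield m[1], m[2]
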
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:31.270747+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190586880 unmapped: 48152576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3409474 data_alloc: 218103808 data_used: 8024064
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:32.270886+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66fa000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190586880 unmapped: 48152576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:33.271135+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66fa000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190586880 unmapped: 48152576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:34.271510+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190586880 unmapped: 48152576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:35.271667+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190586880 unmapped: 48152576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:36.271848+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190586880 unmapped: 48152576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3409474 data_alloc: 218103808 data_used: 8024064
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:37.272295+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190586880 unmapped: 48152576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:38.272604+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66fa000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190586880 unmapped: 48152576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:39.272831+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190586880 unmapped: 48152576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:40.272980+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190586880 unmapped: 48152576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:41.273282+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190586880 unmapped: 48152576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3409474 data_alloc: 218103808 data_used: 8024064
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:42.273540+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 190586880 unmapped: 48152576 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d9a800 session 0x5603719c03c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:43.273772+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d9a800 session 0x5603708ed2c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66fa000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:44.273966+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:45.274114+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66fa000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:46.274334+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3409634 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:47.274581+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:48.274756+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.622423172s of 25.083374023s, submitted: 34
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:49.274983+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:50.275179+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x5603709c5400 session 0x56036fb76780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:51.275387+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b260b/0x1a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412944 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:52.275613+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:53.275828+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:54.276062+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:55.276242+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:56.276548+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3412944 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:57.276737+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b260b/0x1a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b260b/0x1a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:58.276934+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:16:59.277107+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:00.277276+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193380352 unmapped: 45359104 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:01.277583+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.758064270s of 12.597835541s, submitted: 9
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f66f9000/0x0/0x4ffc00000, data 0x17b260b/0x1a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [0,0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d19c00 session 0x560372d90d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193388544 unmapped: 45350912 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560371421800 session 0x56036f0d9860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037196f000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:02.277771+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3413406 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197713920 unmapped: 41025536 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x56037196f000 session 0x5603708ed4a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:03.277965+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5af8000/0x0/0x4ffc00000, data 0x23b25d2/0x2675000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x5603709c5400 session 0x560374656f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193544192 unmapped: 45195264 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:04.278109+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193544192 unmapped: 45195264 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:05.278302+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193544192 unmapped: 45195264 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:06.278498+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193544192 unmapped: 45195264 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:07.278747+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652229 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193544192 unmapped: 45195264 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:08.278926+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f4500000/0x0/0x4ffc00000, data 0x39ab60a/0x3c6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193544192 unmapped: 45195264 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:09.279121+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 45187072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:10.279270+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f4500000/0x0/0x4ffc00000, data 0x39ab60a/0x3c6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 45187072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:11.279521+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 45187072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:12.279756+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652229 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 45187072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:13.279967+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 45187072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:14.280187+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 45187072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f4500000/0x0/0x4ffc00000, data 0x39ab60a/0x3c6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:15.280347+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 45187072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:16.280536+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d19c00 session 0x5603746561e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 45187072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:17.280681+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3652229 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 45187072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f4500000/0x0/0x4ffc00000, data 0x39ab60a/0x3c6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:18.280898+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193552384 unmapped: 45187072 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d9a800 session 0x5603725283c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:19.281051+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560371421800 session 0x56036fca3860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371130400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.881090164s of 17.906024933s, submitted: 74
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560371130400 session 0x560372d91680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193560576 unmapped: 45178880 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:20.281231+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f4500000/0x0/0x4ffc00000, data 0x39ab60a/0x3c6e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371130400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193560576 unmapped: 45178880 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:21.281490+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193560576 unmapped: 45178880 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:22.281630+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3653891 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193560576 unmapped: 45178880 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f44ff000/0x0/0x4ffc00000, data 0x39ab61a/0x3c6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:23.281744+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193609728 unmapped: 45129728 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:24.281851+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193839104 unmapped: 44900352 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:25.281973+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193839104 unmapped: 44900352 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f44ff000/0x0/0x4ffc00000, data 0x39ab61a/0x3c6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:26.282087+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193839104 unmapped: 44900352 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f44ff000/0x0/0x4ffc00000, data 0x39ab61a/0x3c6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:27.282250+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687331 data_alloc: 234881024 data_used: 13840384
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193839104 unmapped: 44900352 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:28.282363+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193839104 unmapped: 44900352 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:29.282484+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193839104 unmapped: 44900352 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:30.282599+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193839104 unmapped: 44900352 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:31.282743+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f44ff000/0x0/0x4ffc00000, data 0x39ab61a/0x3c6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193839104 unmapped: 44900352 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:32.282911+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687331 data_alloc: 234881024 data_used: 13840384
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193839104 unmapped: 44900352 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:33.283063+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 193839104 unmapped: 44900352 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:34.283246+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.734572411s of 14.744988441s, submitted: 2
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
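
Note: this is the only RocksDB write-path event in the window: WAL #45 gets a fresh memtable, and "Immutable memtables: 2" means two sealed memtables are still waiting to be flushed to SSTs. A climbing immutable count is the usual early sign of flush backlog, and writes stall once it reaches max_write_buffer_number. A tiny watcher for that condition (the threshold is illustrative):

    import re

    MEMTABLE_RE = re.compile(
        r"New memtable created with log file: #(\d+)\. Immutable memtables: (\d+)"
    )

    def flush_backlog_warnings(lines, limit=2):
        """Yield (wal_number, immutable_count) whenever the backlog reaches limit."""
        for line in lines:
            m = MEMTABLE_RE.search(line)
            if m and int(m[2]) >= limit:
                yield int(m[1]), int(m[2])
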
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207118336 unmapped: 31621120 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:35.283408+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f41af000/0x0/0x4ffc00000, data 0x39ab61a/0x3c6f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x7a8f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 208560128 unmapped: 30179328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:36.283586+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 202276864 unmapped: 36462592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:37.283752+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3883387 data_alloc: 234881024 data_used: 14561280
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 36413440 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:38.283929+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c361a/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 36413440 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:39.284028+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 36413440 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:40.284207+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 36413440 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:41.284408+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 36413440 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:42.284612+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3883387 data_alloc: 234881024 data_used: 14561280
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 36413440 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:43.284815+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c361a/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 202326016 unmapped: 36413440 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:44.284977+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.890553474s of 10.214242935s, submitted: 140
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560371130400 session 0x56036ef3ed20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x5603709c5400 session 0x560372ad70e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:45.285398+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d19c00 session 0x56036f0d8f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:46.285556+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:47.285706+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3876894 data_alloc: 234881024 data_used: 14577664
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:48.285846+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:49.285947+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1638000/0x0/0x4ffc00000, data 0x52c360a/0x5586000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:50.286088+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1638000/0x0/0x4ffc00000, data 0x52c360a/0x5586000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:51.286254+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:52.286409+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3876894 data_alloc: 234881024 data_used: 14577664
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:53.286663+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1638000/0x0/0x4ffc00000, data 0x52c360a/0x5586000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:54.286831+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:55.287011+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1638000/0x0/0x4ffc00000, data 0x52c360a/0x5586000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:56.287242+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:57.287367+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3876894 data_alloc: 234881024 data_used: 14577664
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:58.287499+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1638000/0x0/0x4ffc00000, data 0x52c360a/0x5586000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:17:59.287680+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:00.287861+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:01.288032+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:02.288203+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3876894 data_alloc: 234881024 data_used: 14577664
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:03.288347+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1638000/0x0/0x4ffc00000, data 0x52c360a/0x5586000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:04.288541+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:05.288671+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.784885406s of 20.819629669s, submitted: 15
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d9a800 session 0x560373cea5a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:06.288866+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:07.289084+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3878851 data_alloc: 234881024 data_used: 14577664
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:08.289266+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:09.289381+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:10.289549+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:11.289844+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:12.290035+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880451 data_alloc: 234881024 data_used: 14729216
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:13.290162+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:14.290347+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:15.290570+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:16.290751+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:17.290918+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3880451 data_alloc: 234881024 data_used: 14729216
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:18.291114+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203390976 unmapped: 35348480 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:19.291500+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.147248268s of 14.167513847s, submitted: 5
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203546624 unmapped: 35192832 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:20.291620+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:21.291808+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:22.291973+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3900043 data_alloc: 234881024 data_used: 16580608
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:23.292146+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:24.292338+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:25.292532+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:26.292737+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:27.292977+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899147 data_alloc: 234881024 data_used: 16580608
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:28.293214+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:29.293399+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:30.293671+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:31.293944+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:32.294101+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899147 data_alloc: 234881024 data_used: 16580608
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:33.294283+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:34.294449+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.397415161s of 15.449017525s, submitted: 20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:35.294572+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f162f000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:36.294698+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:37.294916+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899755 data_alloc: 234881024 data_used: 16547840
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:38.295046+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:39.295193+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f162f000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:40.295350+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:41.295550+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:42.295724+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899755 data_alloc: 234881024 data_used: 16547840
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f162f000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:43.295890+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f162f000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:44.296080+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203636736 unmapped: 35102720 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.196380615s of 10.214682579s, submitted: 14
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560371421800 session 0x5603719e3c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:45.296199+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203718656 unmapped: 35020800 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f162f000/0x0/0x4ffc00000, data 0x52c362d/0x5587000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [0,1])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560371421800 session 0x5603708cf860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:46.296417+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203726848 unmapped: 35012608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:47.296656+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203726848 unmapped: 35012608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3900523 data_alloc: 234881024 data_used: 17186816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:48.296843+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203726848 unmapped: 35012608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:49.296993+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203726848 unmapped: 35012608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:50.297179+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 203726848 unmapped: 35012608 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f1637000/0x0/0x4ffc00000, data 0x52c360a/0x5586000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:51.297389+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199180288 unmapped: 39559168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x5603709c5400 session 0x56036ef521e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:52.297629+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436227 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:53.297813+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:54.297982+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:55.298108+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:56.298270+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:57.298513+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436227 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:58.298763+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:18:59.298955+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:00.299162+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:01.299413+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:02.299656+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436227 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:03.299914+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:04.300147+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:05.300625+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:06.300810+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:07.301039+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436227 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:08.301251+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:09.301449+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:10.301653+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:11.301917+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:12.302177+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436227 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:13.302452+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:14.302615+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:15.302912+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:16.303101+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:17.303347+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436227 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:19.118084+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:20.118287+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:21.118493+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:22.118697+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436227 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:23.118827+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:24.118973+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:25.119116+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:26.119303+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:27.119492+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436227 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:28.119676+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:29.146701+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x17b25a8/0x1a74000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:30.146932+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:31.147100+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:32.147326+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3436227 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:33.147575+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:34.147789+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199196672 unmapped: 39542784 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:35.148027+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 49.993957520s of 50.168109894s, submitted: 46
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199204864 unmapped: 39534592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x17b25b8/0x1a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:36.148265+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199204864 unmapped: 39534592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 ms_handle_reset con 0x560370d19c00 session 0x56036ef3e3c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:37.148513+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199204864 unmapped: 39534592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3439232 data_alloc: 218103808 data_used: 9138176
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:38.148769+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199204864 unmapped: 39534592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 heartbeat osd_stat(store_statfs(0x4f5149000/0x0/0x4ffc00000, data 0x17b25b8/0x1a75000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:39.148922+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199204864 unmapped: 39534592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 522 handle_osd_map epochs [522,523], i have 522, src has [1,523]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:40.149071+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560370d9a800 session 0x56036ef3f0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199213056 unmapped: 39526400 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:41.149271+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199213056 unmapped: 39526400 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371130400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560371130400 session 0x5603708ef680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:42.149498+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199221248 unmapped: 39518208 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447210 data_alloc: 218103808 data_used: 9150464
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5143000/0x0/0x4ffc00000, data 0x17b41a8/0x1a7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:43.149644+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199221248 unmapped: 39518208 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:44.149787+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199221248 unmapped: 39518208 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:45.150000+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199221248 unmapped: 39518208 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:46.150161+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5143000/0x0/0x4ffc00000, data 0x17b41a8/0x1a7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199221248 unmapped: 39518208 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:47.150310+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199221248 unmapped: 39518208 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447210 data_alloc: 218103808 data_used: 9150464
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:48.150553+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199221248 unmapped: 39518208 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:49.150728+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371130400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560371130400 session 0x560373ef32c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x5603709c5400 session 0x560374657680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560370d19c00 session 0x56036f887860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199221248 unmapped: 39518208 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560370d9a800 session 0x560373ef3e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.162836075s of 14.283035278s, submitted: 17
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560371421800 session 0x560371943e00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:50.150890+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560371421800 session 0x560373341c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x5603709c5400 session 0x560370af5680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560370d19c00 session 0x5603708ef860
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560370d9a800 session 0x560371a014a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199237632 unmapped: 39501824 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5143000/0x0/0x4ffc00000, data 0x17b41a8/0x1a7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5022000/0x0/0x4ffc00000, data 0x18d421a/0x1b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:51.151109+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199237632 unmapped: 39501824 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5022000/0x0/0x4ffc00000, data 0x18d421a/0x1b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:52.151388+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199237632 unmapped: 39501824 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465543 data_alloc: 218103808 data_used: 9150464
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:53.151652+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199237632 unmapped: 39501824 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:54.151808+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199237632 unmapped: 39501824 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:55.152126+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199237632 unmapped: 39501824 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:56.152336+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199237632 unmapped: 39501824 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5022000/0x0/0x4ffc00000, data 0x18d421a/0x1b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:57.152497+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199237632 unmapped: 39501824 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3465543 data_alloc: 218103808 data_used: 9150464
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:58.152648+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199237632 unmapped: 39501824 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:19:59.152810+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5022000/0x0/0x4ffc00000, data 0x18d421a/0x1b9c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371130400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199254016 unmapped: 39485440 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.583348274s of 10.009582520s, submitted: 28
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560371130400 session 0x560371989680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:00.152932+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199614464 unmapped: 39124992 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371130400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:01.153130+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199614464 unmapped: 39124992 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:02.153273+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199614464 unmapped: 39124992 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478502 data_alloc: 218103808 data_used: 10248192
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:03.153403+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199614464 unmapped: 39124992 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:04.153545+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199614464 unmapped: 39124992 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4ff7000/0x0/0x4ffc00000, data 0x18fe23d/0x1bc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:05.155722+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4ff7000/0x0/0x4ffc00000, data 0x18fe23d/0x1bc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199614464 unmapped: 39124992 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:06.156614+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199614464 unmapped: 39124992 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4ff7000/0x0/0x4ffc00000, data 0x18fe23d/0x1bc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:07.157723+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199614464 unmapped: 39124992 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3478502 data_alloc: 218103808 data_used: 10248192
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:08.158717+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4ff7000/0x0/0x4ffc00000, data 0x18fe23d/0x1bc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199614464 unmapped: 39124992 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4ff7000/0x0/0x4ffc00000, data 0x18fe23d/0x1bc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:09.159059+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199614464 unmapped: 39124992 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4ff7000/0x0/0x4ffc00000, data 0x18fe23d/0x1bc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:10.159338+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199614464 unmapped: 39124992 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:11.160144+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199614464 unmapped: 39124992 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.225677490s of 12.352828979s, submitted: 4
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:12.160519+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4ff7000/0x0/0x4ffc00000, data 0x18fe23d/0x1bc7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [0,0,0,0,12])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199770112 unmapped: 38969344 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564356 data_alloc: 218103808 data_used: 10571776
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:13.161240+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:14.161517+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4557000/0x0/0x4ffc00000, data 0x239e23d/0x2667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:15.161708+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:16.162029+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:17.162361+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565026 data_alloc: 218103808 data_used: 10571776
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:18.162588+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4557000/0x0/0x4ffc00000, data 0x239e23d/0x2667000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:19.162810+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:20.162970+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:21.163169+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4554000/0x0/0x4ffc00000, data 0x23a123d/0x266a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:22.163462+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563914 data_alloc: 218103808 data_used: 10575872
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:23.163641+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4554000/0x0/0x4ffc00000, data 0x23a123d/0x266a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:24.163853+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4554000/0x0/0x4ffc00000, data 0x23a123d/0x266a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:25.164083+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:26.164243+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:27.164412+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564234 data_alloc: 218103808 data_used: 10584064
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:28.164630+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560370d19c00 session 0x56036fd34f00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:29.164851+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.239372253s of 17.552064896s, submitted: 90
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560371130400 session 0x5603733403c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x5603709c5400 session 0x5603719e21e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4554000/0x0/0x4ffc00000, data 0x23a123d/0x266a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:30.164974+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560370d9a800 session 0x5603746561e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197722112 unmapped: 41017344 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:31.165146+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197722112 unmapped: 41017344 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4ce2000/0x0/0x4ffc00000, data 0x17b41a8/0x1a7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:32.165382+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f4ce2000/0x0/0x4ffc00000, data 0x17b41a8/0x1a7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197722112 unmapped: 41017344 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456494 data_alloc: 218103808 data_used: 9158656
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:33.165670+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197722112 unmapped: 41017344 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:34.165831+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197722112 unmapped: 41017344 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:35.166009+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371421800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560371421800 session 0x56036fca43c0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197722112 unmapped: 41017344 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x5603709c5400 session 0x560370adf0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:36.166256+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197722112 unmapped: 41017344 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:37.166418+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5145000/0x0/0x4ffc00000, data 0x17b4198/0x1a79000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560370d19c00 session 0x5603708ee780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560370d9a800 session 0x560373ceb4a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197730304 unmapped: 41009152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3454382 data_alloc: 218103808 data_used: 9150464
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:38.166637+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197730304 unmapped: 41009152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:39.166774+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197730304 unmapped: 41009152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:40.166940+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197730304 unmapped: 41009152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:41.167122+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197730304 unmapped: 41009152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:42.167348+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5146000/0x0/0x4ffc00000, data 0x17b4188/0x1a78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197730304 unmapped: 41009152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3454382 data_alloc: 218103808 data_used: 9150464
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:43.167608+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197730304 unmapped: 41009152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:44.167800+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5146000/0x0/0x4ffc00000, data 0x17b4188/0x1a78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197730304 unmapped: 41009152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:45.167900+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197730304 unmapped: 41009152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:46.168071+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197730304 unmapped: 41009152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:47.168196+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5146000/0x0/0x4ffc00000, data 0x17b4188/0x1a78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197730304 unmapped: 41009152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:48.168390+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3454382 data_alloc: 218103808 data_used: 9150464
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197730304 unmapped: 41009152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:49.168559+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371130400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 197730304 unmapped: 41009152 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560371130400 session 0x56036fc694a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x56037196dc00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x56037196dc00 session 0x56037199cd20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:50.168681+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:51.168846+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:52.169068+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:53.169232+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3454382 data_alloc: 218103808 data_used: 9150464
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5146000/0x0/0x4ffc00000, data 0x17b4188/0x1a78000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:54.169360+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:55.169492+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.166997910s of 26.336990356s, submitted: 43
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:56.169691+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5144000/0x0/0x4ffc00000, data 0x17b41fb/0x1a7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:57.169854+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x5603709c5400 session 0x56036ee2fe00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:58.170004+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458039 data_alloc: 218103808 data_used: 9150464
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:20:59.170208+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:00.170381+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:01.170619+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5144000/0x0/0x4ffc00000, data 0x17b41fb/0x1a7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:02.170917+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:03.171189+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458039 data_alloc: 218103808 data_used: 9150464
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:04.171334+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:05.171518+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:06.171789+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5144000/0x0/0x4ffc00000, data 0x17b41fb/0x1a7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:07.172034+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:08.172227+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3458039 data_alloc: 218103808 data_used: 9150464
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199065600 unmapped: 39673856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:09.172364+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560370d19c00 session 0x560370896780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.234753609s of 13.331366539s, submitted: 5
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f5144000/0x0/0x4ffc00000, data 0x17b41fb/0x1a7a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560370d9a800 session 0x56036ef32d20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371130400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 31268864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:10.172504+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f3945000/0x0/0x4ffc00000, data 0x2fb4197/0x3279000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560371130400 session 0x56036f8861e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371973c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560371973c00 session 0x56037088de00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199106560 unmapped: 39632896 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:11.172775+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199106560 unmapped: 39632896 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:12.173025+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199106560 unmapped: 39632896 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:13.173235+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3653425 data_alloc: 218103808 data_used: 9154560
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199106560 unmapped: 39632896 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:14.173477+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199106560 unmapped: 39632896 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:15.173681+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f3545000/0x0/0x4ffc00000, data 0x33b4197/0x3679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199106560 unmapped: 39632896 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:16.173862+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199106560 unmapped: 39632896 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:17.174345+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199106560 unmapped: 39632896 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f3545000/0x0/0x4ffc00000, data 0x33b4197/0x3679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:18.174545+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3653425 data_alloc: 218103808 data_used: 9154560
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199114752 unmapped: 39624704 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:19.174794+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199114752 unmapped: 39624704 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:20.174955+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199114752 unmapped: 39624704 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:21.175173+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199114752 unmapped: 39624704 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:22.175375+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371973c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560371973c00 session 0x56036fc69c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f3545000/0x0/0x4ffc00000, data 0x33b4197/0x3679000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199114752 unmapped: 39624704 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:23.175578+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3653425 data_alloc: 218103808 data_used: 9154560
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199114752 unmapped: 39624704 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:24.175703+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x5603709c5400 session 0x560372e78780
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560370d19c00 session 0x56036ef33680
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.986223221s of 15.464441299s, submitted: 23
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199114752 unmapped: 39624704 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:25.175913+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560370d9a800 session 0x56036fcbf0e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199114752 unmapped: 39624704 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:26.176049+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371130400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f3544000/0x0/0x4ffc00000, data 0x33b41ba/0x367a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199122944 unmapped: 39616512 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:27.176207+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199122944 unmapped: 39616512 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:28.176392+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3655254 data_alloc: 218103808 data_used: 9154560
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199122944 unmapped: 39616512 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:29.176580+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 199163904 unmapped: 39575552 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:30.176712+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 200376320 unmapped: 38363136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f3544000/0x0/0x4ffc00000, data 0x33b41ba/0x367a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:31.176842+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 200376320 unmapped: 38363136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:32.177094+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 200376320 unmapped: 38363136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:33.177235+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696534 data_alloc: 234881024 data_used: 14036992
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 200376320 unmapped: 38363136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:34.177460+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 200376320 unmapped: 38363136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:35.177676+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 200376320 unmapped: 38363136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:36.177892+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f3544000/0x0/0x4ffc00000, data 0x33b41ba/0x367a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 200376320 unmapped: 38363136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:37.178030+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 200376320 unmapped: 38363136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:38.178240+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3696534 data_alloc: 234881024 data_used: 14036992
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 200376320 unmapped: 38363136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:39.178419+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f3544000/0x0/0x4ffc00000, data 0x33b41ba/0x367a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 200376320 unmapped: 38363136 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:40.178573+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.476464272s of 15.920166016s, submitted: 5
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212910080 unmapped: 25829376 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:41.178703+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 213303296 unmapped: 25436160 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:42.178884+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 31367168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:43.179064+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899062 data_alloc: 234881024 data_used: 14016512
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 31367168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:44.179323+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 31367168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f19fd000/0x0/0x4ffc00000, data 0x4ecb1ba/0x5191000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:45.179477+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 31367168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:46.179588+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 31367168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:47.179786+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 31367168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:48.180000+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3899062 data_alloc: 234881024 data_used: 14016512
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 31367168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:49.180267+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f19fd000/0x0/0x4ffc00000, data 0x4ecb1ba/0x5191000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 31367168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:50.180612+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 31367168 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:51.181019+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.555446625s of 11.182128906s, submitted: 114
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 31358976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:52.181195+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 31358976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:53.181468+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3895558 data_alloc: 234881024 data_used: 14016512
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 31358976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:54.181643+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 31358976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f1a2d000/0x0/0x4ffc00000, data 0x4ecb1ba/0x5191000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:55.181864+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560371130400 session 0x560372528960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371130400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 ms_handle_reset con 0x560371130400 session 0x56036ef3e000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 31358976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:56.182080+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 31358976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:57.182260+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 31358976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:58.182465+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894686 data_alloc: 234881024 data_used: 14012416
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 31358976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:21:59.182659+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f1a2d000/0x0/0x4ffc00000, data 0x4ecb197/0x5190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 31358976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:00.182799+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 31358976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:01.183004+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 31358976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:02.183236+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 31358976 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:03.183474+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 heartbeat osd_stat(store_statfs(0x4f1a2d000/0x0/0x4ffc00000, data 0x4ecb197/0x5190000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894686 data_alloc: 234881024 data_used: 14012416
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _renew_subs
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 523 handle_osd_map epochs [524,524], i have 523, src has [1,524]
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.078090668s of 11.711444855s, submitted: 15
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 ms_handle_reset con 0x5603709c5400 session 0x560372ad6960
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 31481856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:04.183671+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 ms_handle_reset con 0x560370d19c00 session 0x560370b170e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 31481856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:05.183941+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2a000/0x0/0x4ffc00000, data 0x4eccd14/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 31481856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:06.184144+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 31481856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:07.184303+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 31481856 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:08.184551+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3898156 data_alloc: 234881024 data_used: 14020608
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 31473664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:09.184749+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 31473664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:10.184911+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 31473664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:11.185204+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2a000/0x0/0x4ffc00000, data 0x4eccd14/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 31473664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:12.185530+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2a000/0x0/0x4ffc00000, data 0x4eccd14/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 31473664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:13.185832+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3898156 data_alloc: 234881024 data_used: 14020608
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2a000/0x0/0x4ffc00000, data 0x4eccd14/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 31473664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:14.185984+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 31473664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:15.186196+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2a000/0x0/0x4ffc00000, data 0x4eccd14/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 31473664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:16.186414+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 31473664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:17.186671+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 31473664 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:18.186874+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3898156 data_alloc: 234881024 data_used: 14020608
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.958278656s of 15.028451920s, submitted: 2
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 ms_handle_reset con 0x560370d9a800 session 0x5603708961e0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207454208 unmapped: 31285248 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:19.187051+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371973c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560373552400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207454208 unmapped: 31285248 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:20.187202+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a07000/0x0/0x4ffc00000, data 0x4ef0d14/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207454208 unmapped: 31285248 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:21.187328+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207454208 unmapped: 31285248 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:22.187512+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a07000/0x0/0x4ffc00000, data 0x4ef0d14/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207462400 unmapped: 31277056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:23.187629+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a07000/0x0/0x4ffc00000, data 0x4ef0d14/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3900832 data_alloc: 234881024 data_used: 14061568
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a07000/0x0/0x4ffc00000, data 0x4ef0d14/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207462400 unmapped: 31277056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:24.187766+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207462400 unmapped: 31277056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:25.187931+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207462400 unmapped: 31277056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:26.188092+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a07000/0x0/0x4ffc00000, data 0x4ef0d14/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207462400 unmapped: 31277056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:27.188202+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207462400 unmapped: 31277056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:28.188332+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3900832 data_alloc: 234881024 data_used: 14061568
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:29.188467+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207462400 unmapped: 31277056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:30.188585+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207462400 unmapped: 31277056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a07000/0x0/0x4ffc00000, data 0x4ef0d14/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:31.188703+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207462400 unmapped: 31277056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a07000/0x0/0x4ffc00000, data 0x4ef0d14/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:32.188830+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207462400 unmapped: 31277056 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:33.190239+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 207470592 unmapped: 31268864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3914272 data_alloc: 234881024 data_used: 18251776
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:34.190497+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 209829888 unmapped: 28909568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:35.190627+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 209829888 unmapped: 28909568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a07000/0x0/0x4ffc00000, data 0x4ef0d14/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:36.190771+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 209829888 unmapped: 28909568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:37.190937+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 209829888 unmapped: 28909568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:38.191095+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 209829888 unmapped: 28909568 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3922272 data_alloc: 234881024 data_used: 18919424
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:39.191260+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 27205632 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a07000/0x0/0x4ffc00000, data 0x4ef0d14/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:40.191545+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 27205632 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:41.191807+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 27205632 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:42.192087+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 27205632 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:43.192392+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 27205632 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3932512 data_alloc: 234881024 data_used: 22466560
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:44.192548+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 27205632 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a07000/0x0/0x4ffc00000, data 0x4ef0d14/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:45.192749+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 27205632 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:46.192909+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 27205632 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:47.193070+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 27205632 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:48.193347+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 27205632 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a07000/0x0/0x4ffc00000, data 0x4ef0d14/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3935392 data_alloc: 234881024 data_used: 23535616
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:49.193537+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212516864 unmapped: 26222592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:50.193735+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212516864 unmapped: 26222592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:51.193872+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212516864 unmapped: 26222592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:52.194032+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212516864 unmapped: 26222592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a07000/0x0/0x4ffc00000, data 0x4ef0d14/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:53.194206+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212516864 unmapped: 26222592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3935392 data_alloc: 234881024 data_used: 23535616
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:54.194480+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212516864 unmapped: 26222592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:55.194757+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212516864 unmapped: 26222592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:56.194946+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212516864 unmapped: 26222592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:57.195102+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212516864 unmapped: 26222592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:58.195377+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212516864 unmapped: 26222592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a07000/0x0/0x4ffc00000, data 0x4ef0d14/0x51b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3935392 data_alloc: 234881024 data_used: 23535616
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:22:59.195671+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212516864 unmapped: 26222592 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 ms_handle_reset con 0x560371973c00 session 0x56036eeb0b40
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 ms_handle_reset con 0x560373552400 session 0x56036ef33c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:00.195815+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212647936 unmapped: 26091520 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560371973c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 41.779670715s of 41.788688660s, submitted: 2
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 ms_handle_reset con 0x560371973c00 session 0x560372e785a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:01.196022+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212647936 unmapped: 26091520 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:02.196228+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212647936 unmapped: 26091520 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2b000/0x0/0x4ffc00000, data 0x4eccd14/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:03.196539+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212647936 unmapped: 26091520 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3931476 data_alloc: 234881024 data_used: 23846912
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x5603709c5400
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 ms_handle_reset con 0x5603709c5400 session 0x5603719c0000
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:04.196770+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d19c00
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212647936 unmapped: 26091520 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 ms_handle_reset con 0x560370d19c00 session 0x56036fca25a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:05.197081+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212647936 unmapped: 26091520 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2b000/0x0/0x4ffc00000, data 0x4eccd14/0x5193000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:06.197256+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 ms_handle_reset con 0x560370d9a800 session 0x560372e785a0
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: handle_auth_request added challenge on 0x560370d9a800
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 ms_handle_reset con 0x560370d9a800 session 0x56036ef33c20
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:07.197477+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:08.197777+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:09.197964+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:10.198184+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:11.198348+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:12.198605+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:13.198763+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:14.199008+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:15.199293+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:16.199453+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:17.199595+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:18.199737+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:19.200169+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:20.200633+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:21.200836+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:22.201166+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:23.201413+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:24.201719+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:25.201921+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:26.202044+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:27.202186+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:28.202339+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:29.202502+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:30.202797+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:31.203022+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:32.203362+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:33.203542+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:34.203710+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:35.203901+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:36.204085+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:37.204244+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:38.204404+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:39.204601+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:40.204764+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:41.204938+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:42.205119+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:43.205318+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:44.205486+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:45.205649+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:46.205843+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:47.206036+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:48.206283+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:49.206494+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:50.206698+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:51.206906+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:52.207190+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:53.207363+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:54.207509+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:55.207755+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:56.207975+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:57.208169+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:58.208389+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:23:59.208537+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:00.208699+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:01.208813+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:02.209019+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:03.209166+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:04.209336+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:05.209496+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:06.209653+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:07.209800+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:08.209941+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:09.210054+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:10.210123+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:11.210246+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:12.210389+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:13.210510+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:14.210676+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:15.210886+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:16.211066+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:17.211238+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:18.211393+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:19.211474+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:20.211628+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:21.211795+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:22.212528+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:23.212675+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:24.212813+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212656128 unmapped: 26083328 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:25.212949+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:26.213093+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:27.213253+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:28.213354+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:29.213518+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:30.213682+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:31.213809+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:32.213972+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:33.214130+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:34.214255+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:35.214462+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:36.214606+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:37.214737+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:38.214859+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:39.214978+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:40.215204+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:41.215330+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:42.215531+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:43.215690+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:44.215822+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:45.215978+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:46.216130+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:47.216257+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: osd.0 524 heartbeat osd_stat(store_statfs(0x4f1a2c000/0x0/0x4ffc00000, data 0x4eccd05/0x5192000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x903f9c6), peers [1,2] op hist [])
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:48.216393+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212590592 unmapped: 26148864 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Nov 22 04:25:22 compute-0 ceph-osd[88575]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Nov 22 04:25:22 compute-0 ceph-osd[88575]: bluestore.MempoolThread(0x56036d57db60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3930596 data_alloc: 234881024 data_used: 23842816
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:49.216481+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212729856 unmapped: 26009600 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: do_command 'config diff' '{prefix=config diff}'
Nov 22 04:25:22 compute-0 ceph-osd[88575]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:50.216608+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: do_command 'config show' '{prefix=config show}'
Nov 22 04:25:22 compute-0 ceph-osd[88575]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Nov 22 04:25:22 compute-0 ceph-osd[88575]: do_command 'counter dump' '{prefix=counter dump}'
Nov 22 04:25:22 compute-0 ceph-osd[88575]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Nov 22 04:25:22 compute-0 ceph-osd[88575]: do_command 'counter schema' '{prefix=counter schema}'
Nov 22 04:25:22 compute-0 ceph-osd[88575]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212459520 unmapped: 26279936 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:51.216751+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: prioritycache tune_memory target: 4294967296 mapped: 212574208 unmapped: 26165248 heap: 238739456 old mem: 2845415832 new mem: 2845415832
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: tick
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_tickets
Nov 22 04:25:22 compute-0 ceph-osd[88575]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-11-22T04:24:52.216955+0000)
Nov 22 04:25:22 compute-0 ceph-osd[88575]: do_command 'log dump' '{prefix=log dump}'
Nov 22 04:25:23 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:23 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 22 04:25:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:25:23.044 162689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 22 04:25:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:25:23.045 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 22 04:25:23 compute-0 ovn_metadata_agent[162684]: 2025-11-22 04:25:23.045 162689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 22 04:25:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Nov 22 04:25:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/671321059' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 22 04:25:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 22 04:25:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 22 04:25:23 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3936043561' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Nov 22 04:25:23 compute-0 ceph-mon[75011]: pgmap v2337: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:23 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/671321059' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Nov 22 04:25:23 compute-0 ceph-mon[75011]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Nov 22 04:25:23 compute-0 ceph-mon[75011]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Nov 22 04:25:23 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0) v1
Nov 22 04:25:23 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3760980623' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 22 04:25:24 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19419 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Nov 22 04:25:24 compute-0 nova_compute[253461]: 2025-11-22 04:25:24.545 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:24 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0) v1
Nov 22 04:25:24 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3414062599' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 22 04:25:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3760980623' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Nov 22 04:25:24 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3414062599' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Nov 22 04:25:25 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0) v1
Nov 22 04:25:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1830920273' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 22 04:25:25 compute-0 nova_compute[253461]: 2025-11-22 04:25:25.442 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0) v1
Nov 22 04:25:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4099757911' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 22 04:25:25 compute-0 systemd[1]: Starting Hostname Service...
Nov 22 04:25:25 compute-0 systemd[1]: Started Hostname Service.
Nov 22 04:25:25 compute-0 ceph-mon[75011]: from='client.19419 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:25 compute-0 ceph-mon[75011]: pgmap v2338: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:25 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1830920273' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Nov 22 04:25:25 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/4099757911' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Nov 22 04:25:25 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0) v1
Nov 22 04:25:25 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2427214336' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 22 04:25:26 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19429 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:26 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0) v1
Nov 22 04:25:26 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2852995057' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 22 04:25:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2427214336' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Nov 22 04:25:26 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2852995057' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Nov 22 04:25:27 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0) v1
Nov 22 04:25:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2893565019' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 22 04:25:27 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19435 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:27 compute-0 ceph-mon[75011]: from='client.19429 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:27 compute-0 ceph-mon[75011]: pgmap v2339: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:27 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2893565019' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Nov 22 04:25:27 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0) v1
Nov 22 04:25:27 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1789638287' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 22 04:25:28 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19439 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:28 compute-0 podman[314520]: 2025-11-22 04:25:28.341671135 +0000 UTC m=+0.058303693 container health_status 253f6ba519ebf82515aeabfe45be6cddc22433df8c8bd43174ccfab301bb4be4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251118, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 22 04:25:28 compute-0 podman[314521]: 2025-11-22 04:25:28.411614435 +0000 UTC m=+0.119936363 container health_status 995caf71450240854e9a939edcc29eb1efa2781351eeb9c327fa8894fd9f04e6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251118, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=7b76510d5d5adf2ccf627d29bb9dae76, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 22 04:25:28 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19441 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:28 compute-0 ceph-mon[75011]: from='client.19435 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:28 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1789638287' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Nov 22 04:25:29 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0) v1
Nov 22 04:25:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687854135' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 22 04:25:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader).osd e524 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
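The _set_new_cache_sizes line above is the mon periodically rebalancing its memory budget across its caches (incremental osdmaps, full osdmaps, and the rocksdb kv cache). The logged values are consistent with an approximate partition of the budget: 328 MiB + 332 MiB + 304 MiB = 964 MiB against a cache_size of roughly 973 MiB. A quick sketch of that arithmetic; only the numbers come from the log, and reading the three buckets as an approximate partition is an assumption:

    # Sketch: sanity-check the mon cache allocation logged above.
    # Values copied verbatim from the _set_new_cache_sizes line; treating
    # inc/full/kv as an approximate partition of cache_size is an assumption,
    # not something the log states.
    cache_size = 1020054731
    inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104

    MiB = 1 << 20
    total = inc_alloc + full_alloc + kv_alloc
    print(f"inc={inc_alloc // MiB} MiB  full={full_alloc // MiB} MiB  kv={kv_alloc // MiB} MiB")
    print(f"sum={total / MiB:.0f} MiB of cache_size={cache_size / MiB:.1f} MiB "
          f"({100 * total / cache_size:.1f}%)")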
Nov 22 04:25:29 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0) v1
Nov 22 04:25:29 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1491605756' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 22 04:25:29 compute-0 nova_compute[253461]: 2025-11-22 04:25:29.547 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:29 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19447 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:29 compute-0 ceph-mon[75011]: from='client.19439 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:29 compute-0 ceph-mon[75011]: from='client.19441 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:29 compute-0 ceph-mon[75011]: pgmap v2340: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/687854135' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Nov 22 04:25:29 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1491605756' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19449 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002894458247867422 of space, bias 1.0, pg target 0.8683374743602266 quantized to 32 (current 32)
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661762551279547 of space, bias 1.0, pg target 0.1998528765383864 quantized to 32 (current 32)
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Nov 22 04:25:30 compute-0 ceph-mgr[75294]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
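The pg_autoscaler block above encodes one calculation per pool: its share of the 64411926528-byte raw capacity, times its bias, times an overall PG budget. The logged targets match a budget of exactly 300 PGs (which would correspond to the default mon_target_pg_per_osd=100 across 3 OSDs; the OSD count is an assumption, not stated in this excerpt). For example, 0.002894458247867422 * 1.0 * 300 = 0.8683374743602266, the exact 'volumes' target. A minimal sketch verifying that relation against the logged values:

    # Sketch: check the pg_autoscaler arithmetic from the log lines above.
    # The logged 'pg target' values are consistent with
    #   target = usage_ratio * bias * PG_BUDGET
    # where PG_BUDGET = 300 is an assumption (e.g. mon_target_pg_per_osd=100
    # times 3 OSDs); usage, bias, and targets are copied verbatim from the log.
    PG_BUDGET = 300

    samples = [
        # (pool, usage_ratio, bias, logged pg target)
        (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
        ("volumes",            0.002894458247867422,  1.0, 0.8683374743602266),
        ("images",             0.0006661762551279547, 1.0, 0.1998528765383864),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
    ]
    for name, usage, bias, logged in samples:
        target = usage * bias * PG_BUDGET
        assert abs(target - logged) < 1e-12, name
        print(f"{name}: computed {target:.16g} matches logged {logged:.16g}")

The "quantized to ... (current ...)" part of each line is reported verbatim here and not modeled: how the autoscaler rounds targets and decides whether to move pg_num off its current value is not derivable from this excerpt alone.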
Nov 22 04:25:30 compute-0 nova_compute[253461]: 2025-11-22 04:25:30.442 253465 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 22 04:25:30 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0) v1
Nov 22 04:25:30 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791474844' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 22 04:25:31 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:31 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0) v1
Nov 22 04:25:31 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3709041211' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 22 04:25:31 compute-0 ceph-mon[75011]: from='client.19447 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:31 compute-0 ceph-mon[75011]: from='client.19449 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:31 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/1791474844' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Nov 22 04:25:31 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19455 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:31 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19457 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:32 compute-0 ceph-mon[75011]: pgmap v2341: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:32 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3709041211' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Nov 22 04:25:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Nov 22 04:25:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2621347734' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 04:25:32 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0) v1
Nov 22 04:25:32 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3807775707' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 22 04:25:33 compute-0 ceph-mgr[75294]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 271 MiB data, 654 MiB used, 59 GiB / 60 GiB avail
Nov 22 04:25:33 compute-0 ceph-mon[75011]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0) v1
Nov 22 04:25:33 compute-0 ceph-mon[75011]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2506683761' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Nov 22 04:25:33 compute-0 ceph-mon[75011]: from='client.19455 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:33 compute-0 ceph-mon[75011]: from='client.19457 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Nov 22 04:25:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/2621347734' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Nov 22 04:25:33 compute-0 ceph-mon[75011]: from='client.? 192.168.122.100:0/3807775707' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Nov 22 04:25:33 compute-0 ceph-mgr[75294]: log_channel(audit) log [DBG] : from='client.19465 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
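Almost everything else in this window is one repeating audit pattern: mon handle_command receives a mon_command, the audit channel logs the dispatch, and the same entry is then relayed once more without the log_channel prefix (mgr-targeted commands appear first under ceph-mgr, then relayed by ceph-mon). A minimal sketch for tallying which admin commands dominate a capture like this one; the input path is hypothetical, and relayed duplicates are counted as separate hits:

    # Sketch: tally Ceph admin commands in a journald capture like the above.
    # Assumes audit-style lines carrying cmd=[{"prefix": "<command>" ...}];
    # "compute-0.log" is a hypothetical path for the saved capture.
    import re
    from collections import Counter

    CMD_RE = re.compile(r'cmd=\[\{"prefix": "([^"]+)"')

    counts = Counter()
    with open("compute-0.log") as fh:
        for line in fh:
            m = CMD_RE.search(line)
            if m:
                counts[m.group(1)] += 1

    for cmd, n in counts.most_common(10):
        print(f"{n:5d}  {cmd}")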
